Implementing Cloud Storage Metrics

Intel plans to migrate to an enterprise private cloud designed to support its offices and its enterprise computing applications. This transition is a multi-year, multi-phased initiative intended to bring greater agility and efficiency to its storage networks. With demand for storage space rising by about 35 percent annually, increasing storage utilization is the key to limiting growth and controlling costs. The private cloud is built on a virtualized infrastructure that retrieves data from shared pools of storage via the Storage Area Network (SAN). Optimizing the SAN is the central challenge Intel faces in implementing cloud storage.

Optimizing cloud storage depends on efficiency, capacity management, and risk management. The present methodology is designed to:

  • Establish a clear link between hardware cost and storage capacity use.
  • Identify newer technologies that increase efficiency.
  • Improve the ability to locate duplicated data.
  • Set up operational thresholds to monitor occupied space.
  • Measure efficiency consistently across supplier product lines and technology generations.

Intel proposes to use these methods to analyze and compare the efficiency of its private cloud at multiple levels, from individual storage pools up to a global view. Prospective enhancements include risk management thresholds driven by a predictive algorithm that estimates customer storage requirements from growth trends. Similar methods may also be applied to the local Network Attached Storage (NAS) environment.
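
As one illustration of how such a predictive threshold might work, the hypothetical Python sketch below projects a pool's demand forward using the roughly 35 percent annual growth rate cited above and reports when the pool is expected to cross a utilization threshold. The growth rate, pool sizes, and threshold value are illustrative assumptions, not figures from Intel's implementation.

    # Hypothetical sketch: project storage demand forward at an assumed
    # compound annual growth rate and report when a pool is expected to
    # cross a risk threshold. All numbers are illustrative.

    ANNUAL_GROWTH = 0.35        # assumed ~35% annual rise in storage demand
    RISK_THRESHOLD = 0.85       # alert when projected used/usable exceeds 85%

    def quarters_until_threshold(used_tb: float, usable_tb: float) -> int | None:
        """Return how many quarters until utilization crosses the threshold."""
        quarterly_growth = (1 + ANNUAL_GROWTH) ** 0.25 - 1
        projected = used_tb
        for quarter in range(1, 41):          # look ahead ten years at most
            projected *= 1 + quarterly_growth
            if projected / usable_tb >= RISK_THRESHOLD:
                return quarter
        return None

    if __name__ == "__main__":
        # Example pool: 300 TB used out of 500 TB usable.
        quarters = quarters_until_threshold(300.0, 500.0)
        print(f"Projected to reach {RISK_THRESHOLD:.0%} utilization in {quarters} quarter(s)")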

Why the SAN?

The SAN not only hosts global storage but also supports Intel's private cloud and other mission-critical applications. Server virtualization and rapid growth in storage demand have accelerated the growth of SAN storage capacity.

Size Matters:

The sheer scale of the Intel IT environment means that even small efficiency improvements yield significant financial savings, because capacity purchases are the largest component of Intel IT storage costs. Enterprise-wide metrics are needed to measure capacity utilization across the SAN environment: storage suppliers use different terminology and methodologies for measuring capacity utilization, and existing metrics do not accurately reflect the efficiency gains achieved through new technologies such as thin provisioning. This lack of a standard method for measuring storage utilization leads to inconsistent measurements across teams and groups in Intel IT. Efficiency, capacity management, and risk management are interrelated focus areas within Intel's storage management strategy.

Health Metrics:

Performance health metrics help Intel understand application and system requirements, identify potential bottlenecks, and support successful deployments.

Storage Metrics:

Intel has created a new storage metrics framework developed by cross-functional teams. Frame capacity utilization is measured through a slot utilization percentage metric. The overall storage efficiency metric, which reflects how cost-effectively internal customers' data is stored, is defined as the ratio of used capacity to raw storage capacity. Measuring used capacity is complicated by data duplication and by inefficient management of orphaned data. Intel also plans a new storage tiering initiative that shifts less critical business data to lower-cost tiers while continuing to meet customer needs.
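
As a minimal sketch of the two ratios just described, the snippet below computes overall storage efficiency (used capacity over raw capacity) and slot utilization for a single frame; the sample values and function names are assumptions used only for illustration.

    # Minimal sketch of the capacity ratios described above.
    # Sample values and names are illustrative assumptions.

    def storage_efficiency(used_tb: float, raw_tb: float) -> float:
        """Overall storage efficiency: used capacity / raw storage capacity."""
        return used_tb / raw_tb

    def slot_utilization(occupied_slots: int, total_slots: int) -> float:
        """Frame capacity utilization: fraction of drive slots occupied."""
        return occupied_slots / total_slots

    if __name__ == "__main__":
        # Example frame: 600 TB raw disk with 210 TB used,
        # and 180 of 240 drive slots occupied.
        print(f"Storage efficiency: {storage_efficiency(210, 600):.1%}")
        print(f"Slot utilization:   {slot_utilization(180, 240):.1%}")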

Storage efficiency techniques such as thin provisioning increase utilization, but they also reduce Intel's ability to respond proactively to changing business needs. Oversubscribing capacity means extra work to add or rebalance capacity when demand materializes.
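
To make the over-subscription risk concrete, the short sketch below compares the capacity promised to thin-provisioned volumes with the physical capacity actually installed; a ratio above 1.0 means the pool is oversubscribed and may need capacity added or rebalanced before customers consume what they have been promised. The figures are assumptions, not Intel data.

    # Hypothetical thin-provisioning example: allocation can exceed the
    # physical capacity installed in a pool. Values are illustrative.

    allocated_to_customers_tb = 700.0   # capacity promised via thin volumes
    physical_usable_tb = 500.0          # capacity physically installed

    oversubscription = allocated_to_customers_tb / physical_usable_tb
    print(f"Oversubscription ratio: {oversubscription:.2f}")
    if oversubscription > 1.0:
        print("Pool is oversubscribed: plan to add or rebalance capacity.")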

Capacity and Risk:

Intel’s key capacity management and risk management metrics measure allocation and utilization within each storage pool and also provide summary views. To minimize risk, Intel needs to be alerted when allocation or utilization levels reach predefined thresholds, and several initial policy-based thresholds have been established; a sketch of this alerting logic follows the list below.

  • Used Percentage: Two operational thresholds, a close-pool threshold and a rebalance-pool threshold, currently manage risk as storage pools reach high utilization levels.
  • Allocation Percentage: The percentage of usable capacity allocated to customers.
  • Utilization of Allocated Capacity: How much of the capacity allocated to customers is actually being used.
  • Allocation Headroom: The additional storage capacity that can still be allocated from a pool.
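
The sketch below shows one way such policy-based checks could be expressed in code. The metric names follow the list above, but the threshold values and pool data are illustrative assumptions rather than Intel's actual policy settings.

    # Hypothetical per-pool capacity and risk check. Thresholds and pool
    # data are assumed values, not Intel's actual policy settings.

    from dataclasses import dataclass

    CLOSE_POOL_USED_PCT = 0.90    # assumed: close pool to new allocations
    REBALANCE_USED_PCT = 0.80     # assumed: rebalance capacity across pools

    @dataclass
    class StoragePool:
        name: str
        usable_tb: float      # usable capacity in the pool
        allocated_tb: float   # capacity allocated to customers
        used_tb: float        # capacity customers actually use

        @property
        def used_pct(self) -> float:
            return self.used_tb / self.usable_tb

        @property
        def allocation_pct(self) -> float:
            return self.allocated_tb / self.usable_tb

        @property
        def allocation_headroom_tb(self) -> float:
            return self.usable_tb - self.allocated_tb

    def check_pool(pool: StoragePool) -> list[str]:
        """Return alert messages for any thresholds the pool has crossed."""
        alerts = []
        if pool.used_pct >= CLOSE_POOL_USED_PCT:
            alerts.append(f"{pool.name}: close pool to new allocations")
        elif pool.used_pct >= REBALANCE_USED_PCT:
            alerts.append(f"{pool.name}: rebalance capacity across pools")
        if pool.allocation_headroom_tb <= 0:
            alerts.append(f"{pool.name}: no allocation headroom remaining")
        return alerts

    if __name__ == "__main__":
        pool = StoragePool("pool-01", usable_tb=500, allocated_tb=520, used_tb=410)
        print(f"{pool.name}: used {pool.used_pct:.0%}, allocated {pool.allocation_pct:.0%}, "
              f"headroom {pool.allocation_headroom_tb:.0f} TB")
        for alert in check_pool(pool):
            print(alert)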

The Goal:

Intel has been building business intelligence and reporting capabilities with the goal of using them to analyze and manage SAN storage across Intel’s office and enterprise private cloud.

Intel has created a storage resource management (SRM) tool that automatically gathers storage capacity data across the SAN environment, allowing managers to compare efficiency across different data centers, frames, and pools. The SRM tool has enabled reports such as the executive storage efficiency report, the data center management report, and the pool status report.
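
The report logic could resemble the roll-up sketched below, which aggregates per-pool capacity figures to the data-center level so that efficiency can be compared across sites; the record layout and values are assumptions, not the SRM tool's actual schema.

    # Hypothetical roll-up of per-pool capacity data to the data-center
    # level, in the spirit of the SRM efficiency reports described above.
    # Records, field names, and values are illustrative assumptions.

    from collections import defaultdict

    pools = [
        {"data_center": "DC-A", "frame": "F1", "used_tb": 210, "raw_tb": 600},
        {"data_center": "DC-A", "frame": "F2", "used_tb": 150, "raw_tb": 400},
        {"data_center": "DC-B", "frame": "F3", "used_tb": 320, "raw_tb": 700},
    ]

    totals = defaultdict(lambda: {"used_tb": 0.0, "raw_tb": 0.0})
    for pool in pools:
        totals[pool["data_center"]]["used_tb"] += pool["used_tb"]
        totals[pool["data_center"]]["raw_tb"] += pool["raw_tb"]

    print("Data center efficiency (used / raw):")
    for dc, capacity in sorted(totals.items()):
        print(f"  {dc}: {capacity['used_tb'] / capacity['raw_tb']:.1%}")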

In conclusion, storage capacity utilization metrics have helped Intel manage SAN storage efficiently and have provided information for capacity management and risk management. Storage performance health metrics such as response time, queue length, and storage processor utilization have further improved storage capacity utilization. An efficient SAN is the secret behind best-in-class cloud storage.

Resources: Intel White Paper

 

You can also keep up to date with current trends and technology by visiting Data Center Talk, where we keep you informed of important changes as they occur.
