The data center industry is in the midst of several major pivots that are changing established paradigms. Virtualization has largely replaced hardware-based capacity management. Single-instance data centers in which all data and applications receive the same support are evolving into multi-modal operations where resiliency and security are tailored to the application. Historically closed systems are becoming open.
These disruptions are being driven by the need for IT to move faster, improve application support, increase productivity, reduce risk, and lower costs. But the ramifications extend beyond even these powerful benefits: The changes occurring today will ultimately allow organizations to realize the promise of the software-defined data center.
That promise is a location-agnostic “data center” in which capacity is managed fluidly and securely across an ecosystem that includes on-premises data centers and micro data centers, associated colocation facilities, and multiple clouds. Organizations will access capacity based on application-specific requirements for availability, security and cost, regardless of the physical location of that capacity.
A key enabler of this evolution is gaining visibility into real-time operating parameters across systems. That requires increased connectivity and communication across the various devices within each facility and across facilities, as well as the ability to aggregate, analyze and visualize data to impact operations.
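As an illustrative sketch of what "aggregate, analyze and visualize" means in practice, the snippet below rolls per-device readings up into facility-level totals and peaks. The device names, metric fields, and figures are all hypothetical; a real system would first normalize vendor-specific data into a common shape like this.

```python
# Illustrative sketch: aggregating real-time telemetry from heterogeneous
# data center devices into facility-level views. Device names and metric
# fields are invented for the example.
from collections import defaultdict

readings = [
    {"facility": "DC-East", "device": "server-01", "power_watts": 420,  "temp_c": 31.5},
    {"facility": "DC-East", "device": "pdu-02",    "power_watts": 1150, "temp_c": 27.0},
    {"facility": "DC-West", "device": "server-17", "power_watts": 385,  "temp_c": 29.8},
]

def summarize(readings):
    """Roll per-device readings up to per-facility power totals and peak temperatures."""
    summary = defaultdict(lambda: {"total_power_watts": 0, "peak_temp_c": float("-inf")})
    for r in readings:
        s = summary[r["facility"]]
        s["total_power_watts"] += r["power_watts"]
        s["peak_temp_c"] = max(s["peak_temp_c"], r["temp_c"])
    return dict(summary)

print(summarize(readings))
# → {'DC-East': {'total_power_watts': 1570, 'peak_temp_c': 31.5},
#    'DC-West': {'total_power_watts': 385,  'peak_temp_c': 29.8}}
```

The point of the exercise is the shape of the pipeline, not the arithmetic: once every device reports into a common structure, facility-wide questions become simple queries.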
Currently, language differences across devices create data silos within systems that limit visibility and control, perpetuating inefficient operating practices and preventing enterprises from creating a connected ecosystem of IT, infrastructure and applications. Unfortunately, the situation that exists today—millions of legacy servers, switches, storage devices, and supporting infrastructure systems using different native languages—is not easy to overcome.
However, a roadmap for realizing the vision of the software-defined data center has emerged. Fueled by virtualization, holistic management platforms and a common, open-language specification for devices, the software-defined future is now foreseeable.
Virtualization has, of course, changed the way server capacity is utilized, but its gains have plateaued as organizations struggle to manage virtual environments that span compute, networking, storage and power. Instead of simplifying management, virtualization at the facility level is increasing complexity.
Data Center Infrastructure Management (DCIM) should provide the visibility to address that complexity, but closed DCIM systems simply increase the size of the silo being managed rather than enabling true holistic management. Fortunately, DCIM platforms are increasingly using open APIs to facilitate integration with complementary software suites such as IT management and accounting. Through this integration, organizations can get the real-time visibility into resource utilization, available capacity, and costs required for informed decision making.
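To make the integration concrete, here is a hypothetical sketch of combining two feeds that an open-API DCIM platform and an accounting system might expose: rack power capacity and an energy price. All field names and figures are invented for illustration; no particular vendor's API is being described.

```python
# Hypothetical sketch: joining DCIM capacity data with accounting data to
# produce a cost-aware capacity report. Field names and numbers are invented.
dcim_racks = [
    {"rack": "A1", "kw_capacity": 10.0, "kw_used": 8.5},
    {"rack": "A2", "kw_capacity": 10.0, "kw_used": 3.2},
]
energy_price_per_kwh = 0.11  # from the accounting system (hypothetical figure)

def capacity_report(racks, price):
    """For each rack, report remaining power headroom and a rough monthly energy cost."""
    report = []
    for r in racks:
        headroom = r["kw_capacity"] - r["kw_used"]
        monthly_cost = r["kw_used"] * 24 * 30 * price  # ~30 days of steady draw
        report.append({
            "rack": r["rack"],
            "headroom_kw": round(headroom, 1),
            "monthly_energy_cost": round(monthly_cost, 2),
        })
    return report

print(capacity_report(dcim_racks, energy_price_per_kwh))
# → [{'rack': 'A1', 'headroom_kw': 1.5, 'monthly_energy_cost': 673.2},
#    {'rack': 'A2', 'headroom_kw': 6.8, 'monthly_energy_cost': 253.44}]
```

This is the kind of cross-silo view that a closed DCIM system cannot produce on its own: neither the capacity data nor the cost data is sufficient by itself.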
The final hurdle, hardware communication, is being addressed through the Redfish specification, now under the management of the Distributed Management Task Force (DMTF). The DMTF is an industry standards organization working to simplify the manageability of network-accessible technologies through open and collaborative efforts by leading technology companies, including HP, Dell, Intel, Emerson Network Power, Microsoft and VMware. Redfish is a common language for IT and infrastructure devices that will facilitate greater connectivity and communication across devices and systems without adding complexity.
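Redfish exposes devices as a tree of JSON resources linked by `@odata.id` references, reachable over HTTPS from a service root at `/redfish/v1/`. The sketch below walks such a tree to read each system's power state. The payloads are hard-coded stand-ins for what HTTP GETs against a real Redfish service would return; treat the specific properties shown as illustrative rather than a complete picture of the specification.

```python
# Minimal sketch of walking a Redfish-style resource tree. SAMPLE_SERVICE
# stands in for HTTPS GETs against a real baseboard management controller
# (e.g. https://<bmc>/redfish/v1/); the JSON shapes follow Redfish conventions
# but are simplified stand-ins, not a full payload.
SAMPLE_SERVICE = {
    "/redfish/v1/": {"Systems": {"@odata.id": "/redfish/v1/Systems"}},
    "/redfish/v1/Systems": {"Members": [{"@odata.id": "/redfish/v1/Systems/1"}]},
    "/redfish/v1/Systems/1": {"PowerState": "On", "Model": "ExampleServer 9000"},
}

def get(path):
    """Stand-in for an authenticated HTTPS GET against the Redfish service."""
    return SAMPLE_SERVICE[path]

def list_system_power_states():
    """Follow @odata.id links from the service root to each system's PowerState."""
    root = get("/redfish/v1/")
    systems = get(root["Systems"]["@odata.id"])
    return {m["@odata.id"]: get(m["@odata.id"])["PowerState"]
            for m in systems["Members"]}

print(list_system_power_states())
# → {'/redfish/v1/Systems/1': 'On'}
```

Because every vendor's device answers the same pattern of requests with the same JSON conventions, one client like this can manage hardware from many manufacturers without per-vendor code.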
Version 1.0 of the Redfish specification was released in August of 2015, and its adoption will be aided by the broad industry support of the DMTF. It will take a number of years for the specification to reach critical mass in terms of installed devices, but organizations can begin to capitalize on the value of Redfish and position themselves for true software-defined management through DCIM systems with open APIs supported by strategic use of Redfish translation engines. These Redfish translators will accelerate the industry’s ability to use the new specification to optimize operations.
With this foundation in place, an organization can create and maintain a map of data center resources and their real-time operating parameters to achieve a new level of data-driven, real-time efficiency. IT and data center staff can then focus all their energy on delivering what their customers (internal and external) need in the fastest, most efficient, and most secure way possible. The factors that currently consume them (geography, security, power, availability, and connectivity) become non-factors in the open, DCIM-enabled data center.
For data center managers who are wrestling with how to identify and decommission ghost servers, or are deploying cloud-based applications simply because they can’t mobilize their own resources fast enough to meet organizational requirements, all of this may sound like marketing hype. It’s not. The core technologies and specifications are now in place to make this vision a reality, and the market—those who rely on the applications and processing data centers deliver—will demand nothing less.