VMworld 2012: Back to the future

I had a great time, as always, at VMworld last week. My seventh time was busier than ever. If I had to summarise my gut feeling about this year, it was VMware’s return to the future of the datacenter. Yes, there was plenty of cloud-ness, but the main thrust of VMware’s message was: there’s a lot left to virtualise, encapsulate, and mobilise in the datacenter, and we’re the best company to help you do it…whether or not you’re heading for the clouds. The cloud isn’t everything, nor should it be. It’s one of many paths to a more efficient, responsive, and available IT infrastructure. Companies aren’t going from data centres and managed services to the cloud in one monolithic transition. They’re looking at everything from their virtualised workloads to their big databases to their productivity apps and asking two questions: Can I run them cheaper, faster, and better in-house first? And, when will it make more sense to run them in my or someone else’s cloud? Part of that decision is cost — will cloud save money?


A bigger question, though, is: Who decides? Will application teams and app developers go to the cloud themselves, without waiting for IT? In many cases, they already are. Or will today’s virtualisation admins lead the way? VMware’s betting on both, and it used VMworld this year to arm its core audience —VMware admins — with a strategy. My colleague Glenn O’Donnell calls VMware’s core audience the Illuminati (heh), and VMworld is certainly designed for them.


So is the new unified vCloud Suite, combining vSphere, vCenter Operations Management (with config and capacity rolled in last year), and vCloud Director into a complete virt+cloud management stack. That’s a lot of boxes, but VMware has to do it: As the hypervisor itself gets less sticky, and as competition catches up, VMware has to both broaden and simplify the management story to arm datacenter admins with enough of the right tools to not only create a killer private cloud but get their application environments ready for a public cloud. The vCloud Suite, combined with the DynamicOps and Nicira acquisitions, gives VMware a story for virtualisation of every existing data centre tier (and the foundation for the software-defined data centre), plus the connectors and lifecycle management tools to mobilise workloads to public IaaS platforms. Of course, the acquisitions also herald a new era of openness at VMware, with OpenStack also on the horizon.

vCloud Suite is still obviously a marketecture at this point. The acquisitions were wrapped up just a few weeks ago, so I’ll give them a pass, but I’m looking forward to some real guidance on when and how each box is going to be built, integrated, and deployed. Which VMware orchestrator, service manager, and lifecycle tools have won the day? Which acquired management tools (and there are plenty) will quietly fade away? I’m hoping to hear more on this from Barcelona.


And if you needed any more evidence that the VMware Illuminati still call the shots at VMworld, look at the two vendors that took the TechTarget Best of VMworld 2012 awards in the Management and New Technology categories, plus Best of Show: Intigua and HotLink. Intigua virtualises and encapsulates management agents, those bugbears of every admin’s life; HotLink lets you manage Hyper-V and other VM types — including AWS instances — from vCenter seamlessly. I’m impressed by both companies and have been diving deeper on both recently — these products are simple, laser-focused, easy to understand and demo, and designed to make real-world, everyday management of virtual environments simpler, faster, and less of a mess. Period.


For VMware, though, this was a management & strategy show, not a product show. Rounding out the news, VMware also announced education and advisory services and enhanced offerings for SMBs. And Dave Johnson covers the View-related content with his usual flair here. Finally, just for the record, I’m neutral about the shift back to processor-based pricing. The backlash was fierce, and it was right to listen to customers, but I still don’t think counting physical infrastructure is the way to license for the next generation of data centres or clouds. It’s clear vRAM wasn’t the way either, though.


Next for VMware? Focus, focus, focus. Where does the company want to win? The current answer is “everywhere”: data centre and cloud, packaged apps and new apps, the entire systems management stack, admins and developers, virtualised storage, software-defined networking, and the virtual desktop. That’s a lot of simultaneous vectors to juggle, a lot of established competitors to chase, a lot of partners to woo, and a lot of new acquisitions to pull together. Over to you, Pat!

Data Center Talk updates its resources every day. Visit us to know of the latest technology and standards from the data center world.

Please leave your views and comments on DCT Forum

Trellis Platform Software Released by Emerson

Emerson Network Power, a business of Emerson and a global leader in maximizing availability, capacity, and efficiency of critical infrastructure, announced the global availability of four software applications for data centre infrastructure management (DCIM), all within the innovative Trellis™ platform.

Working with the previously released Avocent Universal Management Gateway, the Trellis software applications deliver real-time infrastructure optimization that enables up to 70 percent improved operational efficiency and 25 percent improved energy efficiency of data centers.

These efficiencies can be realized because, for the first time, data center managers have all the management capabilities they need in one solution and one central location, eliminating redundant processes and the need to link disparate, proprietary systems to view a complete picture of IT and facilities devices and performance.

“Infosys is committed to building environmentally sustainable solutions for our clients’ enterprises,” said Ankush Patel, Infosys vice president of sustainability. “With our deep experience in sustainability and data centers, we see the need for systems that excel in both energy efficiency and performance. The Emerson Network Power Trellis platform is installed in our Bangalore data center as part of the Trellis Early Adopters Program. It shows great promise in being able to continuously monitor critical systems and predict potential issues in a cost- and energy-efficient manner.”

“The Trellis platform is going to be a comprehensive and powerful system,” said Andy Lawrence, research vice president for data center technology at 451 Research. “The system design and architecture, all based on real-time data, are unlike anything on the market today, and should be both flexible and highly scalable. One of the more impressive features of the Trellis platform is that it has been designed and engineered with ease of use and simplicity in mind — the designers have paid a lot of attention to how data center personnel work. The information required to manage the data center is presented in the context of the operation, which should be intuitive and increase productivity.”


DCIM – Yes but…

I am often asked what DCIM is and what value it adds to a company. My consulting mandates bring me to work with engineering firms that know a lot about BMS (Building Management System) solutions. These solutions deal mainly with data from electromechanical devices such as UPS units, air conditioning units, and generators. The data center manager needs this information, but a BMS covers only one part of what is required to properly maintain a data center. A DCIM solution adds new elements of visibility within the data center. The cabinet is now part of the equation. The warm and cold air flows between cabinets, the amount of available space, and the electrical capacity remaining in a cabinet are all important parameters for managing a data center effectively. But is it enough? Does this added capability really justify implementing such a tool? Does the value it generates justify the effort to operationalize it? Or do we stop just before the real value is generated for the business?

Throughout the previous articles, we analyzed the importance of having a good inventory of your data centers as well as how to collect the required information. We also discussed the reasons why companies implement data center management tools. We are now at the phase where we want to pick the solution / tools that will allow us to better manage our assets and add value for the IT department.

The value of such a tool resides in the quality and type of the data maintained within the solution. Human nature being what it is, it is important to keep the following in mind:

  • If I am to use it, it must add value
  • Having no spare time, the effort to maintain the data must be minimal
  • Information must be pertinent and easy to find

Considering the above, and knowing that a data center is an ecosystem around which numerous departments and individuals with different interests revolve, you need to pick the right tool.

The main interests of the following groups are:

  • Data center manager: Available room capacity
  • Buyer: An inventory of equipment / licenses
  • Finance: The cost of goods
  • IT architect: Equipment configuration
  • System admin: Configuration parameters
  • Incident manager: Impact analysis
  • Clients: Visualization of their assets

All these actors need specific information, whether in read mode or, under very specific conditions, in write mode. Since their interests differ, the solution must supply each of them with specific views that let them easily interpret the information, visually where possible.
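As an illustrative sketch (the field names and roles below are hypothetical examples, not a prescribed schema), these role-specific views can be modelled as simple read-only projections over one shared asset record:

```python
# Illustrative sketch: role-specific views as read-only projections over one
# shared asset record. Field names and roles are hypothetical examples.
ASSET = {
    "name": "srv-042",
    "location": "DC1/row3/cab12/U20",
    "model": "1U rack server",
    "cost": 4200.0,
    "licenses": ["os-std"],
    "power_watts": 350,
    "owner": "client-a",
}

# Each role sees only the fields it cares about.
ROLE_FIELDS = {
    "dc_manager": ["name", "location", "power_watts"],
    "buyer": ["name", "model", "licenses"],
    "finance": ["name", "cost"],
    "client": ["name", "location", "owner"],
}

def view(asset, role):
    """Return only the fields relevant to the given role."""
    return {k: asset[k] for k in ROLE_FIELDS[role]}

print(view(ASSET, "finance"))  # {'name': 'srv-042', 'cost': 4200.0}
```

The point of the sketch is that one well-maintained record can serve every actor; only the presentation changes per role.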

At this point, it seems risky to try to respond to the needs of such a large crowd with a single tool. However, these needs all relate to the same element: the IT component required by the customer. The parameters and attributes that define those components can be numerous, but quantity alone does not make them much harder to manage. Before going further, beware of artifacts that add no real value to the information: make sure each item is not merely nice to have, but genuinely valuable to the service you want to deliver. Another important aspect to consider is whether the solution ties in nicely with applications you already use. For example: is it really necessary to add a new CAD application when you already own Microsoft Visio?

Adding value here means raising the maturity level of your organization. The solution must help you minimize risks and costs and support decision making. For that, you must be able to visualize complexity, understand interdependencies, and accelerate the decision process. To achieve this, the main actors expect different views:

  • Financial and purchasing
  • Service impacts
  • Electrical impacts
  • Component configurations
  • LAN / WAN / SAN connections
  • Data center plan
  • Equipment location
  • Relationship: Customer / application / databases / virtual server / physical server

Those solutions exist. I have personally implemented some complex installations in a short time span. Early on, even during implementation, the customers found ROI at different levels. The requirements are that your evaluation criteria be well thought out, that your project plan is defined, and that your business processes are in line with your vision.



What’s in my cabinet?

In the last few articles, I presented what is needed to be able to rely on an up-to-date inventory for better control of your data center. Whether you want to migrate or introduce one or multiple servers, many important parameters must be known before even considering a suitable position for them. It is a complex task since it involves many parties and multiple parameters such as: the function of the asset, its ownership, its electromechanical properties, its relationship with other devices… all important inputs that will influence your decisions. Previously, it was concluded that if you own more than 20 cabinets, the famous spreadsheet is no longer suitable to provide the type of help you need to properly manage your data center.

DCIM, DCNM, MDCM and all the other acronyms yet to come propose some form of inventory capability. The built-in inventory functions in these solutions are, however, designed to respond to a need of the solution itself within its own ecosystem.

When you decide to look for such a solution, it is important to clearly define what you need to accomplish overall. These enterprise solutions are expensive and their implementation is complex because of the many inputs necessary. You want to maximize your investment and provide your company with as much added value as possible. A DCIM (Data Center Infrastructure Management) solution, for example, could greatly help you in managing your data center but do no good for the IT architect. However, it is important to have the support of that same architect if you want to get the full benefit of the solution. Another factor to consider is that the solution must be well integrated into your process to avoid duplicate input and redundant data, which are always a great source of error.

So, let’s start at the beginning: data collection, which is really the gathering of the content of the cabinets in the data centers. Here, we are trying to establish our baseline. One option is to go from cabinet to cabinet with a pad of paper and write down the name of the equipment, its model number, position, asset tag, various connections (electrical and data)… Once this painful operation has been completed, you need to transfer the information into a program or spreadsheet and complete the picture by inputting more data, such as the electromechanical characteristics of each device. However, there are many solutions available to help you increase efficiency and accuracy in this important step towards having a perfect picture of your data center. Some of these can even cut the time required to complete the data collection phase by more than 35%. This type of solution (MDCM – Mobile Data Center Management) can also be reused to audit your data center once a year or on a periodic basis, and it will allow you to compare what is supposed to be there vs. what is actually found. At this point, it becomes an excellent source for all your needs in regard to corporate accounting, SOX, PCI and other compliance issues, and annual maintenance.
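The "supposed to be vs. found" comparison at the heart of such an audit is, at bottom, a set reconciliation. A minimal sketch (the asset tags and positions are made-up examples):

```python
# Illustrative sketch of an audit reconciliation: compare the inventory of
# record against what was physically found in the cabinets. Asset tags and
# positions are hypothetical examples.
expected = {"A1001": "cab12/U20", "A1002": "cab12/U22", "A1003": "cab14/U05"}
found    = {"A1001": "cab12/U20", "A1003": "cab14/U07", "A1099": "cab14/U10"}

missing = sorted(set(expected) - set(found))   # in the records, not on the floor
unknown = sorted(set(found) - set(expected))   # on the floor, not in the records
moved = sorted(tag for tag in set(expected) & set(found)
               if expected[tag] != found[tag])  # present, but position mismatch

print(missing, unknown, moved)  # ['A1002'] ['A1099'] ['A1003']
```

Each of the three lists feeds a different follow-up: missing assets trigger an investigation, unknown ones get tagged and recorded, and moved ones get their location corrected in the database.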

This type of solution can also help you manage your day-to-day operation. MDCM systems incorporate a handheld device that allows the operator to take the entire inventory into the data center and update the core database while the operation takes place. The need to post- or pre-process a change is eliminated, since the data center technician enters the change into the handheld device as he makes it. Thus the database is updated when and where the change occurs, and its accuracy and currency are greatly enhanced by the use of these mobile devices.

Once you have collected the data, you are ready to transfer it into your asset management solution.

In the next article, we will analyze the different types of solutions on the market, try to demystify them, and look at the pros and cons. We will then attempt to build a vision that will enable us to better manage our data centers.

How Can a Room Affect Data Center Cooling?

As data centers increasingly focus on lowering their PUE (Power Usage Effectiveness), more and more questions are being raised on how to bring down power consumption in a data center. Since cooling takes up a considerable share of the total power supplied to any data center, studies are being undertaken all over the world to churn out the most power-efficient cooling systems.
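For reference, PUE is simply the ratio of total facility power to IT equipment power; a quick sketch (the kW figures are made-up examples) makes the metric concrete:

```python
# Minimal PUE calculation: total facility power divided by IT equipment power.
# A PUE of 1.0 would mean every watt goes to IT; real facilities are higher
# because of cooling, lighting, and power distribution losses.
def pue(total_facility_kw, it_kw):
    if it_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_kw

# A facility drawing 1800 kW overall for a 1000 kW IT load:
print(pue(1800, 1000))  # 1.8
```

Cutting cooling power lowers the numerator without touching the IT load, which is why cooling is the first place PUE-reduction efforts look.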

In the process, some have even raised questions about how room specifications affect data center cooling. Frequently asked questions include: does the height of the room affect cooling? Does a larger room bring down the cooling requirements? Does a circular room demand less cooling? How can one even determine in advance how much cooling a room needs? These are some of the questions I would like to address in this article.

First things first: no, a larger room does not imply less cooling. In a spacious room, heat does get more evenly distributed, and hot spots can be reduced if the equipment is more freely distributed. But this is not necessarily a win, because in a large room there is more air to be cooled. That implies CRAC units of higher capacity, which draw in more air and, in turn, more power to cool it. The distance the air has to travel in order to be cooled also increases in a large room, so the fans must draw more power to deliver the heated air back to the CRACs.

It is often suggested that you arrange your equipment in a more compact setting. Hot spots can be eliminated by directing blasts of cold air at the affected area. Since you are concentrating on the specific regions of the computer room that are prone to temperature rise, and not the entire room, you bring down the volume of air that has to be cooled to maintain the desired temperature. Thus, you also cut down on cooling power costs.

If you cannot help but have a large computer room, try moving all the equipment to one part of the room, blocking off the rest, removing the perforated tiles, and neatly closing all floor openings in the blocked-off area.

As far as possible, do try and avoid structures that can block air pathway or change the direction of airflow. In air blast techniques, cool air is concentrated on the hot spots. Any change in the direction of the air destroys the purpose of this technique. If your data center has a lot of obstructing structures, you have to find your way around the obstructions. In such cases, you can strongly consider looking into in-row cooling solutions. Since these units can be placed as close to the hot spot as possible, they are an ideal solution to cluttered environments. Alternatively, you can look into buying air-diffusers and return-air fan products.

High ceilings are usually preferred while considering space for a data center. The minimum floor to ceiling distance is 9 feet. Any addition to that height is always welcome as it gives the data center professional more space on the top to accommodate overhead cabling and also provides a pathway for heated air to the CRAC units. It also maintains enough space between the rack and the overhead sprinklers.

Saying that a high ceiling guarantees power-efficient cooling is slightly far-fetched. It also depends on how the rest of the room is laid out. If the hot aisles and cold aisles are properly distanced, with a 36-inch hot aisle and a 48-inch cold aisle, cooling costs can come down drastically. If hot aisles and cold aisles cannot be accommodated in the data center, a high ceiling is strongly advised.

If you ensure that you stop hot air and cold air from mixing with one another, more than half your battle is won. The coolability of a room is highest when it is rectangular in shape with very few obstructions. So if your designer wants to insert pillars in your data center or make a circular room for aesthetic purposes, you know what to say.



Calling out the Cloudwashers


Over the past year, “cloud” has become such an overused term that placing it in the description of any type of technological product causes the masses to flock to purchase it. This practice has become so widespread that the industry has coined the term “cloudwashing” for when a product improperly adopts the cloud label.

Additionally, cloud technology company Appirio held the first-ever Washies, an award focused on calling out the worst cloudwashing offenders. After a public vote during November, the winners have been announced, and they are unsurprisingly fitting for the industry.

The Washers

  • The biggest overall washer – Oracle: for their Exalogic box, essentially a hardware/software appliance meant to “provide cloud infrastructure in one stop.” In reality the system is simply a glorified mainframe with all the required software pre-configured.
  • The worst case of cloud advertising – Microsoft: for their “To the cloud!” television ad series, which shows consumers and professionals in various dire circumstances finding the solution in “the cloud,” which, as the commercials depict it, is simply the internet. The main offender is the commercial starring two customers stuck in an airport.
    It should be noted that Microsoft still has plenty of valid cloud offerings, such as its Azure platform; this award was issued purely for the horrible ad campaign.
  • The most cloudwashed statement – Larry Ellison (CEO of Oracle) and Oracle: for proclaiming that “…we’ve redefined cloud computing to include everything that we already do.” Fitting, considering that the Oracle Exalogic box is essentially a mainframe with a cloud sticker stuck right on it.
  • The biggest personal cloudwasher – Larry Ellison: for launching a social media campaign just to win The Washies, and also creating a bot to vote for him in the polls.
  • The most enthusiastic use of the word cloud – Salesforce.com: Unsurprisingly to many in the business world, “overuse” does little justice to describe the prevalence of “cloud” in the Salesforce family. This award, however, is not meant to bash their products, but rather the marketing campaigns from Salesforce, which rarely go more than a few sentences without saying “cloud.”


Importance of Data Center Inventory

In my last article, we spoke about two golden rules for delivering a successful datacenter migration or construction project:

  1. Rule #1: Make sure people understand each other
  2. Rule #2: Break up your project into smaller phases before redesigning your data center.

In this article, assuming that you have clearly defined your expectations and needs with all parties of the project and that you have also proceeded with the “Cleanup” of your datacenter, we will review one more aspect before moving to the fun part:  The Inventory.

You should have, somewhere, a list of servers that are supposed to be in your datacenter. Whether it is a spreadsheet or a Visio diagram, it most probably remains a single-user tool that is not shareable and lacks accuracy… Avoid surprises and make sure it is accurate.

I have often seen organizations without accurate information on the content of their datacenter. Here, I am not even talking about an exhaustive list of devices and/or attributes. I am simply referring to the name of each server and its location within the datacenter. To start a construction project, you need more than this; and if it is a migration project, you need a whole other set of information.

The exercise might look easy, but in reality it is the complete opposite; this is probably the most crucial phase of your project. For example, without proper information you cannot decide on a server location without risking a disturbance to your electrical or mechanical setup. Some companies specialize in datacenter inventory, and software applications exist to help you in this area as well. Both will help you speed up the process and increase the accuracy of your inventory.

It is important to ask yourself which information is needed before starting your inventory, and where to get it! But first, you need to answer the following: who will be using it? It is my experience that if you want your inventory project to be successful, you need to involve all departments, not only the datacenter operations department. An accurate and up-to-date inventory is a powerful decision-making tool that adds a lot of value for departments such as purchasing, finance, IT architecture, security… By involving all these departments, the inventory will remain accurate for a long time and everyone involved will benefit. Thus, inventory becomes part of the process and is no longer a one-time project that you have to repeat every time you do a migration. We will talk about the process in another article soon. For now, let’s concentrate on the different sources of information we have at our disposal to complete the inventory:

  • Surveillance tools
  • Applications such as VMware
  • House spreadsheet (purchasing, finance…)
  • Architecture diagrams
  • Network diagrams
  • Business unit information

To complete the picture, you need to perform a physical inventory to tie everything into your datacenter. Every one of these sources will add value to your inventory and will bring you closer to a successful project. We now see that an inventory project is not a simple operation and must not be taken lightly. It is a fundamental part of the larger project: migration or construction of a datacenter.

The following are the important categories of assets that must be taken in consideration:

  • Electromechanical components
  • Servers
  • Network and SAN components
  • Cables
  • Software applications
  • Business units using the datacenter
  • All links that tie these components together.

Obviously, there is a whole set of sub-categories and/or attributes to these assets that should be taken into consideration (e.g. the number of watts used by a server, its weight, its number of U, its heat dissipation…). It is not always necessary to gather all this information in the first run, but you should include it in your process when you acquire new devices. It will then be easier on the second run of an inventory project.
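The per-asset attributes mentioned above can be captured in a simple record type. This is only an illustrative sketch; the exact field set is an example, not a prescribed schema:

```python
# Illustrative sketch of an asset record carrying the attributes discussed
# above (power draw, weight, rack units, links to other components).
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    category: str            # e.g. "server", "network", "SAN", "cable"
    location: str            # datacenter / cabinet / U position
    power_watts: int = 0     # nameplate or measured draw
    weight_kg: float = 0.0
    rack_units: int = 1      # height in U
    links: list = field(default_factory=list)  # ties to other components

# Hypothetical example entry:
srv = Asset("srv-042", "server", "DC1/cab12/U20",
            power_watts=350, weight_kg=18.5, rack_units=2)
print(srv.rack_units)  # 2
```

Starting with a minimal schema like this, and extending it as new devices arrive, matches the advice above: you do not need every attribute on the first run, but the structure should be ready to hold them.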

In conclusion, the success of a datacenter migration or construction project depends on the quality and accuracy of the information on your assets. The operation must be taken very seriously in order to collect the appropriate information for your customers, which will provide them with value-added services. They then become your allies by adopting the inventory process, which will guarantee an accurate inventory at all times.

In the next article, we will take some time to analyze the different functionalities you need to look for in the application you will use to manage your inventory.



Reducing Electrical Consumption in Existing and New Data Centres

Growing IT energy consumption is getting more and more attention from bill payers across all industries. At the political level, data centre efficiency has been projected to the fore as one of the five main focuses of the new Smart Economy. In addition, Sustainable Energy Ireland has established data centre and ICT energy efficiency working groups which are currently assisting both public and private sectors to tackle the problem of increased power usage by IT departments.

Alignment of Management Goals

In most organisations energy efficiency has become an important topic, but up to now the approach has tended to be company-wide with no particular focus on the IT department. This stems from a reluctance to dabble in the workings of the IT department, which may affect the performance and reliability of the service it offers. In simple terms, the goals of the department creating the IT power bill and those paying it are not aligned. In the IT world, energy efficiency will never trump reliability and performance; however, it has become the next most important factor when choosing a product. Manufacturers have heard this message and are now producing processors and servers which do more with less power and can operate at higher temperatures. This technology now offers a real opportunity to reduce the IT power bill.




Fig 1. Data Centre energy reduction methodology – Start at the core

Step 1: Reduce Core IT Power Usage

Energy use in a data centre may be broken down into core IT power usage and ancillary power usage. Core IT power usage is the electrical energy needed to keep the IT equipment running. Ancillary power usage is the energy needed to operate the cooling, lighting, monitoring, and so on; it is secondary to the core IT process but, as in the case of cooling, proportional to it. Hence if we reduce the core IT power usage we also reduce the ancillary power demand, which is the goal of our first step. Steps which reduce core IT power usage are listed below.

Opportunity                Description                               Potential energy saving
Virtualisation             Physical to virtual servers               50%
Hardware refresh           Use servers with a high SPECpower ratio   20%
Server power management    Power-down                                10%
Storage power management   Spin-down, de-duplication, etc.           10%

Table 1: Opportunities for reduction in IT power usage

Note: Sustainable Energy Ireland, working with Byrne Dixon Associates, has developed minimum criteria for defining efficient server and storage equipment.
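As a rough sanity check on Table 1, note that the savings compound multiplicatively on the remaining load rather than adding up. A quick sketch using the table's indicative percentages:

```python
# Rough sketch: the step-1 savings in Table 1 compound multiplicatively on
# the remaining IT load, not additively. The percentages are the table's
# indicative values; the 100 kW baseline is a made-up example.
def remaining_load(baseline_kw, savings):
    load = baseline_kw
    for s in savings:
        load *= (1 - s)  # each measure cuts a fraction of what remains
    return load

# Virtualisation (50%), hardware refresh (20%), server and storage
# power management (10% each) applied to a 100 kW IT load:
load = remaining_load(100.0, [0.50, 0.20, 0.10, 0.10])
print(round(load, 1))  # 32.4 -> roughly a 68% overall reduction
```

So even though the listed percentages sum to 90%, the realistic combined effect is closer to two-thirds, because each measure acts on an already-reduced load.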

Step 2: Control of Airflow in the Data Centre

Following a reduction in core IT power usage, further savings can be made by reducing the power used to cool the IT equipment. In many cases cooling units can be powered off or returned to idle mode. We can achieve these savings by altering the room layout so that our cooling equipment cools the IT equipment and not the IT room. By establishing hot and cold aisles and reducing bypass and recirculation air, we ensure that our cooling units deliver cold air to the IT equipment only and that the exhaust air returns to the cooling units as hot as possible, resulting in an improved Coefficient of Performance (COP). This is further enhanced by containing the hot/cold aisles using a proprietary or plastic-curtain containment system. Additional savings are achieved by reducing air loss through the floor or through the racks. The use of cable arms at the rear of racks increases the air resistance across the server, with a resultant increase in fan power; using Velcro cable ties instead still allows for ease of server movement in and out of the rack. Table 2 shows a list of measures aimed at controlling airflow in the data centre.

Table 2: Opportunities for control of airflow in the data centre

Opportunity                  How                                                       Potential energy saving
Create hot and cold aisles   Relocate racks and/or cooling units                       10%
Reduce cold air loss         Install blanking plates and air brushes                   5%
Contain hot/cold aisles      Install a proprietary containment system or a cheaper
                             plastic curtain system; utilise the ceiling void as a
                             return plenum for hot air
Remove server cable arms     Install Velcro cable ties to allow server movement        3%
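The COP improvement mentioned above can be illustrated with a back-of-envelope model: the electrical power a cooling unit draws is roughly the heat it removes divided by its COP, so a higher COP (from hotter return air) directly cuts the cooling bill. The COP values below are made-up examples showing the direction of the effect, not measured figures:

```python
# Back-of-envelope sketch: cooling electrical power is roughly the heat
# removed divided by the unit's Coefficient of Performance (COP).
# The COP values are hypothetical, chosen only to show the trend.
def cooling_power_kw(it_load_kw, cop):
    return it_load_kw / cop

# Same 100 kW of IT heat, before and after airflow segregation:
print(round(cooling_power_kw(100, 3.0), 1))  # 33.3 kW - mixed airflow, low COP
print(round(cooling_power_kw(100, 4.5), 1))  # 22.2 kW - segregated aisles, higher COP
```

In this toy example, raising the COP from 3.0 to 4.5 cuts the cooling power draw by about a third for the same IT heat load.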

Room layout – hot aisle/cold aisle

Fundamental to the efficient operation of an IT room is the segregation of the hot and cold airflows to and from the IT equipment. This segregation can be provided by ensuring the correct rack layout; additionally, for rooms with higher power density, containment systems ensure higher cooling performance.

The benefits of a hot aisle/cold aisle arrangement

  • Reduces  the risk of hot and cold air mixing
  • Ensures hotter return air temperatures to the AC units
  • Increases the efficiency of the AC units
  • Increases the cooling capacity of the AC units
  • Ensures a higher percentage of cold air gets to servers
  • Reduces the amount of cold air needed
  • Reduces the power requirement for cooling

Figure 2: Benefits of a Hot aisle/ cold aisle arrangement.


Figure 3: The importance of unhindered air supply and return.

For example, in this server room design we see a clear and unhindered airflow path on the left and a restricted airflow path on the right, which causes inefficiency through air mixing and hot air recirculation.

It is important to consider the location of the cooling units relative to the server intake when laying out a server room. An effort should be made to create an unhindered cold air supply path to the server intake and an unhindered hot air return path back to the cooling unit.

Step 3: Raise the Temperature Set-Point

The aim of the cooling system within the IT room is to ensure that the IT equipment is kept within the manufacturer's operating parameters and the recommended industry guidelines.

ASHRAE Technical Committee 9.9, which focuses specifically on data centre design, recently released its “2008 ASHRAE Environmental Guidelines for Data Centre Equipment”. This guideline expands the recommended operational bands from the earlier 2004 guidelines, in recognition of the proven ability of IT equipment to operate in higher temperature environments and the importance of power reduction:


2004 Version | 2008 Version
20°C to 25°C | 18°C to 27°C
40% RH to 55% RH | 5.5°C DP to 60% RH & 15°C DP

These new recommendations allow the cold supply temperature to be raised to much higher levels than before, which in turn allows for greater energy savings. The higher set points also increase the number of free cooling hours available to the outside condenser units, resulting in drastically reduced energy bills.

The aim of the cooling unit in the server room is to cool the IT equipment and not the IT room; therefore cooling unit operation should be based on temperature readings at the server intake. Room temperature should be measured at the front of the racks in the cold aisle.
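The set-point guidance above can be turned into a simple monitoring check. A minimal sketch, assuming readings are taken at the front of the racks in the cold aisle; the rack names and readings are illustrative, and only the 18°C to 27°C band comes from the ASHRAE 2008 guideline:

```python
# Hypothetical sketch: flag cold-aisle intake readings that fall outside the
# ASHRAE 2008 recommended temperature band of 18 degC to 27 degC.
# Rack names and readings below are illustrative, not real measurements.

ASHRAE_2008_MIN_C = 18.0
ASHRAE_2008_MAX_C = 27.0

def out_of_band(readings_c):
    """Return the (rack, temperature) pairs outside the recommended band."""
    return [(rack, t) for rack, t in readings_c.items()
            if not ASHRAE_2008_MIN_C <= t <= ASHRAE_2008_MAX_C]

# Illustrative server-intake readings taken at the front of the racks.
readings = {"rack-01": 21.5, "rack-02": 26.8, "rack-03": 28.4, "rack-04": 17.2}
for rack, temp in out_of_band(readings):
    print(f"{rack}: {temp} degC is outside the recommended band")
```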

Step 4: Specification of New Equipment

Sustainable Energy Ireland, working with Byrne Dixon Associates, has set out energy efficiency criteria for all classes of power and cooling equipment within the data centre. These criteria may be reviewed under the heading for Information and Communications Technology (ICT) at http://www.sei.ie/Your_Business/Accelerated_Capital_Allowance/ACA_Categories_and_Criteria

It is now possible to stipulate as part of a tender that proposed equipment should be included in the list of ACA approved equipment. By purchasing equipment on the ACA approved list, private sector organisations can offset the full purchase price against tax in the year of purchase, saving the company 12.5% of the price.
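The 12.5% figure follows directly from offsetting the full purchase price against tax at Ireland's 12.5% corporation tax rate. A worked example (the purchase price is illustrative):

```python
# Hypothetical worked example: under the ACA the full purchase price is
# offset against tax in the year of purchase, so at the 12.5% corporation
# tax rate the cash saving equals 12.5% of the price.

CORPORATION_TAX_RATE = 0.125

def aca_tax_saving(purchase_price):
    """Cash tax saving in the year of purchase for ACA-listed equipment."""
    return purchase_price * CORPORATION_TAX_RATE

price = 40_000.0  # illustrative cost of an ACA-listed cooling unit
print(f"Tax saving in year of purchase: {aca_tax_saving(price):,.2f}")
```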

Many forms of power and cooling equipment come with energy efficient options that have short payback periods; in many cases the efficiency savings can fund the purchase of the new equipment. Previous generations of UPS operated most efficiently at full load, although most of their operating life was spent below 50% capacity. Recent developments in modular and transformerless technology allow UPS units to be expanded in line with increased demand, ensuring consistently high efficiency. All-on, all-off cooling equipment has hampered cooling efficiency with its inability to track the cyclical nature of IT loads. The development of electronically commutated (EC) motors and digital scroll compressors allows the cooling output to track demand more closely while reducing power consumption.

Equipment type | Payback period
Modular capacity UPS | 12 months
EC motors and digital scroll compressors | 3 months

Step 5: Rethinking Critical and Essential Systems

Many companies are reviewing their classification of essential and critical systems within the IT process with the aim of reducing the UPS load and the associated power losses. Not all IT processes within a data centre are critical; those that are not need not be on UPS load and can instead run on generator power. Additionally, how much redundancy is necessary? For example, can an N+1 power system reduce to N during the 1% of the year when it hits peak load? By reviewing and implementing a revised UPS design, organisations can vastly reduce their capex and opex costs.

Step 6: Utilise Free Cooling

Free cooling can be used to reduce the cooling load of a data centre through air-side or water-side economising. Air-side economising offers a greater ROI, as its temperature bands are wider than those of water-side economising, but it has its drawbacks. Fresh air at very low or high temperatures, or at very low or high humidity levels, requires treatment before it can be allowed into an IT environment, and this requires centralised plant. In new large-scale data centres this investment is justified, and the services space to house the plant equipment and the large air delivery ductwork can easily be designed into a new building layout. However, delivering large volumes of fresh air throughout the data centre requires large fans with large power loads, which erodes the energy saving opportunity. The business case tends to work best for lower power density data centres. Heat wheels can overcome the problem of humidity ingress but are more difficult to incorporate into traditional data centre layouts.

For small server rooms, office air can be diverted into the cold aisle, and hot return air can be reused to preheat ventilation air or dumped to atmosphere during the cooling season.

In existing data centres retrofitting ductwork is seldom an option and in such cases waterside economising is favoured through the use of cooling towers or dry coolers. Free-cooling chillers are easily retrofitted and can be specified with heat exchangers which will produce an amount of hot water for domestic use in office environments.
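To gauge whether economising is worthwhile, one can count the hours in a year when the outdoor temperature sits far enough below the supply set-point. A hypothetical sketch; the 5°C approach margin and the generated temperature data are assumptions for illustration, not figures from the article:

```python
# Hypothetical sketch: estimate annual free-cooling hours from hourly outdoor
# temperatures. Assumes water-side economising is viable whenever the outdoor
# temperature sits an approach margin below the cold supply set-point; the
# set-point reflects Step 3's raised band, the approach value is assumed.
import random

SUPPLY_SETPOINT_C = 18.0   # raised cold-aisle supply temperature (Step 3)
APPROACH_C = 5.0           # assumed dry-cooler approach temperature

def free_cooling_hours(hourly_outdoor_temps_c):
    """Count hours in which the economiser can carry the cooling load."""
    threshold = SUPPLY_SETPOINT_C - APPROACH_C
    return sum(1 for t in hourly_outdoor_temps_c if t <= threshold)

# Illustrative year of hourly temperatures for a temperate climate.
random.seed(42)
year = [random.gauss(10.0, 6.0) for _ in range(8760)]
hours = free_cooling_hours(year)
print(f"Estimated free-cooling hours: {hours} of 8760 "
      f"({100 * hours / 8760:.0f}% of the year)")
```

Raising the set-point (Step 3) raises the threshold, which is exactly why the two steps reinforce each other.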

Step 7: Use of CFD Modelling

Computational fluid dynamics (CFD) is used to assess the suitability of design layouts and cooling technologies to best suit the client's needs. It allows a virtual 3D room to be created in which different power-per-rack loads and different cooling designs can be tested. Performance of the cooling system can be compared against floor depth, hot or cold containment, above- or below-floor cable management, in-row versus in-rack cooling, and many other elements. This process offers a no-risk method of testing room layouts and cooling solutions before any capital investment is made.


The first step in any data centre energy reduction exercise is to reduce the core IT process load; this in turn reduces the cooling load, which can be reduced further by applying the recommended airflow management techniques within the data centre space. These and the other measures described above are summarised in the following checklist:

1              Virtualisation

2              Apply Server and storage management

3              Refresh IT hardware

4              Remodel room to maximise savings to cooling load

5              Establish hot aisle/ cold aisle arrangement

6              Install hot/cold aisle containment

7              Turn off cooling units where possible

8              Raise server intake temperature

9              Remove cable arms

10           Install temperature monitoring in cold aisle

11           Utilise ceiling void as a return plenum

12           Install blanking panels and air brushes

13           Retrofit variable speed EC fans to CRAC units

14           Specify equipment from the ACA equipment list

15           Utilise free cooling

16           Reuse the exhaust heat

17           Carry out a CFD analysis of possible options to ensure performance

Byrne Dixon Associates is a data centre consultancy based in Dublin, specialising in the design and optimisation of data centres. Byrne Dixon Associates has recently been appointed to the Sustainable Energy Ireland working group on data centre efficiency. The firm has completed projects on over 70 data centres in Ireland and abroad, runs data centre efficiency workshops for clients and consultants in Dublin and Cork, and offers CFD modelling at these workshops. For details of the next data centre workshop contact Vincent Byrne, by email at vincent@byrnedixon.com or by mobile at 086 8196868.

Backup Problems That Companies Need To Overcome In Order To Capitalize On Big Data

The “Mainframe Age” saw the birth of databases and business process automation. Then, came the “PC Age” which placed the power of computing on the desktops of every worker. Next came the “Internet Age”, which created the virtual world in which most business takes place today. Now, we’re entering the age of “Big Data”.

A major shift is taking place right now in data centers across the world, and it’s spawning a new era in the history of business computing. Until recently, business data was mainly used to support business decisions. Now, data is becoming the business itself.

A combination of cheap storage and extremely powerful processing and analysis capability is spawning new value-creating opportunities which had previously been the domain of science fiction.
Of course, any business that wants to leverage the power of big data will also have to protect this data from disasters, accidents and mechanical failures. This means that certain backup challenges will need to be overcome in order to support the activities associated with Big Data computing.

Manual Labour Costs

Hardware costs are dropping exponentially while complexity rises just as fast. As this happens, automation is critical to prevent labour costs from growing along with it.

Data Transfer Speeds

Transferring data to off-site storage using tape is extremely slow for both backup and recovery. Big data systems will require that data be backed up at network speeds. In cases where data is growing at a faster rate than the public Internet infrastructure can support, companies will require dedicated connections with their remote backup facilities or service providers.
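To see why link speed dominates, consider how long a given volume takes to move. A minimal sketch (the 10 TB volume and the 80% link utilisation are illustrative assumptions):

```python
# Hypothetical sketch: backup transfer time for a given data volume and link,
# illustrating why big data pushes companies toward dedicated connections.
# The 80% utilisation factor is an assumed real-world throughput ceiling.

def transfer_hours(data_gb, link_mbps, utilisation=0.8):
    """Hours to move data_gb over a link_mbps connection at the given utilisation."""
    bits = data_gb * 8 * 1000**3                      # decimal gigabytes to bits
    seconds = bits / (link_mbps * 1000**2 * utilisation)
    return seconds / 3600

for link in (100, 1000, 10000):  # 100 Mb/s, 1 Gb/s, 10 Gb/s
    print(f"10 TB over {link} Mb/s: {transfer_hours(10_000, link):.1f} h")
```

At 100 Mb/s a 10 TB backup set takes well over a week of continuous transfer, which is why dedicated high-speed connections to the backup facility become a requirement rather than a luxury.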

Recovery Speeds

As the strategic importance of corporate data increases, so will the potential risks and costs associated with unplanned downtime. Big data applications will require data backup strategies which are optimized for maximum recovery speeds. This may mean combining both local and remote backups, and it could also mean maintaining a mirrored emergency failover facility at a remote site.

Recovery Point Objectives

Because data loses value with age, the most recent data is also the most valuable. This is especially true for big data apps which work with real-time analysis. Because of this, backup plans will need to be designed to minimize data loss in the event of a disaster.
It’s no longer acceptable to lose 24 hours of data because of a daily backup process. Higher frequencies will become the norm.
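The relationship between backup frequency and worst-case loss is simple arithmetic:

```python
# Sketch: worst-case data loss (the recovery point) as a function of
# backup frequency, assuming a failure just before the next backup runs.

def worst_case_loss_hours(backups_per_day):
    """Maximum hours of data lost if a failure hits just before the next backup."""
    return 24.0 / backups_per_day

for per_day in (1, 4, 24, 96):
    print(f"{per_day} backups/day -> up to "
          f"{worst_case_loss_hours(per_day):.2f} h of data lost")
```

Moving from one daily backup to one every 15 minutes shrinks the worst-case loss from a full day to a quarter of an hour.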


Data Archiving

In order to maximize server performance and adhere to regulatory requirements, inactive, older or low-value data will need to be stored separately from live working data. Data archiving and electronic discovery will become much more important in the big data age.

Backup Windows

Big data takes a long time to back up. It’s simply not practical to take a system off-line for 8 hours for daily backups. Tomorrow’s big data apps will need to perform more frequent backups, faster, and with less interruption.

Data Security

It’s really pretty shocking that, even today, most businesses don’t encrypt their backups. Backups should be encrypted, especially when stored off-site. Losing backup media is a common problem and can provide hackers and identity thieves with access to sensitive and private information. And big data will only worsen this threat.

Compression and Deduplication

No matter how cheap storage is today, it will always be cheaper tomorrow. That’s why it’s important to budget accurately and maximize storage utilization to minimize waste. Technologies like storage virtualization, block-level incremental backups, compression, and deduplication will grow in importance thanks to big data. But there will also be a balancing need for fast native storage, because of the processing these activities, as well as encryption, require.
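Deduplication itself is conceptually simple: split the data into blocks, keep one copy of each unique block, and reference repeats by a cryptographic digest. A minimal sketch of fixed-size block-level deduplication; the 4 KB block size and the sample data are illustrative:

```python
# Hypothetical sketch of block-level deduplication: split data into fixed-size
# blocks, store each unique block once (keyed by its SHA-256 digest), and keep
# only digest references for repeats. Block size and data are illustrative.
import hashlib

BLOCK_SIZE = 4096

def deduplicate(data: bytes):
    """Return (unique block store, list of digests reconstructing the data)."""
    store, refs = {}, []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # store each unique block only once
        refs.append(digest)
    return store, refs

# Highly repetitive data dedupes well: 100 identical blocks stored once.
data = b"\x00" * BLOCK_SIZE * 100
store, refs = deduplicate(data)
print(f"{len(refs)} blocks referenced, {len(store)} stored")
```

Real products use variable-size chunking and combine this with compression, but the trade-off the paragraph describes is visible even here: every block must be hashed, which is exactly the processing overhead that creates the competing need for fast native storage.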

These are just a few of the many new challenges that organizations will need to overcome as they begin to leverage the opportunities presented by big data computing. That’s why many organizations are choosing to outsource their data backup and online storage requirements and partner with industry experts.

About The Author: Storagepipe Solutions has been a leader in data center backups for over 10 years, with expertise in backing up large volumes of complex data center data. Leading the industry in disaster recovery, regulatory compliance, and business continuity backup solutions in anticipation of rapidly changing data management trends, it is more than just an online backup software company. Storagepipe offers a comprehensive suite of hosted disaster recovery solutions that give organizations greater control over their retention policies while allowing them to overcome complex backup management challenges through cost-effective automation.



Are you in control of your data center?

I have been in the IT business for more than 30 years, from managing large IT infrastructure migrations to moving data centers to helping design them and manage their construction. During all these years I have seen a lot of surprising and amazing infrastructures, to say the least. I have come across many data center projects that made me wonder why they were started in the first place. It isn’t that the needs weren’t there, but rather that the initial hypotheses were wrong or the information was incomplete. The result: the needs were poorly explained to the engineers and the planning was deficient.

In the following, I propose a couple of rules to follow regarding data centres.

Rule #1: Make sure people understand each other

When a French-speaking person speaks to an English-speaking person, at least one of them must speak and understand the other person’s language, or they must hire a translator. The same principle applies to a data center project. The engineer must be able to understand the IT specialist and their needs. Moreover, the IT specialist must be aware of where the technology will be 4 to 5 years down the road, and whether, at the end of the construction project, the data center will really meet the requirements. A data center isn’t built for today’s needs! A data center must be able to last at least 15 years and must be able to evolve over those years.
For example, Intel will come out with new technology next year that will be adopted by VMware and could change multiple factors in your data center. Is the IT specialist aware of this? The engineering firm is rarely tracking those changes in the IT industry. It is the responsibility of the requester to expose the needs correctly and clearly.
How many times have you heard the following from clients? "They designed it wrong" or "they didn’t get me the proper specifications!"

So you need:
1. A multi-year vision
2. To clearly define your needs
3. To make sure the other party understands them.

Rule #2: Start small before redesigning your data center

Before you even start with a vision, make sure you have a clear picture of your actual assets. Adopt a phased approach that will get you where you want to go in your data center upgrade project.

• Clean up your environment: make sure you aren’t carrying a dead horse, meaning equipment that is still running but not processing anything because it was decommissioned a few years ago!
• Try to consolidate using virtualization. It is less expensive than building a new data center!
• One last piece of advice before taking your pen to lay out your new data center: use manufacturer accessories that can help you remove hot spots, such as directional airflow tiles and rack air-removal doors with built-in chimneys that exhaust hot air into the plenum! Containment is also sometimes a solution for your air conditioning issues.

Some of these ideas may extend the life of your data center by a few years! I know, it’s not sexy, but it will buy you enough time to plan for the best solution. Over the next couple of months, I am planning to write on many topics that I have come across in my multiple data center mandates. I believe that had my customers considered some of these elements, many headaches would have been avoided.

So, what do you do first? Get in control of your data center: understand your assets! Understand the relationships between the elements in it, as well as the ownership of each of those elements. Some of you might be using a spreadsheet while others use an enterprise solution. I have not come across a single company that was able to efficiently manage a data center of 20 cabinets or more using a spreadsheet as the only tool. How many attributes do you have to input in this spreadsheet if you have 20 cabinets? More than a thousand for sure, and that isn’t counting the interrelations between them! In practice, only one person can maintain such a spreadsheet, so the information is of value to only one person!
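The "more than a thousand" claim is easy to sanity-check. A back-of-the-envelope sketch, where the devices-per-cabinet and attributes-per-device counts are assumed figures for illustration, not numbers from the article:

```python
# Hypothetical back-of-the-envelope check of the attribute count:
# 20 cabinets, an assumed 10 devices per cabinet, and an assumed 6 tracked
# attributes per device (name, model, U position, power feed, owner, port).

cabinets = 20
devices_per_cabinet = 10   # assumption
attributes_per_device = 6  # assumption

total_attributes = cabinets * devices_per_cabinet * attributes_per_device
print(f"Roughly {total_attributes} attributes to keep current")
```

Even with these conservative assumptions the count clears a thousand, before a single interrelation between elements is recorded.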

If you need to expand or design a new data center, what kind of information will you need to provide the engineering firm or the IT specialist so you can have a successful project? Is it in this spreadsheet or will you have to collect more data? The next article will introduce ways to represent your data center and really help you manage it.