Interview: Vincent Byrne, Data Center Design and Efficiency Specialist

Based in Dublin, Vincent Byrne is a data center design, efficiency and CFD modelling specialist who has worked on projects across Europe, the Middle East, the Caribbean and Asia. He specializes in the design of fresh-air-cooled data centers, using CFD software to improve internal and external airflow distribution.

Over the past 15 years, Byrne has worked as a data center and engineering design consultant, particularly in the design and optimization of highly efficient data centers. His company, Vincent Byrne Consulting, where he is managing director, offers full design and management, auditing and certification, and CFD modelling of highly efficient data centers.

In this Q&A interview, Byrne explains some of the biggest flaws in data center design today, the appeal of In Row or In Rack cooling, and even counts down his top cooling accessories and data center measuring tools.

Data Center Talk: What is the biggest flaw in today’s data center design?

VB: The gap between what customers need and what they receive is, for me, the biggest flaw in current data center design. In many instances rooms are short on space or power, or are over- or under-designed. A client's understanding of their need is what determines the scope of any design and is the engine that drives the project. We still find that many clients are ill-informed about the nature of their current data center load and find it difficult to predict their future needs. They may know how much power they are drawing but do not understand the nature of its usage, such as the power density. This, of course, is fundamental when choosing a future-proofed solution. The direction of design is moving further away from the single-Tier data center and towards a space which is split into zones of differing reliability, from Tier 1 up to Tier 4, to match the differing risk levels of the processes they host. These differing zones of reliability within the one facility result in capex and opex savings without affecting the uptime of the core processes.

Both client and consultant play a vital role in this process, and those who invest in understanding their needs achieve the lowest PUEs at the uptime they require. A successful project starts with an internal analysis of usage and needs, a process which commences long before the appointment of an outside consultant. This information can then be used to mold a best-fit solution to match the client's projected growth pattern.

DCT: Considering the life-cycle of power-guzzling cooling equipment and the drive for data centers to “go green”, what options, such as newer equipment or air-side economizers, are better for cooling? How do these options compare in terms of cutting costs?

VB: The whole energy reduction process within an existing data center involves multiple stages, which may be summarised as follows:

1. Migrating as much of the IT process as possible to the cloud.
2. Virtualising what remains.
3. Implementing server management on the remaining servers.
4. Implementing good air management in the room, including containment.
5. Raising supply air temperatures to increase the CRAC units' COP.
6. Increasing the chilled water temperatures to maximise the number of free cooling hours.
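
A quick illustration of why stage 6 matters: the warmer the chilled water, the more hours of the year outdoor air alone can do the job. The sketch below is minimal and uses invented hourly temperatures and an assumed heat-exchanger approach, not data from any real site.

```python
# Minimal sketch (assumed data): counting free-cooling hours for different
# chilled water setpoints. The temperature series and 3 degC approach are
# illustrative only.

def free_cooling_hours(outdoor_temps_c, setpoint_c, approach_c=3.0):
    """Count hours in which outdoor air is cold enough to cool the chilled
    water without running the chillers."""
    limit = setpoint_c - approach_c
    return sum(1 for t in outdoor_temps_c if t <= limit)

# A fake year of hourly outdoor temperatures (degC).
hourly_temps = [5, 8, 12, 15, 18, 21, 24, 16, 11, 7] * 876

for setpoint in (10, 14, 18):
    hours = free_cooling_hours(hourly_temps, setpoint)
    print(f"Setpoint {setpoint} degC -> {hours} of {len(hourly_temps)} hours on free cooling")
```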

These are all tried and proven methods of reducing energy consumption in the data center. Any technology, hardware or software, which promotes this process is good in my eyes.

But firstly, I am a great supporter of fresh air cooling, or air-side economising, not only because of my location in northern Europe but because I fundamentally believe it is suitable for use across many geographical regions. This, in my opinion, is the first cooling solution which should be considered. But let's look at data center efficiency in general.

On the cooling side, the use of EC motors and reverse-impeller plug fans is a must for all new CRAC units. Obviously the use of containment, either soft or hard, is a must on all of our projects and has proved to give the level of returns which clients like. On the plant side, dry coolers have a good payback period and high-efficiency chillers are a must.

On the electrical side, high-efficiency transformers are a good investment, as they are relatively cheap and deliver savings across the whole load profile. For those who need to keep a traditional UPS format, transformerless UPSs are the most efficient answer, and if you can risk going offline then the efficiency rewards are there to be made. DRUPS is another technology which can offer a higher level of efficiency, but as it is perceived to be less mature it is not yet accepted by many of the large users. Such efficiency gains should also be counterbalanced against the increased maintenance costs.

I also like the smart PDU units now on the market, which can shut down circuits and even power down servers and reboot them with special scripts.
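
As a purely hypothetical illustration of that last point, a smart PDU script might look something like the sketch below. The host name, credentials and REST endpoint are invented for illustration; a real unit exposes its own vendor-specific interface, often SNMP or a documented REST API.

```python
# Hypothetical sketch only: power-cycling one PDU outlet to hard-reboot the
# server plugged into it. Host, credentials, URL path and JSON body are all
# invented for illustration.
import time
import requests

PDU_HOST = "https://pdu-rack-07.example.internal"   # hypothetical PDU address
AUTH = ("admin", "change-me")                        # hypothetical credentials

def power_cycle_outlet(outlet: int, pause_s: int = 10) -> None:
    """Switch one outlet off, wait, then switch it back on."""
    for state in ("off", "on"):
        resp = requests.post(
            f"{PDU_HOST}/api/outlets/{outlet}/power",
            json={"state": state},
            auth=AUTH,
            timeout=5,
        )
        resp.raise_for_status()
        if state == "off":
            time.sleep(pause_s)

if __name__ == "__main__":
    power_cycle_outlet(12)   # reboot whatever is plugged into outlet 12
```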

In the software realm there are some interesting server management packages, and one new development is a predictive cost-modelling package from Romonet called Prognose.

DCT: What techniques can be used to optimize cooling with minimal cooling equipment?

VB: Great question, and let me answer by giving you “Vincent’s Top Four” cooling accessories and tools for efficient cooling. These are low-cost tools or items of infrastructure which require little investment but can be used to effect significant energy reduction.

  • Soft containment (plastic strip curtains which contain the hot or cold aisles)
  • Blanking plates (used to prevent air recirculation through the racks)
  • CRAC ceiling extensions (used to extend the CRAC up to the ceiling)
  • Under-floor partitioning (used to direct airflow under the floor)

Let’s also have a look at “Vincent’s Top Five” data centre measurement tools.

  • An anemometer/thermometer (used for measuring air temperature at different points throughout the room, and for measuring the airflow through the CRAC units so the amount of bypass air can be deduced; see the sketch after this list)
  • A laser thermometer (used to measure rack temperatures)
  • An Anord flow hood (used to measure the amount of airflow through the floor tiles)
  • A Megger multicore clamp meter (used to clamp the cables feeding the rack PDUs without splitting the cables)
  • A thermal camera (used to visualise rack surface temperatures)
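
The bypass-air estimate mentioned for the anemometer is simple arithmetic: total CRAC supply airflow minus the airflow actually delivered through the floor tiles. The figures below are assumed for illustration, not measurements from the interview.

```python
# Minimal sketch (assumed figures): estimating bypass air from anemometer and
# flow hood readings.

crac_airflows_m3h = [12000, 11500, 11800]   # per-CRAC supply airflow (anemometer)
tile_airflows_m3h = [450] * 60               # per-tile delivered airflow (flow hood)

total_supply = sum(crac_airflows_m3h)
delivered = sum(tile_airflows_m3h)
bypass = total_supply - delivered            # air that never reaches the cold aisle

print(f"Supply: {total_supply} m3/h, delivered through tiles: {delivered} m3/h")
print(f"Bypass: {bypass} m3/h ({bypass / total_supply:.0%} of supply)")
```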

DCT: How feasible is it to have an onsite substation?

VB: In many regions you don’t have a choice of whether you get a substation or not, as it is determined by the size of your contracted load. I will take one region as an example. In this region, if you require up to 180 kW the utility company provides a three-phase supply to you; from 180 kW to 500 kW you provide the space for a utility-owned substation; and above 500 kW you can install your own substation and benefit from the reduced cost of electricity purchased at 10 kV.
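
Those thresholds are easy to express as a simple rule, as in the sketch below. The figures are the ones Byrne quotes for his example region; the exact bands and tariff rules will differ by utility and country.

```python
# Illustrative sketch of the supply bands quoted for one example region.

def supply_category(contracted_load_kw: float) -> str:
    if contracted_load_kw <= 180:
        return "utility provides a three-phase supply"
    if contracted_load_kw <= 500:
        return "customer provides space for a utility-owned substation"
    return "customer installs own substation and buys electricity at 10 kV"

for load in (120, 350, 800):
    print(f"{load} kW -> {supply_category(load)}")
```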

In many cases this question is answered for you by the fact that the building you are occupying already has a substation.

If this substation is already close to maxed out, then you will need to apply for an additional substation and will have to carry the associated cost. The elements of a high-tension installation, i.e. 10 kV and above, consist of the transformer and transformer room, the HT switchgear and its room, and an HT meter room (the meter is normally supplied by the utility company). In addition there will be a large amount of expensive 10 kV cabling to be installed and a financial contribution to the utility company. The financial benefit is that the cost of electricity will be much reduced.

DCT: Data centers are well equipped with cooling devices. Can bypass cooling be used to lay superconducting cables inside a data center for more efficient power distribution?

VB: Superconducting cables, although they have been around for some time, are really coming into their own now, but only at network distribution level. So unless there is a lot of 10 kV, 20 kV or 33 kV cabling throughout the site, this is not viable to consider. It should also be noted that the temperatures required for superconduction are in the region of -200 degC and require specialist cooling systems.

DCT: For effective air-flow management, is aisle cooling a better option or is rack cooling better? Which is the best long-term cost-cutting option?

VB: Cooling can be broken down into three levels based on rack power levels and may be set out as follows. “In Room”, the standard downflow CRAC design, accommodates up to around 6 kW for a well-designed room. “In Row” cooling accommodates between approximately 6 and 24 kW. “In Rack” cooling can accommodate from 12 kW up to 44 kW without difficulty.
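
Those bands overlap around the 12–24 kW range, where either In Row or In Rack can serve the load. A rough sketch of the mapping, using only the figures quoted above:

```python
# Rough sketch of the per-rack power bands described above; the 12-24 kW
# overlap is intentional, since both In Row and In Rack can serve it.

def cooling_options(rack_kw: float) -> list[str]:
    options = []
    if rack_kw <= 6:
        options.append("In Room (standard downflow CRAC)")
    if 6 < rack_kw <= 24:
        options.append("In Row")
    if 12 <= rack_kw <= 44:
        options.append("In Rack")
    return options or ["above ~44 kW: specialist solution needed"]

for kw in (4, 10, 18, 30, 50):
    print(f"{kw} kW per rack -> {cooling_options(kw)}")
```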

So for higher loads, why use In Row or In Rack? With In Row cooling, the cooling capacity is spread across the length of the row; this allows the user to vary the load across the racks and to draw on the cooling capacity of the complete row. It also allows the user to put their high-density IT equipment at any location within the row, knowing that it will be able to draw cooling from a larger reservoir. With In Rack cooling, the cooling resource cannot be shared between racks, meaning that it is susceptible to being isolated or underutilised. In Rack cooling suits very constant, well-defined high loads. There are two distinct types of In Rack solution, open and closed. An example of an open solution is one which draws its air from the open room and then cools it at the back of the rack before exhausting it to the room. An example of a closed solution is the Rittal LCP, in which the air is totally enclosed. If open solutions are to share space with a traditional CRAC cooling system, they should be located with caution so as not to be positioned along the return path of the hot air.

For most clients the load profile across their racks varies greatly, hence I would recommend the use of “In Row” cooling, so that the cooling resource achieves more efficient usage.

For more about Vincent Byrne and data center design, please visit Vincent Byrne Consulting.

David Hamilton

David is a Toronto-based writer who’s particularly interested in technology, law and government. He travelled to more than a dozen cities and posted thousands of articles on Internet technology as a staff writer for The Web Host Industry Review. He has also written for Canadian newspaper The National Post, and technology blog TechVibes. In his spare time, he enjoys reading, photography, cycling and music. He is currently covering news updates, interviews, and roundups of industry events for datacentertalk.com.