
10-15-2004, 04:30 AM
whcdavid
Administrator

Join Date: Mar 2004
Posts: 901

Data Center Cooling Heats Up

By Salvatore Salamone
Senior IT Editor


As life science companies increasingly rely on high-density clusters of high-performance computers to conduct in silico research, IT managers will encounter a new challenge: How do you cool this equipment?


Last week I visited Wright Line, a company that designs and builds what it calls technical environment solutions for offices and data centers, and got quite an education on cooling challenges in the modern data center.



New high-performance gear like dual Pentium or Itanium processor servers, high-capacity storage equipment, and communication switches, crammed into data center equipment racks, draws a lot of electrical power and thus produces a lot of heat.



For example, a fully loaded traditional data rack can have an electrical load of between 3,500 and 4,000 Watts. “That’s the equivalent of about sixty 65 W light bulbs,” says Kevin Macomber, marketing manager at Wright Line. Considering the small volume of an equipment rack, dealing with this amount of heat in an enclosure poses a formidable challenge. And Macomber notes that it is not uncommon to see enclosures with between 6,000 and 8,000 Watts these days.
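For a rough sense of scale, here is a quick back-of-the-envelope sketch in Python of what those wattages mean in light-bulb equivalents and in heat the cooling system has to remove. The 3.412 BTU/hr-per-watt conversion is the standard electrical-to-thermal factor, added here for illustration; it is not from the article.

[code]
# Back-of-the-envelope heat-load arithmetic for a densely packed rack.
# Rack wattages are the figures quoted above; the 3.412 BTU/hr per watt
# conversion is the standard electrical-to-thermal factor (an assumption
# added here, not taken from the article).

BTU_PER_HR_PER_WATT = 3.412
BULB_WATTS = 65  # the 65 W bulbs Macomber uses for comparison

for rack_watts in (3500, 4000, 6000, 8000):
    bulbs = rack_watts / BULB_WATTS
    btu_per_hr = rack_watts * BTU_PER_HR_PER_WATT
    print(f"{rack_watts:>5} W rack ~= {bulbs:4.0f} x 65 W bulbs, "
          f"{btu_per_hr:>6.0f} BTU/hr of heat to remove")
[/code]

At 4,000 W that works out to roughly sixty-one bulbs, which lines up with Macomber's comparison.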

For years, IT managers have used some common techniques to cool their data centers. For instance, most major data centers have a raised floor through which electrical and networking cables run, providing a means to distribute cool air throughout the data center.



In fact, in most cases, a data center is designed with the flow of air in mind. Equipment racks are typically set up in alternating rows, face-to-face or back-to-back. Since hot air is commonly vented out the back of an equipment rack, this alternating configuration creates cool and hot rows within the data center. Vents to draw out hot air are selectively placed over hot spots, and cool air can be directed toward particularly hot racks by replacing solid floor tiles with perforated floor panels.
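To make the alternation concrete, here is a small illustrative sketch (my own, not Wright Line's or the article's) that labels the aisles produced when rack rows are placed face-to-face and back-to-back: an aisle between two rack fronts is a cold aisle fed by perforated tiles, while an aisle between two rack backs collects hot exhaust for the return vents.

[code]
# Illustrative sketch of hot/cold aisle labeling, assuming the first row of
# racks faces the first aisle and orientation alternates row by row.

def aisle_labels(num_rows):
    """Label each aisle between num_rows rows of racks as cold or hot."""
    labels = []
    for aisle in range(num_rows - 1):
        # Even-numbered aisles fall between two rack fronts (face-to-face);
        # odd-numbered aisles fall between two rack backs (back-to-back).
        if aisle % 2 == 0:
            labels.append("cold aisle (perforated floor tiles)")
        else:
            labels.append("hot aisle (overhead return vents)")
    return labels

for i, label in enumerate(aisle_labels(5)):
    print(f"aisle {i + 1}: {label}")
[/code]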



This design has been the norm for many years, but Macomber contends this approach alone is inadequate for today’s computing equipment. “Normal methods of safeguarding data no longer work,” says Macomber.


Macomber says that IT managers have to consider not only the airflow within the data center, but also the thermodynamics within the rack enclosures themselves, to keep equipment running properly.


Tight Fit = Hard to Cool
Data center real estate is usually so expensive that most companies try to pack as much equipment into as small a space as possible. Equipment vendors have obliged by offering incredibly powerful systems — servers and storage gear, for example — that fit into one, two, or three slots in an equipment rack. As a result, equipment racks in data centers are densely packed with increasingly powerful equipment — all of which is venting more and more hot air.


Racks used to simply have fans on top to draw out the hot air, but the quantity of heat, combined with equipment density that leaves virtually no space between devices, makes it difficult to draw enough air out of the rack. The resulting heat buildup within the rack can lead to equipment problems. Indeed, some experts contend that heat buildup in the upper portion of equipment racks is causing a higher than normal failure rate for that equipment.


How serious is the problem? A 70 degree Fahrenheit operating temperature is considered the norm in a data center rack. But the Uptime Institute, an organization that examines data center downtime, says it has measured temperatures above 100 degrees Fahrenheit in some densely packed racks. (Uptime Institute members include equipment vendors, engineers, facilities managers, and IT managers.) A rule of thumb is that long-term electronics reliability is reduced by 50 percent for every increase of 18 degrees above 70 degrees.
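Taken at face value, that rule of thumb is easy to turn into numbers. The short sketch below simply restates it as a halving formula; the formula and the sample temperatures are my own illustration, not an Uptime Institute calculation.

[code]
# Rule of thumb quoted above: long-term reliability roughly halves for every
# 18 degrees F above a 70 degree F baseline. This function just restates
# that rule; it is an illustration, not an Uptime Institute formula.

BASELINE_F = 70.0
HALVING_STEP_F = 18.0

def relative_reliability(temp_f):
    """Reliability relative to operating at the 70 F baseline."""
    excess = max(0.0, temp_f - BASELINE_F)
    return 0.5 ** (excess / HALVING_STEP_F)

for temp in (70, 88, 100, 106):
    print(f"{temp:>3} F: ~{relative_reliability(temp):.0%} of baseline reliability")
[/code]

By that measure, the 100-degree racks the Uptime Institute has measured would be running at roughly a third of their expected long-term reliability.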


Wright Line is taking direct aim at this issue of heat buildup within a rack. For instance, racks can be designed with modular fans on selected portions of the rack’s rear. And some of Wright Line’s enclosures use baffles inside the door to get more air directly to the top of the rack, where heat tends to build up.


The company has also looked at other issues, such as how cables within a rack impede airflow. It’s quite common for a rack of servers, storage devices, or communication gear to have dual power cords for each unit and up to several dozen networking cables. To prevent the cables from blocking airflow, the company has designed cable management systems that try to keep the air paths open. It also offers customized cables, shorter and thinner than standard ones, that minimize the area blocking airflow.


Wright Line isn't alone in focusing on this issue. In the May issue of Bio-IT World, I’ll be taking a more in-depth look at data center cooling, including some discussion of a new Hewlett-Packard service that helps IT managers better assess their data center cooling situation.



If you have any questions or concerns about data center cooling, drop me a line at Salvatore_Salamone@bio-itworld.com.

Thanks
david.K
__________________
WebHostingChat (Web Hosting Forum)
DatacenterSearch (Find your Datacenter)
YOU FAIL ONLY WHEN YOU FAIL TO TRY