Cooling System Design Goals
Maintaining a suitable environment for information technology equipment is arguably the most important problem facing data center and computer room managers today. Dramatic and unpredictable growth in the critical load has placed a heavy burden on the cooling infrastructure of these facilities, making intelligent, efficient design crucial to keeping the data center continuously available.
To develop an effective cooling solution for any new or upgraded data center or computer room, it is essential to establish a set of design goals. These goals are categorized below:
Adaptability
1. Plan for increasing critical load power densities
2. Utilize standard, modular cooling system components to speed changes
3. Allow for increasing cooling capacity without load impact
4. Provide for cooling distribution improvements without load impact
Availability
1. Minimize the possibility of human error by using modular components
2. Provide as much cooling system redundancy as budget will allow
3. Eliminate air mixing by providing supply (cold air) and return (hot air) separation to maximize cooling efficiency
4. Eliminate bypass air flow to maximize effective cooling capacity
5. Minimize the possibility of fluid leaks within the computer room and deploy a leak detection system
6. Minimize vertical temperature gradients at the inlet of critical equipment
7. Control humidity to avoid static electricity build-up and mold growth
Maintainability
1. Deploy the simplest effective solution to minimize the technical expertise needed to assess, operate, and service the system
2. Utilize standard, modular cooling system components to improve serviceability
3. Ensure the system can be serviced under a single service contract
Manageability
1. Provide accurate and concise cooling performance data in a format compatible with the overall management platform
2. Provide local and remote system monitoring access capabilities
Cost
1. Optimize capital investment by matching the cooling requirements with the installed redundant capacity and plan for scalability
2. Simplify deployment to reduce unrecoverable labor costs
3. Utilize standard, modular cooling system components to lower service contract costs
4. Provide redundant cooling capacity and air distribution in the smallest feasible footprint
Power density is best defined in terms of rack or cabinet footprint area, since all manufacturers produce cabinets of generally the same size. This area can be described as a rack location unit (RLU), to borrow a term coined by Rob Snevely of Sun Microsystems.
The RLU width is typically based on a twenty-four (24) inch standard. The depth can vary between thirty-five (35) and forty-two (42) inches, and the height can vary between 42U and 47U of rack space, which equates to a height of approximately seventy-nine (79) and eighty-nine (89) inches, respectively.
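As a rough illustration of footprint-based power density, the short Python sketch below computes the floor area of one RLU and the resulting watts per square foot; the 42 inch depth and the 10 kW rack load are assumed example values, not figures taken from this article.

# Illustrative sketch of footprint-based power density for one rack location unit (RLU).
# The 42 inch depth and the 10 kW rack load below are assumed example values.
RLU_WIDTH_IN = 24                  # standard RLU width, inches
RLU_DEPTH_IN = 42                  # assumed depth at the deep end of the 35-42 inch range

footprint_sqft = (RLU_WIDTH_IN * RLU_DEPTH_IN) / 144.0   # 144 square inches per square foot
rack_load_w = 10_000               # hypothetical 10 kW rack

print(f"Footprint per RLU: {footprint_sqft:.1f} sq ft")
print(f"Power density: {rack_load_w / footprint_sqft:,.0f} W per sq ft of RLU footprint")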
Designing a precision cooling system requires an understanding of the amount of heat produced by the IT equipment and by other heat sources in the data center. Common units of measurement include BTU per hour, tons of refrigeration, and watts. The mixed use of these different units causes unnecessary confusion for users and specifiers. Fortunately, there is a worldwide trend among standards organizations to move toward a common cooling unit: the watt. The archaic units of BTU and ton (which refers to the cooling capacity of ice) will be phased out over time.
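For reference, the Python sketch below converts a heat load between these units using the standard factors of 3.412 BTU per hour per watt and 12,000 BTU per hour per ton of refrigeration; the 50 kW load is an arbitrary example value.

# Conversions between common cooling units. The constants are standard
# conversion factors; the 50 kW example load is an arbitrary value.
BTU_HR_PER_WATT = 3.412            # 1 watt = 3.412 BTU per hour
BTU_HR_PER_TON = 12_000            # 1 ton of refrigeration = 12,000 BTU per hour

def watts_to_btu_hr(watts):
    return watts * BTU_HR_PER_WATT

def watts_to_tons(watts):
    return watts * BTU_HR_PER_WATT / BTU_HR_PER_TON

load_w = 50_000                    # example: a 50 kW heat load
print(f"{load_w:,} W = {watts_to_btu_hr(load_w):,.0f} BTU/hr = {watts_to_tons(load_w):.1f} tons")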
Since the power transmitted by IT equipment through data lines is negligible, the power consumed from the AC service mains is essentially all converted to heat. (Power over Ethernet or PoE devices may transmit up to 30 percent of their power consumption to remote terminals, but this paper assumes for simplicity that all electrical power is dissipated locally.) This fact allows the thermal output of IT equipment to be expressed simply in watts, equal to its power consumption in watts. Furthermore, the total heat output of a system, and therefore its total cooling requirement, is the sum of the heat outputs of its components, which include the IT equipment plus other items such as UPS, power distribution, air conditioning units, lighting, and people. Fortunately, the heat output rates of these items can be easily determined using simple and standardized rules.
The heat output of the UPS and power distribution systems consists of a fixed loss plus a loss proportional to operating power. Conveniently, these losses are sufficiently consistent across equipment brands and models to be approximated without significant error. Lighting and people can also be readily estimated using standard values. The only user-specific parameters needed are a few readily available values, such as the floor area and the rated electrical system power.
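To make this summation concrete, the Python sketch below estimates a total thermal load from a few user-supplied values. The fixed and proportional loss coefficients, the lighting allowance, and the per-person figure are illustrative placeholders chosen only to mirror the structure described above; actual values should come from equipment data and the relevant design standards.

# Illustrative total heat load estimate following the structure described above
# (IT load + UPS and distribution losses + lighting + people). Every coefficient
# below is an assumed placeholder, not a published standard or vendor value.
def total_heat_load_w(it_load_w, rated_power_w, floor_area_sqft, people):
    ups_loss = 0.04 * rated_power_w + 0.05 * it_load_w            # assumed fixed + proportional loss
    distribution_loss = 0.01 * rated_power_w + 0.02 * it_load_w   # assumed fixed + proportional loss
    lighting = 2.0 * floor_area_sqft                              # assumed 2 W per square foot
    people_heat = 100 * people                                    # assumed 100 W of sensible heat per person
    return it_load_w + ups_loss + distribution_loss + lighting + people_heat

# Example: 80 kW IT load, 100 kW rated power system, 2,000 sq ft room, 4 occupants
print(f"Estimated heat load: {total_heat_load_w(80_000, 100_000, 2_000, 4):,.0f} W")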
Although air conditioning units create significant heat from fans and compressors, this heat is exhausted to the outside and does not create a thermal load inside the data center. This unavoidable energy loss does, however, detract from the efficiency of the air conditioning system and is normally accounted for when the air conditioner is sized.