#1
01-09-2007, 04:21 PM
Zitibake
Senior Member
Join Date: Dec 2005
Posts: 113
cooling tonnage versus ambient temp

Does anyone have links to information about the relative energy efficiency of split-system datacenter cooling at different ambient air temperatures? For example, the difference in cost between keeping a datacenter at 65F versus keeping it at 75F?
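For a rough first-order feel, the ideal-cycle (Carnot) limit suggests the direction of the effect: refrigeration COP scales roughly with Tcold / (Thot - Tcold), so a warmer space setpoint against the same condensing temperature means a smaller lift and less compressor work per ton. Here is a minimal Python sketch with assumed evaporating and condensing temperatures (real split systems only reach a fraction of the ideal COP, but the relative trend should be similar):

[CODE]
# Ideal (Carnot) refrigeration COP for two space setpoints against the same
# assumed outdoor condensing temperature. Real DX/split systems reach only a
# fraction of this, but the relative trend is what matters here.

def f_to_k(temp_f):
    """Convert degrees Fahrenheit to kelvin."""
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

def carnot_cop(evap_f, cond_f):
    """Ideal COP = Tcold / (Thot - Tcold), temperatures in kelvin."""
    t_cold, t_hot = f_to_k(evap_f), f_to_k(cond_f)
    return t_cold / (t_hot - t_cold)

cond_f = 110.0                   # assumed condensing temp on a hot day
for setpoint_f in (65.0, 75.0):
    evap_f = setpoint_f - 20.0   # assumed evaporator runs ~20F below the space
    print(f"Space at {setpoint_f:.0f}F -> ideal COP ~{carnot_cop(evap_f, cond_f):.1f}")
[/CODE]

With these assumptions the 75F case comes out roughly 20 percent better on ideal COP than the 65F case; actual numbers would depend on the equipment and climate data.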
#2
04-11-2007, 11:42 AM
jcchiefeng
Guest
Posts: n/a

Perform the calcs below to find your loads. Dropping the space temperature to 65 will lower your relative humidity, which may require humidification. It depends on how you want to operate. Bottom line: the more energy you use to condition your space, the higher your costs. The upside to keeping your space at 65 is that you have created a heat sink for a cooling failure. It may only buy you 30 minutes or so, but that may be enough time to correct a minor problem.

The following is taken and edited from the RSES SAM Manual (www.rses.org):

Is = Np x Fs + Lw x 3.41 + Lf x 4.25 + Nm x HP x 3393 + As

where:

Is = sensible internal heat gain, Btu/hr
Np = number of occupants
Fs = occupancy heat gain, sensible, Btu/hr per person
Lw = electric lights, incandescent, watts
Lf = electric lights, fluorescent, watts
Nm = number of electric motors
HP = motor horsepower
As = appliance heat gain, sensible, Btu/hr


IL = Np x FL + AL

where:

IL = latent internal heat gain, Btu/hr
FL = occupancy heat gain, latent, Btu/hr per person
AL = appliance heat gain, latent, Btu/hr

Use actual values, or assume a fluorescent lighting load of 1.5 watts per sq ft of floor space.

Use actual values, or assume 250 Btu/hr per person for Fs and 200 Btu/hr per person for FL.
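To make the arithmetic concrete, here is a small Python sketch of the same calculation. The constants are the ones above; the example inputs (floor area, occupants, motor size, IT load) are made-up placeholders, not recommendations:

[CODE]
# Rough internal heat gain calculator based on the RSES-style formulas above.
# The constants (3.41, 4.25, 3393, 12,000 Btu/hr per ton) come from the post
# and standard conversions; the example inputs below are made-up placeholders.

def sensible_gain_btuh(occupants, f_s, incandescent_w, fluorescent_w,
                       motors, motor_hp, appliance_sensible_btuh):
    """Sensible internal heat gain Is, Btu/hr."""
    return (occupants * f_s
            + incandescent_w * 3.41      # Btu/hr per incandescent watt
            + fluorescent_w * 4.25       # Btu/hr per fluorescent watt (incl. ballast)
            + motors * motor_hp * 3393   # Btu/hr per motor horsepower
            + appliance_sensible_btuh)

def latent_gain_btuh(occupants, f_l, appliance_latent_btuh):
    """Latent internal heat gain IL, Btu/hr."""
    return occupants * f_l + appliance_latent_btuh

# Hypothetical 2,000 sq ft room: 2 people at the default per-person gains,
# fluorescent lighting at the assumed 1.5 W/sq ft, one 5 HP fan motor,
# and 100,000 Btu/hr of sensible "appliance" (IT equipment) load.
area_sqft = 2000
i_s = sensible_gain_btuh(occupants=2, f_s=250,
                         incandescent_w=0,
                         fluorescent_w=1.5 * area_sqft,
                         motors=1, motor_hp=5,
                         appliance_sensible_btuh=100_000)
i_l = latent_gain_btuh(occupants=2, f_l=200, appliance_latent_btuh=0)

print(f"Sensible gain: {i_s:,.0f} Btu/hr (~{i_s / 12_000:.1f} tons)")
print(f"Latent gain:   {i_l:,.0f} Btu/hr")
[/CODE]

As expected for a datacenter space, the equipment load dominates; people and lights are a small fraction of the total.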
#3
04-11-2007, 02:43 PM
KenB
Administrator
Join Date: Jan 2006
Location: Pittsburgh, PA
Posts: 468

Quote:
Originally Posted by jcchiefeng
The upside to keeping your space at 65 is that you have created a heat sink for a cooling failure. It may only buy you 30 minutes or so, but that may be enough time to correct a minor problem.
Thanks for the formula, John. But I should point out that at higher densities, depending on air as a thermal ballast during data center cooling outages doesn't scale, since racks just start to re-breathe their own exhaust. I saw an interesting CFD simulation showing that in an environment built out to 400 W/sq ft, a rack will overheat in 10 seconds.
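To put rough numbers on why the ride-through at those densities is seconds rather than minutes, here is a back-of-envelope Python sketch; the rack power, the local air volume a rack can re-breathe, and the allowable inlet temperature rise are all assumed figures, not taken from the simulation:

[CODE]
# Back-of-envelope: how fast does the air local to a dense rack heat up once
# it starts re-breathing its own exhaust? Every input here is an assumption.

rack_power_w = 8000         # assumed dense rack, ~8 kW
local_air_volume_m3 = 5.0   # assumed volume of air the rack can actually draw from
air_density = 1.2           # kg/m^3, air near room conditions
air_cp = 1005               # J/(kg*K), specific heat of air

air_mass_kg = local_air_volume_m3 * air_density
temp_rise_per_s = rack_power_w / (air_mass_kg * air_cp)   # K per second

allowed_rise_k = 10         # e.g. inlet allowed to climb ~10 C before trouble
print(f"Local air warms ~{temp_rise_per_s:.2f} K/s; "
      f"about {allowed_rise_k / temp_rise_per_s:.0f} s to a {allowed_rise_k} K rise")
[/CODE]

With these assumptions the local air climbs on the order of 1 K per second, which is in the same ballpark as the 10-second figure from the simulation.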

Ken
#4
04-12-2007, 01:06 AM
jcchiefeng
Guest
Posts: n/a

Yes, you're right if the fans die. I was obviously not clear. My apologies.
#5
04-12-2007, 05:45 PM
KenB
Administrator
Join Date: Jan 2006
Location: Pittsburgh, PA
Posts: 468

Actually, in the simulation it didn't much matter whether the CRAC fans died or not; the difference was only a few seconds. The effect happens at the rack level. Here's a whitepaper on it (registration required): http://www.upsite.com/TUIpages/downl...Cooling_WP.pdf


Ken
#6
04-14-2007, 02:52 PM
jcchiefeng
Guest
Posts: n/a

Thanks for the info... it will help in my presentations. JC