Nice product from APC, but it still has the same problems. How do I get chilled water to a new cooler unless I already have a chilled-water system in place? I'm back to having "Bubba" in my data center with a torch.
Also, this unit doesn't have humidity control. Am I the only one who worries about static electricity?
What about condensate? Where does that go? Do I need drip pans under each unit?
And look carefully: I need a "CDU" (sold separately) too.
And with this APC product I need to have my equipment in APC infrastructure racks. No thanks, I'll stick to a 30-ton Liebert-style CRAC blowing cold air.
Does anyone know, or know where to get, this information?
Typical data center cooling capacity, in W/sq.ft. etc.?
Typical data center power density, in W/sq.ft.?
Any study results showing the trend in cooling and power in data centers?
I am in the process of designing servers in a 42U-high cabinet that would go into a "data center," but I'm not too familiar with the physical constraints or power and cooling limitations.
Thanks !!!!!!
Datacenter loads vary enormously. Recent measurements of operating datacenters in CA found a range of 5-65 W/sf (see figure 5, pg. 11 of this for a pretty graph). I have seen slightly higher, but general trends are useless for your equipment: it's going up now, and who knows about next year?
Design values for cooling capacity are also all over the map; 100 to 150 W/sf is not uncommon, again entirely dependent on the equipment planned and on future growth assessments. In principle, power density should equal cooling capacity, but the design cooling capacity usually runs quite a bit higher than the installed power density, and redundancy approaches push the cooling capacity higher still.
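To make the W/sf units concrete, here is a minimal sketch of the arithmetic: total IT load divided by gross floor area. The cabinet load, rack count, and floor area below are illustrative assumptions, not figures from this thread.

```python
# Hypothetical sketch: convert per-rack load to floor power density (W/sf).
# All numbers below are made-up examples for illustration only.

def watts_per_sqft(rack_kw: float, racks: int, floor_sqft: float) -> float:
    """Total IT load in watts divided by gross floor area in square feet."""
    return rack_kw * 1000.0 * racks / floor_sqft

# Example: ten 5 kW cabinets spread over a 1,000 sq ft raised floor.
density = watts_per_sqft(rack_kw=5.0, racks=10, floor_sqft=1000.0)
print(round(density))  # 50 W/sf
```

Note that the same ten cabinets packed into a smaller room give a much higher density, which is why per-rack kW and floor-area W/sf figures have to be quoted together.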
I do some mechanical design in datacenters, and one thing I'd ask for from the server designers (is that you?) is a very well-defined heat exhaust. Hot-aisle/cold-aisle configurations are a very big deal for getting efficient and effective cooling in a high-density environment (see the Uptime Institute white papers recommended above for great background info).
I'm looking at the IBM Heat eXchanger doors right now for a blade server deployment. Honestly, I'm pretty comfortable with water in the data center (it's already running to each of my air handlers), and since I know the guys that have done our facilities work, I trust Cornbread (that's one of the contractors' names, really!) and his skills with a torch when it comes to soldering up the copper pipes that feed the Coolant Distribution Unit.
Honestly, I'm happier with the quality of work I get out of the same contractors we've had working here at CNN for the past 10 years than I am with some of my co-workers who deploy systems into the data center. The electricians work with dangerous voltages, and the HVAC guys work with dangerous voltages, heavy objects, and flammable fuels, so their skills leave less room for error. I still monitor their work, but if I'm around, I more or less trust them implicitly. I guess it does help that we've had the same contractors for the past 10 years.