There's only so much that can be done about power costs, and mostly we rely on new technology. A cost-effective cooling system is just as important.
Any cooling tips you'd like to share? Basic cooling strategies? Tactics you think aren't commonly used? Home remedies that worked for you?
Use liquid cooling (LCPs) over passive cooling. Besides being quieter than your traditional fans, it's simply cooler!
__________________ There are three kinds of death in this world. There's heart death, there's brain death, and there's being off the network. ~Guy Almes
Yeah, liquid cooling is the way to go. It's a far more effective way to cool, but we must remember that passive cooling does tend to put more pressure on the infrastructure.
One way to efficiently channel cool air is the "hot aisle/cold aisle" approach, in which cold and hot airflow alternates between server rows. With this system, cooled air is piped into the cold aisles. The servers are positioned so their front air intakes face the cold aisles, while the backs of the servers, where fans push the hot air out, face the hot aisles, from which the heated air is eventually carried out of the data center through the CRAC (computer room air conditioner).
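A quick way to see why separating hot and cold air matters: any cold air that bypasses the servers dilutes the hot return air, and CRACs run less efficiently on a lukewarm return. This is a minimal sketch (function name, temperatures, and bypass fractions are all illustrative assumptions, not measurements):

```python
def crac_return_temp(supply_c, server_delta_c, bypass_fraction):
    """Estimate CRAC return temperature when a fraction of cold supply
    air bypasses the servers and mixes straight into the return path.

    supply_c        -- cold-aisle supply temperature (deg C)
    server_delta_c  -- temperature rise across the servers (deg C)
    bypass_fraction -- share of cold air that never passes a server (0..1)
    """
    exhaust_c = supply_c + server_delta_c
    # Mixed return: bypassed cold air dilutes the hot server exhaust.
    return bypass_fraction * supply_c + (1 - bypass_fraction) * exhaust_c

# Well-separated aisles (10% bypass) vs. poor separation (40% bypass):
good = crac_return_temp(18.0, 12.0, 0.10)   # 28.8 C return
poor = crac_return_temp(18.0, 12.0, 0.40)   # 25.2 C return
```

The hotter the return air, the more heat the CRAC removes per unit of fan and compressor work, which is the whole point of keeping the aisles separated.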
Turning off servers not in use. It helps with the cooling and, just as important, saves power.
Covering all leaks. Make sure all unnecessary holes, cracks, or gaps that alter airflow are sealed.
Hot aisle/cold aisle is a good and, moreover, a traditional method for cooling. It's all about utilizing the available resources properly. It also makes sense to use economizers to bring the already available cold air into the cooling loop.
For new data center construction, in-row cooling is the way to go. With the higher demand for cooling and higher rack densities, forced air just doesn't cut it.
Remember, the limit on a floor tile is about 150 W/sqft.
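To put that tile limit next to the rack densities mentioned above, here's some rough arithmetic, assuming a standard 2 ft x 2 ft raised-floor tile and the ~150 W/sqft figure (the 5 kW rack is just an illustrative load):

```python
import math

TILE_AREA_SQFT = 2 * 2    # standard 2 ft x 2 ft raised-floor tile
WATTS_PER_SQFT = 150      # approximate per-tile cooling limit

watts_per_tile = TILE_AREA_SQFT * WATTS_PER_SQFT   # 600 W per tile

def tiles_needed(rack_kw):
    """Perforated tiles' worth of supply air a rack of this load needs."""
    return math.ceil(rack_kw * 1000 / watts_per_tile)

# A modest 5 kW rack already needs ~9 tiles' worth of supply air --
# far more floor area than the rack itself occupies.
demand = tiles_needed(5)
```

This is why forced air through the floor runs out of headroom as rack densities climb, and why in-row or close-coupled cooling becomes attractive for new builds.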
When it comes to cost savings in a data center, the cooling infrastructure is the best place to start. Raising the cooling temperature in front of the racks by just 1 or 2 degrees C can really make a difference on your electrical bill. Whether the temperature is 20 or 22 degrees C in front of the servers matters little; it's the hot air on the rear side of the racks that should determine the front rack temperature.
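The physics behind "the rear side decides": the airflow a rack needs depends on its heat load and the front-to-rear temperature rise (Q = rho * V * cp * dT), not on the absolute front temperature. A minimal sketch with textbook air properties (the 10 kW load and 12 C delta T are illustrative assumptions):

```python
RHO_AIR = 1.2     # kg/m^3, approximate density of air
CP_AIR = 1005.0   # J/(kg*K), specific heat of air

def required_airflow_m3s(load_kw, delta_t_c):
    """Volumetric airflow needed to carry `load_kw` of heat out of a rack
    at a given front-to-rear temperature rise: V = Q / (rho * cp * dT)."""
    return load_kw * 1000 / (RHO_AIR * CP_AIR * delta_t_c)

# A 10 kW rack at a 12 C rise needs the same airflow whether the
# front of the rack sits at 20 or 22 C -- roughly 0.69 m^3/s:
flow = required_airflow_m3s(10, 12)
```

So as long as the rear-side exhaust stays within what the cooling plant can handle, the front setpoint can creep up and the savings go straight to the bill.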
Depending on the local climate at the data center location, free cooling is the way to go. Combined with close-coupled cooling (rack cooling, InRow, LCP, etc.), free cooling can save over 50% on the electrical bill, depending on the inlet temperature and the delta T across the cooling infrastructure, thus also giving you a good "green IT" approach.
However, bear in mind that data center cooling and cooling infrastructure is a complex matter, and many things play a key role in arriving at the optimal solution.
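How much free cooling buys you really is a function of climate and setpoint, as the post says. A back-of-the-envelope sketch of that dependence (the temperature series, setpoint, and 3 C heat-exchanger "approach" margin are all hypothetical, just to show the shape of the calculation):

```python
def free_cooling_fraction(hourly_ambient_c, supply_setpoint_c, approach_c=3.0):
    """Fraction of hours in which outside air, plus a heat-exchanger
    'approach' margin, is cold enough to cool the room directly."""
    usable = [t for t in hourly_ambient_c if t + approach_c <= supply_setpoint_c]
    return len(usable) / len(hourly_ambient_c)

# Hypothetical day of hourly temperatures in a temperate climate:
ambient = [8, 7, 7, 6, 6, 7, 9, 11, 13, 15, 17, 18,
           19, 19, 18, 17, 15, 13, 12, 11, 10, 9, 9, 8]
fraction = free_cooling_fraction(ambient, supply_setpoint_c=18.0)  # 0.75
```

Note how a warmer supply setpoint (i.e., a higher allowed inlet temperature) directly grows the free-cooling window, which is where the "inlet temperature and delta T" dependence in the savings claim comes from.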
The basic tenet of an efficient air-cooled data center is to force as much as possible of the cold air coming up from the floor (or down from overhead) through the servers before it returns to the AC unit. Conversely, force as much of the server exhaust air as possible back to the AC unit without mixing with the cool air. If you can do this effectively, you can raise the AC setpoints to 72 or maybe even 74 degrees F and still hold servers to a safe 77-degree inlet level.
How to do this, in a nutshell: look for any leakage in the raised floor and in the walls below the raised floor and plug it; use hot/cold aisles with long rows and blanking panels; remove perforated tiles from the hot aisles; and arrange hot/cold rows perpendicular to the CRACs.
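The link between containment quality and how far you can push the setpoint can be sketched as a simple mixing estimate (function name, the 25 F exhaust rise, and the 12% recirculation figure are assumptions for illustration, not measured values):

```python
def server_inlet_f(crac_setpoint_f, recirc_fraction, exhaust_rise_f=25.0):
    """Worst-case server inlet temperature when a fraction of hot exhaust
    recirculates into the cold aisle and mixes with the supply air."""
    exhaust_f = crac_setpoint_f + exhaust_rise_f
    return (1 - recirc_fraction) * crac_setpoint_f + recirc_fraction * exhaust_f

# With good containment (~12% recirculation), a 74 F setpoint still
# keeps worst-case inlets right at the 77 F level mentioned above:
inlet = server_inlet_f(74.0, 0.12)   # 77.0 F
```

Every percentage point of recirculation you eliminate with blanking panels and sealed leaks is setpoint headroom you can spend on the electrical bill.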
You may still end up with hotspots due to underfloor airflow obstructions (pipes, cables, etc.). These can prevent you from raising setpoints to save cooling costs. At that point it is appropriate to employ underfloor air movers such as those found at http://www.adaptivcool.com/hotspotr.