I have spoken to many DC managers and facilities managers about how they manage IT power consumption. Because the power draw of IT equipment varies with workload, there is a real risk of overloading your DC's electrical system if you don't leave sufficient headroom.
I would like to hear from other members of this forum to understand what methods they use to manage loads.
First, start by following UL guidelines: never let a circuit/PDU draw more than 80% of what it's breakered for. Then over-breaker the crap out of the system; a well-built system is an over-built system. Sell a 120 V / 20 A circuit but put it on a 30-amp breaker and meter it electronically.
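The arithmetic behind the 80% rule and the over-breakering advice can be sketched like this (a minimal illustration; the `usable_watts` helper and the specific derate factor are my own, not from any standard's text):

```python
def usable_watts(volts: float, breaker_amps: float, derate: float = 0.80) -> float:
    """Maximum continuous load in watts for a branch circuit,
    keeping draw at or below 80% of the breaker rating."""
    return volts * breaker_amps * derate

# The circuit sold to the customer: 120 V at 20 A.
sold = usable_watts(120, 20)
# The same circuit over-breakered at 30 A, metered electronically.
headroom = usable_watts(120, 30)
print(sold, headroom)  # 1920.0 2880.0
```

The gap between the two numbers (here, 960 W) is the margin that protects you from nuisance trips when a workload spikes past what the customer was sold.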
Third, find out what the max load is for the UPSes and other systems, work with an EE to engineer proper failover, and adjust for spikes. Also pay a lot of attention to three-phase and two-phase options, as they can get tricky if you cut corners. And always remember: at good efficiencies, 1 watt of server power requires 1 watt to cool it.
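Taking the poster's 1-watt-of-cooling-per-watt-of-IT rule of thumb at face value, total facility draw can be estimated as follows (the function name and default ratio are illustrative assumptions, not a measured figure for any particular site):

```python
def facility_load_kw(it_load_kw: float, cooling_per_it_watt: float = 1.0) -> float:
    """Estimate total facility load: IT load plus cooling overhead,
    using the rule of thumb that each IT watt needs about one cooling watt."""
    return it_load_kw * (1.0 + cooling_per_it_watt)

# A 500 kW IT load implies roughly 1000 kW of total facility capacity.
print(facility_load_kw(500))  # 1000.0
```

A more efficient plant would use a smaller `cooling_per_it_watt`, so it is worth treating that ratio as a parameter rather than a constant.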
You can also use software tools like RackWise to help you organize, distribute, and track the equipment in your environments. It also helps keep a tech from installing one of those "oh, just throw that one somewhere in there" devices in a rack that's already maxed out.
Yes, but Rackwise, Aperture Vista and other such packages are static. The new problem with data centers will be as raid has described: frequent, wide swings in energy consumption for random parts of the data center based on instantaneous workload of specific servers.
The only work I've seen to address this on a room scale has been from HP Labs, whose algorithms managed cooling to accommodate heat loads (by manipulating VFD controls on CRAC fans) and vice-versa (by migrating virtual workloads around the data center to avoid hot spots caused by localized processing logjams). This required a huge sensor and software infrastructure, as well as a completely virtualized workload. Maybe that is why last year HP killed its "dynamic smart cooling" initiative, along with much of the innovative work they were doing on real-time data center management -- too expensive for most customers to implement.
The new answer apparently -- from several vendors -- is closely coupling the heat load with cooling, and increasing the modularity and intelligence of the cooling units themselves. Narrowing the scope of cooling management to racks or rows allows smart cooling units to collaborate to solve localized issues, greatly simplifying the magnitude of the problem. Take care of the racks and rows, and the room takes care of itself.
Capacity planning becomes more important: whether you believe in over-provisioning, load shedding, or some other strategy, peak loads will be increasingly difficult to predict.
At least, that's how I see it.
Ken
There are other companies out there that provide demand-based cooling solutions, such as AdaptivCOOL, which use rack-level sensors and in-room air movers to deliver cooling to racks as needed. The infrastructure isn't nearly as involved as HP's system and does not require a virtualized workload. It's been shown to work effectively, too.
There are companies that manufacture data center power management and control products. Their specialty is hardware solutions that distribute power, paired with software tools that monitor power usage down to the device level.
This deals more with watching your power consumption. I keep a chart of all racks that do not have networked power strips, along with their max power consumption. When installing, you just make sure you do not go over that limit, under any circumstances. As stated before, 80% of the breakered limit is your ceiling.
Just about all power strips that are networked for monitoring use SNMP. There are several software packages out there that will actively poll an SNMP OID and log it. I use SysUpTime, which can poll in one-minute increments; I poll every 7 minutes. The only servers where I see an increase are my Citrix blade enclosures: they'll swing from 5.5 kW to 5.75 kW, mainly at the start of the day. The majority of my servers do not see much fluctuation at all. But polling your power strips actively will tell you how they are doing, and after a while you get a feel for what each type of server pulls in power.
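The bookkeeping side of that polling loop might look like the sketch below. This is not SysUpTime or any real strip's SNMP interface; the `record_reading` function and the simulated readings are assumptions standing in for whatever your SNMP poller returns, and the ceiling follows the 80% rule mentioned earlier:

```python
import time
from collections import deque

# 80% of a hypothetical 7.2 kW breakered strip.
CEILING_KW = 0.80 * 7.2

def record_reading(history: deque, kw: float, ceiling: float = CEILING_KW) -> bool:
    """Log a timestamped power reading; return True if it breaches the ceiling."""
    history.append((time.time(), kw))
    return kw > ceiling

history = deque(maxlen=1000)  # keep roughly the last 1000 samples
# Simulated kW readings, e.g. a blade enclosure ramping up in the morning.
for sample in (5.5, 5.6, 5.75, 6.1):
    if record_reading(history, sample):
        print(f"WARNING: {sample} kW exceeds {CEILING_KW:.2f} kW ceiling")
```

In a real deployment the simulated readings would be replaced by an SNMP GET against the strip's load OID, but the threshold logic and history buffer stay the same.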
Granted, each data center is different. I've managed about 100 racks for the past 5 years. I have a good feel of what servers do what and how that works with power. If I were to go into a colocation data center, taking in who knows what kind of equipment...that's where you get into more planning...but at the moment, that's not the environment I'm in.
Use these five tips for cutting data center power consumption:
1. Use row- and rack-based cooling for high-density equipment, which can reportedly reduce energy usage by up to 15 percent.
2. Build and provision only what is needed.
3. Use air-side economizers in geographies where they work (you may already have air handlers that support this).
4. Scrutinize floor layouts for optimal air movement.
5. Virtualize everywhere you can, especially commodity hardware that eats up a lot of electricity.
"The San Francisco, Calif.-based data center operator participated in PG&E's Critical Peak Pricing (CPP) program, designed to curtail power usage during critical peak days to lower the risk of an energy emergency. "
Source- http://searchdatacenter.techtarget.c...-electric-bill