Quote:
Originally Posted by sailor
Well, a couple of huge reasons.
In a UPS system you convert the AC to DC and then back to AC to deliver to the servers.
An AC inverter (the power supply in the computer) produces harmonics.
Wouldn't it be good to eliminate that, and eliminate a conversion step as well, so you have less equipment and less loss?
You would not have much more loss on the short runs of 48 V DC vs. 120 V AC anyway, compared to the losses in a UPS and the transformers.
UPSes store energy as DC, but typically at around 480 volts. Somehow you're going to have to step that down to 3.3 V or 5 V at the server, and I doubt a 480 V DC -> 5 V DC conversion is really any more efficient than 480 V AC -> 5 V DC.
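To make the "one fewer conversion" argument concrete, here's a minimal sketch of how per-stage efficiencies compound. All the efficiency numbers are made-up placeholders, not measured figures; it only shows why stage count matters, not whether the final DC-DC stage actually beats an AC PSU.

```python
# Back-of-the-envelope comparison of conversion chains.
# (Illustrative efficiency numbers are assumptions, not measurements.)

def chain_efficiency(*stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Double-conversion UPS: AC -> DC (rectifier), DC -> AC (inverter),
# then the server PSU does AC -> DC again.
ac_path = chain_efficiency(0.96, 0.95, 0.90)

# Hypothetical DC distribution: one rectifier, then a DC-DC stage in
# the server instead of a full AC power supply.
dc_path = chain_efficiency(0.96, 0.92)

print(f"AC double-conversion path: {ac_path:.1%}")  # ~82%
print(f"DC distribution path:      {dc_path:.1%}")  # ~88%
```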
Infrastructure costs and power losses can be reduced by keeping power at the highest practical voltage as close to the equipment as possible. For example, if blade-server and SAN manufacturers started using 277 V NEMA L7 plugs, that would be very convenient: no PDU would be needed; just run 480 V 3-phase to the RPPs, with one breaker pole per circuit (and likely just one circuit per cabinet, at 277 V x 30 A). The UK already does this with its 230 V phase-to-neutral power for servers, which gains them a couple of percent in energy efficiency over US 208 V.
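A quick sketch of the circuit math behind that, assuming single-phase circuits and the standard 80% continuous-load derating; the 5 kW cabinet load is an assumed example. Higher voltage means more usable kW per breaker pole and less current (hence less I^2*R loss) in the wiring.

```python
# Rough per-circuit capacity and current comparison.
# The 80% factor is the usual continuous-load derating; loads are assumed.

def usable_kw(volts, amps, derate=0.8):
    """Usable continuous power on a single-phase circuit, in kW."""
    return volts * amps * derate / 1000.0

for volts, amps in [(120, 20), (208, 30), (277, 30)]:
    print(f"{volts} V x {amps} A circuit: {usable_kw(volts, amps):.2f} kW usable")

# Current drawn for the same 5 kW cabinet load at each voltage.
load_w = 5000
for volts in (120, 208, 277):
    print(f"{load_w} W at {volts} V -> {load_w / volts:.1f} A")
```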
You would still want to minimize the number of power supplies converting hundreds of volts down to 3.3, 5, or 12 V DC. That could mean a very large blade-server chassis holding dozens of cards, with just a few large power supplies feeding the whole thing. Running 5 V DC is easier over a hard backplane than over flexible cables. A 42U blade-server chassis might even look like a cabinet (no external cabinet needed).
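Here's why the backplane point matters, as a rough sketch: at 5 V, even a small cable resistance eats a big fraction of the rail. The resistance and per-board load below are assumed numbers for illustration only.

```python
# Why low-voltage DC wants a backplane: voltage drop over even a short
# cable is a large fraction of a 5 V rail. Resistance and load values
# are assumptions (roughly a short run of copper cable, round trip).

def drop_fraction(volts, power_w, resistance_ohms):
    """Fraction of the supply voltage lost in the cable at a given load."""
    current = power_w / volts
    return current * resistance_ohms / volts

r_cable = 0.01  # ohms, assumed round-trip cable resistance
load = 200      # watts per board, assumed

for volts in (5, 12, 48):
    print(f"{volts:>2} V rail: {drop_fraction(volts, load, r_cable):.1%} lost in cable")
# 5 V loses ~8% in the cable; 48 V loses ~0.1% for the same load.
```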
...oh, wait, maybe I'm just thinking of the IBM "Blue Gene".