For a single-server setup, or even a few servers, there is very little difference. Once you get into a few hundred or a few thousand servers, there will be a slight difference. More important, in my opinion, is selecting hardware appropriate to the need.

For example, on my simple testing box that serves DNS for a few domains, I know I want a decent amount of memory for BIND to run in, but I don't need much disk space or processing power, so a small, simple box is appropriate. For my primary database servers, I know they need multiple processors, as much memory as I can throw at them, a good backend FC SAN, etc.
Ultimately, the OS doesn't make that much of a difference!
It's not really the OS that makes the hardware hog power; it's the hardware itself. That said, I think Linux is better than Windows at saving power, so you figure it out.
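If you want to see whether a Linux box is actually set up to save power, the first thing I'd check is the CPU frequency governor. Here's a quick sketch (the sysfs paths are standard Linux cpufreq; which governors you actually get depends on your kernel and hardware, so treat it as a starting point, not gospel):

# Quick check of the CPU frequency scaling governor on Linux.
# Uses standard cpufreq sysfs paths; availability depends on
# kernel config and hardware, so treat this as a sketch.
from pathlib import Path

def cpu_governors():
    """Return {cpu_name: governor} for every CPU that exposes cpufreq."""
    governors = {}
    for gov_file in Path("/sys/devices/system/cpu").glob(
            "cpu[0-9]*/cpufreq/scaling_governor"):
        cpu = gov_file.parent.parent.name  # e.g. "cpu0"
        governors[cpu] = gov_file.read_text().strip()
    return governors

if __name__ == "__main__":
    govs = cpu_governors()
    if not govs:
        print("No cpufreq info exposed (VM or unsupported hardware?)")
    for cpu, gov in sorted(govs.items()):
        # "powersave" or "ondemand" means the kernel will clock idle
        # cores down; "performance" pins them at full speed.
        print(f"{cpu}: {gov}")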
I only go on hard facts and statistics... with a little bit of intuition.
It may be perceived that Linux would give you better power usage (i.e., you could argue that a website running on a well-configured Apache HTTPD under Linux could potentially serve more page views than the same site running on Windows), but that's not really a fair comparison!
I have to agree with Schumie on this one. It really depends on the size of the data center, but with only one or two servers you will not notice much of a consumption difference. It comes down to all those things Oigen mentioned: processing power, not actual electrical power. I pay close attention to my various servers in different locations and do not notice a difference in power bills.
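To put rough numbers on that: even a fairly generous difference in draw barely shows up on the bill for one or two machines, but it adds up across a big data center. A quick back-of-envelope sketch (the 20 W delta and the $0.12/kWh rate are assumptions for illustration, not measurements):

# Back-of-envelope: annual cost of a power-draw difference between
# two server setups. The numbers below are illustrative assumptions.
WATT_DELTA = 20.0        # assumed extra draw of one setup, in watts
PRICE_PER_KWH = 0.12     # assumed electricity price, USD per kWh
HOURS_PER_YEAR = 24 * 365

def annual_cost(servers: int) -> float:
    """Yearly cost (USD) of WATT_DELTA extra watts across `servers` boxes."""
    kwh = WATT_DELTA * HOURS_PER_YEAR * servers / 1000.0
    return kwh * PRICE_PER_KWH

for n in (1, 2, 500, 5000):
    print(f"{n:>5} servers: ${annual_cost(n):>10,.2f} per year")

With those assumed numbers, one server works out to about $21 a year, which you'd never spot on a bill, while 5,000 servers is over $100k a year, which you certainly would.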