  #1  
Old 01-25-2006, 05:02 PM
bw_franko
Guest
 
Posts: n/a
Datacenter Setup Basic info

Hello,

I am planning to set up a full rack of servers at our office. This is what I have so far:

1. I have a Cisco 2501 as the core router.
2. A 1 Mbps pipe is coming in, provided by one of the local ISPs.
3. I have two 2U servers and four 1U servers.
4. The rack is 42U. How many 1U servers can I put in it?
5. What temperature do I need to maintain?

Is anything missing? Please let me know.

Thanks
Frank
  #2  
Old 01-31-2006, 05:19 PM
gallant
Banned
 
Join Date: Oct 2005
Location: Atlanta
Posts: 57

Try to maintain around 70°F. How many servers is that? You can calculate the worst case by taking the max watts from the UL label plate of each server, converting that into BTU/hr, and comparing the result to the cooling capacity of your HVAC system. Or you can just put a temp/humidity probe in the rack and stop adding servers when the temperature gets close to 75°F.
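
For illustration, here is a back-of-the-envelope version of that check in Python. The 1 W = ~3.412 BTU/hr conversion factor is standard; the server wattages and HVAC capacity below are made-up examples, not figures from this thread.

# Worst-case cooling check: nameplate watts -> BTU/hr vs. HVAC capacity.
# All wattage and HVAC numbers here are hypothetical; substitute your own.

WATTS_TO_BTU_HR = 3.412  # 1 watt of heat is roughly 3.412 BTU/hr

nameplate_watts = [350, 350, 400, 400, 400, 400]  # UL label ratings (example)
hvac_capacity_btu_hr = 12_000                     # e.g. a 1-ton unit

total_heat_btu_hr = sum(nameplate_watts) * WATTS_TO_BTU_HR
print(f"Worst-case heat load: {total_heat_btu_hr:,.0f} BTU/hr")
print(f"HVAC headroom:        {hvac_capacity_btu_hr - total_heat_btu_hr:,.0f} BTU/hr")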
  #3  
Old 05-10-2006, 12:40 AM
jhw539
Guest
 
Posts: n/a

For heat loads, the electrical nameplate load will probably be 2-6 times the actual heat load, so it is a very conservative number (good for building in your redundancy!).
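
To put illustrative numbers on that ratio (the 400 W nameplate below is hypothetical):

# Estimate the likely actual heat load from a nameplate rating, using the
# 2-6x derating range mentioned above. The nameplate value is hypothetical.
nameplate_watts = 400
low_estimate = nameplate_watts / 6    # ~67 W if the nameplate is 6x actual
high_estimate = nameplate_watts / 2   # 200 W if the nameplate is 2x actual
print(f"Actual heat load likely between {low_estimate:.0f} and {high_estimate:.0f} W")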
  #4  
Old 08-07-2006, 05:00 PM
Keith
Administrator
 
Join Date: Aug 2006
Location: Washington DC Metro Area
Posts: 225

The question of how many servers you can put in the rack depends on a number of considerations. The first and most relevant is power availability: do you have enough power capacity coming into the rack to support a full rack of servers? Per the NEC, you do not want to exceed 80% of circuit capacity, so on a 20 amp circuit you should not exceed 16 amps. Also, if your breaker panels are exposed to higher-than-normal temperatures, you may want to stay below even 16 amps, depending on the ambient heat potential.

I believe a typical mid-sized 1U configuration draws around 1.5 amps. With that number in mind, I would say ten 1U servers (with mid-sized options and lower utilization) would be the max I would put on a single 20 amp circuit. I would second the recommendation above to check the power supply rating and do the standard watts / voltage = amps calculation to get the recommended circuit type; the sketch below walks through that math. You may also want to look into purchasing a "metered" PDU or power strip from APC to monitor the circuit load as you add more servers.
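
As a sketch of that arithmetic in Python: the breaker rating, voltage, and PSU wattage are example figures, and the 1.5 A per 1U server is the assumption from the paragraph above.

# Sketch of the circuit-capacity math described above. All numbers are
# examples; use your actual circuit rating, voltage, and measured draw.

CIRCUIT_AMPS = 20     # breaker rating
NEC_DERATE   = 0.80   # NEC continuous-load limit: stay under 80% of rating
VOLTAGE      = 120    # North American single-phase
AMPS_PER_1U  = 1.5    # rough mid-sized 1U server draw (assumption)

usable_amps = CIRCUIT_AMPS * NEC_DERATE        # 16 A on a 20 A circuit
max_servers = int(usable_amps // AMPS_PER_1U)  # 10 servers in this example
print(f"Usable capacity: {usable_amps:.0f} A -> about {max_servers} servers")

# The same watts / voltage = amps rule, per power supply:
psu_watts = 300                                # hypothetical PSU rating
print(f"A {psu_watts} W supply draws up to {psu_watts / VOLTAGE:.1f} A at {VOLTAGE} V")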

As mentioned above, there are also cooling factors to bring into the picture. From what I have been told, the ideal temperature for a datacenter is 68-72 degrees Fahrenheit ambient, measured near the front of the rack.

I have one last concern for you that was not in your original post, regarding rack placement and the type of rack/cabinet you will be using. Are you planning on pointing the servers' exhaust against a wall? If so, since this is a single rack, consider exhausting it towards the most open area of the room to allow for better cooling. If the exhaust points at a wall, the warm air can hit the wall and spread back around to the front of the rack, where the servers will take in the exhausted air, raising temperatures both in the room and in the equipment. As for the rack/enclosure type, I would recommend an enclosed cabinet, or at minimum a cabinet with side panels.

I cannot count the number of datacenters I have walked into only to find a fan pointed at the back of the servers to move the heat away from them.

Hope this helps! I wish there were an easier "out of the box" answer I could give you, but too many factors come into play when planning these kinds of setups.

--Keith
  #5  
Old 08-08-2006, 04:33 PM
Rmgill
Moderator
 
Join Date: Jun 2006
Location: Atlanta, Ga
Posts: 28

This is stuff you can estimate, but until you get systems in the data center and you're working with them on a daily basis, it's still quite nebulous at times.

Things that help:
Environmental monitoring from NetBotz or a similar company (at a minimum, walk around with a temperature probe, or just go by feel if you're a really good Scots engineer... "I dunno, captain, but the dilithium matrix just doesn't feel right, I need to run more tests").

Power strips that display per-circuit amperage readings. These let you fine-tune system installs in a rack and know for certain when you CAN'T add another system. I've had engineers sneak in and add memory and hardware to a system such that its power draw jumped; if I'd gone on the old power readings, I'd have been screwed installing more hardware on that circuit. Pulizzi makes good stuff. So do PDI and a few other companies.

Run the space colder if you can. I like 60°F. If you have a CRAC/air handler failure, the extra buffer gives you more time. In a room that's at capacity, you may have only 10 minutes between a unit failure and 80°F ambient if you run at 70°F. Drives start throwing errors when my room reaches 85°F ambient (it's usually 90°F at the intake at the top of a rack), and then you start seeing Suns spontaneously shutting down to protect their processors. The rough estimate below shows why the window is so short.
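
As a very rough illustration of that time window, here is an air-only heating estimate in Python. It ignores the thermal mass of walls and equipment (which buys you extra time in a real room), and every input value is hypothetical:

# How fast does a sealed room heat up after total cooling failure, counting
# only the air? Ignoring wall/equipment thermal mass makes this a worst case.
heat_load_w = 5_000    # IT heat load in watts (example)
room_volume_m3 = 75    # e.g. a 5 m x 5 m x 3 m room (example)
air_density = 1.2      # kg/m^3
air_cp = 1005          # J/(kg*K), specific heat of air

air_mass_kg = room_volume_m3 * air_density
deg_c_per_min = heat_load_w / (air_mass_kg * air_cp) * 60
print(f"Air-only temperature rise: ~{deg_c_per_min:.1f} C/min")
# Even a few C/min means only minutes from 70F to 80F+ ambient; thermal
# mass in the room stretches that, but not by much at full load.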

If you can add environmental monitoring to the room and you use SNMP, you may be able to pull temperature data from the systems themselves, either as SNMP traps or via some other device-crawling function. The sketch below shows one way to poll a sensor.
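
For example, a minimal poll of a temperature sensor with the classic pysnmp library. The host address, community string, and OID below are placeholders; the real OID depends on your NetBotz or server MIB.

# Poll one SNMP value (e.g. a temperature sensor) with pysnmp (pip install pysnmp).
# Host, community, and OID are placeholders: check your device's MIB for the real OID.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
    SnmpEngine(),
    CommunityData('public'),                       # v2c community string (placeholder)
    UdpTransportTarget(('192.0.2.10', 161)),       # sensor/device IP (placeholder)
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.4.1.0.0')), # temperature OID (placeholder)
))

if errorIndication or errorStatus:
    print("SNMP poll failed:", errorIndication or errorStatus.prettyPrint())
else:
    for varBind in varBinds:
        print(' = '.join(x.prettyPrint() for x in varBind))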
__________________
CNN Internet Technologies
Data Center Manager
  #6  
Old 08-09-2006, 01:35 PM
gallant
Banned
 
Join Date: Oct 2005
Location: Atlanta
Posts: 57

Server Technology makes a terrific line of single-phase power distribution units (or power strips, if you prefer). Most have integrated temperature and humidity measurement as well as local indication of amp load. Temperature and amperage data are also available remotely through a sweet software package that autodetects Server Tech gear on the network.
More at:
http://www.servertech.com/
  #7  
Old 08-11-2006, 07:20 PM
Keith
Administrator
 
Join Date: Aug 2006
Location: Washington DC Metro Area
Posts: 225

Gallant, it has been a couple of years since I dealt with Server Technology. I loved their gear when I first started using it, but over its life cycle we started having issues. I ran a hosting datacenter that used the strips to let customers reboot their colocated servers remotely.

I started seeing daily service tickets about the power strips not being accessible. We found that we actually had to remove the management modules from the strips every once in a while and unplug the power lead to get them to come back online. After discussing the problem with a Server Technology rep, he shipped me 10 spare management modules to keep on hand in case we had more issues. In one instance we swapped a module and it took down half a rack of equipment; we had to disconnect the module and remove the twist-lock to get the strip to come back online.

At that point I decided to rip the power towers out of the racks and bought the APC version of the strips, which cost $380 as opposed to the $900 Server Tech was charging at the time. I found that the APC strips had more outlets; however, they only allow a single user in the management interface at a time. Not that I am knocking Server Tech, just letting you know my experience with the brand. Probably slightly off topic here, but I do agree that a metered PDU/power strip is a definite must in a situation like yours. I put them in all the datacenters I set up.

I also agree with Rmgill; I think NetBotz is a great way to go. I have one set up in my house as a demo for a client of mine; you can PM me if you'd like and I can give you the URL to access it. Perhaps also consider getting an infrared thermometer if you experience hot spots. I use mine to get readings from individual servers that are heating up a rack, so I know exactly which server is running too hot and can relocate it to balance out the heat.

Good luck!

--Keith