Data centre decisions: rackmount or ATX?
Posted by whcdavid, 09-02-2004, 04:08 AM

Overview

This article examines the two main choices behind server chassis decisions in a modern data centre. It assumes a basic understanding of general computer hardware and the factors surrounding system building, and is written to inform data centre technicians and managers of the trade-offs involved in choosing between ATX and rack-mountable chassis.

Introduction

Most data centres around the world today employ rack-mount server cases to house their servers, either for their own company use or, more likely, to rent out as dedicated servers. A rackmount server is radically different in configuration and physical dimensions from an everyday (ATX) computer; rackmount servers employ 19”-wide rack-mountable cases, which are sized in terms of ‘units’ (a measurement of height).

One unit (1U) is approximately 4½ centimetres high. Rackmount servers are commonly available in 1U, 2U, and 4U form factors; the difference is obviously the height, with the extra space usually used to incorporate additional hard disks and RAID (Redundant Array of Inexpensive Disks) arrays. Once built, these servers slide into cabinets of varying size (from 6U wall cabinets to 47U floor-standing cabinets) for convenient and tidy storage.
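To make the unit arithmetic concrete, here is a minimal sketch in Python; the cabinet sizes are taken from above, while the helper names are my own:

[code]
# Rack-unit arithmetic: 1U = 1.75 inches, or roughly 4.45 cm.
U_IN_CM = 4.445

def height_cm(units):
    """Physical height of a given number of rack units, in centimetres."""
    return units * U_IN_CM

def cabinet_capacity(cabinet_units, server_units):
    """How many servers of a given height (in U) fit into a cabinet."""
    return cabinet_units // server_units

# A 47U floor-standing cabinet:
print(round(height_cm(47), 1))      # ~208.9 cm of mounting height
print(cabinet_capacity(47, 1))      # 47 x 1U servers
print(cabinet_capacity(47, 2))      # 23 x 2U servers, with 1U spare
[/code]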

The rackmount form factor is not purely for computer systems; most, if not all, professional interconnection devices, such as routers, patch panels and switches, are rack-mountable too, allowing a single solution for all housing needs.

Most ‘experts’ I have personally encountered will demand nothing less than rack mounting for 100% of their equipment; any other solution is unmercifully dismissed as ‘sloppy’ or ‘unprofessional’. However, a growing number of managers are varying their mix by introducing ordinary ATX-chassis servers into their data centres – with and without public acknowledgement. Ultimately, I expect many do this because ATX is all they know – but is unfamiliarity the only reason to shun rackmount? Or does ATX have absolutely no place in any data centre?

Practicalities

Outside the seemingly small and almost closed circle of system and network integrators, most people who have heard of rack-mount cases shy away from them; the little they know is not enough to form a rational decision.

In many respects, rackmount cases are the ideal solution for any data centre; by housing all equipment in a uniform chassis, server rooms can be equipped with cabinets to suit requirements, with very little wasted space. Cabling within cabinets is also far tidier and much easier to manage, thanks to the ‘straight’ layout of the racks and their physical relationship above and below one another.

However, many would question whether space is truly such an issue; it is a fair assumption that most data centres have far more physical server-room space than they require, and that what they actually lack is clients. At the end of the day, this convenience comes at a price: at the time of writing (September 2004), 1U server cases can cost around six times as much as an ordinary ATX case, with many of the larger cases considerably more expensive still. With many organisations run on a tight budget in order to profit from monthly dedicated server rentals, this is sure to be off-putting for even the larger hosting operations.
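As a rough illustration of that cost gap, the sketch below applies the ‘around six times’ figure to a rack’s worth of servers; the base price and rental margin are placeholder assumptions of mine, not real 2004 quotes:

[code]
# Illustrative chassis-cost comparison using the ~6x multiplier above.
# All prices are placeholder assumptions, not real quotes.
ATX_CASE = 40.0              # hypothetical ATX case price
RACK_1U = ATX_CASE * 6       # ~6x, per the figure above

servers = 20
premium = servers * (RACK_1U - ATX_CASE)
print(f"Chassis premium for {servers} servers: {premium:.0f}")  # 4000

# How long that premium takes to claw back from rental margins:
margin_per_server_month = 10.0   # hypothetical profit per server per month
months = premium / (servers * margin_per_server_month)
print(f"Recovered after {months:.0f} months of rentals")        # 20
[/code]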

For those daring enough to experiment with rackmount servers, there is one very large compatibility snag. Until recently, it was a genuine concern whether an ordinary ATX motherboard would fit into a rackmount case at all; unless the board was sufficiently small and of the right dimensions (accounting for such points as capacitor height, hard disk connector placement, and so on), immediate problems would follow. This is one of the main factors that has fuelled Intel’s and Supermicro’s unrivalled server success: they retail rack cases with their own motherboards pre-installed. Luckily, for those brave enough to build their own systems, a large number of the newer cases are designed to accept any ATX-compliant motherboard. Since the low height of the case does not permit cards to be mounted perpendicular to the motherboard in the traditional way, PCI riser boards are often provided to allow expansion, normally by one to three slots. Even so, it is strongly recommended that motherboards installed into rackmount servers have all basic functions (notably LAN and VGA) integrated on board, so as to prevent further complications.

From an upgradeability point of view, should a 1U server require more hard disks than the single vanilla drive, a new case (2U or higher) is required, since the commonly available 1U cases cater only for a single hard disk.

Although this issue receives relatively little coverage, I personally worry about the heat problems involved with the higher end of servers (AMD XP2500/Intel 2.6GHz or higher). Most would worry that the low-profile heatsinks used (1U/2U height) are insufficient to cool the mightiest processors, but to their credit, the majority of rackmount cases available to buy separately have excellent front-to-back airflow driven by numerous internal cooling fans. My main concern is the cabinets themselves, which I can well imagine reaching unhealthily high internal temperatures in server rooms with barely adequate air conditioning.


Where is the heat vented out of the back supposed to go – against a wall? If so, the build-up of hot air will eventually poison the cool air intake and rapidly heat every mounted rack. Admittedly, a growing number of cabinet manufacturers now release cabinets with built-in extraction fans, keeping the airflow constant and refreshing the air that circulates around the components.
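To put numbers on that worry, here is a back-of-envelope sketch using the standard sensible-heat relation for air (P = ρ · cp · flow · ΔT); the per-server wattage and allowed temperature rise are assumptions of mine, not measured figures:

[code]
# Back-of-envelope cabinet heat load and required airflow.
# Sensible heat: P = rho * cp * flow * delta_T (SI units).
RHO_AIR = 1.2     # air density, kg/m^3 (approximate, sea level)
CP_AIR = 1005.0   # specific heat of air, J/(kg*K)

def airflow_m3_per_hour(heat_watts, delta_t_k):
    """Airflow needed to remove heat_watts with a delta_t_k temperature rise."""
    m3_per_second = heat_watts / (RHO_AIR * CP_AIR * delta_t_k)
    return m3_per_second * 3600

# Assumed figures: 20 x 1U servers at ~150 W each, 10 K allowed rise.
load_watts = 20 * 150                                       # 3000 W per cabinet
print(f"{airflow_m3_per_hour(load_watts, 10):.0f} m^3/h")   # ~895 m^3/h
# Halve the usable rise (exhaust recirculating into the intake)
# and the required airflow doubles:
print(f"{airflow_m3_per_hour(load_watts, 5):.0f} m^3/h")    # ~1791 m^3/h
[/code]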

Afterthoughts

I am personally a staunch believer in the use of rack-mount cabinets for housing interconnection devices, as these form the basic infrastructure and management points of any networked data system. This is not to say that tie-wrapping cables is obsolete; au contraire, it is simply insufficient on its own in any structured server room.

For the most part, I do support the use of rackmount cases over ATX – who wouldn’t? With the ease of management and the space savings gained, you would be a fool not to see their potential. My questions concern how necessary it is for certain systems to be housed in rackmount cases, especially considering the cost.

My suggestion is that we treat rackmount cases as the norm (even if they aren’t), and then examine the use of ATX cases depending on the application – weighing budget, likely upgrade routes, and the number of rackmount servers and devices required. That way, we can find the best balance between top-line management and efficiency, budgeting, reliability, and expansion planning, without taking the proverbial sledgehammer to crack a walnut.
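To make that suggestion concrete, here is a hypothetical decision sketch along the lines just described; every threshold in it is an illustrative assumption of mine, not established practice:

[code]
# Hypothetical chassis-selection sketch: rackmount is the default,
# and ATX is considered per application. Thresholds are illustrative only.
def choose_chassis(budget_per_server, rack_device_count, expects_disk_growth):
    """Return 'rackmount' or 'ATX' for one application (assumed rules)."""
    # Device-heavy deployments already need cabinets, so stay uniform.
    if rack_device_count >= 10:
        return "rackmount"
    # A 1U box caps you at a single disk; cheap, disk-hungry systems
    # may be better served by an ordinary ATX tower.
    if expects_disk_growth and budget_per_server < 300:
        return "ATX"
    return "rackmount"  # the norm, as suggested above

print(choose_chassis(250, rack_device_count=3, expects_disk_growth=True))    # ATX
print(choose_chassis(500, rack_device_count=15, expects_disk_growth=False))  # rackmount
[/code]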

Copyright 2004
http://datacentertalk.com
Gareth Roberts
groberts@streetsaheadit.co.uk
