#1 - 02-03-2009, 05:17 PM
Neoeclectic (Member, Join Date: Oct 2008, Posts: 85)

Virtualization and Cabling

We're moving more toward virtualization. The current plan is to go to a blade-server environment or the equivalent, so we can run multiple servers from a single chassis.

My concern is the cabling. If we maximize rack density, power limits us to roughly 10-12 Sun T2000s per cabinet, which is the equivalent of about 40-48 servers. Each T2000 chassis can take up to 10 cables in the current configuration, which works out to 100-120 cables per rack. There are also additional slots for Ethernet cards, good for another 16 cables, which could push the total up to 160 cables per rack.
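
A quick back-of-the-envelope check in Python (the chassis and per-chassis figures are the ranges quoted above; the add-in card figure is only a placeholder to adjust for your actual configuration):

Code:
# Rough cable count per cabinet. Chassis and per-chassis figures are the
# ranges from this post; the add-in NIC figure is a placeholder.

def cables_per_rack(chassis, per_chassis, addon_per_chassis=0):
    """Total copper runs landing in one cabinet."""
    return chassis * (per_chassis + addon_per_chassis)

for chassis in (10, 12):
    base = cables_per_rack(chassis, 10)       # current T2000 configuration
    worst = cables_per_rack(chassis, 10, 4)   # assumed 4 extra cables per chassis from add-in cards
    print(f"{chassis} chassis: {base} cables base, up to {worst} with add-in NICs")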

Is anyone else currently working in this type of environment, and are you running that many cables to a single rack space? If so, how are you managing the cables, and if you're using Cat5e, is alien crosstalk a concern?

Would anyone even consider running that many cables to a server rack?
#2 - 02-03-2009, 05:49 PM
KenB (Administrator, Pittsburgh, PA, Join Date: Jan 2006, Posts: 468)

Our plans for growing to ~100 cables per rack include migrating from 42" to 48" deep server racks, at the expense of our hot/cold aisle widths. I have spoken with other folks who are instead choosing racks wider than 24", decreasing their number of racks per row. Either way, more cabling is going to require more room; unless you're willing to restrict airflow, something has to give.

As for alien crosstalk with CAT6A, we're not pushing much traffic at 10 gig these days, so we'll have to see how it goes.


Ken
#3 - 02-03-2009, 06:02 PM
Neoeclectic (Member, Join Date: Oct 2008, Posts: 85)

But do you think that is a sound practice? I could potentially have to run 250-300 cables per rack. Something about that just doesn't sound right to me.
#4 - 02-03-2009, 06:47 PM
KenB (Administrator, Pittsburgh, PA, Join Date: Jan 2006, Posts: 468)

Depends on your needs. Chances are you don't need the entire bandwidth of 300 connections per rack, so one way to reduce cabling is to use in-rack switches and uplink traffic to some other aggregation point. In the old days, we called this multiplexing. With today's high-speed switches and virtual networking, you can achieve up to a 48X reduction in the number of cables in 1U of rack space.
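
To put rough numbers on that, here's a small sketch; the port and uplink counts below are only illustrative, not a recommendation:

Code:
# Compare home-running every server cable out of the rack with aggregating
# on an in-rack (top-of-rack) switch. All counts here are illustrative.
import math

server_cables = 300      # copper runs the servers in one rack would otherwise need
tor_ports = 48           # ports on a typical 1U top-of-rack switch
uplinks_per_switch = 2   # uplinks from each in-rack switch to the aggregation point

switches = math.ceil(server_cables / tor_ports)
uplinks = switches * uplinks_per_switch
print(f"{switches} in-rack switches: {uplinks} uplinks leave the rack instead of {server_cables} home runs")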

Using fiber instead of copper is a way to reduce cable bulk, so more high-speed connections can be managed in the same space. Media converters or native fiber NICs and HBAs are possibilities here.

Ken
#5 - 02-03-2009, 06:57 PM
Neoeclectic (Member, Join Date: Oct 2008, Posts: 85)

Unfortunately, a switch per rack isn't an option for me. That's what I pushed for, but it was rejected because of the cost. What I'm being given is a reduced number of switches, which I have to condense into a consolidation point that all of the racks will be cabled to.

Have you ever seen what I'm describing in practical application? I have yet to see that many cables per server rack, so I'm very hesitant to plan a deployment around that architecture.
#6 - 02-03-2009, 07:45 PM
KenB (Administrator, Pittsburgh, PA, Join Date: Jan 2006, Posts: 468)

No, I have never seen 250-300 cables inside a 24"x42" rack full of servers, and I doubt it could be done and remain manageable. Your situation appears to have too many constraints, and I think you (or someone) needs to look for ways to compromise.

Maybe other forum participants have more suggestions.


Ken
PS- I'm guessing this is a continuation of what you described in this thread: http://www.datacentertalk.com/datace...ng-layout.html
#7 - 02-03-2009, 07:55 PM
Neoeclectic (Member, Join Date: Oct 2008, Posts: 85)

Yep, it's more of a tangent of that discussion. Every time I come up with a design, they throw me a new curveball that basically nixes the design concept as a whole.

The virtualization aspect is a recent development, and quite honestly that design will not accommodate the cable bulk that's presumably possible. To make matters worse, I can't negotiate for more space to expand the consolidation points. I'm trying to move away from the MDF we have now because it's an unworkable behemoth full of its own problems. Virtualization could theoretically quadruple our density, so even the MDF wouldn't be able to accommodate that many ports in such a limited space.

I'm truly at a loss here.
#8 - 02-04-2009, 10:33 AM
Oigen (Member, Join Date: Feb 2009, Posts: 43)

I guess it will be pretty complicated to keep up with such a large number of cables. I'm actually hoping to see some sort of new technology that makes managing the clutter easier, if not gets rid of it altogether.
#9 - 02-04-2009, 07:20 PM
KenB (Administrator, Pittsburgh, PA, Join Date: Jan 2006, Posts: 468)

Neo,

Sounds like one thing you could use is a model to help communicate the details of the problem to everyone involved. Most IT people don't understand that there are many interrelated capacities within a data center. The uninformed person expects to be able to fill the server racks with whatever they can afford until the racks are full, disregarding things like power, cooling, weight, and cabling. Also, overall capacity means little if it can't be delivered where it's needed, so the distribution of these resources needs to be considered as well. It's not obvious to many people that space and money are only two of a data center's manageable (and exhaustible) resources. A good growth plan takes all of the variables into account, identifies the constraints, and proposes plans for replenishing resources when they run out.

In our case, to cut through the confusion when planning our latest data center upgrade, we stopped trying to enumerate all of the variables -- kW, BTUs, square feet, network ports, rack units, etc. -- at once. Instead, we devised a standard planning unit based on our "average" server. In our case this device uses 2 rack units, weighs up to 50 lbs, has two power connections, consumes 400 watts, has up to 6 network connections, and so on. Our baseline identified how many of these theoretical devices we currently had installed -- which gave people one number to quantify our current equipment portfolio -- and our growth projections revealed where the resource limits were in our infrastructure and when we would hit them, and let us discuss ways we might address the shortages. This was a very useful and informative model for everyone and allowed us to plan a major data center upgrade with very little frustration. Your standard building block will probably be different, but the concept may work for you, too.
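
Here's a minimal sketch of that kind of model in Python. The per-unit numbers are the "average server" figures above; the capacity numbers are made-up placeholders you would replace with your own rack or room limits:

Code:
# Standard-planning-unit capacity model. Per-unit figures come from the
# "average server" described above; the CAPACITY values are placeholders.

PLANNING_UNIT = {            # what one "average server" consumes
    "rack_units": 2,
    "weight_lbs": 50,
    "power_watts": 400,
    "power_connections": 2,
    "network_ports": 6,
}

CAPACITY = {                 # what one rack (or row, or room) can supply -- placeholders
    "rack_units": 42,
    "weight_lbs": 2000,
    "power_watts": 5000,
    "power_connections": 24,
    "network_ports": 96,
}

# How many planning units each resource could support; the smallest number
# is the binding constraint.
limits = {res: CAPACITY[res] // PLANNING_UNIT[res] for res in PLANNING_UNIT}
constraint = min(limits, key=limits.get)

for res, n in sorted(limits.items(), key=lambda kv: kv[1]):
    print(f"{res:>18}: room for {n} planning units")
print(f"Binding constraint: {constraint} ({limits[constraint]} units)")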


Ken
#10 - 02-05-2009, 01:36 PM
Zordani (Member, Join Date: Feb 2009, Posts: 36)

I don't understand. If we're talking about virtualization and thin servers, wouldn't wireless be a solution to this whole cabling problem? Why not go all wireless and be done with it?