Data Center, Colocation, Cloud Computing, Storage, Dedicated Servers Forums (http://www.datacentertalk.com/forum/index.php)
-   Network and Telecom Forum (http://www.datacentertalk.com/forum/forumdisplay.php?f=4)
-   -   Networking in relation to blades (http://www.datacentertalk.com/forum/showthread.php?t=12646)

cernst 06-23-2008 09:04 PM

Networking in relation to blades
 
My company is just now getting into HP's C7000 blade chassis. At least for now, they've decided to use pass-through modules on the chassis, which forces a 2:1 ratio of network connections to blades (redundant cabling). This decision was made to address a problem with sniffing packets on the Cisco modules that aggregate the network connections. With up to 16 blades per chassis and 64 to a rack, that's a lot of connections per rack.

Has anyone set up a network infrastructure for a blade farm? I'm looking to build one out for 20 racks' worth (long term) with up to 4 chassis per rack. I'd love it if there were some 10Gig connection setup that could aggregate and still allow sniffing of the packets.
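
For reference, here's the rough back-of-the-envelope cabling math behind that worry (just a sketch using the numbers above; two pass-through connections per blade, four chassis per rack):

```python
# Rough cabling math for pass-through modules (sketch only; the numbers
# are the ones from this thread, adjust for your own setup).

BLADES_PER_CHASSIS = 16   # HP C7000, half-height blades
NICS_PER_BLADE = 2        # redundant cabling, 2:1 connections per blade
CHASSIS_PER_RACK = 4      # long-term plan mentioned above

cables_per_chassis = BLADES_PER_CHASSIS * NICS_PER_BLADE
cables_per_rack = cables_per_chassis * CHASSIS_PER_RACK

print(f"Copper cables per chassis: {cables_per_chassis}")                     # 32
print(f"Copper cables per rack:    {cables_per_rack}")                        # 128
print(f"Blades per rack:           {BLADES_PER_CHASSIS * CHASSIS_PER_RACK}")  # 64
```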

cernst 01-06-2009 09:20 PM

Just a bit of an update: we've got a pair of 7010 switches acting as distribution switches. We've now gotten into installing 3120 switch modules in the HP chassis. This gives a 10Gbit uplink per "side" of the chassis, and each chassis gets two for redundancy. When putting two chassis in a rack, we'll "stack" each "side" together so they act as one switch with a 20Gbit uplink for the 32 servers. I went from 66 copper cables all the way down to 6 cables (4 fiber and 2 copper). This is definitely the way to go. :) In terms of redundancy, I can lose a switch and all servers still have one network connection, or I can lose a fiber cable and only have a slower uplink on one "side".
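
To put numbers on the before/after (a sketch of how I read our own setup; what the 2 remaining copper runs are for is my assumption):

```python
# Before/after cable count and uplink math for the stacked 3120 design
# (sketch; figures are from the post above, interpretation is mine).

BLADES_PER_CHASSIS = 16
CHASSIS_PER_RACK = 2      # scaled back to 2 chassis per rack
NICS_PER_BLADE = 2

# Pass-through design: every blade NIC needs its own copper run.
passthrough_copper = BLADES_PER_CHASSIS * CHASSIS_PER_RACK * NICS_PER_BLADE  # 64
# (the post counts 66 copper total, i.e. a couple of extra copper runs on top)

# Stacked 3120 design: two "sides", each side stacked across both chassis
# into one logical switch with a 2 x 10Gbit fiber uplink to the 7010 pair.
sides = 2
fiber_uplinks_per_side = 2
fiber_cables = sides * fiber_uplinks_per_side        # 4
uplink_per_side_gbit = fiber_uplinks_per_side * 10   # 20 Gbit per side

print(f"Copper runs with pass-through:   ~{passthrough_copper}+")
print(f"Fiber uplinks with stacked 3120s: {fiber_cables} "
      f"({uplink_per_side_gbit} Gbit per side)")
# Plus 2 copper cables in the new design per the post (likely management,
# but that's my assumption), for 6 cables total vs. 66.
```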

On a side note, we are scaling back to 2 chassis per rack, as there's no benefit to loading up that densely in our situation.

JH GLCOMP 01-11-2009 05:18 PM

I'm a big fan of the c-Class blades. I'm curious whether you looked at the Ethernet Virtual Connect modules as well; they also provide 10Gb uplinks. Did you find that the Cisco 3120s suited your needs better than the VCs?

Jared Hoving
Great Lakes Computer

cernst 01-12-2009 06:25 PM

I've been talking to the Windows admins who do a lot of the configuration. The 10Gig VC does have one feature over the 3120 that they would really like: the ability to virtually split the connection to each blade 4 ways. Our current VMware servers have 2 pairs of teamed connections: a host connection and a VMotion connection. So currently, we need a total of 4 modules (2 redundant pairs) to accomplish this. With the VC card, we could do it all with two modules and just split one of the pairs two ways each. Each blade would then see 6 connections.
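
If it helps to picture it, here's one way that carve-up could look on a pair of 10Gb VC (Flex-10 style) modules. This is purely a sketch of how I read the admins' request; the role names and bandwidth splits below are hypothetical, not our actual config. Each physical 10Gb port can be split into a handful of logical NICs, so two physical ports can present the six connections mentioned above.

```python
# Hypothetical FlexNIC layout for one blade with two 10Gb VC modules
# (sketch only; names and per-NIC bandwidths are made up for illustration).

flexnic_layout = {
    "port_A (module 1, 10Gb)": [
        ("host_a",       4.0),  # unsplit team: host connection, side A
        ("vmotion_a",    2.0),  # split pair, first carve on side A
        ("vm_traffic_a", 4.0),  # split pair, second carve on side A
    ],
    "port_B (module 2, 10Gb)": [
        ("host_b",       4.0),
        ("vmotion_b",    2.0),
        ("vm_traffic_b", 4.0),
    ],
}

for port, nics in flexnic_layout.items():
    total = sum(bw for _, bw in nics)
    assert total <= 10.0, f"{port} is oversubscribed"
    print(f"{port}: {len(nics)} logical NICs, {total} Gb allocated")

total_nics = sum(len(nics) for nics in flexnic_layout.values())
print(f"Logical NICs seen by the blade: {total_nics}")  # 6
```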

One thing that I haven't heard mentioned yet, and which was the driving force behind the 3120, is the ability for the networking team to sniff packets.

I only hear these conversations in passing, so I don't know much beyond that.

