  #1  
Old 06-23-2008, 09:04 PM
cernst
Member
 
Join Date: Feb 2007
Posts: 70
Networking in relation to blades

My company is just now getting into HP's C7000 blade chassis. At least for now, they've decided to use pass-through modules on the chassis, which works out to two network connections per blade (redundant cabling) with no aggregation. That decision was made to get around a problem with sniffing packets on the Cisco modules that aggregate the network connections. With up to 16 blades per chassis and 64 per rack, that's a lot of cables per rack.

Has anyone set up a network infrastructure for a blade farm? I'm looking to build one out for 20 racks' worth (long term) with up to 4 chassis per rack. I'd love it if there were some 10Gig setup that could aggregate the connections and still allow sniffing of the packets.
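
Rough numbers on the cabling, just to frame the problem. This is a quick sketch, not a design: the per-blade and per-rack counts come from the figures above, and the "two 10Gig uplinks per chassis" case for the aggregated option is my assumption.

[code]
# Rough cable-count estimate: pass-through modules vs. aggregated uplinks.
# Counts are assumptions pulled from the figures in this thread.

BLADES_PER_CHASSIS = 16
NICS_PER_BLADE = 2          # redundant cabling with pass-through modules
CHASSIS_PER_RACK = 4
RACKS = 20

# Pass-through: every blade NIC becomes its own copper run out of the rack.
passthrough_per_rack = BLADES_PER_CHASSIS * NICS_PER_BLADE * CHASSIS_PER_RACK
passthrough_total = passthrough_per_rack * RACKS

# Aggregated (assumed): two 10Gig uplinks per chassis, one per redundant "side".
UPLINKS_PER_CHASSIS = 2
aggregated_per_rack = UPLINKS_PER_CHASSIS * CHASSIS_PER_RACK
aggregated_total = aggregated_per_rack * RACKS

print(f"Pass-through: {passthrough_per_rack} cables/rack, {passthrough_total} total")
print(f"Aggregated:   {aggregated_per_rack} cables/rack, {aggregated_total} total")
[/code]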
__________________
YRCW Technologies
Kansas City
  #2  
Old 01-06-2009, 09:20 PM
cernst
Member
 
Join Date: Feb 2007
Posts: 70

Just a bit of an update: we've got a pair of 7010 switches acting as distribution switches, and we've now started installing 3120 switch modules in the HP chassis. That gives a 10Gbit uplink per "side" of the chassis, and each chassis gets two sides for redundancy. When putting two chassis in a rack, we "stack" each "side" together so it acts as one switch with a 20Gbit uplink for the 32 servers. I went from 66 copper cables all the way down to 6 cables (4 fiber and 2 copper). This is definitely the way to go. In terms of redundancy, I can lose a switch and all servers still have one network connection, or lose a fiber cable and only have a slower uplink on one "side".
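
For anyone following along, here's roughly how the per-rack math works out. Sketch only: the topology is as described above (2 chassis per rack, one 3120 per chassis per "side", one 10Gbit fiber per 3120), and the single-fiber-failure case is just an illustration.

[code]
# Sanity check on the stacked-uplink numbers (assumed topology per the post above).

CHASSIS_PER_RACK = 2
BLADES_PER_CHASSIS = 16
FIBER_GBIT = 10

sides = {"A": CHASSIS_PER_RACK, "B": CHASSIS_PER_RACK}  # fibers per stacked side

def uplink_gbit(fibers_up):
    # Each stacked side acts as one logical switch; its uplink is the sum of its fibers.
    return fibers_up * FIBER_GBIT

print("servers per rack:", CHASSIS_PER_RACK * BLADES_PER_CHASSIS)                 # 32
print("healthy uplink per side:", {s: uplink_gbit(n) for s, n in sides.items()})  # 20G each

# Lose one fiber on side A: every blade still has its side-B connection,
# but side A runs at half the uplink until the fiber is replaced.
sides["A"] -= 1
print("after one fiber failure:", {s: uplink_gbit(n) for s, n in sides.items()})  # A: 10G, B: 20G
[/code]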

On a side note, we're scaling back to 2 chassis per rack, as there's no benefit to loading up that densely in our situation.
__________________
YRCW Technologies
Kansas City
  #3  
Old 01-11-2009, 05:18 PM
JH GLCOMP
Junior Member
 
Join Date: Oct 2008
Location: Grand Rapids, MI
Posts: 11

I'm a big fan of the c-Class blades. I'm curious whether you looked at the Ethernet Virtual Connect modules as well; they also provide 10Gb uplinks. Did you find the Cisco 3120s better suited to your needs than the VCs?

Jared Hoving
Great Lakes Computer
  #4  
Old 01-12-2009, 06:25 PM
cernst
Member
 
Join Date: Feb 2007
Posts: 70

I've been talking to the Windows admins, who do a lot of the configuration. The 10Gig VC does have one feature over the 3120 that they really would like: the ability to virtually split the connection to each blade four ways. Our current VMware servers have two pairs of teamed connections, a single host connection, and a VMotion connection, so right now we need a total of 4 modules (2 redundant pairs) to accomplish this. With the VC modules we could do it all with two, just splitting one of the pairs two ways on each side, and each blade would then see 6 connections.
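
To make the split concrete, here's a rough sketch of how two 10Gig VC ports might be carved up to arrive at those 6 connections. The role names and per-NIC bandwidth numbers are made up purely for illustration; this isn't our actual config.

[code]
# Illustration only: carving each of two 10Gig VC ports into virtual NICs.
# Role names and bandwidth splits below are assumptions for the example.

PORT_GBIT = 10
MAX_SPLITS_PER_PORT = 4

# Two physical VC ports per blade (one per module), each split into virtual NICs.
layout = {
    "port-A": {"vm-team-a": 5, "host-a": 3, "vmotion-a": 2},
    "port-B": {"vm-team-b": 5, "host-b": 3, "vmotion-b": 2},
}

for port, nics in layout.items():
    assert len(nics) <= MAX_SPLITS_PER_PORT, f"{port}: too many splits"
    assert sum(nics.values()) <= PORT_GBIT, f"{port}: oversubscribed"

total_nics = sum(len(nics) for nics in layout.values())
print(f"Each blade would see {total_nics} connections from just 2 modules")  # 6
[/code]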

One thing I haven't heard about the VCs yet, and it was the driving force behind choosing the 3120s, is whether they give the networking team the ability to sniff packets.

I only hear these conversations in passing...so I don't know much past that.
__________________
YRCW Technologies
Kansas City