Data Center Networking Guidelines

The demand for application availability has changed how applications are hosted in today’s data centers. Evolutionary changes have occurred throughout the various elements of the data center, starting with server and storage virtualization and continuing with network virtualization.

The common design goals of data centers include:

  • Performance
  • Scalability and agility
  • Flexibility to support various services
  • Security
  • Redundancy/High availability
  • Manageability
  • Lower capital and operational expenses
  • Long term viability

This document establishes a set of guidelines from which a solution can be derived according to an organization’s requirements. Additionally, the design architecture emphasizes standards-based criteria without compromising critical functionality.

Data center LANs are continually evolving. Business necessities are forcing IT organizations to adopt new application delivery models. Edge computing models are transitioning from applications at the edge to virtualized desktops in the data center. The evolution of the data center from centralized servers to a private cloud is well underway and will be improved by hybrid and public cloud computing services.

With data center traffic becoming less client-server and more server-server centric, new data center topologies are emerging. Yesterday’s heavily segmented data center is becoming less physically segmented and more virtually segmented. Virtual segmentation allows for the reduction of physical equipment, leading to both capital and operational expense savings.

Connectivity solutions provide the ability to collapse the traditional 3-tier network into a physical 2-tier network by virtualizing the routing and switching functions into a single tier. Virtualized routing provides greater resiliency and reduces the number of switches dedicated solely to interconnecting other switches. Reducing the number of uplinks (switch hops) in the data center improves application performance by reducing latency throughout the fabric.
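The latency argument above can be sketched with a simple per-hop model. The microsecond figures and the hop counts below are assumptions for illustration, not vendor measurements:

```python
# Hypothetical per-hop latency model: illustrates why collapsing a 3-tier
# fabric (access -> aggregation -> core) into 2 tiers reduces east-west
# latency. All numbers are assumptions, not vendor data.

PER_SWITCH_HOP_US = 5.0   # assumed store-and-forward latency per switch hop
LINK_US = 0.5             # assumed serialization/propagation cost per link

def east_west_latency_us(switch_hops: int) -> float:
    """Latency for server-to-server traffic crossing `switch_hops` switches."""
    links = switch_hops + 1  # one link on each side of every switch in the path
    return switch_hops * PER_SWITCH_HOP_US + links * LINK_US

# Worst-case 3-tier path: access -> agg -> core -> agg -> access = 5 hops
three_tier = east_west_latency_us(5)
# Collapsed 2-tier path: access -> core -> access = 3 hops
two_tier = east_west_latency_us(3)

print(f"3-tier worst case: {three_tier:.1f} us")  # 28.0 us
print(f"2-tier worst case: {two_tier:.1f} us")    # 17.0 us
```

Whatever the actual per-hop figures are for a given fabric, the savings scale with the number of switch hops removed from the path.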

Data center components, including the network design infrastructure, are integral to the planning of a data center facility. A data center is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant power supplies, data communications connections, environmental controls (e.g., air conditioning, fire suppression, etc.) and security devices. Some of the important elements of networking in data centers are:

Servers:

Servers deployed in the data center today are either fully featured rack-mount servers or blade servers. A blade server is a stripped-down server with a modular design optimized to minimize the use of physical space and energy. Whereas a standard server can function with (at least) a power cord and network cable, blade servers have many components removed to save space and minimize power consumption, while still retaining all the functional components of a computer. A blade enclosure, which can hold multiple blade servers, provides shared services such as power, cooling, networking, various interconnects and management. Together, the blades and the blade enclosure form the blade system. There are pros and cons to each server type.

Virtualization has introduced the ability to create dynamic data centers, with the added benefit of “green IT.” Server virtualization can provide better reliability and higher availability in the event of hardware failure. It also allows higher utilization of hardware resources while simplifying administration through a single management interface for all virtual servers.
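The higher-utilization claim comes down to simple consolidation arithmetic. The utilization figures below are assumptions chosen for illustration, not measurements:

```python
import math

# Illustrative consolidation math behind the "higher utilization" claim
# of server virtualization. All numbers are assumptions.

physical_servers = 20
avg_utilization = 0.10          # assumed: each dedicated server runs ~10% busy
host_target_utilization = 0.70  # assumed safe ceiling for a virtualization host

# Total workload expressed in whole-server equivalents
total_load = physical_servers * avg_utilization
hosts_needed = math.ceil(total_load / host_target_utilization)

print(f"{physical_servers} lightly loaded servers -> {hosts_needed} virtualization hosts")
```

Under these assumed numbers, twenty lightly loaded machines consolidate onto three hosts; real sizing must also account for memory, I/O and failover headroom.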

Storage:

Storage requirements vary by server type; application servers require much less storage than database servers. There are several storage options: Direct Attached Storage (DAS), Network Attached Storage (NAS), and Storage Area Network (SAN). Applications that require large amounts of storage should be SAN-attached using Fibre Channel or iSCSI. In the past, Fibre Channel offered better reliability and performance but required highly skilled SAN administrators. Dynamic data centers that leverage server virtualization with Fibre Channel-attached storage will require the introduction of a new standard, Fibre Channel over Ethernet (FCoE). FCoE requires LAN switch upgrades because of its underlying requirements, including the not-yet-ratified Data Center Bridging Ethernet standards. FCoE is also non-routable, so it can cause issues for disaster recovery and large-scale geographic redundancy that L2 connectivity cannot yet achieve.

iSCSI, on the other hand, is becoming more attractive as support for faster speeds and improved reliability matures. It offers more flexibility and a more cost-effective solution by leveraging existing network components (NICs, switches, etc.). On top of that, Fibre Channel switches typically cost 50% more than Ethernet switches. Overall, iSCSI is easier to manage than Fibre Channel, since most IT personnel are already familiar with managing IP networks.
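The cost argument for iSCSI can be made concrete with a rough port-cost comparison. The 50% premium comes from the text; the baseline Ethernet per-port price and the port count are assumptions for illustration only:

```python
# Rough fabric-cost comparison motivated by the text's claim that Fibre
# Channel switches typically cost ~50% more than Ethernet switches.
# The baseline per-port price is an assumed figure in arbitrary units.

ETHERNET_PORT_COST = 100.0  # assumed baseline per-port cost
FC_PREMIUM = 0.50           # from the text: FC ~50% more expensive
fc_port_cost = ETHERNET_PORT_COST * (1 + FC_PREMIUM)

def fabric_cost(ports: int, per_port: float) -> float:
    """Total switch-port cost for a storage fabric of the given size."""
    return ports * per_port

ports = 48  # assumed fabric size
iscsi_total = fabric_cost(ports, ETHERNET_PORT_COST)
fc_total = fabric_cost(ports, fc_port_cost)

print(f"iSCSI (Ethernet) fabric: {iscsi_total:.0f}")
print(f"Fibre Channel fabric:    {fc_total:.0f}")
```

The gap widens further when an iSCSI deployment reuses existing Ethernet switches and NICs rather than buying a dedicated fabric.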

Connectivity:

The networking component provides connectivity to the data center, for example, L2/L3 switches and WAN routers. Motivated by server virtualization, data center connectivity design is moving towards network virtualization.
