Load Balancing in the Data Center

Balancing the data center power load begins with understanding how much additional power capacity is actually available in the facility. Load balancing is a computer networking technique for distributing workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources. It can achieve the best possible resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, can also increase reliability through redundancy. The load balancing service is typically provided by dedicated software or hardware, such as a multilayer switch or a Domain Name System server.

Pete Sacco, president of PTS Data Center Solutions, put it this way: “If I have loads of 186, 52 and 31 amps on my phases, I know that I can take loads off of A and spread them to B and C,” he said. “I’ve actually done nothing to deliver more power. I’ve just load-balanced it.” Sacco added that what often happens instead is that a contractor would rather put in more capacity than balance the loads; the problem comes when there is no more capacity left to add.

By using server load balancing, organizations can reduce the strain placed on individual servers and improve the performance of data and application retrieval within their networks. Distributing the work that a single server would otherwise perform alone improves application retrieval times and overall network performance. Server load balancing is a procedure in which the overall workload of a network is evenly distributed across selected servers within a data center.
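Sacco’s phase arithmetic can be sketched in a few lines of Python. This is a minimal illustration of the idea, not a tool from the article; the function name and the sign convention for the moves are assumptions.

```python
# Illustrative sketch of the phase-balancing arithmetic quoted above:
# loads of 186, 52 and 31 amps on phases A, B and C are redistributed
# evenly, without adding any new capacity.

def balance_phases(loads):
    """Return the per-phase target if the total load were spread evenly,
    plus how far each phase sits from that target."""
    total = sum(loads.values())
    target = total / len(loads)
    # positive = amps this phase should shed; negative = amps it can absorb
    moves = {phase: round(amps - target, 1) for phase, amps in loads.items()}
    return target, moves

target, moves = balance_phases({"A": 186, "B": 52, "C": 31})
print(target)  # ~89.7 amps per phase
print(moves)   # phase A sheds ~96 amps; B and C pick up the difference
```

The total delivered power never changes; only its distribution across the three phases does, which is exactly the point of the quote.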

The promising cloud computing paradigm has motivated the creation of data centers that include thousands of servers and are capable of supporting a large number of diverse services. Because the cost of installing and maintaining a data center is extremely high, achieving high utilization is essential. Therefore, agility, meaning the ability to allocate any service to any server, is key to a successful data center deployment.

Whether a business simply uses a centralized network for local application delivery, or a wide area network for global distribution, balancing user traffic is a significant factor in transaction performance. Load balancing within the data center is therefore a recommended practice.
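The server load balancing described above can be sketched as a simple dispatcher that rotates requests across a pool. The round-robin policy and server names here are illustrative assumptions; real load balancers typically add health checks and weighting.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal sketch: dispatch requests across a pool so that no single
    server absorbs the whole workload."""

    def __init__(self, servers):
        self._pool = cycle(servers)

    def pick(self):
        # Each call hands back the next server in rotation.
        return next(self._pool)

lb = RoundRobinBalancer(["srv-1", "srv-2", "srv-3"])
assignments = [lb.pick() for _ in range(6)]
print(assignments)  # each server receives two of the six requests
```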

Several networking concerns in data center networks must be examined in order to support agility. The network should provide layer-2 semantics so that any server can be assigned to any service. It should also supply uniform high capacity between any pair of servers and sustain high-bandwidth server-to-server traffic. A number of proposals that can meet these design goals have been published in recent times. All of them use scale-out topologies, such as fat-tree networks, which offer high bandwidth among servers. Although fat-tree topologies have high bisection bandwidth and can in theory support high bandwidth between any pair of servers, load balancing is still necessary for a fat-tree topology to achieve good performance. Balancing network traffic in a fat-tree based data center network is challenging because data center traffic matrices are highly variable and unpredictable. The existing solution addresses this problem through randomization, using Valiant Load Balancing (VLB), which spreads traffic across intermediate switches at random, independent of the destination.

VLB can achieve near-optimal performance under two conditions:

  1. The traffic is spread uniformly at the packet level.
  2. The offered traffic patterns do not violate edge constraints.
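The core mechanism of VLB, choosing a random intermediate switch for traffic regardless of destination, can be sketched as follows. At the flow level this is done by hashing the flow identifier, so that all packets of one flow take the same path. The five-tuple fields, switch names, and hash choice below are illustrative assumptions.

```python
import hashlib

# Sketch of flow-level Valiant Load Balancing: each flow is bounced off a
# pseudo-randomly chosen intermediate (core) switch, independent of where
# the flow is headed. Hashing the flow's five-tuple keeps every packet of
# the flow on the same path.

CORE_SWITCHES = ["core-0", "core-1", "core-2", "core-3"]

def pick_intermediate(flow_five_tuple):
    """Map a flow identifier to an intermediate switch deterministically."""
    key = "|".join(map(str, flow_five_tuple)).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return CORE_SWITCHES[digest % len(CORE_SWITCHES)]

flow = ("10.0.1.5", 40123, "10.0.2.9", 80, "tcp")
print(pick_intermediate(flow))  # the same flow always maps to the same switch
```

Because the hash is deterministic per flow but effectively random across flows, the aggregate traffic is spread over the intermediate switches without any of them needing to know the traffic matrix.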

With these conditions in mind, packet-level VLB is ideal for balancing network traffic and has been shown to have many desirable load balancing properties. Nevertheless, even though the fat-tree topology is well suited to VLB, its adoption in data center networks faces a limitation. Specifically, to avoid the out-of-order packet problem in a data center network carrying TCP/IP traffic, VLB can only be applied at the flow level. It is unclear whether flow-level VLB can attain performance comparable to packet-level VLB, or whether flow-level VLB handles traffic variability better than load balancing schemes that exploit network state information. Experiments show that flow-level VLB can be considerably worse than packet-level VLB for non-uniform traffic in data center networks. Notably, there are two alternative load balancing schemes that use instantaneous network state information to achieve load balancing. Recent experimental results indicate that these techniques can improve network performance over flow-level VLB in many situations.
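A toy simulation illustrates why flow-level VLB can trail packet-level VLB under non-uniform traffic: one large “elephant” flow pins all of its packets to a single path, while packet-level spraying dilutes it. The link count, flow sizes, and the round-robin stand-ins for random spraying below are all illustrative assumptions.

```python
from collections import Counter

LINKS = 4
flows = [1000] + [10] * 20  # packets per flow: one elephant, twenty mice

def packet_level(flows):
    """Spray every individual packet across the links (round-robin stands
    in for uniform random spraying)."""
    load = Counter()
    pkt = 0
    for size in flows:
        for _ in range(size):
            load[pkt % LINKS] += 1
            pkt += 1
    return load

def flow_level(flows):
    """Pin each whole flow to one link (round-robin over flows stands in
    for a per-flow hash)."""
    load = Counter()
    for i, size in enumerate(flows):
        load[i % LINKS] += size
    return load

print(max(packet_level(flows).values()))  # 300: 1200 packets spread evenly
print(max(flow_level(flows).values()))    # 1050: the elephant's link is hot
```

Both schemes carry the same 1,200 packets, but flow-level placement leaves one link carrying 1,050 of them, which is the non-uniform-traffic weakness the experiments above point to.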

Data Center Talk updates its resources every day. Visit us to know of the latest technology and standards from the data center world.
Please leave your views and comments on DCT Forum
