Data Center Network Architectures and Research Problems

Data centers have progressively become an essential part of Internet services and networking. This places key demands on current data center network architectures: support for cloud computing, competitiveness, scalability, and efficiency all pose challenging problems from the network architect's perspective. As in other fields, ongoing research is essential to keep data centers running smoothly, and research projects deserve close attention if improvement is to come quickly. Agility is the key: the more agile a data center network is, the more efficiently money and resources can be deployed. The research process itself faces a whole gamut of challenges. The major ones include formulating ideas, setting out detailed designs to code up and implement, and bringing together all the equipment needed to run the experiments and make them real.

There are several research problems in data center networking. Some of them are listed below:

Cost:

It is essential to understand the cost structure of a data center. Several components consume the budget, including servers, infrastructure, electrical utility costs, and lastly the network (links, transit, and equipment). Power-related expenses are comparable in size to network costs. Of each watt delivered, IT equipment consumes about 59%, roughly 8% is lost in power distribution, and around 33% goes to cooling. Cooling costs could be brought down by permitting data centers to run hotter, which may require the network to be more flexible in nature. An important fraction of network-related costs is spent on networking equipment; the remaining fraction relates to wide-area networking, which includes traffic to end users, traffic between data centers, and regional facilities.
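As a rough illustration, the sketch below applies that per-watt split to a hypothetical facility. The total power draw and utility rate are invented assumptions for illustration, not figures from this article.

```python
# Rough power-cost breakdown sketch. The split (59% IT load, 8%
# distribution losses, 33% cooling) comes from the figures above; the
# facility draw and utility rate are hypothetical assumptions.

FACILITY_KW = 10_000           # assumed total facility draw (kW)
RATE_PER_KWH = 0.07            # assumed utility rate ($/kWh)
HOURS_PER_YEAR = 24 * 365

SPLIT = {"IT equipment": 0.59, "distribution losses": 0.08, "cooling": 0.33}

annual_cost = FACILITY_KW * HOURS_PER_YEAR * RATE_PER_KWH
for component, share in SPLIT.items():
    print(f"{component:>20}: ${share * annual_cost:,.0f}/year")

# Running hotter trims the cooling slice: e.g. cutting it by a quarter.
cooling_savings = 0.25 * SPLIT["cooling"] * annual_cost
print(f"25% cooling reduction saves ~${cooling_savings:,.0f}/year")
```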

Cloud Services:

Data centers supporting cloud services differ from typical enterprise data centers. Cloud service data centers require full automation, unlike enterprise data centers, where automation is often only partial. Cloud service data centers also exploit large economies of scale: scaling out distributes the workload across many low-cost servers, in contrast to scaling up by upgrading expensive high-end hardware. Enterprise networking architectures were initially developed for much smaller data centers than the ones in operation today, and the limitations of the conventional architecture have resulted in quite a few workarounds and patches for the protocols to keep up with the new demands on data centers.
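To make the scale-out economics concrete, here is a minimal sketch comparing the two approaches. Every price and capacity figure is a hypothetical assumption chosen only for illustration.

```python
# Scale-out vs. scale-up cost sketch. All prices and capacities are
# hypothetical assumptions chosen only to illustrate the economics.

TARGET_QPS = 400_000                          # workload to serve (queries/sec)

commodity = {"price": 2_000, "qps": 5_000}    # low-cost server (assumed)
highend   = {"price": 60_000, "qps": 80_000}  # high-end server (assumed)

def cost_to_serve(server, target_qps):
    """Servers needed (rounded up) times unit price."""
    count = -(-target_qps // server["qps"])   # ceiling division
    return count, count * server["price"]

for name, server in [("scale-out", commodity), ("scale-up", highend)]:
    count, cost = cost_to_serve(server, TARGET_QPS)
    print(f"{name}: {count} servers, ${cost:,}")
```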

Oversubscription of resources and fragmentation:

The oversubscription ratio is the ratio of the bandwidth that servers could demand to the bandwidth the network actually provisions between them. Limited server-to-server capacity caps the data center's usable capacity and fragments the server pool, because idle resources cannot be allotted where they are required. To avoid this problem, every application would have to be placed carefully, taking the impact of its traffic into consideration; in practice, this is challenging. Limited server-to-server capacity also leads designers to cluster servers close to one another in the hierarchy, because distance in the hierarchy influences the performance and cost of their communication.
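A minimal sketch of how the oversubscription ratio can be computed for a simple tree topology follows. The port counts and link speeds are hypothetical assumptions.

```python
# Oversubscription-ratio sketch for a simple tree topology. The port
# counts and link speeds below are hypothetical assumptions.

def oversubscription(host_ports, host_gbps, uplink_ports, uplink_gbps):
    """Worst-case demand from below divided by capacity toward the core."""
    demand = host_ports * host_gbps
    capacity = uplink_ports * uplink_gbps
    return demand / capacity

# A top-of-rack switch: 48 servers at 1 Gbps, 4 uplinks at 10 Gbps.
tor = oversubscription(48, 1, 4, 10)
print(f"ToR layer: {tor:.1f}:1")          # 1.2:1 -- close to full capacity

# An aggregation switch: 20 ToR links at 10 Gbps, 2 core links at 40 Gbps.
agg = oversubscription(20, 10, 2, 40)
print(f"Aggregation layer: {agg:.1f}:1")  # 2.5:1 -- servers see 1/2.5 of line rate
```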

Reliability, utilization and fault tolerance:

Data centers suffer from poor reliability and utilization. If some component of the data center fails, there must be some means to keep the data center working. Typically, data centers deploy redundant counterpart elements: when an access router fails, for example, its counterpart takes over the load. However, this means each element runs at only 50% of its maximum capacity. Current data center networks also fail to make effective use of the multiple paths available.

A vast majority of data centers use TCP (Transmission Control Protocol) for communication among their nodes, and TCP suffers from the incast problem. Incast occurs in a many-to-one communication pattern, which differs from the assumptions on which TCP's design is based. In simpler words, TCP is unsuitable for the special data center environment of low latencies and high bandwidths, and this limits the optimal use of capacity. In an incast scenario, a receiver requests data from multiple senders. Upon receiving the request, the senders all start transmitting to the receiver simultaneously. The bottleneck link in front of the receiver overflows, and the rate at which the receiver actually gets data collapses; the result is congestion on that shared bottleneck link. Increasing the buffer sizes of switches and routers delays congestion, but in a low-latency, high-bandwidth data center environment the buffers can still fill up in a short time. In addition, large-buffer switches and routers are costly.
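The following sketch illustrates the incast burst in miniature: many synchronized senders overflow one shared bottleneck buffer. The buffer, drain, and window sizes are hypothetical assumptions, and the model deliberately ignores TCP retransmission behavior.

```python
# TCP-incast sketch: many synchronized senders share one bottleneck
# buffer in front of the receiver. Sizes are hypothetical assumptions;
# this models only the initial burst, not TCP's recovery behavior.

BUFFER_PKTS = 128   # assumed switch buffer, in packets
DRAIN_PKTS = 10     # packets the bottleneck link drains during the burst
BURST_PKTS = 16     # packets each sender emits in its first window

def first_burst_drops(num_senders):
    """Packets dropped when all senders' first windows arrive at once."""
    arriving = num_senders * BURST_PKTS
    # The buffer absorbs BUFFER_PKTS and the link drains DRAIN_PKTS;
    # everything beyond that is dropped, stalling those senders.
    return max(0, arriving - BUFFER_PKTS - DRAIN_PKTS)

for n in (4, 8, 16, 32):
    print(f"{n:2d} senders: {first_burst_drops(n):3d} packets dropped")
```

Small sender counts fit in the buffer, but drops grow sharply with the fan-in, which is why bigger buffers only postpone, rather than prevent, the collapse.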
