Determining and Managing Critical Load and Heat Load in the Data Center

Sizing the electrical service for a data center or data room requires an understanding of the amount of electricity required by the cooling system, the UPS system, and the critical IT loads.  The power requirements of these elements may vary greatly from one another, but each can be estimated using simple rules once the power requirements of the planned IT load are determined.  Beyond sizing the electrical service, these estimates can also be used to size the power output capacity of a standby generator system, if one is required for the data center loads.

A proper planning exercise in developing a data center, from a single-rack environment to a full-scale data center, begins with determining the size of the critical load that must be served and protected.  The critical load comprises all of the IT hardware components that make up the IT business architecture: servers, routers, computers, storage devices, telecommunications equipment, etc., as well as the security, fire, and monitoring systems that protect them.  The process of determining critical load begins with a list of all such devices, with their nameplate power ratings, their voltage requirements, and whether they are single-phase or three-phase devices.  The nameplate information must then be adjusted to reflect the true anticipated load.

Determining the critical heat load starts with identifying the equipment to be deployed within the space. However, this is only part of the total heat load of the environment: the lighting, people, and heat conducted from surrounding spaces also contribute. As a very general principle, estimate no less than 1 ton of cooling (12,000 BTU/hr, or 3,516 watts) per 400 square feet of IT equipment floor space.
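The rule of thumb above can be sketched as a small calculation. This is a minimal illustration using only the figures quoted in the text; the function name and return structure are illustrative, not an industry-standard API.

```python
def minimum_cooling_estimate(floor_area_sqft):
    """Rule-of-thumb cooling floor: at least 1 ton per 400 sq ft
    of IT equipment floor space, per the guideline above."""
    TON_BTU_HR = 12_000   # 1 ton of cooling expressed in BTU/hr
    TON_WATTS = 3_516     # the same ton expressed in watts
    tons = floor_area_sqft / 400.0
    return {
        "tons": tons,
        "btu_per_hr": tons * TON_BTU_HR,
        "watts": tons * TON_WATTS,
    }

# A 2,000 sq ft room needs at least 5 tons (60,000 BTU/hr, ~17.6 kW).
print(minimum_cooling_estimate(2000))
```

Remember this is a floor, not a design value; equipment heat load, lighting, people, and conduction from adjacent spaces must still be added on top.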

The equipment heat load can be obtained by identifying the current requirement for each piece of equipment and multiplying it by the operating voltage (for single-phase equipment). The number derived is the maximum draw, or nameplate rating, of the equipment. In reality, equipment typically draws only 40% to 60% of its nameplate rating under steady-state operating conditions. For this reason, relying solely on the nameplate rating yields an inflated load requirement, and designing the cooling system to those parameters would be cost-prohibitive. An effort is underway for manufacturers to provide typical load ratings for their equipment to simplify power and cooling design.
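The nameplate calculation and the 40-60% derating described above can be expressed directly. The function names and the sample 4 A / 208 V server are illustrative assumptions, not figures from the text.

```python
def nameplate_watts(amps, volts):
    """Nameplate (maximum) draw for single-phase equipment:
    rated current multiplied by operating voltage."""
    return amps * volts

def steady_state_range(nameplate, low=0.40, high=0.60):
    """Typical steady-state draw: 40-60% of nameplate,
    per the derating discussed above."""
    return nameplate * low, nameplate * high

# A hypothetical server nameplated at 4 A on a 208 V feed:
plate = nameplate_watts(4, 208)      # 832 W maximum draw
lo, hi = steady_state_range(plate)   # roughly 333-499 W in steady state
```

Sizing cooling to `plate` rather than the `lo`-`hi` band is exactly the over-inflation the text warns against.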

Often, the equipment that will occupy a space has not been determined before cooling system design begins. In this case, the experience of the designer is vital. PTS maintains expert knowledge of the typical load profiles for various application and equipment deployments. For this reason, and to allow for future growth, it may be easier to define the load in terms of an anticipated standard for a given area; the traditional standard was a watts-per-square-foot figure.

The nameplate power requirements are the worst-case power consumption numbers required by Underwriters Laboratories and, in almost all cases, are well above the expected operating power level.  Studies conducted by reputable consulting engineering firms and power supply manufacturers indicate that the nameplate rating of most IT devices exceeds the actual running load by at least 33%.  The U.S. National Electrical Code (NEC) and similar worldwide regulatory bodies recognize this fact and allow electrical system planners to sum nameplate data for expected loads and multiply by a diversity factor, anticipating that not all devices run at full load 100% of the time. Power calculators gather consumption data from a wide range of manufacturers and can further account for specific equipment configurations.
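The sum-then-derate approach described above can be sketched as follows. The 0.67 default simply reflects the text's observation that nameplates exceed running load by at least 33%; it is an illustrative value, not an NEC-prescribed demand factor, and any real design should use the factors the applicable code permits.

```python
def service_estimate(nameplate_watts_list, diversity=0.67):
    """Sum nameplate ratings for all expected loads, then apply a
    diversity factor to reflect that not every device runs at full
    load simultaneously. The 0.67 default is illustrative only."""
    return sum(nameplate_watts_list) * diversity

# Three hypothetical devices nameplated at 500 W, 800 W, and 1,200 W:
# 2,500 W total nameplate -> about 1,675 W of expected running load.
print(service_estimate([500, 800, 1200]))
```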

Determining the electrical power required to support and cool the critical load within the data center is essential in planning for the development of a facility that will meet the end user’s availability expectations.  This helps specify the size of the data center physical infrastructure components needed to achieve the availability determined by the needs assessment.  Once the sizing determination is complete, conceptual and detailed planning can go forward with the assistance of a competent DCPI systems supplier or, in the case of larger-scale data centers, a consulting engineer.

Data Center Talk updates its resources every day. Visit us to know of the latest technology and standards from the data center world.
Please leave your views and comments on DCT Forum

Data Center Network Architectures and Research Problems

Data centers have progressively become an essential part of Internet services and networking, which places key demands on current data center network architecture.  Requirements such as support for cloud computing, scalability, and efficiency pose interesting challenges from the network architect's perspective. As in other fields, research is essential to keep data centers running smoothly, and research projects deserve close attention if improvement is to be rapid. Agility is the key: the more agile a data center network is, the more efficiently money and resources can be deployed. The research process presents a whole gamut of challenges, chief among them formulating ideas, setting out detailed designs to code up and implement, and bringing together all the equipment needed to run realistic experiments.

Several research problems arise in data centers. Some of them are listed below:


It is essential to understand the cost structure of a data center. Several components consume the budget, including servers, infrastructure, electrical utility costs, and the network (links, transit, equipment). Power-related expenses are comparable in scale to network expenses: of each watt delivered, IT devices consume 59%, 8% is lost in delivery, and 33% goes to cooling. Cooling costs could be brought down by allowing data centers to run hotter, which may require the network to be more flexible. A significant fraction of network-related cost goes to networking equipment; the remainder relates to wide-area networking, which covers traffic to end users, traffic between data centers, and regional services.
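The 59% / 8% / 33% split quoted above is simple arithmetic, sketched here for concreteness. The function name is illustrative, and the percentages are taken as given by the text.

```python
def power_breakdown(total_watts):
    """Split delivered utility power using the proportions quoted above:
    59% reaches IT devices, 8% is lost in delivery, 33% goes to cooling."""
    return {
        "it_load": total_watts * 0.59,
        "delivery_losses": total_watts * 0.08,
        "cooling": total_watts * 0.33,
    }

# For every 100 kW drawn from the utility:
# 59 kW reaches IT equipment, 8 kW is lost, 33 kW drives cooling.
print(power_breakdown(100_000))
```

Note what the split implies: for each watt of useful IT work, roughly 0.7 W of additional utility power is consumed, which is why running hotter to cut the cooling share is attractive.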

Cloud Servicing:

Data centers supporting cloud services differ from typical enterprise data centers. Cloud service data centers require automation, whereas in enterprise data centers automation is only partial. Cloud service data centers also exploit large economies of scale: they scale out, distributing the workload across low-cost hardware, rather than scaling up by upgrading expensive hardware. Enterprise networking architectures were originally developed for much smaller data centers than those active today, and the limitations of the conventional architecture have resulted in numerous workarounds and patches to keep the protocols up with the new demands on data centers.

Oversubscription of resources and fragmentation:

The oversubscription ratio is the ratio of bandwidth demanded to bandwidth actually provisioned. Limited server-to-server capacity restricts data center capacity and fragments the server pool, because idle resources cannot be allocated where they are needed. To avoid this problem, applications must be placed carefully, taking the impact of their traffic into consideration; in practice, however, this is challenging. Limited server-to-server capacity also leads designers to cluster servers near one another in the network hierarchy, because distance in the hierarchy influences the performance and cost of communication.
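The oversubscription ratio just defined can be made concrete with a small sketch. The 40-server / 10 Gbps example is a hypothetical topology, not one from the text.

```python
def oversubscription_ratio(demanded_gbps, provisioned_gbps):
    """Ratio of worst-case demanded bandwidth to capacity actually
    provisioned. 1:1 means full bisection bandwidth; higher ratios mean
    hosts get only a fraction of their NIC speed across the hierarchy."""
    return demanded_gbps / provisioned_gbps

# 40 servers with 1 Gbps NICs behind a single 10 Gbps uplink:
# a 4:1 oversubscription, so under load each server can count on
# only ~250 Mbps when talking across the uplink.
ratio = oversubscription_ratio(40 * 1, 10)
```

It is exactly this penalty for crossing the hierarchy that pushes designers to cluster communicating servers together, fragmenting the pool.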

Reliability, utilization and fault tolerance:

Data centers suffer from poor reliability and utilization. If some component of the data center fails, there must be some means of keeping the data center working. Typically, data centers deploy redundant counterpart elements: when an access router fails, for example, its counterpart takes over the load. However, this means each element runs at only 50% of its maximum capacity, and multiple paths are not used effectively in current data center networks. The vast majority of data centers use TCP (Transmission Control Protocol) for communication between nodes, which gives rise to the incast problem. Incast occurs in many-to-one communication patterns, which differ from the assumptions underlying TCP's design; put simply, TCP is poorly suited to a data center environment of low latencies and high bandwidths, limiting the use of available capacity. In incast, a receiver requests data from multiple senders. Upon receiving the request, the senders all begin transmitting to the receiver simultaneously. The link to the receiver becomes a shared bottleneck, congestion sets in, and the receiver's throughput collapses. Increasing the buffer sizes of switches and routers delays congestion, but in a high-bandwidth data center environment the buffers can still fill up quickly, and large-buffer switches and routers are costly.
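The incast failure mode described above can be illustrated with a deliberately simple toy model: a synchronized burst from many senders arrives at one shallow-buffered bottleneck port, and everything beyond the buffer is dropped, triggering TCP timeouts. All numbers here are hypothetical, and real incast dynamics (retransmission timers, partial drains between arrivals) are far richer than this sketch.

```python
def incast_drops(num_senders, packets_each, buffer_pkts):
    """Toy incast model: all senders burst simultaneously into one
    bottleneck switch port. Packets beyond the buffer depth are dropped."""
    burst = num_senders * packets_each  # synchronized arrival
    return max(0, burst - buffer_pkts)

# 40 senders each bursting 8 packets into a 64-packet buffer:
# 320 packets arrive at once, 64 are queued, 256 are dropped,
# stalling the senders and collapsing the receiver's throughput.
drops = incast_drops(40, 8, 64)
```

Even the toy model shows why simply buying deeper buffers only postpones the problem: doubling the buffer to 128 packets here still drops 192 of the 320 packets.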


Rackwise Manages all Aspects of Your Data Center

Rackwise is a multi-layered software product that provides a series of solutions for managing multiple dimensions of a company’s IT infrastructure and data center(s). Using Rackwise allows companies to optimize their use of components such as power, cooling, space, servers, networks, cables, etc. Improved management of these resources delivers an improved return on investment from one of the most critical and most expensive corporate expenditures – your IT infrastructure.

Rackwise delivers four solution areas –

  • Data Center Essentials
  • Data Center Optimization
  • Data Center Intelligence
  • Data Center Business


101 California St. Suite 2450
San Francisco, CA 94111

Lee Technologies

Lee Technologies Inc. is a leading provider of complete data center solutions and resources that allow customers to focus on their core business. Founded in 1983, Lee Technologies designs, builds, operates, monitors and maintains business-critical facilities for some of the most data-reliant private and public sector organizations in the world, including Coca-Cola, JP Morgan, Northrop Grumman, the U.S. Department of Defense, Time Warner, Verizon and many others.

Since then, Lee Technologies has grown into a total-lifecycle solutions provider of business-critical data center infrastructure, serving some of the most demanding Fortune 1000 companies and government agencies in the world. With more than 25 years of industry experience, our depth and breadth of expertise in solving business-critical data center challenges is unparalleled.


Solutions Information
T 800.955.4533
Corporate Information
T 703.968.0300
National Locations
T 800.955.4533

Data Center Services by PTS

In today’s highly competitive, rapidly changing climate, where businesses can’t stop and downtime is measured in lost profits, PTS offers solutions that protect against some of the leading causes of critical-systems downtime, hardware damage, data loss, and decreased employee productivity. Highly respected in our industry, PTS sets the standard for continuous-availability solutions, from facilities to data centers to desktop systems.

Founded in 1998, PTS is a data center consulting firm and turnkey solutions provider offering a broad range of project experience. PTS specializes in designing data centers, computer rooms, and technical spaces that integrate best-of-breed critical infrastructure technologies, resulting in continuously available, scalable, redundant, fault-tolerant, manageable, and maintainable mission-critical environments.

From its corporate headquarters in Franklin Lakes, New Jersey, and its office in Orange County, California, PTS works to fulfill its mission of creating satisfied customers by emphasizing pre-design and planning services, providing the optimal solution to meet clients’ needs and resulting in an early and accurate alignment between scope, schedule, and budget.

PTS Data Center Solutions, Inc., 568 Commerce Street, Franklin Lakes, NJ 07417
Toll Free: 1.866.PTS.DCS1 Tel: 201.337.3833 Fax: 201.337.4722 Email: info at