Overhead Cabling – The New Word for Energy Savings in Data Centers

Relying on technology and information has become not just the norm but a way of living. Enter a query about almost anything in a search bar and you get a couple of thousand hits in under a second. Highly commendable, but what makes it possible are the data centers which work round the clock doing what they do best: optimising your internet experience.

A data center is a centralised storehouse of data and information related to a particular business. That sounds straightforward, but a great deal of effort, capital and manpower goes into planning and operating one. Data center planning and operation cannot afford glitches, because if they occur a company can end up losing millions in terms of data, clients and money.

To prevent such losses, planning is the key. The most important task at hand is usually the wiring of the data center, or the rewiring of an existing one. Some pointers should always be kept in mind; for instance, when upgrading the cabling you need a clear understanding of the existing infrastructure.

When wiring for the first time, make sure your plan accommodates both copper and fibre media, as each has its own unique purpose and complements the other. Also make the layout flexible so that it can connect with the other servers in the data center.

Now that we have that out of the way, let’s get started:

From an energy-saving point of view, using the raised floor to deliver cold air to data center equipment is a common practice. But with this conventional method, old unused wiring is left undisturbed when new cabling is laid out. Cables accumulate over time, block the air passages and, before you know it, the room gets so hot that the Kalahari would lose the competition. Not to mention the damaged equipment and losses running into millions.

Overhead cabling to the rescue:

Thorough analysis has concluded that overhead cabling lowers fan and pump losses; in a nutshell, overhead cabling reduced energy waste by 42% compared with the raised-floor model. The advantage is that the computer room air handler’s (CRAH) return temperature rises, which increases the cooling capacity of each CRAH unit. Another reason for placing the cables overhead is the significant decrease in expenses, which makes it a very popular alternative. Maintenance of overhead cables is also easy compared with underfloor cables: they are easily traceable, and one does not have to go through the agony of lifting tiles and rerouting cables when a fault is detected.
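
To put these figures in perspective, here is a minimal back-of-envelope sketch in Python. Only the 42% reduction comes from the analysis above; the 100 kW combined fan-and-pump draw and the $0.10/kWh tariff are invented for illustration.

HOURS_PER_YEAR = 8760
ENERGY_COST_USD_PER_KWH = 0.10    # assumed tariff, not from the study

fan_pump_kw_raised_floor = 100.0  # assumed combined fan + pump draw, kW
reduction = 0.42                  # the 42% reduction quoted above

fan_pump_kw_overhead = fan_pump_kw_raised_floor * (1 - reduction)
saved_kwh = (fan_pump_kw_raised_floor - fan_pump_kw_overhead) * HOURS_PER_YEAR

print(f"Overhead layout draw: {fan_pump_kw_overhead:.1f} kW")
print(f"Annual savings: {saved_kwh:,.0f} kWh (~${saved_kwh * ENERGY_COST_USD_PER_KWH:,.0f})")

With these assumptions the overhead layout saves roughly 368,000 kWh, or about $37,000, a year; your own draw and tariff will of course differ.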

The Downside:

Overhead cabling is prone to clutter which, unlike underfloor cabling, cannot be hidden under a false floor. This makes it very hard to remove dead (abandoned) cables, and if there is a fault in any one of the cables, you practically need Superman’s X-ray vision and the precision of a bomb defuser to get to the source without disrupting the system.

 

The solution? Make them modular:

Organising the cables in trays and mounting the trays at different levels eliminates the clutter problem right away. As the data center changes (out with the old, in with the new), frequent modifications take place, and modular cable systems are designed to accommodate such changes; they are even made compatible and interchangeable. The flexibility with which such a system can transform without affecting the existing installation is worth noting, and the best part is that it lowers power consumption by a further 24%.

Ultimate savings:

In addition to realizing energy savings, data centers with overhead cabling see a reduction in capital costs for CRAH units (eliminating the 11 additional CRAH units in the hypothetical analysis resulted in an estimated $90,000 savings) as well as the significant expense of a raised floor.

Data Center Talk updates its resources every day. Visit us to keep up with the latest technology and standards from the data center world.

Please leave your views and comments on DCT Forum.


Virtualization – The Long and Short of It

New, up-and-coming terms like virtualization, data centre and cloud computing are often out of reach for most of us, but after doing some research it’s clear to me that, to understand the concepts, you need to ask the right questions. The obvious one being: what is virtualization?

Basically, it’s a platform which enables a single user to access multiple devices, something like a single computer controlling multiple machines. In other words, we have a system or server where the virtualization platform is installed, and on that platform several operating systems are installed. The user can then access any of the operating systems at any given time, or even several simultaneously. To an outsider it looks like three different systems are at work while, in reality, it’s only one server.

Virtualization solves the problem of buying and managing a lot of servers at any given time. It saves a lot of trouble in case of a server failure and makes the entire system compact.

To understand the concept more clearly, consider its three components: the host operating system, the hypervisor, and the guest operating system.

Host OS: It is the original OS with which the virtual machines share all information. The virtualisation platform is installed here.

Hypervisor: It is a program which allows multiple operating systems to share a single host, giving each guest a share of the host’s memory. The hypervisor handles memory allocation and the resources of the virtual machines and makes sure that they don’t interfere with each other.

Guest OS: Simply put, any OS other than the host OS is called a guest OS. It is installed inside the virtual machine. For example, Windows, Linux and macOS can all run as guest OSs on the virtualisation platform of the host OS.

It is crucial for virtualisation administrators to monitor and manage platform resources like I/O traffic diligently. They should make sure that they are optimising the host’s resources and preventing performance degradation.

Now, there are different types of virtualization, and companies provide domain-specific solutions for them. There are essentially three types:

Hardware Virtualization:

This is the most common type, implemented in the IT industry as well as in data centres. The host server is virtualized, allowing it to run different operating systems simultaneously on the same hardware.

It has several benefits, namely fewer servers, easier and faster addition of capacity as needs grow, and lower power consumption.
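
As a concrete aside, on Linux you can check whether a host CPU supports hardware virtualization by looking for the vmx (Intel VT-x) or svm (AMD-V) CPU flags. A minimal sketch, assuming a Linux host with a readable /proc/cpuinfo:

def cpu_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    # Collect the CPU flag names advertised by the kernel.
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    # 'vmx' marks Intel VT-x, 'svm' marks AMD-V.
    return {flag for flag in ("vmx", "svm") if flag in flags}

if __name__ == "__main__":
    found = cpu_virtualization_flags()
    if found:
        print("Hardware virtualization supported:", ", ".join(sorted(found)))
    else:
        print("No vmx/svm flag found; check the BIOS or the CPU model.")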

Desktop Virtualization:

This means that end users’ (our) data is all stored in a VM (virtual machine) in a hosted environment, which can be in-house with IT or in a data center. The desktops can be managed in one place for all the users of the company. This process has few takers as it requires a lot of work and a lot more planning to execute.

Storage Virtualization:

Consider this: when a company is growing it needs more space, even for the servers it’s using, and before you know it demand has increased tenfold. The company also needs proper plans for disaster recovery, and that’s where storage virtualization comes into the picture: it makes multiple storage devices appear as one common medium.

Virtualisation has been adopted by honchos like Citrix and Microsoft, but other companies like HP, IBM and Oracle have entered the market too. The conclusion is that the more competition there is in the industry, the more affordable it becomes for the end user.


Easy Detection of Air Leaks in Data Centers

Air leaks in HVAC systems can cause major energy losses in data centers. Studies estimate that 25-30% more energy is spent just overcoming the ‘leaked cooling’. On the scale of a data center, 25-30% amounts to a massive sum. It is possible to detect and plug these leaks. Here’s how:

Usual Locations

Typically, air leaks in data centers occur at the joints between a wall and the adjoining row of tiles, and at floor cut-outs for cabling, piping, pillars and so on. Drop ceilings, sill plates, water flues, door frames, window frames, electrical boards and plumbing utilities are also major players in losing the ‘cool’. Each of these locations seems very small compared with a large HVAC unit, but together they can drain a whole lot of cooling.

Detecting Leaks

There are a number of ways to detect air leaks; it is not always possible to find them with a wet finger! Different methods suit different types of leaks. Here are a few simple, tried and tested methods.

Infrared Camera

An infrared camera can be employed to detect air leaks. A solid object will be relatively warmer than a spot subjected to continuous air flow. If the infrared camera is set to high sensitivity, the edge of a solid object next to a leak will appear much cooler than the rest; this is a possible location for an air leak. Cracks are easy to find: a hairline region will show up cooler. It’s that simple with an infrared camera!

Blower Test

In the case of closed rooms, a blower test can be employed to detect leaks. All major doors of the room are closed; data center rooms typically do not have windows that can be opened. The blower, or a really large fan, is placed at the entrance, oriented outward; this causes the air inside the room to flow outside. Tiny drafts of air will then be detectable at regions of air leakage.

Pressure Test

This one’s a little more involved. First, a baseline set of air-pressure readings with no leakage is recorded; this serves as the benchmark for all the tests. By correlating the baseline data with the actual data collected in the room, an approximation of the magnitude of air leakage in that room can be obtained.
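
A minimal sketch of the bookkeeping this involves, with hypothetical baseline and measured readings and an arbitrary 2% tolerance; a real test would use calibrated pressure data for the specific room.

# Hypothetical differential-pressure readings, in pascals.
baseline_pa = {"north wall": 12.0, "floor cutouts": 11.5, "door frame": 12.2}
measured_pa = {"north wall": 11.9, "floor cutouts": 10.1, "door frame": 12.1}

TOLERANCE = 0.02  # flag anything more than 2% below baseline (assumed)

for location, base in baseline_pa.items():
    actual = measured_pa[location]
    deviation = (base - actual) / base
    status = "possible leak" if deviation > TOLERANCE else "ok"
    print(f"{location:>14}: baseline {base:.1f} Pa, measured {actual:.1f} Pa "
          f"({deviation:+.1%}) -> {status}")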

Incense Test

Shut off all HVAC equipment abruptly. Light an incense stick and pass it around common areas of air leakage. An inward draft or a ‘suck out’ of the smoke indicates a leak. This test can be applied in combination with the blower test, which provides a continuous outflow or inflow.

Balloon Test

This test is based on a very simple concept and cheap equipment: air drafts and a balloon. First, the HVAC unit is shut off abruptly. A medium-sized helium-filled balloon is then moved along the places where leaks are suspected, with its tether kept loose to allow adequate motion. The balloon will move with the direction of the air draft. This test is useful when ceiling accessibility is low and the need is for detection and isolation rather than a fix.

The Air Cookie!

Air leaks add unnecessary load to an HVAC system. Extra load means extra power consumption and heavy electricity bills. Leak detection and a dedicated sweep for air leaks are essential for smooth HVAC operation, and can be very beneficial when incorporated into a monthly maintenance regime, especially with older infrastructure. Plug the leaks, save cooling, save energy and, finally, save the planet.

For more quality articles visit DataCenterTalk.


Equinix All Set to Target Washington DC Market

Equinix, an interconnection and colocation services provider, has planned an expansion of its campus in Ashburn, Virginia to target the neighbouring Washington, D.C. market. The new data center, to be named DC11, will be launched by early 2013. The build will be implemented in three phases, the first targeted at bringing online space for 1,200 cabinets. An investment of USD 88 million is expected to roll out in the other two phases, adding a capacity of about 1,800 cabinets.

Apart from this, another data center in the same region, DC10, is all set to move to phase two of Equinix’s phased implementation policy. The first phase will be operational this March, and a USD 21 million phase two will roll out by the fourth quarter.

“With the DC10 expansion and the addition of DC11, we are positioned to better serve our growing ecosystem of global customers, while further strengthening our leadership position in this important market,” said Charles Meyers, president of the Americas at Equinix.

Equinix, Inc. (Nasdaq: EQIX) operates International Business Exchange™ (IBX®) data centers in 38 markets across 13 countries in the Americas, EMEA, and Asia-Pacific. Equinix believes in protecting, connecting and powering the digital economy.

For more news on data centers visit DataCenterTalk


Czechinvest CEO to Open Data Centres Central & Eastern Europe Conference

Prague – 9 February 2012 – The leaders of datacentre businesses across central and eastern Europe will converge on Prague on 22-23 February for the second regional summit, which will be officially opened by Ing. Miroslav Křížek, Ph.D., General Director of CzechInvest (www.datacentrescee.com).

Research by BroadGroup, the event researcher and organizer, suggests that the Central and Eastern European region is increasingly well positioned to exploit the next wave of datacentre development as regional enterprises’ outsourcing requirements increase, fibre deployment is sustained, and investors fund infrastructure growth across the key metropolitan areas.

Reflecting these opportunities, the conference will include a special focus on the CEE regional market for datacentres, enterprises and their evolving outsourcing needs, infrastructure as a service, private cloud, new technologies and market growth impact across the sector.

Jan Daan Luycks, CEO, CE Colo; Arno Coster, Senior Vice President, Linxdatacenter; Andris Gailitis, CEO, DEAC; Alan Hawkins, Commercial Director, Dataplex; and Sylwester Biernacki, CEO, PLIX, representing five CEE countries, will join the leadership panel at the annual forum.

The programme also includes a market perspective of CEE, with current trends in demand in datacentre markets across the region and growth forecasts, datacentre finance and investment, colocation business models, sustainability and efficiency in datacentres, modular datacentres, fibre connectivity and cloud evolution.

“The Czech Republic has emerged as Europe’s top location for offshoring and outsourcing of IT services. Repeatedly recognized by various researchers, this fact is confirmed by the strong inflow of high-value-added projects of the world’s top IT companies,” commented Dr. Ing. Křížek. “Despite being one of the most mature IT markets in the region, the Czech Republic still offers plenty of growth potential.”

“The CEE markets are important for datacentre outsourcing,” commented Warwick Dunkley, vice president at BroadGroup. “We are seeing innovation in cooling methodologies as well as commercial datacentre propositions that make the region very attractive to enterprises. The event has a stellar speaker academy, content of great value to all attendees and outstanding networking opportunities with peers across the region.”

Sponsors include Schneider Electric, Altron, Hewlett Packard, Dataplex, Anixter, CE Colo, Corning, Automation, Future Facilities, PLIX, ProniX and CNet. The event is supported by CzechInvest and the EBRD. Bird & Bird, Colo-X, International Data Centre Group and dcp are industry partners. Media partners are CIO Business News, Computerworld, eco, Datacenterjournal, Bvents, Datacenter Knowledge, DatacenterPost, ECM Plus, BoogarLists, TeleTechWire, Build, InfoCom, Balkans.com, AllConferences.com, PMR, Conferencelocate.com, ITONews.eu and the Datachain. Datacentres.com News is the official publication for the event.

For more press releases visit DataCenterTalk.com

 


What is a Hybrid Cloud?

Essentially, a hybrid cloud is much like the word hybrid itself: a little bit of everything, the best of both worlds. By definition, a hybrid cloud is composed of a private cloud and a public cloud. Within a hybrid cloud, the user may use all the services offered by the public cloud as well as the unique services provided by the private cloud. It’s a more customer-centric approach, where a cloud host can generalise some services as public and provide others under a different pricing model.

Why use a Hybrid Cloud?

  • Clear Definition of Boundaries: Some infrastructure services such as storage can be kept public, while business intelligence and the processing of the actual data are done on a private cloud. There is no overlap of process flow, and it is a much more practical approach to the utilisation of hardware.
  • Infrastructure Compatibility: When large corporations with multiple departments running various platforms need to merge, a hybrid cloud implementation sounds promising. In this model, resources can easily be matched and moved around till a satisfactory result is obtained. Cross-platform functionality is no longer an issue with a hybrid cloud approach.
  • Role-Based Restriction: Not all information needs to be public. Knowledge and businesses which operate on a need-to-know basis can stay that way in the hybrid cloud. Assigning permissions for access to specific areas is much easier and hassle-free.

The Gamble

Hybrid clouds can be a boon for most cloud service providers. The entire process of assigning and merging a variety of resources under a single bill is a very promising proposition, but flexibility may be an issue. Not many cloud hosting services have implemented hybrid clouds yet, but they hold the key to an efficient cloud system. The question is no longer ‘are you going cloud’, it’s ‘when are you going hybrid cloud’?

For more quality articles, news and discussions on cloud computing, visit DataCenterTalk.com


Scope for Evaporative Cooling in Data Centers

Is evaporative cooling a solution? The world has reached a time of major energy shortage scares. Some may be rumours, but most are very real. Some critical applications cannot be accomplished without large amounts of energy: running a data center is one, and cooling it is another. Cooling is an integral part of the data center; saving energy here can earn carbon credits, trim a sizeable amount off your electricity bill and, in the long term, make for a more lucrative business.

Traditional cooling for data centers has always meant HVAC units. Various techniques have been tried over the years to improve the efficiency of these power-guzzling units: raised floors, load-based temperature control and air economisers are a few. They have been implemented but have not been very successful in delivering cooling efficiency. Evaporative cooling has never been a popular choice in precision cooling; an improved form of it may hold the key to cheap data center cooling.

What is Evaporative Cooling?

In short, evaporative cooling is water-based cooling. It uses the evaporation of water as the cooling mechanism, unlike HVAC units, which use vapour-compression or absorption refrigeration. Conceptually, the heat is absorbed by water, which turns to vapour; water has a large enthalpy of evaporation. It is a worthy substitute for refrigeration-based cooling, but it requires a continuous water source to function effectively.
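
To get a rough feel for the numbers: cooling power is simply the evaporation rate times the latent heat of vaporization, which is about 2.45 MJ/kg for water near room temperature. The 0.05 kg/s evaporation rate below is an assumed example figure.

H_FG = 2.45e6            # latent heat of vaporization of water, J/kg (approx.)
evaporation_rate = 0.05  # assumed water evaporated, kg/s (about 180 L/h)

cooling_watts = evaporation_rate * H_FG
print(f"Approximate cooling capacity: {cooling_watts / 1000:.0f} kW")
# 0.05 kg/s x 2.45 MJ/kg gives roughly 122 kW of heat absorbed.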

Geographic Importance

Evaporative cooling works best in climates with low humidity and high average temperatures, like temperate deserts. Many data centers, due to real estate constraints, are located in isolated areas of exactly this kind; if a reliable source of water is available there, evaporative cooling is a very attractive option. In the United States, places like Denver, Salt Lake City, Albuquerque, El Paso, Tucson and Fresno have naturally suitable climates for implementing evaporative cooling systems.

Advantages
  • Very low installation costs, estimated at half those of HVAC units.
  • Low cost of operation, almost a fourth of that of HVAC units.
  • Improves air quality by adding humidity in drier climates.
  • Overcomes air circulation issues thanks to its high volumetric flow rate.
  • A water pump in an evaporative cooling system consumes less power than air compressors and blowers.
  • Low-cost maintenance; expert technicians are not required.
Disadvantages
  • The higher the dew point, the less the cooling; there is no controlled dehumidification.
  • High air humidity accelerates corrosion and may also cause condensation, a major drawback where electronic equipment is concerned.
  • The air quality is not suitable for people with respiratory disorders.
  • Requires a constant supply of water, which is not always feasible in dry areas.
  • High mineral content in the water may cause deposits within the evaporative cooler, requiring more frequent maintenance.
  • Atmospheric odours cannot be controlled.
  • Corrosion within the evaporative cooler itself may need to be restrained with the help of a sacrificial anode.
Facing Facts

Evaporative cooling is an attractive option for cutting cooling costs, but not at the price of sacrificing expensive electronic equipment. If the quality and reliability of the electronic equipment are compromised, the very existence of the data center is futile. Traditional evaporative coolers are definitely not the answer, but a hybrid solution with humidity and air-quality control sounds promising. The quest for cost-effective cooling does not end here!

For more industry specific information, visit http://www.datacentertalk.com.


Data Centre Design and Efficiency Specialist – Vincent Byrne

Vincent Byrne

Vincent qualified with an Honours Degree in Electrical/Electronic Engineering from Trinity College Dublin and the Dublin Institute of Technology. He subsequently received post-graduate qualifications in business development from the Irish Management Institute and in project management from University College Dublin, where he guest lectures.

Vincent is a data centre design and efficiency consultant and managing director of Vincent Byrne Consulting which operates throughout Europe, the Middle East and Asia. His company offers full design and management, auditing and certification and CFD modelling of highly efficient data centres.

He is the appointed data centre consultant to the Sustainable Energy Authority of Ireland and to over 30 government bodies including the Revenue Commissioners, the Department of Finance and the Central Bank. His enterprise customers include some of the largest software companies, search engines, pharmaceutical companies, airlines, and retail and telecoms companies. You can find out more about him at www.vincent-byrne.com.


Underground Data Centers

Yes, you heard right: underground data centers, straight out of a James Bond movie, where the scheming villain stores all his data and makes evil plans to hold the planet hostage. Providing an extremely secure solution for data protection, even against nuclear war, they are simply wow! Underground data centers have been a reality since the dawn of computing. Mainly used for military applications, they became famous during World War II, and especially during the Cold War, when data security was of prime importance. Today a lot of underground data centers exist, a few known, many unknown. Going underground challenges most of the supply lines a data center depends on, say food, air and logistics. Despite these drawbacks and a few more, many underground data centers are still active today; let’s find out why.

No Fire!

One of the main drawbacks for human living conditions turns out to be an amazing advantage for data center operation. You heard right: no fire. At depths of 3,000 feet or more, underground caverns can be ventilated with hypoxic or inert air. Simple: no oxygen, no fire! Additionally, there is no secondary extinguishing-related damage such as corrosion, harm to the environment, air poisoning and so on. Hard disks are the most affected during a fire, whether by the fire itself or by the extinguishing equipment; here there is no such damage because the fire-extinguishing equipment is never triggered at all. The result is continuous operation of all equipment.

Nukes are Puppies, it is Safer than a Vault

Under layers of solid rock or soil there are very few access points. Most underground data centers are set up inside a solid concrete cavity, and most utility furniture is bolted securely to the floor, so the entire data center is structurally a single unit. Some military-grade data centers are even enclosed in a steel cavity, which can be pretty thick at times; some have two-foot-thick steel gates and a single entrance and exit. If there is an underground cave of ample space, it can be used to house a data center: only one side has to be walled, as the natural rock formation acts as the rest of the cavity, which saves on construction costs. Impenetrable even to external attempts like a sea or land drill, underground data centers are even nuke-proof! These arrangements are miniature fortifications, literally impregnable.

EMP Protection

An EMP is an easy way to knock out electronic equipment, and electronics struck by an EMP can rarely be restored. Underground data centers are well equipped to shield all equipment against EMPs: heavy steel plating, statically charged regions and dynamic pulse-repelling devices may optionally be installed. Even with a full-scale war on the surface, the data center can operate seamlessly at all times. EMP protection of an underground data center is especially useful for military users.

Free Cooling

Some underground data centers are built under layers of ice, so it inevitably gets really cold inside the data center cavity. Air economisers are used to circulate the air within and, voila, instant cooling. In some really cold countries there is no need for additional cooling at all. There is a huge saving in power costs and an increased independence from the world ‘above’.

Secure Incoming Cabling

If located close to a coast, undersea optic-fibre cables can be connected directly to the data center, with no exposed connection points whatsoever. Military-grade secure lines can be established with ease. Underground data centers can also take advantage of underground power supply; it’s secure, fast and cannot easily be tampered with.

The Creamy Layer

The safest place for a secure backup is an underground data center; there is probably nothing safer that our planet and today’s technology can offer! Yet not everyone is ready to invest in one: it may have loads of advantages and cost-cutting options, but it is still a sizeable investment. And once locked down, some underground data centers have no access points whatsoever, at which point they can pose a serious threat to whole nations when hosting, say, a ‘notorious’ website.

The Catch

Many major search engines and social networking giants have chosen secure underground data centers as a backup. Running multiple operations across North America and Europe, they are slowly migrating out of the Americas; soon their regular websites may be run from underground.

Underground data centers promise connectivity beyond parallel, so why not live off what the earth has to offer? Why not go underground?

To read more quality articles and data center insights visit DataCenterTalk.


Advantages and Disadvantages of a Parallel Modular UPS System

Note: The conclusions drawn in this article are the personal views of the author. The website or author is not responsible for any unforeseen damages caused by implementing these techniques.

Modular UPS systems are a smart conception. They are small, light, compact, hot-swappable, low-cost modules which can be added when required and removed when there is no further use for them. Parallel-connected UPS systems are distinctive in their own way. There are two sides to every coin; with many advantages come disadvantages too. Read on.


Advantages:

A parallel modular UPS system is beyond doubt very advantageous to a data center. Here are a few of the benefits:


Higher Availability:

 

When analysing a UPS system, availability is obviously a major criterion in any purchase decision. The availability of a UPS is defined as follows:

AV = MTBF / (MTBF + MTTR) = 1 / (1 + MTTR / MTBF)

where AV is availability, MTBF is mean time between failures, and MTTR is mean time to repair.

The system’s mean time between failures depends on the number of parallel units and the level of redundancy. Hot swapping of failed modules reduces the mean time to repair.
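
A quick sketch of the formula in action. The MTBF and MTTR figures are assumptions for illustration, not vendor data; they show how the hot-swap feature’s much lower MTTR lifts availability.

def availability(mtbf_hours, mttr_hours):
    # AV = MTBF / (MTBF + MTTR), as defined above.
    return mtbf_hours / (mtbf_hours + mttr_hours)

MTBF = 150_000  # assumed mean time between failures, hours

for label, mttr in [("field repair", 24.0), ("hot-swap module", 0.5)]:
    print(f"{label:>16}: AV = {availability(MTBF, mttr):.6f}")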


Least Floor Space:

 

Most standard stand-alone systems come in a horizontally stacked form factor, so floor space grows drastically as more systems are added. Modular UPS systems are designed to be stacked vertically, reducing floor space by almost 25%!


Reduced Maintenance Failure:

 

Reviews suggest that 30% of UPS failures are due to errors caused by maintenance staff during repair. Hot-swappable modules can be replaced immediately and repaired later: the failed modules can be sent to a service station, greatly minimizing maintenance-induced failures.


Extreme Efficiency:

 

The efficiency of a UPS peaks when the load is close to its maximum rating. Modular UPSs allow power modules to be added as the load grows, which keeps the load-to-capacity ratio high and the efficiency higher than normal.
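
A tiny sizing sketch of that idea, under assumed figures: pick just enough equal-rated modules to carry the load so that each one runs near its peak-efficiency point. The 20 kW module rating and 68 kW load are invented for illustration.

import math

MODULE_KW = 20.0  # assumed rating per UPS module
load_kw = 68.0    # assumed IT load

modules = math.ceil(load_kw / MODULE_KW)       # smallest count that carries the load
utilization = load_kw / (modules * MODULE_KW)  # how hard each module works
print(f"{modules} modules, each loaded to {utilization:.0%} of rating")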


Isolation:

 

The greatest advantage of parallel-operated UPSs is isolation. The load and supply always remain balanced, even if a module drops out due to unforeseen events.


Disadvantages:

Parallel-operated modular UPSs have a few disadvantages that are usually neglected. It is best to know these in case some fine-tuning has to be made to the backup power system in general.


Not too many in parallel:

 

Generally, a low mean time to repair is what makes a parallel modular UPS system advantageous, more so than a low mean time between failures. In a system of lower rating, multiple modules may be placed on a single board; more modules in parallel means more frequent failures, i.e. a smaller mean time between failures for the system as a whole, and this acts as a major drawback. The same principle applies to the batteries connected in parallel in these systems.


Relatively Higher Maintenance:

 

Increasing the number of UPS modules can further reduce system reliability, because the number of failures occurring at any given time increases, which drags down the overall availability.


Failure of Individual Components:

 

In a parallel modular UPS, the entire rack may serve as a single UPS, and some elements, such as a common control unit or a shared battery bank, are common to all parallel modules. A single failure there may cause a complete output breakdown. Hence it is advisable for the data center to go for N+1 or N+2 modules, where N is the number of UPS modules needed for normal backup. This, however, leads to added costs.
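
To see why N+1 or N+2 helps, here is a sketch that treats module failures as independent and computes the probability that at least N of the installed modules are up. The per-module availability of 0.999 and N = 4 are assumptions for illustration.

from math import comb

def system_availability(n_required, n_installed, p):
    # P(at least n_required of n_installed modules are up), binomial model.
    return sum(comb(n_installed, k) * p**k * (1 - p)**(n_installed - k)
               for k in range(n_required, n_installed + 1))

p = 0.999  # assumed availability of a single module
for extra in (0, 1, 2):  # N, N+1, N+2
    print(f"N+{extra}: {system_availability(4, 4 + extra, p):.9f}")

With these numbers, going from N to N+1 takes the system from roughly 0.996 to better than 0.99999, which is the whole argument for the added cost.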

The advantages of a parallel modular UPS system clearly outweigh the disadvantages. A good parallel-connected modular UPS may be the key to efficient data center power backup.
