Data Center Environment and Energy Thresholds

If you're running a data center, there are certain environmental and energy thresholds you need to stay in line with. If you don't have the proper operating environment, you not only drive your operating costs through the roof, you could very well cause irreparable damage to your servers. Further, if you're using too much energy, that's a sure sign that your data center's suffering from some glaring inefficiency or operations issue, which, ultimately, will hurt your bottom line.

If you’re a smart operator, these aren’t really things you can ignore.

Environmental Thresholds

So what environmental thresholds should you shoot for? What's the 'sweet spot' for power consumption? How are the two tied together?

The answer’s not actually as simple as you might think.

While it’s true that every data center should be both cool and reasonably dry, there’s a wide range of additional factors that come into play when you’re trying to determine the ideal operating conditions for your center:

  • How old is your equipment? Older equipment is more sensitive, and tends to run hotter.
  • What sort of server density is your data center running? As a general rule, more density means more heat.
  • How many hours per year do you run your servers? Believe it or not, not every data center in the world is "always on."
  • How intensive are the tasks you assign to your servers? Computers generate more heat when handling more intense tasks, after all.

Given all the above factors, a lot of operators err on the side of caution and run their centers at around 60 degrees Fahrenheit. Believe it or not, they almost never need to run that cold (unless you're still using legacy hardware). Many operators are warming things up, raising temperatures to somewhere between 65 and 75 degrees. That's the sweet spot, and if your data center's running in that temperature range, you're very likely in the clear. If it climbs above 85 degrees, you may have a problem.

Don't err too far on the side of caution here, either: running well below the threshold can end up costing you a pretty penny in climate control, and there's a good chance that's money you don't really need to waste.

As far as humidity's concerned, too dry is almost as bad as too wet. Ideally, you're going to want to shoot for somewhere between 45 and 55%. Too damp, and it's pretty obvious what'll happen: computers and water don't really know how to play nice with each other. Too dry, and there's a good chance your gear is going to fry itself as a result of electrostatic discharge.

Energy Thresholds

Power consumption’s a little trickier, but at the end of the day, it’s tied directly to efficiency. Elements which affect power consumption include:

  • Climate Control- choose an inefficient or less-than-ideal means of controlling the environment in your data center, and you’re pretty much guaranteed to have your energy consumption shoot through the roof- and well above the threshold.
  • The efficiency of your hardware- less efficient hardware will, naturally, use more power to accomplish less.
  • The uptime of your facility- this one goes without saying. The longer your uptime, the more power you’ll be using.

As far as power thresholds are concerned, we're going to swing back to my previous article and look at the metrics behind power consumption. This is, quite simply, the only way we can really establish an energy threshold. One metric in particular is of interest to us: power usage effectiveness (PUE). The closer your PUE is to 1, the more efficient your center is, and the closer you are to the ideal threshold for power consumption.
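
To make that concrete, here is a minimal sketch of the calculation; the wattage figures are invented purely for illustration.

    # Minimal sketch: power usage effectiveness (PUE).
    # The kW figures below are invented for illustration only.

    def pue(total_facility_kw, it_equipment_kw):
        # PUE = total facility power / power delivered to IT equipment (ideal value: 1.0)
        return total_facility_kw / it_equipment_kw

    # Example: 1,500 kW drawn by the whole facility, of which 1,000 kW reaches the IT gear.
    print(pue(1500, 1000))  # 1.5 -- every watt of compute carries half a watt of overhead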

You should also keep an eye on energy provisioning. Don't waste power: you want to be absolutely certain that you aren't provisioning too much power, or too little. Either way, it's likely going to cost you money. Make sure you know how much power your center actually uses, and don't provision for much more than that.

In Closing

Environmental and energy thresholds are both incredibly important factors in the smooth operation of any data center. Run above either threshold and you not only waste money, you risk causing potentially irreparable damage to your hardware, leading to downtime that could end up costing you millions. Thankfully, if you plan ahead, both are pretty easy to keep track of: implement some means of monitoring them, and work out some threshold rules.
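
If you're wondering what such threshold rules might look like in practice, here is a minimal sketch; the limits simply mirror the ranges discussed above, and you should tune them to your own equipment's specifications.

    # Minimal sketch of environmental threshold rules, using the ranges from this article.
    # Adjust the limits to match your own hardware's specifications.

    def check_environment(temp_f, humidity_pct):
        alerts = []
        if temp_f > 85:
            alerts.append("CRITICAL: temperature above 85F")
        elif not 65 <= temp_f <= 75:
            alerts.append("WARNING: temperature outside the 65-75F sweet spot")
        if not 45 <= humidity_pct <= 55:
            alerts.append("WARNING: humidity outside the 45-55% range")
        return alerts

    print(check_environment(temp_f=88, humidity_pct=40))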

Questions? Comments? Concerns? Swing by our forum to have a chat.


Underlying Hardware Insignificance

In the coming years, the major server manufacturers will have a great deal of trouble marking out their own strengths against the weaknesses of the competition.

In just a couple of years, Data Center customers won't be paying attention to the underlying hardware of Cloud Computing services such as IaaS or PaaS. This irrelevance, or insignificance, factor will produce winners and losers, the latter being easier to identify:

1) When bought by a competitor.

2) When announcing their retreat from server manufacturing.

Whether the loser is going to be Dell, HP or IBM, no one can say today. But the more customers stop asking for a Dell/HP/IBM server for their Private Cloud, and the less Data Center Providers cling to a specific technology (as they tend to), the more cost becomes the only real difference. Cost, and the resulting MRC/NRC (monthly recurring cost and non-recurring cost), will define which servers to buy.

Some server manufacturers are delivering OEM virtualization licenses together with equipment as a way to leverage the sale. But with the arrival of licensing agreements for Service Providers, there will come a time when buying OEM licenses won't be so convenient.

In the future it won't matter whether a server manufacturer does R&D, or whether it was the first in the industry to provide energy-efficient servers, because all of them will be essentially the same. There won't be any real difference in the hardware, and the marketing strategy for Data Center Providers in the future will avoid talking about the underlying hardware of shared environments like IaaS.

In the same way that customers no longer require EMC or HDS storage, just 15K RPM disks, it's only a matter of time before customers stop asking for a specific server vendor.

From the Data Center Provider's point of view, it isn't really strategic to marry a hardware vendor for multitenant platforms, since commercial conditions may change out of the blue and suddenly leave the Data Center Provider with an unprofitable business.

The underlying hardware irrelevance trend is the reason why server vendors are developing strategic alliances with virtualization hypervisor vendors. It is also the reason why they are developing Data Center deployment software oriented to Cloud Services. If server vendors fail to develop Data Center management software that really makes it worth buying their hardware, all of them (except maybe two) are doomed.

Another alternative for server manufacturers is to sell not only hardware but also services, like Hosting or Housing. IBM is the leader in this area, since it provides almost the whole chain of required services: hardware, software, professional services and so on. But IBM lacks one very important piece: telecommunications. It isn't really clear right now whether this will strengthen or weaken IBM in the future.

What I am sure about is that combined Data Center and Telecommunications providers have a good chance to make a difference. They will be able to negotiate the best prices with hardware vendors for multitenant platforms, and by means of License Agreements for Service Providers they will be able to provide everything necessary for a high-end Data Center solution: network, hardware, software, professional services, etc.

In conclusion, market trends for Cloud Computing point to a significant change in Data Center consumer behavior. Customers will tend to jump into Public Clouds instead of deploying Private Clouds (which are more expensive, by the way). With more demand for multitenant platforms, customers won't be able to decide on hardware specifics, and as a consequence they hand Data Center Providers a very important tool for leveraging their business.

For more quality articles and insights visit DataCenterTalk.

 


TIA-942 Provides Data Center Infrastructure Standards

Following established cabling standards has always ensured the safe and efficient operation of any device or industry. Not only do standards assist us in building cost-effective systems, they also bring a certain level of uniformity to all industries. They ensure proper design, installation, and performance of the network, and they have also enabled industries to advance faster and further.

Data centers, until recently, did not have any established standards. Network administrators had to choose technologies and figure out how to properly implement them in an often-undersized space. In April 2005, the Telecommunications Industry Association (TIA) released TIA-942, Telecommunications Infrastructure Standard for Data Centers, the first standard to successfully address data center infrastructure.

The TIA-942 standard covers the following specifications:

  • Site space and layout
  • Cabling infrastructure
  • Tiered reliability
  • Power
  • Cooling

Site Space and Layout

While choosing space for a data center, one must keep possible future growth and changing environmental conditions in mind. The interior of the data center should also be designed with plenty of "white space" that can accommodate future racks and cooling devices. According to TIA-942 standards, the data center should include the following functional areas:

  • One or More Entrance Rooms:

The Entrance Room may be located either inside or outside the data processing room. The standard recommends locating the entrance room outside of the computer room for better security. If located within the computer room, the Entrance Room should be consolidated with the MDA.

  • Main Distribution Area

The MDA is a centrally located area that houses the main cross-connect as well as core routers and switches for LAN and SAN infrastructures along with a horizontal cross-connect for a nearby EDA. The standard requires at least one MDA and specifies installing separate racks for fiber, UTP, and coaxial cable in this location.

  • One or More Horizontal Distribution Areas

The HDA is a distribution point for horizontal cabling and houses cross-connects and active equipment for distributing cable to the equipment distribution area. TIA standards specify installing separate racks for fiber, UTP, and coaxial cable in this location. It also recommends locating switches and patch panels to minimize patch cord lengths and facilitate cable management.

  • Equipment Distribution Areas

Horizontal cables are typically terminated with patch panels in the EDA. The standard specifies installing racks and cabinets in an alternating pattern to create “hot” and “cold” aisles to dissipate heat from electronics.

  • Zone Distribution Areas

The ZDA is an optional interconnection point in the horizontal cabling between the HDA and EDA. Only one ZDA is allowed within a horizontal cabling run with a maximum of 288 connections. The ZDA cannot contain any cross-connects or active equipment.

  • Backbone and Horizontal Cabling

Backbone cabling provides connections between MDA, HDAs, and Entrance Rooms while horizontal cabling provides connections between HDAs, ZDA, and EDA. Each functional area must be arranged to prevent exceeding maximum cable lengths for both backbone and horizontal cabling.

CABLING INFRASTRUCTURE

TIA-942 standards recommend:

  • The use of laser-optimized 50μm multimode fiber for backbone cabling.
  • Installing the highest capacity media available for horizontal cabling to reduce the need for re-cabling in the future.
  • Maximum backbone and horizontal cabling distances based on the cabling media and applications to be supported in the data center.
  • A maximum of 300 m of backbone fiber optic cabling and 100 m of horizontal copper cabling (a quick check of planned runs against these limits is sketched at the end of this section).

The standard also provides several requirements and recommendations for cable management:

  • The data center must be designed with separate racks and pathways for each media type, and power and communications cables must be placed in separate pathways.
  • Adequate space must be provided within and between racks and cabinets and in pathways for better cable management, bend radius protection, and access.
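
As a quick illustration of how planned cable runs might be checked against those distance limits, here is a rough sketch; the 300 m and 100 m figures are the ones quoted above, and the run names are invented. Always consult the standard for the limits that apply to your specific media and application.

    # Rough sketch: flag planned cable runs that exceed the TIA-942 maximums quoted above.
    # Assumed limits: 300 m for backbone fiber, 100 m for horizontal copper cabling.

    MAX_LENGTH_M = {"backbone_fiber": 300, "horizontal_copper": 100}

    planned_runs = [
        ("MDA to HDA-1", "backbone_fiber", 220),
        ("HDA-1 to EDA-7", "horizontal_copper", 112),  # too long: re-route or add a ZDA closer in
    ]

    for name, media, length_m in planned_runs:
        if length_m > MAX_LENGTH_M[media]:
            print(f"{name}: {length_m} m exceeds the {MAX_LENGTH_M[media]} m limit for {media}")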

 

TIERED RELIABILITY

To provide a means for determining specific data center needs, the TIA-942 standard includes an informative annex on data center availability tiers, which describes detailed architectural, security, electrical, mechanical, and telecommunications recommendations for each tier.

Tier I – Basic: 99.671% Availability

• Single path for power and cooling distribution, no redundant components (N)

• May or may not have a raised floor, UPS, or generator

• Annual downtime of 28.8 hours

• Must be shut down completely to perform preventive maintenance

Tier II – Redundant Components: 99.741% Availability

• Single path for power and cooling distribution, includes redundant components (N+1)

• Includes raised floor, UPS, and generator

• Annual downtime of 22.0 hours

• Maintenance of power path and other parts of the infrastructure require a processing shutdown

Tier III – Concurrently Maintainable: 99.982% Availability

• Multiple power and cooling distribution paths but with only one path active, includes redundant components (N+1)

• Annual downtime of 1.6 hours

• Includes raised floor and sufficient capacity and distribution to carry load on one path while performing maintenance on the other.

Tier IV – Fault Tolerant: 99.995% Availability

• Planned activity does not disrupt critical load and data center can sustain at least one worst-case unplanned event with no critical load impact

• Multiple active power and cooling distribution paths, includes redundant components (2 (N+1), i.e. 2 UPS each with N+1 redundancy)

• Annual downtime of 0.4 hours
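
The downtime figures for each tier follow more or less directly from the availability percentages; here is a quick back-of-the-envelope check, assuming an 8,760-hour year.

    # Back-of-the-envelope check: annual downtime implied by each tier's availability.
    HOURS_PER_YEAR = 8760

    tiers = [("Tier I", 99.671), ("Tier II", 99.741), ("Tier III", 99.982), ("Tier IV", 99.995)]
    for name, availability_pct in tiers:
        downtime_hours = (1 - availability_pct / 100) * HOURS_PER_YEAR
        print(f"{name}: {downtime_hours:.1f} hours of downtime per year")
    # Prints roughly 28.8, 22.7, 1.6 and 0.4 hours -- close to the figures quoted above.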

 

POWER

Determining power requirements requires careful planning and is based on the desired reliability tier. It may include two or more power feeds from the utility, UPS, multiple circuits to systems and equipment, and on-site generators.

Estimating power needs involves determining the power required for all existing devices and for devices anticipated in the future. Power requirements must also be estimated for all support equipment such as UPS, generators, conditioning electronics, HVAC, lighting, etc.

COOLING

The standard incorporates specifications for encouraging airflow and reducing the amount of heat generated by equipment. It recommends the use of a raised-floor system for more flexible cooling, and it encourages a hot-aisle/cold-aisle arrangement.

The standard also suggests:

  • Increase airflow by blocking unnecessary air escapes and/or increasing the height of the raised floor
  • Spread equipment out over unused portions of the raised floor
  • Use open racks instead of cabinets, or use cabinets with mesh fronts and backs
  • Use perforated tiles with larger openings

 

Data Center Talk updates its resources every day. Visit us to keep up with the latest technology and standards from the data center world.

Please leave your views and comments on DCT Forum.


Bringing Fanatical Support to your Premises

In the past, having a private cloud typically meant you had offsite hardware managed by a host, or a full-time IT team on hand to handle on-premise hardware. Thanks to Rackspace, however, companies are now able to have the security of an on-premise private cloud while receiving the industry-leading support and guidance of Rackspace Cloud Builders. By providing customers with the option of utilizing Rackspace data centers, partner data centers, or, most commonly, a customer-chosen data center, Rackspace is now moving from being a traditional web host to an on-call IT support firm.

While the change from a controlled environment to “real world” setups might sound daunting, an interview with Scott Sanchez, Director of Business Development for Rackspace Cloud Builders, greatly helped to clarify the many questions enterprises and information technology professionals have had about the major initiative from the vendor.

Bringing the Cloud to You

  • Although Open Stack can technically run on a wide array of hardware, to qualify for Rackspace support, your servers  must conform to the specifications at ReferenceArchitecture.org
    • The requirements at Reference Architecture are intended to ensure a reasonably standard environment for Rackspace clients regardless of server location
    • The main objective of Rackspace's Open Stack is to provide the same level of support to Rackspace clients regardless of where their servers are located.
      • Rackspace Cloud Builders allows Rackspace to help assemble the necessary hardware on your premises, while also handling management tasks remotely
      • When asked about the transition to supporting both controlled and outside environments rather than just their own data centers, Sanchez said it has not been an issue due to the  publication of standardized required specifications
      • According to Sanchez, although Open Stack can be used by smaller companies looking to test their own private environments, the ideal demographic is larger companies with significant infrastructure investment

Proven Track Record

  • Despite Open Stack only being sixteen months old, it has been adopted by computing giants such as Sony and their PS3 Network, PayPal's X Commerce Platform, and the European Organization for Nuclear Research (CERN)
  • Additionally, Open Stack boasts a development network of over 130 participating companies and over 150 developers
  • Interestingly, Sanchez mentioned that many of the major corporate users of Open Stack devote some of their development efforts towards contributing back to the Open Stack community as new features are added and improvements are made
  • This model does not just benefit the community at large, but enterprises also benefit since their contributions help shape the project. Additionally it prevents them from being at the mercy of a single company for development direction.

Development Cycle

  • Open Stack operates on a six month development cycle which is based on clearly marked milestones, helping to simplify the logistics of knowing how an update will affect the existing systems
  • Although having a six month deployment cycle has been of concern to many, Sanchez mentioned that with the Diablo release (the latest version) of the software, the foundation has become much more solid than in the earlier phases
    • It's analogous to building a house: it takes a while for the foundation to solidify before you can begin branching out and adding additional features.
    • In addition to starting with a full deployment, Sanchez mentioned that some companies who are concerned about the early stages of Open Stack simply start with the latest version, but don’t go to production until later versions come out
    • Unlike many Linux distros, which have bleeding edge, stable, and legacy version support, Open Stack only maintains its latest editions, allowing the project to focus on the present rather than having to support multiple variants of the platform
      • When asked if Rackspace has any plans to adopt a deployment cycle similar to Linux distros, Sanchez said there are no plans to change the model, as it is already sufficient.

For more quality articles visit DataCenterTalk


Advantages and Disadvantages of a Parallel Modular UPS System

Note: The conclusions drawn in this article are the personal views of the author. The website and author are not responsible for any unforeseen damages caused by implementing these techniques.

Modular UPS systems are a smart concept. They are small, light, compact, hot-swappable, low-cost modules which can be added when required and removed when there is no further use for them. Parallel-connected UPS systems are distinctive in their own way. But there are two sides to every coin; with the many advantages come disadvantages too. Read on.


Advantages:

A parallel modular UPS system is, beyond doubt, very advantageous to a data center. Here are a few reasons why:


Higher Availability:

 

When analysing a UPS system, it is obvious that availability is a major criterion when considering a purchase. The availability of a UPS is defined as follows:

AV = MTBF / (MTBF + MTTR) = 1 / (1 + MTTR/MTBF)

where AV is availability, MTBF is mean time between failures, and MTTR is mean time to repair.

The value of mean time between failures depends on the number of parallel units and the level of redundancy. Swapping out failed modules, thanks to the hot-swap feature, reduces the mean time to repair.
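
To make the formula concrete, here is a small sketch; the MTBF and MTTR figures are invented purely for illustration, but they show how a shorter repair time pushes availability up.

    # Small sketch: availability from MTBF and MTTR (figures below are illustrative only).

    def availability(mtbf_hours, mttr_hours):
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # A hot-swappable module replaced in half an hour versus one that takes 24 hours to repair:
    print(availability(mtbf_hours=100_000, mttr_hours=0.5))  # ~0.999995
    print(availability(mtbf_hours=100_000, mttr_hours=24))   # ~0.99976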


Less Floor Space:

 

Most standard stand-alone systems come in a horizontal-stacking form factor, which drastically increases the floor space consumed as more systems are added. Modular UPS systems are designed to be stacked vertically, reducing space by almost 25%!


Reduced Maintenance Failure:

 

Popular reviews show that 30% of UPS failures are due to errors made by maintenance staff during repair. Hot-swappable modules can simply be replaced on site and repaired at a later time: the failed modules are sent to a service station for repair, greatly minimizing maintenance-induced failures.


Extreme Efficiency:

 

The efficiency of a UPS reaches its peak when the load is close to its maximum rating. Modular UPSs allow power modules to be added as the load grows, which keeps the load-to-capacity ratio high and efficiency higher than normal.


Isolation:

 

The greatest advantage of parallel-operated UPSs is isolation. The load and supply always remain balanced even if there is a break due to unforeseen events.


Disadvantages:

Parallel-operated modular UPSs have a few disadvantages that are usually neglected. It is best to know these in case some fine-tuning has to be made to the backup power system in general.


Not Too Many in Parallel:

 

Generally, in a parallel modular UPS system, a low mean time to repair is more advantageous than a long mean time between failures. In a system of lower rating, multiple modules may be placed on a single board; having that many modules in parallel means a shorter mean time between failures, which acts as a major drawback. The same principle applies to the batteries connected in parallel in these systems.


Relatively Higher Maintenance:

 

Increasing the number of UPS modules further reduces system reliability, because more modules mean more failures in a given period, and overall availability suffers as a result.


Failure of Individual Components:

 

In a parallel modular UPS, the entire rack may serve as a single UPS. A failure in a shared component, such as a common control unit or a battery bank, amounts to a common failure across all parallel modules and may cause a complete output breakdown. It is therefore advisable for the data center to go for N+1 or N+2 modules, where N is the number of UPS modules needed to carry the data center's normal load. This leads to added cost.

The advantages of a parallel modular UPS system clearly outweigh the disadvantages. A good parallel-connected modular UPS may be the key to efficient data center power backup.


SAS-70 in the Data Center

Standardization, and associated certification to prove compliance to standards, can be both an indispensable assurance of quality for consumers and an unwelcome expense for vendors.

During the Internet’s “wild west” boom phase in the early 1990s, the pace of technological innovation was so fast that there was little time for rigid standardization or certification. Most of the technologies we rely upon today, such as TCP/IP networking, were developed without formal methodologies; internet standards generally consisted of “Request For Comments” (RFC) and “Best Current Practices” (BCP) papers written by the parties with an interest in the emerging technologies.

As the Internet matured and real money began to flow through the network, technologies that were once mere research projects became part of real-world, critical infrastructures essential to business and government. The globalization of business fueled by the Internet meant that it was no longer sufficient to allow organizations to manage their own data processing without oversight.

One of the most important standards documents in today's information economy is SAS-70, or "the Statement on Auditing Standards, #70". The standard, originally titled "Reports on the Processing of Transactions by Service Organizations", provides guidance to auditors assessing security controls. The standard was released by the American Institute of Certified Public Accountants (AICPA) in 1992, well before the internet was widely used as an electronic commerce platform.

Information security is now the foremost requirement of most organizations when selecting a data center. In the wake of the Enron scandal in 2001, in which deceptive accounting techniques were used to defraud investors of some $11 billion, public faith in existing accounting standards deteriorated, and there were widespread demands for more comprehensive safeguards to prevent future occurrences. Section 404 of the Sarbanes-Oxley (SOX) Act required all publicly traded companies to utilize SAS-70 "type II certified" data centers.

A data center qualifies as a "service organization" if it provides services which impact its customers' financial record-keeping in any way; such services include managed security, data storage and general IT support.

Unlike other important IT security standards, SAS-70 does not describe the actual controls implemented to safeguard the integrity of transaction information; rather it outlines the processes used when conducting audits of these controls. A SAS-70 report contains the opinions of an auditor on how verifiable an organization’s IT security policies are. This means there is technically no way to be “SAS-70 certified”, since there is no control framework mandated. There are two primary types of SAS-70 reports – “type I” covers the controls implemented by an organization; “type II” includes a comprehensive assessment of effectiveness of these controls.

To the managers of a data center, completion of a SAS-70 audit is a way to attract high-end business. To customers, the audit provides assurance that the data center has an audit-able set of information security controls in place to safeguard critical data. SAS-70 does not certify that the controls are implemented and sufficiently secure, only that they are readily verifiable by an auditor.

To achieve a satisfactory SAS-70 report, data center managers must identify which information security standards are relevant, implement the controls mandated by those standards, and finally ensure that proof of standards compliance is readily available to auditors. Compliance is more than a business requirement to a data center; it is a vital part of the value delivered to customers. Products and services that are certified to comply with security standards justify higher pricing, and security auditing solutions offer very high profit margins. Pricing for high-end data centers offering SAS-70 compliance can be much higher than the market average.

It is important to ensure that internal auditing processes are in place to verify compliance with an organization’s adopted standards; this reduces the cost of third-party services and helps to ensure that the organization realizes long-term benefits from information security investments.

The main information security standards relevant to the data center are:

Control Objectives for Information Technology (COBIT)
Information Technology Infrastructure Library (ITIL)
Payment Card Industry Data Security Standard (PCI DSS)
PIPEDA (Personal Information Protection and Electronic Documents Act)
ISO 15443 : Framework for IT Security Assurance
ISO 20000 : IT Service Management
ISO 27002 : Code of Practice for Information Security Management
FIPS : National Institute of Standards and Technology Federal Information Processing Standards (NIST FIPS)
Committee of Sponsoring Organizations of the Treadway Commission framework (COSO)

The primary areas covered by these standards are:

policy : leadership, training and governance
confidentiality : protection of critical data from unauthorized disclosure via physical and virtual access controls
service management : IT operations, support and incident response
integrity : change management processes
continuity / availability : high-availability systems, backup/restore and disaster recovery

 

For more quality articles and insights visit DataCenterTalk.


Data Center UPS: Know What Your Data Center Needs

Power failure is a problem that is fairly common in all parts of the world; one can hardly say that this concern is restricted to data centers. But seeing as a blackout in a data center could spell disaster, an uninterruptible power supply (UPS) is no longer a recommendation; it is a must. In order to set up a fail-proof UPS for your data center, one will have to know what options one has.

UPS is More Than Just Backup

Contrary to popular understanding, a UPS not only supplies power to the data center in the event of a power failure; it also, to an extent, protects the data center from transients, power sags, power swells, harmonic distortion and frequency instability. A good UPS system can also improve the power usage effectiveness (PUE) of the data center. It should be noted that a UPS can support the operation of the data center for only a very short interval of time, until the load can be switched to an auxiliary power source.

Types of UPS

There are basically three types of UPS systems available today:

  1. Standby power system- This system offers the most basic features of surge protection and battery backup. Electricity is supplied directly to the load, and standby power is invoked only during a power failure. When the power dip exceeds a certain preset value, the control switches over to the standby UPS, which usually takes about 25 milliseconds.
  2. Line interactive power system- This type of UPS system keeps the inverter in line and switches to battery power in the event of a power loss. This system makes use of an autotransformer, which supplies only the difference in power when there is a dip in voltage. A greater number of taps on the autotransformer increases the cost and complexity of the system.
  3. Double conversion on-line UPS- In this type, AC power is drawn from the supply, rectified to DC to charge the batteries, and then converted back to AC by an inverter to power the protected devices. This system is more expensive and may be considered for equipment that is highly sensitive to power fluctuations. Since the battery is always connected to the inverter, switchover is instantaneous, but care should be taken to avoid overcharging the batteries. This system provides constant output irrespective of the level of disturbance in the input.

UPS Also Requires Backup!

With that said, one should also take into consideration that a UPS may itself fail in some scenarios. One large UPS supplying power to all servers is not advisable. Instead, opt for multiple smaller UPS modules in either an N+1 or a 2N configuration (where N is the number of UPS modules required to sustain the data center load).
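
As a rough sketch of the sizing arithmetic behind those configurations (the load and module ratings below are invented purely for illustration):

    # Rough sketch: sizing a modular UPS in an N+1 or 2N configuration.
    # Load and module ratings below are invented for illustration only.
    import math

    def modules_required(load_kw, module_kw, scheme="N+1"):
        n = math.ceil(load_kw / module_kw)  # modules needed just to carry the load
        if scheme == "N+1":
            return n + 1
        if scheme == "2N":
            return 2 * n
        raise ValueError("unknown redundancy scheme: " + scheme)

    # Example: 180 kW of critical load served by 50 kW modules.
    print(modules_required(180, 50, "N+1"))  # 5 modules
    print(modules_required(180, 50, "2N"))   # 8 modules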

UPS Maintenance

A UPS, like all other data center equipment, must be monitored regularly, and maintenance is a must. The batteries should be checked for acid and hazardous gas leaks. A number of third-party companies provide thorough and complete UPS maintenance services.

Remember, the type of UPS one might want to opt for depends solely on the requirements and the data center load, so one needs a complete understanding of the load demands of the data center. An underpowered UPS system is equivalent to no UPS at all. One can seek the help of leading UPS system vendors such as Liebert, APC, Sentinel Power, Eaton, etc., so that the UPS system that is finally fitted into the data center is exactly what the data center needs.

For more insights into the data center world, visit Data Center Talk.


Smart Operators And Data Center Metrics

Modern society will soon be built on the cloud, and the cloud is built on the data center. One cannot emphasize the importance of these facilities enough, or the potential losses that can accrue if something goes wrong. It should thus go without saying: the best operators are those who pay attention to data center metrics. After all, if they don't know the ins and outs of how their facility operates, how are they going to rectify any problems they might come across?

If they don’t understand where their system’s headed, how can they prepare for the future?

"Data center efficiency goes beyond knowing if servers are still up," writes Chris A. Mackinnon of Processor. "These days, data center managers are accountable for energy usage, energy efficiency, compliance, regulation, and a great deal more. Performance must be monitored and trends must be predicted to ensure that the data center is always up and ready for capacity increases at any time."

The problem with the modern tech industry is how rapidly it’s changing- how quickly everything’s moving forward. Simply kicking back and taking on problems as they crop up just doesn’t cut it anymore.  Even a minimal degree of downtime could utterly cripple a center and its servers. While it’s true that you can’t really prepare for everything, you should still prepare for something- you’d be surprised how many pitfalls you can avoid simply by knowing the stats.

At the end of the day, good operators have a plan for whatever problems they feel are likely to arise. Smart operators, on the other hand, use metrics to predict those problems and tackle them before they even become problems.

There’s also the matter of productivity. If you understand your facility’s vital statistics, it becomes that much easier to implement a set of standards and guidelines that’ll improve both the overall effectiveness of your center and the work-efficiency of your employees.

It should go without saying that those are both things you want to do.

You need to be careful in how you analyze the statistics, however. A set of clearly defined, well-designed metrics is a godsend for any operator. On the other hand, disorganized, poorly defined, or incomplete metrics could very well spell disaster, for several reasons:

  • You could end up improving on areas that don’t require improvement, while overlooking potentially critical problems with your facility.
  • Goals that aren’t properly defined are frustrating, both for you and for your employees.
  • Frustrated employees work much less effectively.
  • Since these employees work less effectively, efficiency goes down the tubes.

When efficiency goes down the tubes, well…It all sort of goes downhill from there.

A well implemented set of metrics, on the other hand:

  • Helps improve the overall functionality of the data center.
  • Makes things far easier for IT.
  • Helps operators understand what works, and what could use improvement.
  • Puts the variables associated with a data center's operation into individual categories, and gives operators the freedom to analyze each variable only when necessary.

Of course, even with metrics, there's a downright staggering volume of information to process. There's a reason a lot of SaaS providers have started tapping into the BI market: having a platform to organize and display your metrics makes things significantly easier.

Energy efficiency, hardware effectiveness, employee efficiency, business value, data rates and uptime are all of vital importance to the data center. It’s not just a matter of energy efficiency anymore- virtually every factor needs to be considered, and virtually every aspect needs to have a metric associated with it.

There's no easy answer to which metrics are the right ones; it largely depends on your facility and what you're attempting to do with it. But if you want to truly thrive, if you want the technology to be truly effective and the facility to have real business value, you can't just let it run as it will. You need to implement data center metrics, either way.

 


License Agreements for Data Center Service Providers

Nowadays, large Service Providers (or more exactly Data Center Providers) need more flexibility when licensing different software.

The most important thing Data Center customers have been paying attention to lately is delivery time. It has become practically the most relevant factor, since infrastructure requirements vary by the hour. Data Center customers need to create new virtual instances on a regular basis, and if the Data Center Provider has to raise a Purchase Order for software, delivery times increase significantly.

Software providers have aligned with market needs and developed an On Demand software licensing model.

Microsoft has been selling licenses under this model for many years now. The SPLA (Service Provider License Agreement) has some great features but also some downsides. Microsoft licenses are mostly per-Processor licenses. In some cases there are the so-called CAL licenses, which can be used in very particular situations. There are also the per-Processor Datacenter licenses, both for Windows Server and SQL Server, which are really aligned with our Cloud Computing reality: if you have a server with 2 physical processors and you pay for 2 Windows Server Datacenter per-Processor licenses, you can create as many Windows Server Standard or Enterprise virtual machines as your physical hardware permits. The same goes for SQL Server.
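
As a rough sketch of that per-processor arithmetic (the host inventory below is invented for illustration; the actual counting rules come from your own agreement):

    # Rough sketch: counting per-processor Datacenter licenses for a monthly SPLA-style report.
    # The host inventory is invented for illustration; real rules come from your agreement.

    hosts = [
        {"name": "hv-01", "physical_processors": 2},
        {"name": "hv-02", "physical_processors": 4},
    ]

    # As described above, one Datacenter per-processor license per physical processor
    # covers unlimited Windows Server virtual machines on that host.
    licenses_to_report = sum(h["physical_processors"] for h in hosts)
    print("Datacenter per-processor licenses to report this month:", licenses_to_report)  # 6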

One important characteristic of SPLA is that the Service Provider must report usage every month by means of an online Microsoft tool called MOET.

Let's describe some important cautions to consider when dealing with Microsoft SPLA. First of all, do not get your usage reporting wrong. Reporting incorrectly, or not reporting on time, represents a great danger to your business.

If you report incorrectly, the Microsoft folks will choose you randomly (just by chance!) to audit your servers and your customers' servers. If they find out you have reported fewer licenses than you actually use, you will get a hard-to-say-no fine. The Microsoft speech will go something like this: "You have failed to report your real software licenses and so you are fined 1 million USD, but if you pay in the next 10 days you can forget about this, regularize your situation with Microsoft, and continue with your business as usual."

Believe me when I say it's a hard-to-say-no fine. The Microsoft SPLA contract is impossible to negotiate; you have no alternative other than agreeing to every term. So if Microsoft fines you 1 million USD, you'll end up paying it either way, and it's always better to accept the fine happily.

On the other hand, if you fail to report on time, you will receive horrible emails saying your account is cancelled and all of your licenses will expire in the following 10 minutes; it's like a 10-ton train hitting your business. So be punctual and report on time: before the 15th of every month.

Moreover, the MOET website is the ultimate expression of Microsoft software: slow and complicated, and its only redeeming feature is that it works when you copy and paste your report spreadsheet.

But it's not all bad news!

Supplying Microsoft licenses under an On Demand model is a great differentiator for your business. You will be able to provide your customer with almost any Microsoft license, for example Microsoft Windows Server or Microsoft SharePoint, and charge for the license together with the MRC (monthly fee). ROI (return on investment) increases, TCO (total cost of ownership) decreases, and everybody does business happily ever after.

Next time we will talk about VSPP, VMware's On Demand licensing model.


Coolant Recycling

Coolants are widely used in HVAC cooling systems in data centers. The main purpose of a coolant is to save a system from any catastrophic damage due to heat. It is hence usually circulated around the heat generating system, absorbing and dissipating heat evenly. It may be referred to as ‘antifreeze’ when it is used to keep a system from freezing when operated below 32 degrees Fahrenheit.

What does a Coolant contain?

The choice of a coolant or antifreeze is made based on the use; here are the general choices:

1. Ethylene Glycol:

Ethylene glycol is the most popular coolant available in the market today; methanol, ethanol, isopropyl alcohol, and propylene glycol are also common choices.

2. Methanol:

Also known as methyl alcohol, carbinol, wood alcohol, wood naphtha, and wood spirits, methanol is a simple alcohol: light, volatile, colourless, flammable, and poisonous, with a distinctive odour. At room temperature it is a polar solvent, generally used as an antifreeze, solvent, fuel, and a denaturant for ethyl alcohol. It is popular for machinery and may be found in automotive windshield washer fluid, de-icers, and gasoline additives.

3. Glycerol:

Glycerol can be used as antifreeze and is not toxic.

4. Propylene Glycol:

Propylene glycol is the least toxic and is marketed as "non-toxic antifreeze". It is a replacement for ethylene glycol in food-processing systems or in water pipes where incidental ingestion may be possible.

It’s Hazardous!

Antifreeze is toxic to humans and animals alike, and it can cause serious water quality issues. Heavy metals such as lead, cadmium, and chromium are usually found in waste coolant, sometimes at levels high enough to make it a regulated hazardous waste. Waste coolant must not be disposed of on land or discharged into a sanitary sewer, storm drain, ditch, dry well, or septic system.

Why Recycle?

Recycling coolant is advantageous because:

1) It's cost-effective.

2) It saves resources.

The most important ingredient of a coolant, ethylene glycol, is produced from natural gas, which is a non-renewable resource. Data centers use a lot of coolant for dissipating server heat. Recycling coolant can reduce management costs and reduce the quantity of new material purchased. Recycling antifreeze on site and reconditioning it with additives costs significantly less than purchasing new antifreeze.

 

How to Recycle?

Recycling coolant involves two major steps:

1. Removing contaminants like emulsified oils and heavy metals by filtration, distillation, reverse osmosis, or ion exchange.

2. Restoring antifreeze properties with additives. Additives usually contain chemicals that stabilize the acidity of the mixture, inhibit rust and corrosion, reduce water scaling, and slow the breakdown of ethylene glycol.

Where to Recycle?

Depending on feasibility and the magnitude of recycling necessary, waste coolant can be recycled in the following arrangements:

1) On-Site Recycling: Waste coolant is recycled in units purchased by the facility, located on site, and operated by the employees.

2) Mobile Recycling Service: A van or truck equipped with a recycling unit visits the facility and recycles waste antifreeze.

3) Off-Site Recycling: Waste coolant is transported to a specialized recycling company, these services can also resupply the facility with recycled antifreeze.

Industry Standards

As of September 1999, there is no ASTM quality standard for recycled antifreeze. However, several state agencies have issued product specifications for recycled antifreeze, and some renowned vehicle manufacturers test and certify coolant recycling equipment or have developed their own standards for recycled antifreeze. No single national standard for recycled antifreeze is available.

Data Centers cannot run without server cooling. The huge amounts of coolant used can be recycled on-site for savings in capital and maintenance. Invest in a coolant recycler for your data center today!

 

For more updates on data centers, visit Data Center Talk.
