Backup Problems That Companies Need To Overcome In Order To Capitalize On Big Data

The “Mainframe Age” saw the birth of databases and business process automation. Then came the “PC Age”, which placed the power of computing on the desktop of every worker. Next came the “Internet Age”, which created the virtual world in which most business takes place today. Now, we’re entering the age of “Big Data”.

A major shift is taking place right now in data centers across the world, and it’s spawning a new era in the history of business computing. Until recently, business data was mainly used to support business decisions. Now, data is becoming the business itself.

A combination of cheap storage and extremely powerful processing and analysis capability is spawning new value-creating opportunities which had previously been the domain of science fiction.
Of course, any business that wants to leverage the power of big data will also have to protect this data from disasters, accidents and mechanical failures. This means that certain backup challenges will need to be overcome in order to support the activities associated with Big Data computing.

Manual Labour Costs

Hardware costs are dropping exponentially even as complexity rises just as fast. As this happens, automation is critical in order to prevent labour costs from also increasing exponentially.

Data Transfer Speeds

Transferring data to off-site storage using tape is extremely slow for both backup and recovery. Big data systems will require that data be backed up at network speeds. In cases where data is growing at a faster rate than the public Internet infrastructure can support, companies will require dedicated connections with their remote backup facilities or service providers.
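
To see why network speed becomes the bottleneck, a rough back-of-the-envelope estimate helps. The sketch below is illustrative only; the 50 TB data volume, link speeds and 70% efficiency factor are assumptions, not figures from any particular facility or vendor.

# Back-of-the-envelope sketch: how long does a full backup take over a given
# link? The data set size, link speeds and efficiency factor are assumptions.

def backup_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Rough transfer time in hours for data_tb terabytes over a link_gbps link."""
    data_bits = data_tb * 8 * 10**12              # decimal terabytes -> bits
    usable_bps = link_gbps * 10**9 * efficiency   # allow for protocol overhead / contention
    return data_bits / usable_bps / 3600

if __name__ == "__main__":
    for link_gbps in (0.1, 1, 10):                # 100 Mbps, 1 Gbps, 10 Gbps
        print(f"50 TB over {link_gbps} Gbps: ~{backup_hours(50, link_gbps):.0f} hours")

Even at a dedicated 10 Gbps, a 50 TB full backup ties up the link for the better part of a day, which is why both incremental techniques and dedicated connections matter.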

Recovery Speeds

As the strategic importance of corporate data increases, so will the potential risks and costs associated with unplanned downtime. Big data applications will require data backup strategies which are optimized for maximum recovery speeds. This may mean combining both local and remote backups, and it could also mean maintaining a mirrored emergency failover facility at a remote site.

Recovery Point Objectives

Because data loses value with age, the most recent data is also the most valuable. This is especially true for big data apps which work with real-time analysis. Because of this, backup plans will need to be designed to minimize data loss in the event of a disaster.
It’s no longer acceptable to lose 24 hours of data because of a daily backup cycle. Higher backup frequencies will become the norm.

Archiving

In order to maximize server performance and adhere to regulatory requirements, inactive, older or low-value data will need to be stored separately from live working data. Data archiving and electronic discovery will become much more important in the big data age.

Backup Windows

Big data takes a long time to back up. It’s simply not practical to take a system off-line for 8 hours for daily backups. Tomorrow’s big data apps will need to perform more frequent backups, faster, and with less interruption.

Data Security

It’s really pretty shocking to think that – even today – most businesses don’t encrypt their backups. Backups should be encrypted, especially when stored off-site. Losing backup media is a common problem and can provide hackers and identity thieves with access to sensitive and private information. And big data will only worsen this threat.
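
As a minimal illustration of the principle (not a description of any particular backup product), the sketch below encrypts a backup archive before it leaves the building. It assumes the third-party Python cryptography package is installed, and the file names are hypothetical; real backup software handles key management, streaming and integrity checking far more rigorously.

# Minimal sketch: encrypt a backup archive with a symmetric key before
# shipping it off-site. Assumes `pip install cryptography`; file names are
# hypothetical placeholders.
from cryptography.fernet import Fernet

def encrypt_backup(src_path: str, dst_path: str, key: bytes) -> None:
    """Write an encrypted copy of src_path to dst_path."""
    cipher = Fernet(key)
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        dst.write(cipher.encrypt(src.read()))     # fine for modest archive sizes

if __name__ == "__main__":
    key = Fernet.generate_key()                   # keep the key separate from the backup media
    encrypt_backup("daily_backup.tar", "daily_backup.tar.enc", key)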

Compression and Deduplication

No matter how cheap storage is today, it will always be cheaper tomorrow. That’s why it’s important to budget accurately and maximize storage utilization to minimize waste. Technologies like storage virtualization, block-level incremental backups, compression, and deduplication will grow in importance thanks to big data. But these savings have to be balanced against the need for fast, native storage, because compression, deduplication and encryption all add processing overhead.
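
To make the deduplication idea concrete, here is a toy sketch of block-level deduplication: a file is split into fixed-size blocks, and each unique block is stored only once, keyed by its hash. Production systems typically use variable-size chunking and far more careful storage layouts; the block size and file name here are arbitrary assumptions.

# Toy block-level deduplication sketch (illustrative only).
import hashlib

BLOCK_SIZE = 4096  # bytes; real systems often use variable-size chunking

def dedupe(path: str, store: dict[bytes, bytes]) -> list[bytes]:
    """Return the file as a list of block hashes, adding unseen blocks to `store`."""
    recipe = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).digest()
            store.setdefault(digest, block)       # identical blocks stored only once
            recipe.append(digest)
    return recipe

if __name__ == "__main__":
    store: dict[bytes, bytes] = {}
    recipe = dedupe("daily_backup.tar", store)
    print(f"{len(recipe)} blocks referenced, {len(store)} unique blocks stored")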

These are just a few of the many new challenges that organizations will need to overcome as they begin to leverage the opportunities presented by big data computing. That’s why many organizations are choosing to outsource their data backup and online storage requirements and partner with industry experts.

About The Author: Storagepipe Solutions has been a leader in data center backups for over 10 years, with deep expertise in backing up large volumes of complex data center data. Storagepipe leads the industry in disaster recovery, regulatory compliance, and business continuity backup solutions in anticipation of rapidly changing data management trends; it is more than just an online backup software company. Storagepipe offers a comprehensive suite of hosted disaster recovery solutions that give organizations greater control over their retention policies while allowing them to overcome complex backup management challenges through cost-effective automation.


Data Center Talk updates its resources every day. Visit us to keep up with the latest technology and standards from the data center world.

Please leave your views and comments on DCT Forum.

Are you in control of your data center?

I have been in the IT business for more than 30 years, from managing large IT infrastructure migrations and data center moves to helping design data centers and managing their construction. During all these years, I have seen a lot of surprising and amazing infrastructures, to say the least… I have come across many data center projects that made me wonder why they were started in the first place. It isn’t that the needs weren’t there, but rather that the initial hypotheses were wrong or the information was incomplete. The result: the needs were poorly explained to the engineers, and the planning was deficient.

In the following, I propose a couple of rules to follow regarding data centres:

Rule #1: Make sure people understand each other

When a French-speaking person speaks to an English-speaking person, at least one of them must speak and understand the other person’s language, or they must hire a translator. The same principle applies to a data center project. The engineer must be able to understand the IT specialist and their needs. Moreover, the IT specialist must be aware of where the technology will be 4 to 5 years down the road, and whether the data center will really meet the requirements at the end of the construction project. A data center isn’t built for today’s needs! A data center must be able to last at least 15 years and must be able to evolve over those years.
Ex: Intel will come out with new technology next year that will be adopted by VMware and could change multiple factors in your data center… Is the IT specialist aware of this? The engineering firm rarely tracks those changes in the IT industry. It is the responsibility of the requester to expose the needs correctly and clearly.
How many times have you heard the following from clients? “They designed it wrong!” or “They didn’t give me the proper specifications!”

So you need:
1. A multi-year vision
2. To clearly define your needs
3. To make sure the other party understands them.

Rule #2: Start small before redesigning your data center

Before you even start with a vision, make sure you have a clear picture of your actual assets. Adopt a phased approach that will get you where you want to go in your data center upgrade project.

• Clean up your environment: make sure that you aren’t carrying a dead horse, meaning equipment that is still running but not processing anything because it was decommissioned a few years ago!
• Try to consolidate using virtualization; it is less expensive than building a new data center!
• One last piece of advice before taking up your pen to lay out your new data center: use manufacturer accessories that can help you remove hot spots, such as directional airflow tiles or rack air-removal doors with built-in chimneys that exhaust hot air into the plenum! Containment is also sometimes a solution for your air conditioning issues.

Some of these ideas may extend the life of your data center by a few years! I know, it’s not sexy, but it will allow you enough time to plan for the best solution. Over the next couple of months, I am planning to write on many topics that I have come across in my multiple mandates around data centers. I believe that had my customers considered some of these elements, many headaches would have been avoided.

So, what do you do first? Get in control of your data center: understand your assets! Understand the relationships between the elements in it, as well as the ownership of every one of those elements. Some of you might be using a spreadsheet while others are using an enterprise solution. I have not come across a single company that was able to manage a data center of 20 or more cabinets efficiently using a spreadsheet as the only tool. How many attributes do you have to enter in this spreadsheet if you have 20 cabinets? More than a thousand for sure, and that isn’t counting the interrelations between them! In practice, only one person ends up maintaining this spreadsheet… and as a result the information is of value to only one person!

If you need to expand or design a new data center, what kind of information will you need to provide the engineering firm or the IT specialist so you can have a successful project? Is it in this spreadsheet or will you have to collect more data? The next article will introduce ways to represent your data center and really help you manage it.

Data Center Environment and Energy Thresholds

If you’re running a data center, there are certain environmental and energy thresholds you need to stay within. If you don’t have the proper operating environment, that not only drives your operating costs through the roof; it could very well cause irreparable damage to your servers. Further, if you’re using too much energy, that’s a sure sign that your data center is suffering from some glaring inefficiency or operations issue, which will ultimately hurt your bottom line.

If you’re a smart operator, these aren’t really things you can ignore.

Environmental Thresholds

So what environmental thresholds should you shoot for? What’s the ‘sweet spot’ for power consumption? How are they tied together?

The answer’s not actually as simple as you might think.

While it’s true that every data center should be both cool and reasonably dry, there’s a wide range of additional factors that come into play when you’re trying to determine the ideal operating conditions for your center:

  • How old is your equipment? Older equipment is more sensitive, and tends to run hotter.
  • What sort of server density is your data center running? As a general rule, more density means more heat.
  • How many hours per year do you run your servers? Believe it or not, not every Data center in the world is “always on.”
  • How intensive are the tasks you assign to your servers? Computers generate more heat when handling more intense tasks, after all.

Given all the above factors, a lot of operators err on the side of caution and run their centers at around 60 degrees Fahrenheit. Believe it or not, they don’t actually need to run that cold, ever (unless you’re still using legacy hardware). Many operators are warming things up, raising temperatures to somewhere between 65 and 75 degrees. That’s the sweet spot, and if your data center’s running in that temperature range, you’re very likely in the clear. If it climbs above 85 degrees, you may have a problem.

Don’t err on the side of caution here, either: running too far below the threshold can end up costing you a pretty penny in climate control, and there’s a good chance that’s money you don’t really need to waste.

As far as humidity’s concerned, too dry is almost as bad as too wet. Ideally, you’re going to want to shoot for somewhere between 45 and 55%. Too damp, and it’s pretty obvious what’ll happen: computers and water don’t really know how to play nice with each other. Too dry, and there’s a good chance your gear is going to fry itself as a result of electrostatic discharge.
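
A simple rule-of-thumb check against the ranges just described (the 65–75 °F sweet spot, the 85 °F danger line, and 45–55% relative humidity) can be expressed in a few lines. How the readings are collected from sensors is deliberately outside the scope of this sketch.

# Threshold check using the ranges discussed above. Sensor integration is
# out of scope; this only shows the rule logic.

def check_environment(temp_f: float, humidity_pct: float) -> list[str]:
    warnings = []
    if temp_f > 85:
        warnings.append(f"temperature {temp_f} F is above the 85 F danger line")
    elif not 65 <= temp_f <= 75:
        warnings.append(f"temperature {temp_f} F is outside the 65-75 F sweet spot")
    if not 45 <= humidity_pct <= 55:
        warnings.append(f"humidity {humidity_pct}% is outside the 45-55% range")
    return warnings

if __name__ == "__main__":
    print(check_environment(temp_f=78, humidity_pct=40))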

Energy Thresholds

Power consumption’s a little trickier, but at the end of the day, it’s tied directly to efficiency. Elements which affect power consumption include:

  • Climate Control- choose an inefficient or less-than-ideal means of controlling the environment in your data center, and you’re pretty much guaranteed to have your energy consumption shoot through the roof- and well above the threshold.
  • The efficiency of your hardware- less efficient hardware will, naturally, use more power to accomplish less.
  • The uptime of your facility- this one goes without saying. The longer your uptime, the more power you’ll be using.

As far as power thresholds are concerned, we’re going to swing back to my previous article and look at the metrics behind power consumption. This is, quite simply, the only way we can really establish an energy threshold. One metric in particular is of interest to us: power usage effectiveness (PUE), the ratio of total facility power to IT equipment power. The closer you are to 1, the more efficient your center is, and the closer you are to the ideal threshold for power consumption.
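
As a quick worked example (the kilowatt figures below are made up purely for illustration), PUE is simply total facility power divided by the IT load:

# PUE = total facility power / IT equipment power. A value of 1.0 would mean
# zero overhead; every extra tenth is cooling, lighting and power-conversion
# loss. The figures below are illustrative only.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

if __name__ == "__main__":
    # 1,000 kW of IT load drawing 1,800 kW at the meter -> PUE of 1.8,
    # i.e. 0.8 kW of overhead for every kW of useful IT work.
    print(pue(total_facility_kw=1800, it_load_kw=1000))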

You should also keep an eye on energy provisioning. Don’t waste power- you want to be absolutely certain that you aren’t provisioning too much power, or too little. Either way, it’s likely going to cost you money. Make sure you know how much power your center uses, and ensure you aren’t provisioning for more.

In Closing

Environmental and energy thresholds are both incredibly important factors in the smooth operation of any data center. Run above either threshold, and you not only waste money; you risk causing potentially irreparable damage to your hardware, leading to downtime that could end up costing you more than a few million dollars. Thankfully, if you plan ahead, they’re both pretty easy to keep track of: just implement some means of monitoring them, and work out some threshold rules.

Questions? Comments? Concerns? Swing by our forum to have a chat.

Underlying Hardware Insignificance

In the coming years, major server manufacturers will have a great deal of trouble marketing their own strengths and the weaknesses of their competition.

In just a couple of years, data center customers won’t be paying attention to the underlying hardware of cloud computing services such as IaaS or PaaS. This irrelevance, or insignificance, factor will produce winners and losers, the latter being easier to identify:

1) When they are bought by a competitor.

2) When they announce their retreat from server manufacturing.

Whether the loser is going to be Dell, HP or IBM, no one can say today. But as more customers stop asking for a Dell/HP/IBM server for their private cloud, and as data center providers stop clinging to a specific technology (as they tend to), the only difference left will be cost.

Cost, and the consequent MRC/NRC (monthly recurring cost and non-recurring cost), will define which servers to buy.

Some server manufacturers are bundling OEM virtualization licenses with their equipment as a way to leverage the sale. But with the arrival of licensing agreements for service providers, there will come a time when buying OEM licenses won’t be so convenient.

It won’t matter in the future whether a server manufacturer does R&D or not, or whether they were the first in the industry to provide energy-efficient servers, because all of them will be the same. There won’t be any real difference in the hardware, and the marketing strategy of data center providers in the future will avoid talking about the underlying hardware of shared environments like IaaS.

In the same way that customers no longer ask for EMC or HDS storage but simply for 15K RPM drives, it’s just a matter of time before customers stop asking for a specific server vendor.

From the data center provider’s point of view, it isn’t really strategic to marry a hardware vendor for multitenant platforms, since commercial conditions may change out of the blue and suddenly leave the data center provider with an unprofitable business.

The underlying hardware irrelevance trend is the reason why server vendors are developing strategic alliances with virtualization hypervisor vendors. It is also the reason why they are developing data center deployment software oriented to cloud services. If server vendors fail to develop data center management software that really makes it worth buying their hardware, all of them (except maybe two) are doomed.

Another alternative for server manufacturers is to sell not only hardware but also services, like hosting or housing. IBM is the leader in this area, since it provides almost the entire chain of required services: hardware, software, professional services and so on. But IBM lacks one very important ingredient: telecommunications. It isn’t really clear yet whether this will strengthen or weaken IBM in the future.

What I am sure about is that combined data center and telecommunications providers have a real chance to make the difference. They will be able to negotiate the best price for multitenant platforms with hardware vendors, and by means of service provider license agreements they will be able to provide everything necessary for a high-end data center solution: network, hardware, software, professional services, etc.

In conclusion, market trends for cloud computing point to a significant change in data center customer behaviour. Customers will tend to jump into public clouds instead of deploying private clouds (which are more expensive, by the way). With more demand for multitenant platforms, customers won’t be able to dictate hardware specifics, and as a consequence they will hand data center providers a very important tool for leveraging their business.

For more quality articles and insights visit DataCenterTalk.


TIA- 942 Provides Data Center Infrastructure Standards

Following established cabling standards has always ensured the safe and efficient operation of any device or industry. Not only do standards assist us in building cost-effective systems, they also bring a certain level of uniformity across industries. They ensure proper design, installation and performance of the network, and they have also enabled industries to advance faster and further.

Data centers, until recently, did not have any established standards. Network administrators had to choose technologies and decipher how to properly implement them in an often-undersized space. In April 2005, the Telecommunications Industry Association (TIA) released TIA-942, the Telecommunications Infrastructure Standard for Data Centers, the first standard to comprehensively address data center infrastructure.

The TIA-942 standard covers the following specifications:

  • Site space and layout
  • Cabling infrastructure
  • Tiered reliability
  • Power
  • Cooling

Site Space and Layout

While choosing space for a data center, one must keep possible future growth and changing environmental conditions in mind. The interior of the data center should also be designed with plenty of “white space” that can accommodate future racks and cooling devices. According to the TIA-942 standard, the data center should include the following functional areas:

  • One or More Entrance Rooms:

The Entrance Room may be located either inside or outside the data processing room. The standard recommends locating the entrance room outside of the computer room for better security. If located within the computer room, the Entrance Room should be consolidated with the MDA.

  • Main Distribution Area

The MDA is a centrally located area that houses the main cross-connect as well as core routers and switches for LAN and SAN infrastructures along with a horizontal cross-connect for a nearby EDA. The standard requires at least one MDA and specifies installing separate racks for fiber, UTP, and coaxial cable in this location.

  • One or More Horizontal Distribution Areas

The HDA is a distribution point for horizontal cabling and houses cross-connects and active equipment for distributing cable to the equipment distribution area. TIA standards specify installing separate racks for fiber, UTP, and coaxial cable in this location. It also recommends locating switches and patch panels to minimize patch cord lengths and facilitate cable management.

  • Equipment Distribution Areas

Horizontal cables are typically terminated with patch panels in the EDA. The standard specifies installing racks and cabinets in an alternating pattern to create “hot” and “cold” aisles to dissipate heat from electronics.

  • Zone Distribution Areas

The ZDA is an optional interconnection point in the horizontal cabling between the HDA and EDA. Only one ZDA is allowed within a horizontal cabling run with a maximum of 288 connections. The ZDA cannot contain any cross-connects or active equipment.

  • Backbone and Horizontal Cabling

Backbone cabling provides connections between MDA, HDAs, and Entrance Rooms while horizontal cabling provides connections between HDAs, ZDA, and EDA. Each functional area must be arranged to prevent exceeding maximum cable lengths for both backbone and horizontal cabling.

CABLING INFRASTRUCTURE

The TIA-942 standard recommends:

  • The use of laser-optimized 50μm multimode fiber for backbone cabling.
  • Installing the highest capacity media available for horizontal cabling to reduce the need for re-cabling in the future.
  • Maximum backbone and horizontal cabling distances based on the cabling media and applications to be supported in the data center.
  • A maximum of 300m of backbone fiber optic cabling and 100m of horizontal copper cabling.

It also provides several requirements and recommendations for cable management:

  • The data center must be designed with separate racks and pathways for each media type, and power and communications cables must be placed in separate pathways.
  • Adequate space must be provided within and between racks and cabinets and in pathways for better cable management, bend radius protection, and access.


TIERED RELIABILITY

To provide a means for determining specific data center needs, the TIA-942 standard includes an informative annex defining data center availability tiers, with detailed architectural, security, electrical, mechanical, and telecommunications recommendations for each. (A quick sanity check of the downtime figures follows the tier descriptions below.)

Tier I – Basic: 99.671% Availability

• Single path for power and cooling distribution, no redundant components (N)

• May or may not have a raised floor, UPS, or generator

• Annual downtime of 28.8 hours

• Must be shut down completely to perform preventive maintenance

Tier 2 – Redundant Components: 99.741% Availability

• Single path for power and cooling distribution, includes redundant components (N+1)

• Includes raised floor, UPS, and generator

• Annual downtime of 22.0 hours

• Maintenance of power path and other parts of the infrastructure require a processing shutdown

Tier 3 – Concurrently Maintainable: 99.982% Availability

• Multiple power and cooling distribution paths but with only one path active, includes redundant components (N+1)

• Annual downtime of 1.6 hours

• Includes raised floor and sufficient capacity and distribution to carry load on one path while performing maintenance on the other.

Tier 4 – Fault Tolerant: 99.995% Availability

• Planned activity does not disrupt critical load and data center can sustain at least one worst-case unplanned event with no critical load impact

• Multiple active power and cooling distribution paths, includes redundant components (2 (N+1), i.e. 2 UPS each with N+1 redundancy)

• Annual downtime of 0.4 hours
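
The annual downtime figures quoted for each tier follow almost directly from the availability percentages, assuming an 8,760-hour year. The short sketch below reproduces them (Tier 2 comes out slightly higher than the 22.0 hours quoted above):

# Sanity check: convert a tier's availability percentage into annual downtime.

HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_downtime_hours(availability_pct: float) -> float:
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

if __name__ == "__main__":
    for tier, avail in [("1", 99.671), ("2", 99.741), ("3", 99.982), ("4", 99.995)]:
        print(f"Tier {tier}: {annual_downtime_hours(avail):.1f} hours/year")
    # -> roughly 28.8, 22.7, 1.6 and 0.4 hours respectively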


POWER

Determining power requirements requires careful planning and is based on the desired reliability tier. It may include two or more power feeds from the utility, UPS, multiple circuits to systems and equipment, and on-site generators.

Estimating power needs involves determining the power required for all existing devices and for devices anticipated in the future. Power requirements must also be estimated for all support equipment such as UPS, generators, conditioning electronics, HVAC, lighting, etc.

COOLING

The standard incorporates specifications for encouraging airflow and reducing the amount of heat generated by equipment. It recommends the use of a raised-floor system for more flexible cooling, and it encourages a hot-aisle/cold-aisle arrangement.

The standard also suggests:

  • Increase airflow by blocking unnecessary air escapes and/or increasing the height of the raised floor
  • Spread equipment out over unused portions of the raised floor
  • Use open racks instead of cabinets, or use cabinets with mesh fronts and backs
  • Use perforated tiles with larger openings


Data Center Talk updates its resources every day. Visit us to keep up with the latest technology and standards from the data center world.

Please leave your views and comments on DCT Forum.

Bringing Fanatical Support to your Premises

In the past, having a private cloud typically meant you had offsite hardware managed by a host, or a full-time IT team on hand to handle on-premise hardware. Thanks to Rackspace, however, companies are now able to have the security of an on-premise private cloud while receiving the industry-leading support and guidance of Rackspace Cloud Builders. By providing customers with the option of utilizing Rackspace data centers, partner data centers, or, most commonly, a customer-chosen data center, Rackspace is now moving from being a traditional web host to an on-call IT support firm.

While the change from a controlled environment to “real world” setups might sound daunting, an interview with Scott Sanchez, Director of Business Development for Rackspace Cloud Builders, greatly helped to clarify the many questions enterprises and information technology professionals have had about the major initiative from the vendor.

Bringing the Cloud to You

  • Although Open Stack can technically run on a wide array of hardware, to qualify for Rackspace support, your servers  must conform to the specifications at ReferenceArchitecture.org
    • The requirements at Reference Architecture are intended to ensure a reasonably standard environment for Rackspace clients regardless of server location
    • The main objective of Rackspace’s Open Stack offering is to provide the same level of support to Rackspace clients regardless of where their servers are located.
      • Rackspace Cloud Builders allows Rackspace to help assemble the necessary hardware on your premises, while also handling management tasks remotely
      • When asked about the transition to supporting both controlled and outside environments rather than just their own data centers, Sanchez said it has not been an issue due to the  publication of standardized required specifications
      • According to Sanchez, although Open Stack can be used by smaller companies looking to test their own private environments, the ideal demographic is larger companies with significant infrastructure investment

Proven Track Record

  • Despite Open Stack being only sixteen months old, it has been adopted by computing giants such as Sony (for the PS3 Network), PayPal (for its X Commerce platform), and the European Organization for Nuclear Research (CERN)
  • Additionally, Open Stack boasts a development network of over 130 participating companies and over 150 developers
  • Interestingly, Sanchez mentioned that many of the major corporate users of Open Stack devote some of their development efforts towards contributing back to the Open Stack community as new features are added and improvements are made
  • This model does not just benefit the community at large; enterprises also benefit, since their contributions help shape the project. Additionally, it prevents them from being at the mercy of a single company for development direction.

Development Cycle

  • Open Stack operates on a six month development cycle which is based on clearly marked milestones, helping to simplify the logistics of knowing how an update will affect the existing systems
  • Although having a six month deployment cycle has been of concern to many, Sanchez mentioned that with the Diablo release (latest version) of the software, the foundation has become much more solid than the earlier phases
    • It is analogous to building a house: it takes a while for the foundation to solidify before you can begin branching out and adding additional features.
    • In addition to starting with a full deployment, Sanchez mentioned that some companies who are concerned about the early stages of Open Stack simply start with the latest version, but don’t go to production until later versions come out
    • Unlike many Linux distros, which maintain bleeding-edge, stable, and legacy versions, Open Stack only maintains its latest edition, allowing the project to focus on the present rather than having to support several variants of the platform
      • When asked if Rackspace has any plans to adopt a deployment cycle similar to Linux distros, Sanchez said there are no plans to change the model as it is already sufficient.

      For more quality articles visit DataCenterTalk

Advantages and Disadvantages of a Parallel Modular UPS System

Note: The conclusions drawn in this article are the personal views of the author. The website and author are not responsible for any unforeseen damage caused by implementing these techniques.

Modular UPS systems are a smart concept. They are small, light, compact, hot-swappable, low-cost modules which can be added when required and removed when no longer needed. Parallel-connected UPS systems are distinctive in their own way. But every coin has two sides: with many advantages come disadvantages too. Read on.


Advantages:

A parallel modular UPS system is, beyond doubt, very advantageous to a data center. Here are a few of those advantages:


Higher Availability:

When analysing a UPS system, availability is obviously a major criterion in any purchase decision. The availability of a UPS is defined as follows:

AV = MTBF / (MTBF + MTTR) = 1 / (1 + MTTR/MTBF)

where AV is availability, MTBF is mean time between failures, and MTTR is mean time to repair.

The mean time between failures depends on the number of parallel units and the level of redundancy. Hot-swapping of failed modules reduces the mean time to repair.
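
A quick worked example shows why a small MTTR matters so much; the MTBF and MTTR values below are illustrative assumptions, not vendor figures.

# Availability from the formula above: AV = MTBF / (MTBF + MTTR).

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

if __name__ == "__main__":
    # Hot-swappable module: repair is a quick swap, so MTTR is tiny.
    print(f"{availability(mtbf_hours=100_000, mttr_hours=0.5):.6f}")  # ~0.999995
    # Same MTBF, but repair needs an on-site service call.
    print(f"{availability(mtbf_hours=100_000, mttr_hours=24):.6f}")   # ~0.999760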


Least Floor Space:

Most standard stand-alone systems come in a horizontally stacked form factor, which drastically increases the floor space consumed as more systems are added. Modular UPS systems are designed to be stacked vertically, reducing floor space by almost 25%!


Reduced Maintenance Failure:

Popular reviews show that 30% of UPS failures are due to errors made by maintenance staff during repair. Hot-swappable modules can be swapped out immediately and repaired at a later time. Hence, failed units can be sent to a service station for repairs, greatly minimizing maintenance-induced failures.


Extreme Efficiency:

The efficiency of a UPS reaches its peak when the load is near its maximum rating. Modular UPSs allow power modules to be added as the load grows, which keeps the load-to-capacity ratio high and efficiency higher than normal.


Isolation:

The greatest advantage of parallel-operated UPSs is isolation. The load and supply always remain balanced, even when there is a break due to unforeseen events.


Disadvantages:

Parallel-operated modular UPSs have a few disadvantages that are usually neglected. It is best to know them, in case some fine-tuning has to be made to the backup power system in general.


Not too many in parallel:

Generally, for a parallel modular UPS system, a low mean time to repair is more advantageous than a high mean time between failures. In a system of lower rating, multiple modules may be placed on a single board; having many modules in parallel shortens the mean time between failures of the system as a whole, and this acts as a major drawback. The same principle applies to batteries connected in parallel in these systems.


Relatively Higher Maintenance:

Increasing the number of UPS modules further reduces system reliability, since more modules mean more failures over a given period, and overall availability suffers as a result.


Failure of Individual Components:

In a parallel modular UPS, the entire rack may serve as a single UPS. The failure of a shared element, such as a common control unit or even the battery bank, amounts to a common failure of all parallel modules and may cause a complete output breakdown. Hence it is advisable for the data center to go for N+1 or N+2 modules, where N is the number of UPS modules needed by the data center for normal backup operation. This, of course, leads to added cost.

The advantages of a parallel modular UPS system clearly outweigh the disadvantages. A good parallel-connected modular UPS may be the key to efficient data center power backup.

SAS-70 in the Data Center

Standardization, and associated certification to prove compliance to standards, can be both an indispensable assurance of quality for consumers and an unwelcome expense for vendors.

During the Internet’s “wild west” boom phase in the early 1990s, the pace of technological innovation was so fast that there was little time for rigid standardization or certification. Most of the technologies we rely upon today, such as TCP/IP networking, were developed without formal methodologies; internet standards generally consisted of “Request For Comments” (RFC) and “Best Current Practices” (BCP) papers written by the parties with an interest in the emerging technologies.

As the Internet matured and real money began to flow through the network, technologies that were once mere research projects became part of real-world, critical infrastructures essential to business and government. The globalization of business fueled by the Internet meant that it was no longer sufficient to allow organizations to manage their own data processing without oversight.

One of the most important standards documents in today’s information economy is SAS-70, or “the Statement on Auditing Standards, #70”. The standard, originally titled “Reports on the Processing of Transactions by Service Organizations” provides guidance to auditors assessing security controls. The standard was released by the American Institute of Certified Public Accountants (AICPA) in 1992, well before the internet was widely used as an electronic commerce platform.

Information security is now the foremost requirement of most organizations when selecting a data center. In the wake of the Enron scandal in 2001 in which deceptive accounting techniques were used to defraud investors of some $11 billion, public faith in existing accounting standards deteriorated, and there were widespread demands for more comprehensive safeguards to prevent future occurrences. Section 404 of the Sarbanes-Oxley (SOX) act required all publicly traded companies to utilize SAS-70 “type II certified” data centers.

A data center qualifies as a “service organization” if it provides services which impact its customers’ financial record-keeping in any way; such services include managed security, data storage and general IT support.

Unlike other important IT security standards, SAS-70 does not describe the actual controls implemented to safeguard the integrity of transaction information; rather it outlines the processes used when conducting audits of these controls. A SAS-70 report contains the opinions of an auditor on how verifiable an organization’s IT security policies are. This means there is technically no way to be “SAS-70 certified”, since there is no control framework mandated. There are two primary types of SAS-70 reports – “type I” covers the controls implemented by an organization; “type II” includes a comprehensive assessment of effectiveness of these controls.

To the managers of a data center, completion of a SAS-70 audit is a way to attract high-end business. To customers, the audit provides assurance that the data center has an auditable set of information security controls in place to safeguard critical data. SAS-70 does not certify that the controls are implemented and sufficiently secure, only that they are readily verifiable by an auditor.

To achieve a satisfactory SAS-70 report, data center managers must identify which information security standards are relevant, implement the controls mandated by those standards, and finally ensure that proof of standards compliance is readily available to auditors. Compliance is more than a business requirement to a data center; it is a vital part of the value delivered to customers. Products and services that are certified to comply with security standards justify higher pricing, and security auditing solutions offer very high profit margins. Pricing for high-end data centers offering SAS-70 compliance can be much higher than the market average.

It is important to ensure that internal auditing processes are in place to verify compliance with an organization’s adopted standards; this reduces the cost of third-party services and helps to ensure that the organization realizes long-term benefits from information security investments.

The main information security standards relevant to the data center are:

Control Objectives for Information Technology (COBIT)
Information Technology Infrastructure Library (ITIL)
Payment Card Industry Data Security Standard (PCI DSS)
PIPEDA (Personal Information Protection and Electronic Documents Act)
ISO 15443 : Framework for IT Security Assurance
ISO 20000 : IT Service Management
ISO 27002 : Code of Practice for Information Security Management
FIPS : National Institute of Standards and Technology Federal Information Processing Standards (NIST FIPS)
Committee of Sponsoring Organizations of the Treadway Commission framework (COSO)

The primary areas covered by these standards are:

policy : leadership, training and governance
confidentiality : protection of critical data from unauthorized disclosure via physical and virtual access controls
service management : IT operations, support and incident response
integrity : change management processes
continuity / availability : high-availability systems, backup/restore and disaster recovery


For more quality articles and insights visit DataCenterTalk.

Data Center UPS- Know What Your Data Center Needs

Power failure is a fairly common problem in all parts of the world; one can hardly say that this concern is restricted to data centers. But seeing as a blackout in a data center could spell disaster, an uninterruptible power supply is no longer a recommendation; it is a must. In order to set up a fail-proof UPS for your data center, one has to know what options are available.

UPS is More Than Just Backup

Contrary to popular understanding, a UPS not only supplies power to the data center in the event of a power failure; it also, to an extent, protects the data center from transients, power sags, power swells, harmonic distortion and frequency instability. A good UPS system can also improve the power usage effectiveness (PUE) of the data center. It should be noted that a UPS can support the operation of the data center only for a very short interval of time, just until the load can be switched to an auxiliary power source.

Types of UPS

There are basically three types of UPS systems available today:

  1. Standby power system- This system offers the most basic features of surge protection and battery backup. Electricity is supplied directly to the load, and standby power is invoked only during a power failure. When the power dip exceeds a certain preset value, the load switches over to the standby UPS, which usually takes about 25 milliseconds.
  2. Line interactive power system- This type of UPS system keeps the inverter in line and switches to battery power in the event of power loss. It makes use of an autotransformer which supplies only the difference in power when there is a dip in voltage. A greater number of taps on the autotransformer increases the cost and complexity of the system.
  3. Double conversion on-line UPS- In this type, AC power is drawn from the supply, rectified to DC to charge the batteries, and then converted back to AC by an inverter to power the protected devices. This system is more expensive and may be considered for equipment that is highly sensitive to power fluctuations. Since the battery is always connected to the inverter, switchover is instantaneous, but care should be taken to avoid overcharging the batteries. This system provides constant output irrespective of the level of disturbance in the input.

UPS Also Requires Backup!

With that said, one should also take into consideration that the UPS itself may fail in some scenarios. One large UPS supplying power to all servers is not advisable. Instead, opt for multiple smaller UPS modules in either an N+1 configuration or a 2N configuration (where N is the number of modules required to sustain the data center load), as in the sizing sketch below.
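
As a rough illustration of what N+1 versus 2N means in module counts, here is a short sketch; the load and module ratings are made-up numbers, not a recommendation.

# Rough sizing sketch: module counts for N+1 and 2N redundancy schemes.
import math

def modules_needed(load_kw: float, module_kw: float, scheme: str = "N+1") -> int:
    n = math.ceil(load_kw / module_kw)   # N modules carry the full load
    if scheme == "N+1":
        return n + 1
    if scheme == "2N":
        return 2 * n
    raise ValueError(f"unknown scheme: {scheme}")

if __name__ == "__main__":
    for scheme in ("N+1", "2N"):
        # 400 kW of critical load served by 50 kW modules
        print(scheme, modules_needed(load_kw=400, module_kw=50, scheme=scheme))
    # -> N+1 needs 9 modules, 2N needs 16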

UPS Maintenance

A UPS, like all other data center equipment, must be monitored regularly, and maintenance is a must. The batteries should be checked for acid and hazardous gas leaks. A number of third-party companies provide thorough and complete UPS maintenance services.

Remember, the type of UPS one might want to opt for depends solely on the requirement and the data center load. One requires complete understanding of the load demands of the data center. An underpowered UPS system is equivalent to no UPS at all. One can seek the help of leading UPS system experts such as Liebert Systems, APC, Senitel Power, Eaton etc so that the UPS system that is finally fitted into a data center is exactly what the data center needs.

For more insights into the data center world, visit Data Center Talk.

Smart Operators And Data Center Metrics

Modern society will soon be built on the cloud, and the cloud is built on the data center. One cannot emphasize the importance of these facilities enough, or the potential losses that can accrue if something goes wrong. It should thus go without saying that the best operators are those who pay attention to data center metrics. After all, if they don’t know the ins and outs of how their facility operates, how are they going to rectify any problems they might come across?

If they don’t understand where their system’s headed, how can they prepare for the future?

“Data center efficiency goes beyond knowing if servers are still up,” writes Chris A. Mackinnon of Processor, “these days, data center managers are accountable for energy usage, energy efficiency, compliance, regulation, and a great deal more. Performance must be monitored and trends must be predicted to ensure that the data center is always up and ready for capacity increases at any time.”

The problem with the modern tech industry is how rapidly it’s changing- how quickly everything’s moving forward. Simply kicking back and taking on problems as they crop up just doesn’t cut it anymore.  Even a minimal degree of downtime could utterly cripple a center and its servers. While it’s true that you can’t really prepare for everything, you should still prepare for something- you’d be surprised how many pitfalls you can avoid simply by knowing the stats.

At the end of the day, good operators have a plan for whatever problems they feel are likely to arise. Smart operators, on the other hand, use metrics to predict those problems and tackle them before they even become problems.

There’s also the matter of productivity. If you understand your facility’s vital statistics, it becomes that much easier to implement a set of standards and guidelines that’ll improve both the overall effectiveness of your center and the work-efficiency of your employees.

It should go without saying that those are both things you want to do.

You need to be careful in how you analyze the statistics, however. A set of clearly defined, well-designed metrics is a godsend for any operator. On the other hand, disorganized, poorly defined, or incomplete metrics could very well spell disaster, for several reasons:

  • You could end up improving on areas that don’t require improvement, while overlooking potentially critical problems with your facility.
  • Goals that aren’t properly defined are frustrating, both for you and for your employees.
  • Frustrated employees work much less effectively.
  • Since these employees work less effectively, efficiency goes down the tubes.

When efficiency goes down the tubes, well…It all sort of goes downhill from there.

A well implemented set of metrics, on the other hand:

  • Helps improve the overall functionality of the data center.
  • Makes things far easier for IT.
  • Helps operators understand what works, and what could use improvement.
  • Puts the variables associated with a data center’s operation into individual categories, and allows operators the freedom to analyze each variable only when necessary.

Of course, even with metrics, there’s a downright staggering volume of information to process. There’s a reason a lot of SaaS providers have started tapping into the BI market: having a platform to organize and display these metrics makes things significantly easier.

Energy efficiency, hardware effectiveness, employee efficiency, business value, data rates and uptime are all of vital importance to the data center. It’s not just a matter of energy efficiency anymore- virtually every factor needs to be considered, and virtually every aspect needs to have a metric associated with it.

While there’s no easy answer as to which metrics are the right ones (it largely depends on your facility and what you’re attempting to do with it), it goes without saying that if you want to truly thrive, if you want the technology to be truly effective and the facility to have real business value, you can’t just let it run as it will. You need to implement data center metrics, either way.