Distributing the Data Center
Many companies today have just one big data center, or maybe two or more depending on the locations of users. But an abundance of cheap "dark" fiber, plus new virtualization software, is enabling a much more flexible, dynamic and user-transparent distribution of processing workloads.
For example, the Federal National Mortgage Association has two data centers, including one designed to be mostly a contingency site. Fannie Mae is building another data center to replace the contingency center and will then evolve both centers to a "co-production environment," says Stan Lofton, director of Wintel systems at the Washington-based mortgage financing company.
"We have a few applications today that we consider dual-site production, in operation all the time, so if we lost one site, it would be seamless to the user," he says. "Over time, we see more and more applications going that way."
That approach helps avoid single points of failure and makes disaster recovery faster and easier, says Joshua Aaron, president of Business Technology Partners Inc. in New York. "And not having to consolidate all your real estate in one location allows you to negotiate better deals in off-the-beaten-path areas."
That approach is leading some companies to bring disaster recovery in-house, rather than using a service from another company, Sullivan says. "You're seeing those disaster recovery facilities used also for development, testing and co-production," he says.
Co-production data centers carry with them the requirement of "continuous availability," says Terry Rodgers, a facilities manager at Fannie Mae. Increasingly, users are saying they can't wait a few hours or even a few minutes to bring up their systems at a backup site if the main site is knocked out by a fire or some other disaster. Fail-over has to be instantaneous, and that's both a software and a hardware issue, Rodgers notes.
Redundancy Times Two
Continuous availability requires a Tier IV data center, as defined by The Uptime Institute. Tier IV requires two independent electrical systems, all the way down to dual power cables into the computer hardware. Fannie Mae's new data center will be built to Tier IV specs and will offer "real-time backup," Rodgers says.
Visa U.S.A. Inc. has two 50,000-square-foot-plus data centers in the U.S., one on each coast. Either can instantly back up the other. Each center is rated as N+1, which means that every system with N components has at least one hot spare. For example, if a data center has six UPS modules in use, there will be a seventh standing by under the N+1 principle.
Within a year, Visa will migrate to a 2(N+1) architecture, in which every system is completely duplicated. In the above example, the data center would have two active UPS systems, each with separate cables to the equipment and each with N+1 redundancy.
"Ten years ago, N+1 allowed for a component failure," says Richard Knight, senior vice president for operations at Foster City, Calif.-based Visa. "Now, with technology changes and everything dual-powered, the ultimate design is 2(N+1). It's dual systems versus dual components."
In addition to offering the highest levels of fault tolerance, 2(N+1) will enhance flexibility because an entire system can be taken down for maintenance, says Jerry Corbin, Visa's vice president for central facilities. But, he says, "it also tremendously increases the complexity to be managed."
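To see the arithmetic behind the two schemes, here is a minimal sketch in Python, not from Visa or the article, that counts UPS modules under N+1 versus 2(N+1). The function names are illustrative, and the six-module figure simply reprises the example above.

# Illustrative only: UPS module counts under N+1 and 2(N+1) redundancy.
# n is the number of modules needed to carry the load; the rest are spares.
def n_plus_1(n: int) -> int:
    # N+1: one redundant module on top of the n needed for the load
    return n + 1

def two_n_plus_1(n: int) -> int:
    # 2(N+1): two fully independent systems, each built to N+1
    return 2 * (n + 1)

load_modules = 6  # the six-UPS example above
print("N+1 modules:   ", n_plus_1(load_modules))      # 7
print("2(N+1) modules:", two_n_plus_1(load_modules))   # 14

The doubling is what buys the maintenance flexibility Corbin describes: an entire power system can be taken offline while the surviving side still retains N+1 protection.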
Communications Buildup
Networking issues are also changing data center designs, and storage-area networks pose special challenges, Aaron says. "SANs typically attach to Fibre Channel switches, although IP SCSI is making inroads in the market," he says. "Today's Fibre Channel switches require their own infrastructure and must be planned for during data center design. They take rack space and consume a lot of power."
Indeed, communications considerations will increasingly influence data center design, Aaron predicts. "With the proliferation of voice over IP, the data center now has to support a very mission-critical application: voice," he says. "How do you provide power to the IP phones? How do you handle 911 service? How do you provide voice mail? How do you or will you support unified messaging?"
Power-failure relays to support 911 service haven't traditionally been part of a data center design, but they will be, Aaron says, as will backup power gear for voice gateways, media gateways and IP phones.
Unified Management
IP networks bring relief as well as challenges. Data centers are starting to connect environmental monitoring sensors to the data network so both facilities managers and IT managers have a unified view of the health of all systems.
Facilities equipment manufacturers use common data-exchange standards and network protocols to help bridge the facilities and IT worlds. For example, NetBotz Inc. in Austin sells IP-addressable wireless "monitoring appliances" that can be fitted with security cameras, microphones and sensors for humidity, temperature and airflow. They can be read remotely or send alerts by e-mail.
"The cost, size and complexity of these kinds of things has come down," Aaron says. "Plus, they are now integrated with the network so you can see them across the WAN in a remote location."
Four Trends Driving New Data Center Designs
1. Need to support ultra-dense server racks
2. Move toward distributed and virtual processing
3. Requirement for instant fail-over
4. Migration to IP telephony (voice over IP)
Building a Data Center: Soaring Costs
                                                        Today     2009
Watts per square foot                                   40        500
Cost per square foot to build and equip a data center   $400      $5,000
Cost for a 10,000-square-foot facility                  $4M       $50M
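The facility totals follow from multiplying cost per square foot by the 10,000-square-foot floor area; a quick check of that arithmetic in Python:

# Sanity check of the table: facility cost = cost per square foot x area
AREA_SQFT = 10_000
for label, per_sqft in (("Today", 400), ("2009", 5_000)):
    total = per_sqft * AREA_SQFT
    print(f"{label}: ${per_sqft:,}/sq ft x {AREA_SQFT:,} sq ft = ${total / 1e6:.0f}M")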
Source: computerworld.com