
#1 | 12-10-2004, 05:01 AM
whcdavid
Administrator
Join Date: Mar 2004
Posts: 901

Data Centers Get a Makeover

Changing technology trends and strategic objectives have IT organizations rethinking basic data center designs.





News Story by Gary H. Anthes

NOVEMBER 01, 2004 (COMPUTERWORLD) - Take a stroll through almost any data center today, and you will see pretty much what you would have seen a decade ago—square white tiles over a raised floor, bright fluorescent lights, little red fire alarms everywhere and rows of faintly humming computer equipment and air conditioning gear.



But this familiar scene masks some big changes in the way that data centers are built, as well as changes in computer technology and an evolution in what data centers are expected to accomplish. Ultradense server racks, the move to distributed and virtual processing, a requirement for instant fail-over, and new requirements for IP telephony and voice over IP are all driving changes above and below the raised floor.

Keeping Cool

Perhaps the greatest challenge in data centers today is how to keep those rooms—and the components within them—cool. Facility designers used to apply a simple rule of thumb: If the room was going to be x thousand square feet, it would need y tons of air conditioning. Or designers relied on equipment "nameplates" that listed peak power usage based not on cooling requirements but on safety requirements.

Those simple approaches don't work well today. They're likely to result in expensive overcooling of the overall facility, even as temperatures in small areas—such as inside a rack of blade servers—soar.
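To see why floor area alone is a poor proxy, cooling can instead be sized from the heat load the equipment actually generates. Below is a minimal sketch, assuming essentially all electrical power ends up as heat in the room and using the standard conversion of one ton of refrigeration to roughly 3.517 kW; the figures are illustrative, not a design rule.

[code]
# Rough cooling-load estimate driven by power density rather than floor area.
# Assumes nearly all electrical power dissipates as heat in the room.

KW_PER_TON = 3.517  # one ton of refrigeration ~ 12,000 BTU/hr ~ 3.517 kW

def cooling_tons(floor_area_sqft: float, watts_per_sqft: float) -> float:
    """Tons of cooling needed for a given floor area and power density."""
    total_kw = floor_area_sqft * watts_per_sqft / 1000.0
    return total_kw / KW_PER_TON

# The same 50,000 sq ft room at last year's density vs. a current design:
print(round(cooling_tons(50_000, 40)))   # ~569 tons at 40 W/sq ft
print(round(cooling_tons(50_000, 120)))  # ~1,706 tons at 120 W/sq ft
[/code]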

Ron Hughes, president of California Data Center Design Group, says the typical data center last year consumed 40 watts of power per square foot and used server racks that consumed 2 kilowatts each. This year, he's designing a facility that will average 120 watts per square foot and support racks that use 4 to 5 kilowatts.

"And if you look at the latest projections from HP, Sun, IBM, Dell and so on, they are predicting that racks will be 15 to 25 kilowatts for blade servers," Hughes says. "The overall direction is toward smaller footprints, increased capacity, increased power and cooling requirements. I've seen projections for blade servers as high as 30 kilowatts per rack, and that's well over 500 watts per square foot."

The issue at those levels isn't cooling per se, but affordable cooling. Hughes says that at 40 watts per square foot, it costs $400 per square foot to build a data center, or $20 million for a 50,000-square-foot facility. But at 500 watts per square foot—which Hughes says we could see by 2009, well within the lifetime of any data center built today—the amount of air conditioning, uninterruptible power supply (UPS) units, power generators and related gear jumps dramatically. Construction costs soar to $5,000 per square foot, and the same data center busts the budget at $250 million, he says.
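A quick sanity check of those numbers, sketched below, shows that the budget tracks power rather than floor space: both the $400 and the $5,000 figures work out to roughly $10 of construction cost per watt of capacity. The figures are Hughes's estimates as quoted above, not a general rule.

[code]
# Sanity check of the construction-cost figures quoted above.

def build_cost(area_sqft: float, dollars_per_sqft: float) -> float:
    return area_sqft * dollars_per_sqft

def dollars_per_watt(dollars_per_sqft: float, watts_per_sqft: float) -> float:
    return dollars_per_sqft / watts_per_sqft

print(build_cost(50_000, 400))       # $20,000,000 for today's 40 W/sq ft design
print(build_cost(50_000, 5_000))     # $250,000,000 at a projected 500 W/sq ft
print(dollars_per_watt(400, 40))     # $10 per watt today
print(dollars_per_watt(5_000, 500))  # $10 per watt in the 2009 projection
[/code]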

The cooling challenge is compounded when a data center switches to emergency backup power. UPS units kick in instantly in a power failure, so there is no interruption in the flow of electricity to computers. But there is often a delay of 15 to 60 seconds for generators to restart the cooling units. That hasn't been a problem in the past, but for some newer equipment, temperatures can rise fast.

The temperature in a data center that averages 40 watts per square foot will rise 25 degrees in 10 minutes with cooling shut off, says Bob Sullivan, a senior consultant at The Uptime Institute Inc. in Santa Fe, N.M. But in places where power consumption is 300 watts per square foot, the temperature can rise that much in less than a minute. The solution, Sullivan says, will be uninterruptible cooling that works the same way as uninterruptible power. That would involve putting air fans, and possibly systems that pump chilled water, on a UPS, he says.
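Sullivan's two data points give a feel for how short that 15-to-60-second gap becomes at higher densities. The sketch below simply scales his 40-watt baseline, assuming the rate of temperature rise grows roughly linearly with power density; local hot spots such as a loaded blade rack will heat faster than this room-wide average.

[code]
# Back-of-the-envelope: minutes for a room to warm 25 degrees after cooling
# stops, scaling Sullivan's 40 W/sq ft baseline linearly with power density.

BASELINE_WATTS_PER_SQFT = 40
BASELINE_MINUTES_TO_25_DEGREES = 10

def minutes_to_25_degree_rise(watts_per_sqft: float) -> float:
    return BASELINE_MINUTES_TO_25_DEGREES * BASELINE_WATTS_PER_SQFT / watts_per_sqft

for density in (40, 120, 300, 500):
    print(density, round(minutes_to_25_degree_rise(density), 1))
# 40 -> 10.0 min, 120 -> 3.3 min, 300 -> 1.3 min, 500 -> 0.8 min
[/code]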
__________________
WebHostingChat (Web Hosting Forum)
DatacenterSearch (Find your Datacenter)
YOU FAIL ONLY WHEN YOU FAIL TO TRY
#2 | 12-10-2004, 05:02 AM
whcdavid
Administrator
Join Date: Mar 2004
Posts: 901

Continued...



Distributing the Data Center

Many companies today have just one big data center, or maybe two or more depending on the locations of users. But an abundance of cheap "dark" fiber, plus new virtualization software, is enabling a much more flexible, dynamic and user-transparent distribution of processing workloads.

For example, the Federal National Mortgage Association has two data centers, including one designed to be mostly a contingency site. Fannie Mae is building another data center to replace the contingency center and will then evolve both centers to a "co-production environment," says Stan Lofton, director of Wintel systems at the Washington-based mortgage financing company.

"We have a few applications today that we consider dual-site production, in operation all the time, so if we lost one site, it would be seamless to the user," he says. "Over time, we see more and more applications going that way."

That approach helps avoid single points of failure and makes disaster recovery faster and easier, says Joshua Aaron, president of Business Technology Partners Inc. in New York. "And not having to consolidate all your real estate in one location allows you to negotiate better deals in off-the-beaten-path areas."

That approach is leading some companies to bring disaster recovery in-house, rather than using a service from another company, Sullivan says. "You're seeing those disaster recovery facilities used also for development, testing and co-production," he says.

Co-production data centers carry with them the requirement of "continuous availability," says Terry Rodgers, a facilities manager at Fannie Mae. Increasingly, users are saying they can't wait a few hours or even a few minutes to bring up their systems at a backup site if the main site is knocked out by a fire or some other disaster. Fail-over has to be instantaneous, and that's both a software and a hardware issue, Rodgers notes.

Redundancy Times Two

Continuous availability requires a Tier IV data center, as defined by The Uptime Institute. Tier IV requires two independent electrical systems, all the way down to dual power cables into the computer hardware. Fannie Mae's new data center will be built to Tier IV specs and will offer "real-time backup," Rodgers says.

Visa U.S.A. Inc. has two 50,000-square-foot-plus data centers in the U.S., one on each coast. Either can instantly back up the other. Each center is rated as N+1, which means that every system requiring N components to carry the load has at least one hot spare. For example, if a data center needs six UPS modules to carry the load, there will be a seventh standing by under the N+1 principle.

Within a year, Visa will migrate to a 2(N+1) architecture, in which every system is completely duplicated. In the above example, the data center would have two active UPS systems, each with separate cables to the equipment and each with N+1 redundancy.
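The difference between the two schemes is easiest to see in raw module counts. Here is a small sketch built around the UPS example above, where N is the number of modules needed to carry the critical load:

[code]
# UPS module counts under the two redundancy schemes described above,
# where N is the number of modules needed to carry the critical load.

def n_plus_1(n: int) -> int:
    """One shared system with a single hot spare."""
    return n + 1

def two_n_plus_1(n: int) -> int:
    """Two fully independent systems, each with its own hot spare."""
    return 2 * (n + 1)

n = 6  # modules required to carry the load
print(n_plus_1(n))      # 7 modules: survives any single module failure
print(two_n_plus_1(n))  # 14 modules: a whole system can be down for maintenance
[/code]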

"Ten years ago, N+1 allowed for a component failure," says Richard Knight, senior vice president for operations at Foster City, Calif.-based Visa. "Now, with technology changes and everything dual-powered, the ultimate design is 2(N+1). It's dual systems versus dual components."

In addition to offering the highest levels of fault tolerance, 2(N+1) will enhance flexibility because an entire system can be taken down for maintenance, says Jerry Corbin, Visa's vice president for central facilities. But, he says, "it also tremendously increases the complexity to be managed."

Communications Buildup

Networking issues are also changing data center designs, and storage-area networks pose special challenges, Aaron says. "SANs typically attach to Fibre Channel switches, although IP SCSI is making inroads in the market," he says. "Today's Fibre Channel switches require their own infrastructure and must be planned for during data center design. They take rack space and consume a lot of power."

Indeed, communications considerations will increasingly influence data center design, Aaron predicts. "With the proliferation of voice over IP, the data center now has to support a very mission-critical application: voice," he says. "How do you provide power to the IP phones? How do you handle 911 service? How do you provide voice mail? How do you or will you support unified messaging?"

Power-failure relays to support 911 service haven't traditionally been part of a data center design, but they will be, Aaron says, as will backup power gear for voice gateways, media gateways and IP phones.

Unified Management

IP networks bring relief as well as challenges. Data centers are starting to connect environmental monitoring sensors to the data network so both facilities managers and IT managers have a unified view of the health of all systems.

Facilities equipment manufacturers use common data-exchange standards and network protocols to help bridge the facilities and IT worlds. For example, NetBotz Inc. in Austin sells IP-addressable wireless "monitoring appliances" that can be fitted with security cameras, microphones and sensors for humidity, temperature and airflow. They can be read remotely or send alerts by e-mail.

"The cost, size and complexity of these kinds of things has come down," Aaron says. "Plus, they are now integrated with the network so you can see them across the WAN in a remote location."

Four Trends Driving New Data Center Designs

1. Need to support ultradense server racks
2. Move toward distributed and virtual processing
3. Requirement for instant fail-over
4. Migration to IP telephony and voice over IP

Building a Data Center: Soaring Costs

                                                        Today      2009
Watts per square foot                                      40       500
Cost per square foot to build and equip a data center    $400    $5,000
Cost for a 10,000-square-foot facility                     $4M      $50M

Thanks to computerworld.com