Intel Rivals Google & Facebook in Building Efficient Data Centers

Intel Corp. is building a state-of-the-art data center in Santa Clara, Calif. that it says rivals those of Web giants in energy efficiency. CIO Kimberly Stevenson is leading the conversion of a chip fab facility, decommissioned in 2008, into an energy-efficient, high performance computing data center.

The chipmaker is among the first to benchmark its internal data centers against efforts pioneered by consumer Internet giants such as Facebook Inc. and Alphabet Inc.'s Google. The goal is to lower costs and power demands while increasing the ability to handle an unprecedented volume of data. Since the 1990s, Intel has evolved its data center design, nearly doubling power efficiency in its third generation of data centers.

Most enterprise data centers lag far behind those of Web companies when it comes to energy efficiency. But Intel, taking a page from energy-efficient data centers developed by companies like Facebook and Google, said its Santa Clara facility has a power usage effectiveness rating of 1.06. That's the total energy needed to run the facility divided by the energy required to run the IT equipment. An ideal PUE is 1.0, meaning that all of the energy a data center draws goes to the computing devices rather than to cooling or power conversion. To get there, the company tapped techniques such as using outside air to cool the data center. That efficiency compares favorably to Facebook, whose Prineville, Ore., data center has a PUE of 1.078, according to the social network. Google has said that by some measures its most efficient data center is as low as 1.06, but that across all its data centers the power usage effectiveness is 1.12.
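The arithmetic behind those ratings is simple. A minimal sketch, using hypothetical round-number energy figures rather than any company's actual measurements:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: all energy the facility draws
    divided by the energy used by the IT equipment alone."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,060 kWh to deliver 1,000 kWh to servers:
print(pue(1_060, 1_000))   # 1.06 -- only 6% overhead for cooling, power conversion, etc.

# A typical enterprise facility with 100% overhead:
print(pue(2_000, 1_000))   # 2.0 -- every kWh of computing costs another kWh of overhead
```

At a PUE of 2.0, half the electricity bill buys no computing at all, which is why the gap between 1.06 and 2.0 matters so much at data-center scale.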

For two-thirds of enterprise data centers, the power overhead is 100% or more, with a PUE of 2.0 or greater, according to a presentation given in April of this year by Kelly Quinn, research manager of worldwide data center trends and strategies at market watcher International Data Corp. For Intel, power efficiency means using less electricity and water, and it is part of a broader drive to reduce IT costs.

“We believe for every unit of output, our IT costs have to continue to decline,” Ms. Stevenson told CIO Journal. Companies often talk about applications being a competitive advantage, but those applications run in data centers. “Generally speaking, the IT profession spends more time on optimizing the application for the workflow and the unique requirements of the company and they leave a little bit on the table in terms of the data center optimization and efficiency,” she said.

The data center in Santa Clara is used mainly for computing devoted to designing chips. Servers host electronic design automation applications that run complex, compute-intensive modeling simulations. By moving to the new data center, Ms. Stevenson helped Intel shave 12 weeks off the design cycle for new chips.

Intel’s New Effort

In the new Santa Clara data center, Intel is using free air cooling from the outside. Nine giant fans in the ceiling pull air in from outdoors. If the temperature is more than 90 degrees, all nine fans will be working. Even on a day with an outside temperature of about 64 degrees, the fans that were running pulled in enough air to create considerable wind in the data center: one of the technicians wore a windbreaker, and tags on the front of the servers were fluttering.

By using free air cooling, the company has saved 44 million gallons of water per year that would otherwise be used to keep servers cool, said Shesha Krishnapura, Intel’s IT CTO who heads up the company’s data center strategy. The free air cooling has yielded an annual saving of more than 10 million kilowatt hours of power, he said.

Not only does Intel wring as much computing capacity as possible out of every kilowatt of electricity, but the company also makes sure to get the most out of every server. Ms. Stevenson said Intel runs the data center at utilization in the low 90s percent, meaning it uses nearly all of its computing capacity rather than leaving servers running at half capacity. “If you look across typical workloads, people would be jumping for joy at 40-50% utilization of their assets but we’re crying if we’re at 86% utilization,” she said. One reason Intel can do that is that it has invested heavily in software that queues workloads and releases them into the computing environment when capacity is available. Companies that don’t do that need to build in extra capacity for when traffic unexpectedly peaks.
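The queuing approach described above can be sketched in a few lines. This is an illustrative toy, not Intel's actual scheduling software (which is not public): jobs that would push the server pool past capacity simply wait, and are released the moment earlier jobs free up slots.

```python
from collections import deque

class JobQueue:
    """Toy capacity-aware scheduler: hold jobs until server slots free up."""

    def __init__(self, capacity):
        self.capacity = capacity   # total server slots in the pool
        self.in_use = 0
        self.pending = deque()     # (job, slots) waiting for capacity

    def submit(self, job, slots):
        """Queue a job; it runs immediately only if capacity allows."""
        self.pending.append((job, slots))
        return self._dispatch()

    def finish(self, slots):
        """A running job completed; free its slots and release queued work."""
        self.in_use -= slots
        return self._dispatch()

    def _dispatch(self):
        """Release queued jobs in order while slots remain."""
        started = []
        while self.pending and self.in_use + self.pending[0][1] <= self.capacity:
            job, slots = self.pending.popleft()
            self.in_use += slots
            started.append(job)
        return started

q = JobQueue(capacity=100)
q.submit("sim-A", 60)   # starts immediately
q.submit("sim-B", 50)   # held: 60 + 50 would exceed 100 slots
q.finish(60)            # sim-A done -> sim-B is released
```

Because work waits in the queue instead of demanding idle headroom, the pool can run near its full capacity; without such a buffer, a company has to over-provision servers to absorb unexpected peaks.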