It all started with a Reuters report that Facebook is evaluating building its first data center in the Asia Pacific region, in Taiwan. A follow-up on Data Center Knowledge later pointed out that Facebook is already leasing space at Digital Realty Trust’s (DRT) facility in Singapore, citing a source who wished to remain anonymous.
The episode got me thinking about the true size of the data center market in Asia, and what appears to be an unofficial race to be the first to build the largest, most expensive data center, or have one built at one’s doorstep.
Does it matter where that next Facebook facility is?
Maybe it matters to municipal officials – the originator of the Reuters report was, after all, the chief executive of a county in Taiwan. Beyond that, insiders already know that a single new data center does not necessarily deliver a game-changing boost to the local economy, given the comparatively small staffing complement of a typical facility and the fact that the lion’s share of equipment procurement is usually sourced globally.
Indeed, it is arguable that the most commonly reported metric, floor space, doesn’t tell us much about a data center at all. Aside from being prone to double counting, embellishment, or even outright misrepresentation by less scrupulous operators, floor space is also an increasingly unreliable gauge given the changing face of technology.
There is no question that x86 virtualization changed the IT industry – and the data center – at a fundamental level. With virtualization, businesses were able to utilize their compute infrastructure far more efficiently and increase reliability by migrating live workloads between servers, all while shrinking their physical footprint.
This shift towards virtualization threw the traditional per-square-foot or number-of-racks approach into disarray, and cloud computing compounded the effect by driving utilization even higher. Moreover, both virtualization and the cloud encouraged denser compute hardware than before, posing a challenge to older facilities with the increased power draw per rack.
This is why a growing trend in recent years has been to track the power density of data centers instead. Yet even that may be a red herring, when one considers that ramping up power is often just a matter of diverting the requisite power and cooling capacity to the right sections of the building. Specifically, floor space can be sacrificed and ancillary cooling equipment installed to boost power density for a few racks or an enclosed private area.
Of course, a new generation of data centers such as NTT’s recently launched FDC2 in Hong Kong is now built with high power densities of up to 24kVA per rack, as well as the cooling capacity to back it up. FDC2 also supports extremely tall racks of up to 54U that hold more IT hardware than the more common 42U racks, further eroding the usefulness of tracking by either floor space or racks. Not every operator is so aggressive though, and at least one new data center operator whose floorplans we have seen adopted a more balanced approach by zoning certain sections of each level for high density deployments.
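To make the gap between metrics concrete, here is a minimal sketch (all figures hypothetical) comparing two imaginary facilities: one that looks bigger on floor space and rack count, and one that actually supports far more IT load thanks to higher per-rack power density.

```python
# Illustrative only: every number below is made up for the sake of the
# comparison, not taken from any real facility.

def it_capacity_kw(racks: int, kva_per_rack: float, power_factor: float = 0.9) -> float:
    """Rough usable IT load: rack count x per-rack kVA x assumed power factor."""
    return racks * kva_per_rack * power_factor

# Facility A: large floor plate, 1,000 racks at a legacy 4 kVA per rack.
a_capacity = it_capacity_kw(racks=1000, kva_per_rack=4)

# Facility B: half the racks, but a high-density 24 kVA per rack design.
b_capacity = it_capacity_kw(racks=500, kva_per_rack=24)

print(f"Facility A: {a_capacity:.0f} kW usable IT load")
print(f"Facility B: {b_capacity:.0f} kW usable IT load")
# The "smaller" facility B supports roughly three times the IT load of A,
# which is exactly the distortion that rack or floor-space counts hide.
```

Under these assumed figures, the facility with half the racks carries triple the usable IT load, which is why headline floor space or rack counts alone say little about real capacity.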
Changing metrics aside, the vested interests of data center operators ultimately make it difficult to get a good read on basic parameters such as floor space.
We have discovered colocation providers that misrepresent their size by citing the full floor space of the wholesale data center they are based in. Others, because they only have a couple of racks or a small private area, keep quiet about their actual footprint even as they boast of having “multiple data centers”.
Finally, acquisitions, mergers, and the decommissioning of older facilities can change and obfuscate the numbers substantially. Data center providers never comment on their customers due to confidentiality clauses, and are also loath to correct false claims made by them.
The result is a quagmire of numbers with a high propensity towards double counting, erroneous figures, and guesstimates that are often way off the mark.
And that is only the floor space; there are even fewer options for accurately measuring the density of data center deployments, or the utilization level of the underlying hardware.
Then there is the desire for secrecy among data center operators, amply epitomized by the fascination over the location of Facebook’s next data center in Asia. Unfortunately, this inherent desire for secrecy is misguided and, taken too far, harms the data center industry as a whole.
The reason is simple: The veil of secrecy tends to be self-propagating, and is maintained even in situations where it serves no purpose at all. By limiting practically all conversations to behind closed doors and non-disclosure agreements, the ability for the industry to improve together is drastically curtailed.
But how can more open communication make for a better data center landscape? This is probably best evidenced by the failure at the Singapore Exchange (SGX) two years ago. An independent Board Committee of Inquiry (BCOI) was set up to investigate the outage, and the findings were published online for all to see – and learn from. It is a pity that, left to their own devices, data center operators have no incentive to speak up at all.
“The fundamental issue, and it’s a global issue, is broadly speaking there is no mandate to disclose the reason for the failure,” explained Ed Ansett, co-founder and chairman of i3 Solutions, to DCD at the time. i3 Solutions was the company that helped SGX identify and rectify the fault.
“Why would you? It’s embarrassing, expensive and damages reputations. So, if it doesn’t breach the statute books, then companies will naturally avoid explaining what happened outside their organizations,” he said.
As we enter 2016 in earnest, here’s hoping for greater collaboration and openness that will benefit the industry. Instead of relying on a handful of consultants or technical experts, industry leaders should do their part to foster a stronger community of learning through more honest and open communication.
“Each outage in a data center is a learning experience for both service providers and end-users,” said Goh Thiam Poh, director of operations, Equinix Singapore, in our earlier report. In his remarks, Goh also noted that mistakes are often easy to identify in hindsight.
So why not share the lessons?