
  #2  
Old 02-02-2014, 06:54 PM
StreamLight
Junior Member
 
Join Date: Jan 2014
Posts: 3

Hi,
I am somewhat surprised that there has been no input and no answer from any of the 20K+ members of this forum, which is supposed to specialize in data center matters.

One of the fundamental issues in DC design is the initial sizing of the DC. I would really like to throw this into the open and get some input from designers who have designed and built DCs from scratch. I know that some of you may think this type of information is confidential and not to be shared openly. The problem with that approach is that it does not encourage open debate, and open debate is fundamental to understanding and moving forward.
Let us start with what a DC looked like in the 80s and at the beginning of the 90s. The DC was dominated by a monolithic structure, the mainframe, from the likes of IBM, Fujitsu, ICL, Burroughs, and Sperry Univac (which became Unisys).

Then along came DEC, or Digital, and said: hey, we can build you a cheaper data center with our VAX minicomputers. This spawned the likes of Pyramid Systems with their multiprocessor minicomputers, offering the same bang as Big Blue at a fraction of the price. So at the end of the 80s the data center was either Big Blue or Digital.

Then along came the Unix departmental servers and the move away from monolithic data centers. At the beginning of the 90s people were talking about downsizing, decentralizing, and departmental servers; HP and Sun were the movers in that market. The data center of the 90s died a death and more or less disappeared until the advent of the internet. I first heard of the internet in 1992, when I was told I had to have an email address and, by the way, to use the internet.
The internet spawned a new wave of server-based data centers, especially for ISPs.

Now what we have is the shrinkage of the server to a 2U format and of the blade to a 1U format. We can pack a 42U rack with a lot of blades and a lot of power, on the order of 30 kW (forty 1U blades at roughly 750 W each gets you there).

The question is: does the departmental server of the mid 90s or the minicomputer of the 80s have the same power and performance as today's so-called 2U servers? You cannot answer that on clock speed alone. Yes, CPU clock speed has gone up dramatically, but those old departmental servers were designed as balanced architectures, running as database servers, application servers, and general Unix servers that could handle hundreds of IO and terminal requests.
So we can't say that a 90s departmental Unix server which handled hundreds of users is matched by today's 2U servers in terms of IO and interrupt handling. After all, a server has two jobs: CPU processing and IO processing. Today's servers are more efficient at CPU utilization, but they don't have the complex, balanced architecture of the older generation of Unix departmental servers, which were quite sophisticated.
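To make the point concrete, here is a minimal back-of-envelope sketch of why clock speed alone is misleading: a server's user capacity is set by whichever resource saturates first, CPU or IO. Every number below is an assumption I made up purely for illustration, not a measurement of any real machine.

import sys

def users_supported(cpu_ops_per_sec, io_ops_per_sec,
                    cpu_ops_per_user, io_ops_per_user):
    """Capacity is the minimum of the CPU-bound and the IO-bound limit."""
    cpu_limit = cpu_ops_per_sec / cpu_ops_per_user
    io_limit = io_ops_per_sec / io_ops_per_user
    return min(cpu_limit, io_limit)

# Hypothetical 90s departmental Unix box: slow CPU, but balanced IO.
old_box = users_supported(cpu_ops_per_sec=50_000, io_ops_per_sec=5_000,
                          cpu_ops_per_user=100, io_ops_per_user=10)

# Hypothetical modern 2U server: 20x the CPU, but nowhere near 20x the IO.
new_box = users_supported(cpu_ops_per_sec=1_000_000, io_ops_per_sec=20_000,
                          cpu_ops_per_user=100, io_ops_per_user=10)

print(old_box, new_box, file=sys.stdout)   # 500 vs 2000 users: both IO-bound, not a 20x gain

With these made-up figures the new server supports 4x the users despite 20x the CPU, because both machines end up IO-bound, which is exactly the balanced-architecture point above.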

So how many users, and what type of users, can we throw at today's 2U servers?
That is really fundamental to my original question of sizing a DC.
It should not be difficult to write a simulation program that runs different scenarios of users: a mix of IO interrupts, disk accesses, database queries, and basic IO, for the different types of apps and the different user profiles.
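As a starting point, here is a minimal Monte Carlo sketch of the kind of simulation I mean. The user profiles, per-operation costs, and the 70% utilization target are all numbers I invented for illustration; a real version would plug in measured figures.

import random

# Per-operation cost in milliseconds of server time (CPU + IO combined),
# for the operation types mentioned above. Assumed values.
OP_COST_MS = {"io_interrupt": 0.5, "disk_access": 5.0,
              "db_query": 20.0, "basic_io": 1.0}

# User profiles: operations per second per user, by operation type. Assumed.
PROFILES = {
    "clerical": {"io_interrupt": 2, "disk_access": 0.2, "db_query": 0.1, "basic_io": 1},
    "analyst":  {"io_interrupt": 1, "disk_access": 0.5, "db_query": 1.0, "basic_io": 0.5},
}

# One server offers 1000 ms of work per second; plan for 70% utilization.
SERVER_CAPACITY_MS_PER_SEC = 1000 * 0.7

def servers_needed(user_mix, trials=10_000):
    """Estimate servers needed for a mix like {'clerical': 800, 'analyst': 200},
    adding random jitter to each profile's aggregate activity level."""
    worst_load_ms = 0.0
    for _ in range(trials):
        load_ms = 0.0
        for profile, count in user_mix.items():
            activity = random.uniform(0.5, 1.5)   # users range from idle to busy
            for op, rate in PROFILES[profile].items():
                load_ms += count * rate * activity * OP_COST_MS[op]
        worst_load_ms = max(worst_load_ms, load_ms)
    return worst_load_ms / SERVER_CAPACITY_MS_PER_SEC

print(servers_needed({"clerical": 800, "analyst": 200}))

Even this toy version shows the shape of the answer: the server count is driven by the worst-case mix of IO-heavy profiles, not by the headline CPU figure.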

What seems to be the case today is a less formal approach to sizing, more of what I call a quantum approach.

There seem to be three categories of DC, and the designer makes a quantum jump between them based on gut feel and rules of thumb rather than a formal process.

It is either going to be a 1,000 sq ft, 1,000-server DC, which is called a normal size,
or a 10,000 sq ft, 10,000-server DC, which is called a medium DC,
or a mega Google/Facebook/Microsoft 50,000+ server DC.

Anything below the 1,000-server normal DC is not even classified as a DC, just a server room. (A rough back-of-envelope check of these figures is sketched below.)
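For what it is worth, even the quantum-jump categories can be sanity-checked with simple arithmetic. The per-rack figures below (servers per rack, kW per rack, sq ft per rack including aisles) are my own assumed rules of thumb, not industry standards.

def rough_dc_size(servers, servers_per_rack=30, kw_per_rack=10, sqft_per_rack=25):
    """Back out racks, whitespace, and power from a server count."""
    racks = -(-servers // servers_per_rack)         # ceiling division
    return {"racks": racks,
            "floor_sqft": racks * sqft_per_rack,    # includes aisle/cooling space
            "power_kw": racks * kw_per_rack}

for n in (100, 1_000, 10_000, 50_000):
    print(n, rough_dc_size(n))

With these assumptions, 1,000 servers come out to roughly 850 sq ft and 10,000 servers to roughly 8,400 sq ft, in the same ballpark as the normal and medium categories above.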

For example, I did a lot of research into this sub-1,000-server category and turned to hospitals as my source of information. What I found was an unbelievable divergence of sizes, from a 10-server room to an 80-server room for a single hospital, and that does not even depend on what apps the hospital is running. It is a suck-it-and-see approach, or "the more we have, the less likely we are to run out of capacity."

It is about time that someone, perhaps at an institution such as MIT, writes a formal simulation program that sizes modern data centers, whether built on traditional 2U servers running Linux or Windows, or on blade servers and a cloud architecture. It can't be that difficult, and if I have the time I may even attempt it myself. Any takers?

TIA
Streamlight