Data Center Migration Methodology

Data center migration is a rare sight in the IT industry. It might sound attractive, but it is fraught with risk when an inexperienced team attempts the operation. According to a Gartner study, more than 70% of organizations have to relocate their data center facility, so data center migration is a crucial capability. Many companies decide to relocate to a new facility for business or technological reasons. A data center migration project involves a broad range of external and internal stakeholders, and the relocation team must put in a lot of hard, gritty work. First, the team has to learn the process, which helps in managing the workload as well as stakeholders' expectations.

The following are the prerequisites before executing a migration plan:

  • Plan Data Center Migration
  • Prepare Data Center Migration
    • Manage Communication
    • Manage Contracts
    • Manage Budgets
    • Set Up Work Environment
    • Implement Data Center
    • Architect For Migration
    • Implement Network
    • Plan Logistics
  • Execute Data Center Migration

Data Center Migration Plan:

The data migration team has to assemble all the data and documents needed for an in-depth analysis, including rack diagrams and floor layouts. They should also map system dependencies, such as applications that need data stored on a different system. Companies need to alert all their service providers, such as software, utility, and hardware vendors, about the migration plan.
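
Dependency mapping lends itself to simple tooling. The sketch below is a minimal Python example, with hypothetical system names, that records which systems each system depends on and derives a move order in which no system is migrated before its dependencies.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical inventory: each system maps to the systems it depends on.
dependencies = {
    "web-frontend": {"app-server"},
    "app-server": {"database", "auth-service"},
    "auth-service": {"database"},
    "database": set(),
}

# A valid migration order moves every dependency before its dependents.
order = list(TopologicalSorter(dependencies).static_order())
print("Suggested move order:", order)
# e.g. ['database', 'auth-service', 'app-server', 'web-frontend']
```

A real inventory would be far larger, but even a toy model like this catches circular dependencies early: the sorter raises a CycleError if two systems depend on each other.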

Data center managers face many challenges before migrating to a new facility. The following are the most common and most critical issues that a relocation team must take care of.

Security:

Data center managers should put in place a proper security plan that ensures data doesn't disappear and unauthorized people are kept away.

Budget:

Data center migration is an expensive process. The data center relocation budget should cover site closure, tools, relocation, renovation, equipment, and construction expenses.

Prepare the New Facility and Close the Old One:

Before beginning the migration process, an inspection team must visit the new facility and ensure all systems are approved, ready, and successfully tested. The team should ensure the facility has an adequate cooling system that caters to the company's future needs. The migration team must decommission equipment that won't move into the new facility and make sure all operations in the old facility are completely wound down.

Backup the Data:

The backup team must always be careful in backing up the data and, just in case, have a recovery plan. Ensure this recovery plan is thoroughly tested, at regular intervals, during the planning phase.
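
Verification is what makes a backup trustworthy. Below is a minimal sketch, with hypothetical paths, that compares checksums of source files against their backup copies so that silent corruption is caught during the planning phase rather than on moving day.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream the file so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir: str, backup_dir: str) -> list[str]:
    """Return relative paths that are missing or differ in the backup."""
    src, dst = Path(source_dir), Path(backup_dir)
    problems = []
    for f in src.rglob("*"):
        if f.is_file():
            rel = f.relative_to(src)
            copy = dst / rel
            if not copy.exists() or sha256(f) != sha256(copy):
                problems.append(str(rel))
    return problems

# Hypothetical locations; run this on a schedule during the planning phase.
if __name__ == "__main__":
    bad = verify_backup("/srv/data", "/mnt/backup/data")
    print("All copies verified" if not bad else f"Mismatched files: {bad}")
```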

The following are some of the pitfalls that an organization needs to avoid:

  • Not knowing the complete details of the migration plan: companies might not know what they are moving or where they are starting from.
  • Lack of a relocation design: organizations sometimes assume the new facility will fit into place without visiting it.
  • Failure to develop a complete migration plan: organizations sometimes fail to create a detailed migration plan and to assign tasks to individuals before executing it.
  • Poor execution of migration plans: inexperienced movers or advisors might cause this problem, so companies need to hire experienced advisors and movers.

A data center migration can pose serious problems for an organization and can jeopardize relationships and crucial business operations. The migration team should be agile and experienced enough to handle any critical issue.


Data Center Cabling Trends and Strategies

 

Cabling in a data center is considered its nervous system. Integrating a data center with cables requires skill, understanding, and, not to forget, the cables themselves. You do not want a cluttered mess of cables dangling from all corners of the building; such a mess can escalate and cause various problems for data center operations. Good cabling methods are essential to ensure that the data center has plenty of good airflow, easy accessibility in case of a fault, and safety throughout the infrastructure. Let us take a close look at a checklist of pointers for an efficient cabling system:

Stay Away from Under-Floor Cabling Plans:

Raised floors are a common sight in data centers, but closer inspection reveals that they are not an effective solution to the eternal problem of data center heating, not to mention keeping the cables organized.

While it is a good plan from a space-saving point of view, the under-floor space is an ideal place for heat to build up in the data center. During maintenance, this plan makes the process a nightmare, as looking for a particular cable is next to impossible.

The good news is that the majority of players in the data center sector have moved on from raised floors and are implementing the hot/cold-aisle technique, which is a good alternative and does not hamper the performance of the data center. Another method is to place the cables overhead.

Go Overhead:

One of the major advantages of this method is the prevention of airflow obstruction. Think about it: no obstruction means no trapped hot air and hence increased efficiency. According to an industry expert, overhead cabling reduces cooling fan and pump power consumption by 24%, a significant amount of energy saved. Overhead cabling also promotes accessibility, easing the maintenance of existing cables and the addition of new ones. Overhead cabling offers a dual win: enhanced data center efficiency and uptime.
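
The 24% figure is an industry estimate, but the physics behind large savings from modest airflow improvements is the fan affinity law: fan power scales roughly with the cube of fan speed. A quick illustration with made-up numbers:

```python
# Fan affinity laws: airflow ~ speed, power ~ speed**3 (approximately).
# If clearing airflow obstructions lets fans run ~9% slower while still
# moving the air the equipment needs, the power saving is dramatic.
speed_fraction = 0.91                 # fans slowed to 91% of full speed
power_fraction = speed_fraction ** 3  # ~0.75
print(f"Fan power drops to {power_fraction:.0%} of baseline "
      f"(a {1 - power_fraction:.0%} saving)")  # ~25% saving
```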

Trays for Cables:

Keeping cables in wire mesh trays to make them accessible for maintenance is one of the trends implemented by data centers all over. The same heating problem can persist, but only when the installation is not properly planned. Even in this arrangement, removing dead cables on a regular basis is pertinent; otherwise it will become another colorful mess.

If you choose to implement this method, going for a modular cable rack system is ideal. A modular overhead system provides easy-to-sort modules that let tech experts plan effective multi-level tray organization. Though the system needs precise planning, at the end of the day it is a great solution with a promising return on investment.

Remember these few pointers before going for this arrangement:

At all costs, refrain from suspending the trays from the ceiling in an existing data center. Mounting the trays on top of cabinets and racks is an effective solution.

Choose trays which are free of zinc to mitigate potential risks.

The bottom line is that there are different types of cable storage methods available, but overhead cabling is the most efficient solution. It is important that data center managers take stringent steps to prevent cable clutter in the facility. The benefits of the overhead method cannot be overlooked: it offers significant energy savings and completely removes the need for a raised floor, which is an expensive feature in a data center.


NEED FOR DISASTER RECOVERY OPTIONS IN DATA CENTERS

New York literally came to a standstill following the destruction caused by Hurricane Sandy. Sandy was a wake-up call questioning the security of data in data centers. Sandy disrupted power supplies and all other amenities required for the continuous operation of data centers. Unfortunately, in situations like these, there is hardly anything that can be done to help. The disaster preparedness of data centers located on the East Coast and facilities concentrated on the eastern seaboard was put to the test. Reports have indicated that the data centers were not able to stand strong against the surge; they succumbed to the destruction. Several things reportedly went wrong, some as simple as poor generator location.

Manpower vs. machine power: Data centers are facilities that continuously operate on redundant equipment and automation, and manpower usage has been reduced to a great extent. But the fact that there is no substitute for human beings was brought to light when the supposedly reliable machines failed to help. The Peer 1 data center found itself in a dire situation when Con Edison shut down the basic utility power supply. Backup power from the generators could not be restored because the basement housing them was flooded, and Peer 1's rooftop generator could not gain access to the 20,000-gallon fuel tank in the flooded basement. The pumping system remained disabled by the hurricane. The only solution that remained was creating a human chain, with a bucket brigade stationed at every staircase landing; fuel filled in buckets was passed from one person to another to refill the generators. Employees worked in shifts to ensure continuous operation of the facility, and daily-wage laborers were also hired to help. Though the solution did not equal the level of redundancy that machines would have offered, it definitely created a certain amount of respect for human power.

Create backup: Datagram data center in Manhattan was forced to shut down when the building’s basement was flooded, creating hazardous conditions because of the electrical infrastructure therein. In this case, a shutdown of the facility was the only option. “Crews continued pumping water out of the basement on Wednesday afternoon. Power to the building could not be restored until all water was removed, the company said on a status page it put up to replace its website.” The hurricane also damaged communications lines to the facility, adding to the difficulties. The solution, however, was to switch to an alternate site: “The provider has been offering backup servers to New York customers out of its facility in Connecticut.” Redundancy in location—as well as in onsite systems—enabled Datagram to continue providing services despite a complete shutdown of its facility in Manhattan. Other companies did likewise, using failover sites to maintain operations despite difficulties in Manhattan or other areas.
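
Location redundancy only pays off if traffic can actually be redirected to the backup site. As a rough conceptual sketch (hypothetical health-check URLs and a deliberately simple policy, not Datagram's actual setup):

```python
import time
import urllib.request

# Hypothetical endpoints: primary in Manhattan, backup in Connecticut.
SITES = ["https://ny.example.com/health", "https://ct.example.com/health"]

def first_healthy(sites: list[str], timeout: float = 3.0) -> str | None:
    """Return the first site answering its health check, in priority order."""
    for url in sites:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # unreachable or timed out; try the next site
    return None

while True:
    active = first_healthy(SITES)
    print("Serving from:", active or "NO SITE AVAILABLE")
    # In practice this decision would drive a DNS or load-balancer update.
    time.sleep(30)
```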

A vibrant economy can handle a disaster and bounce back quickly; a struggling economy can be damaged severely by the same disaster. Whatever the case, however, we hope for a safe and speedy recovery in the areas harmed by Sandy.


German data centers to be nuclear-free

The disaster at the Fukushima nuclear power plant sent a wave of fear across the world about nuclear energy and its safety. Though it was triggered by a natural disaster, humans are still bewildered by the power of Mother Nature. The disaster at Fukushima threw open a plethora of questions about safety standards at nuclear plants. This forced many developed nations to look back at their energy policies and seek alternative ideas to develop smarter energy solutions, not just to power homes but also commercial establishments.

Data centers, which are huge energy guzzlers, form a major chunk of commercial power consumption. Hence it is important that when developed nations think of shutting down their nuclear plants, they also keep in mind the energy demand that the alternative sources have to fulfill.

The first step towards phasing out nuclear power plants came from Germany. The country, hailed as one of the most technologically evolved, made a major announcement stating that all nuclear plants in Germany would be shut down by 2022. This announcement sent the world into a tizzy. Developing nations like India, which have started investing in nuclear power only recently, were perplexed to see a country completely ending its dependence on nuclear plants. In Asia, developing nations like China and India face a serious energy crisis; with the economic constraints these countries have, it is very difficult for them to invest in research to find alternative energy.

Nearly a quarter of Germany's electricity comes from nuclear power, so the question becomes: how do you make up the shortfall? The official commission that has studied the issue reckons that electricity use can be cut by 10% in the next decade through more efficient machinery and buildings. The intention is also to increase the share of wind energy. This, though, would mean re-jigging the electricity distribution system, because much of the extra wind power would come from farms on the North Sea to replace atomic power stations in the south.

Protest groups are already vocal in the forested centre of the country, which, they fear, will become a north-south "energie autobahn" of pylons and high-voltage cables. Some independent analysts believe that coal power will benefit if the wind plans don't deliver what is needed.

With tremendous challenges ahead in finding alternative sources of energy, how efficiently data centers can be powered is still debatable. Technology has changed geographies in a quick span, and the impact technology has made on mankind in the past two decades is unparalleled. But with newer technology also come new problems. Although eradicating nuclear plants is a novel thought, the technology that replaces it will face a tough challenge, as it must address not just efficiency but also environmental friendliness. To find a sustainable alternative source, many design and equipment changes will also have to be made. With Germany taking the first step by making its data centers nuclear-independent, the dream of greener, more efficient data centers may soon become a reality.


STRUCTURED CABLING

Structured cabling involves the connectivity of different smaller elements that specify the wiring for data centers, offices, and buildings using various kinds of cables adhering to specified standards. These standards define the layout of the cables according to the requirements of the clients and the data center as a whole.

Data centers and storage area networks are among the fastest-growing streams of technology in the IT sector. A recent study reports that their growth and retention requirements have gone up by 50% this year. This type of growth is governed by various legislative requirements on how much data is to be stored and for how long.

Cabling of a data center needs to adhere to different standards of specification. Amongst them the most important standard that one needs to know for structured cabling is the TIA standard.

The TIA standard specifies the design and management of structured cabling within data centers.

Before analyzing the TIA standard, it is important that we understand the need for structured cabling for a data center.

 

Need for structured cabling

Most of the time, data centers and SANs are constructed without considering the implications of frequent additions, moves, and expansions. Some systems, like computers and single physical servers, are normally installed by the company's own technicians and crew. This crew is competent from the perspective of its own equipment, but data centers house varied, disparate equipment and data storage devices. Using such practices inevitably causes inefficient management of critical conditions, which could range from advances in technology to the addition of new products and services.

In the early years, a wide variety of cabling and architecture was common but difficult to manage. This undesirable situation led to the formation of the TIA/EIA-568 Commercial Building Cabling Standard, which eventually changed the way cabling for commercial buildings, the telecommunications sector, and data centers was done. This introduced a new and modernized way of cabling: 'structured cabling'.

 

Many cable and copper industries developed new connectivity products that offered dramatic advantages to data centers and SANs. Many of the experts who developed TIA-568, and their successors who developed the follow-on TIA-568-A, went on to develop another class of standard: the structured cabling system (SCS).

The TIA-568 was designed to suit the requirements of commercial buildings.

TIA-942, the Telecommunications Infrastructure Standard for Data Centers, is bound to affect data centers and SANs as profoundly as TIA-568 has affected commercial buildings.

This new standard allows SCS concepts to be applied to disparate equipment very early in the building design process. The standard views the whole data center as an integrated system with smaller ancillary elements. As a result, it interlinks components such as location and access, along with architectural and electrical elements, to the all-important concept of redundancy.

The TIA 942 includes seven spaces and two cabling subsystems within the data center.

The seven spaces are:
  • Computer room
  • Telecommunications room
  • Entrance room
  • Main Distribution Area (MDA)
  • Horizontal Distribution Area (HDA)
  • Zone Distribution Area (ZDA)
  • Equipment Distribution Area (EDA)

 

The two cabling subsystems in TIA-942 are the horizontal and backbone cabling.

 

The first five spaces generally involve many connections, such as high-density panels and racks using fiber connectors like LC. The entrance room is the interface to the campus cabling and is similar to the entrance room of a commercial building. The MDA is the area where the main cross-connect is located, and the HDA houses the horizontal cross-connect. The ZDA is an optional space where the zone outlet is located. The EDA is where the cabinets, servers, and racks are located; it is similar to the work area of a commercial building.
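
To keep the terminology straight, here is a toy Python representation of the hierarchy just described, with backbone cabling feeding the MDA and HDAs and horizontal cabling running out to the EDAs. The space names come from TIA-942, but the structure and helper are purely illustrative:

```python
# Toy model of the TIA-942 cabling hierarchy described above.
topology = {
    "Entrance Room": {"connects_to": ["MDA"], "cabling": "backbone"},
    "MDA": {"connects_to": ["HDA-1", "HDA-2"], "cabling": "backbone"},
    "HDA-1": {"connects_to": ["ZDA-1", "EDA-1"], "cabling": "horizontal"},
    "ZDA-1": {"connects_to": ["EDA-2"], "cabling": "horizontal"},  # ZDA is optional
}

def trace(space: str, depth: int = 0) -> None:
    """Print the distribution path downstream of a given space."""
    print("  " * depth + space)
    for child in topology.get(space, {}).get("connects_to", []):
        trace(child, depth + 1)

trace("Entrance Room")
```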

 


AIRFLOW MANAGEMENT

Data centers consume about 25 to 30 times more electricity than normal office spaces. This calls for an energy-efficient data center design that can save money and reduce electricity use.

Data center design is a relatively new stream with dynamic and evolving technology. The most efficient data centers incorporate modern design techniques that are cost-effective and energy efficient. Short design cycles lead to incomplete assessment of the full design requirements; most of them ultimately lead to just scaling up older office-space designs.

Modern data centers house server racks in a fashion that concentrates the heat loads. In facilities of all sizes, from a small data center serving a few office buildings to a large co-location facility, designing the center to precisely control airflow is of utmost importance. Airflow through the room for efficient removal of accumulated equipment heat has a strong impact on the reliability and energy efficiency of the entire data center.

Air management includes all the minute design details required to minimize or curtail the mixing of the cool air supplied to the room with the hot air rejected from the equipment. When designed correctly, it helps reduce the operating and maintenance costs of the equipment and other issues caused by thermal stress on the devices.

The main design issues related to air management are:

  • Location of the data center
  • Location of the cooling equipment
  • Equipment required for intake and exhaust ports
  • Configuration
  • Airflow patterns in the room

Principles of airflow management:

  • Hot- and cold-aisle configurations can double the cooling efficiency of the data center.
  • With the aid of an airside economizer, air management can reduce data center cooling costs by over 60%.
  • Removing hot air immediately through the exhaust, rather than letting it mix with the incoming cold air, improves efficiency.
  • Equipment environmental temperature specifications refer primarily to the air being drawn in to cool the system.
  • A higher difference between the return-air and supply-air temperatures increases the maximum load density possible in the space and can help reduce the size of the cooling equipment required, particularly when lower-cost, mass-produced packaged air-handling units are used (see the worked example after this list).
  • Poor airflow management will reduce both the efficiency and capacity of computer room cooling equipment.
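
The last point about supply/return temperature difference follows from the sensible-heat relation Q = ρ·V·cp·ΔT: for a fixed heat load, doubling ΔT halves the airflow the cooling plant must move. A quick worked example with standard air properties (the 100 kW load is made up):

```python
# Sensible heat: Q = rho * V * cp * dT  ->  V = Q / (rho * cp * dT)
RHO = 1.2   # air density, kg/m^3 (approximate, sea level)
CP = 1005   # specific heat of air, J/(kg*K)

def airflow_m3_per_s(heat_load_w: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to remove a heat load at a given dT."""
    return heat_load_w / (RHO * CP * delta_t_k)

for dt in (5, 10, 15):  # supply/return temperature difference in kelvin
    v = airflow_m3_per_s(100_000, dt)  # a hypothetical 100 kW room
    print(f"dT = {dt:>2} K -> {v:6.2f} m^3/s of air required")
# Larger dT -> proportionally less air to move -> smaller fans and ducts.
```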

Therefore, an effective airflow management system can bring down costs and power consumption to a great extent. The design of these systems also plays a very important role in the quality, reliability, and security of data centers.


REQUIREMENTS FOR DATA CENTER COOLING

Data centers are facilities used to house large computer systems, telecom instruments, and other ancillary systems. These systems are essential for IT organizations to maintain business continuity. A data center therefore needs to maintain high standards of security and integrity for all its systems and equipment. New technologies and designs have been adopted to support large-scale operations in such data centers.

A data center acts as a single mainframe, housing an enormous number of systems, and hence draws huge amounts of power. Continuous usage results in overheating of these centers, and temperature and voltage fluctuations due to overheating may disrupt operations. The power usage of data centers may range from a few kilowatts to megawatts, so cooling becomes a very important part of maintenance. Cooling architectures and other heat-removal methods are extensively used to curb excessive heating.

The sole purpose of these methods is to remove the excess heat from the room and transfer it to the outside.

Cooling infrastructure: The cooling methods for a data center depend on various factors, such as:

  • Air Distribution System
  • Heat Removal Type
  • Location of the cooling unit or the refrigeration unit

Air Distribution System:

A. Air conditioning systems: With the help of fans and compressors, these systems remove heat from the inside to the outside without adding a thermal load of their own. For the installation of a correct air conditioning system, different factors need to be considered (a toy sizing tally follows this list):

  • Thermal load of the equipment and the room
  • Thermal load of the building
  • Cooling requirements of every system
  • Cooling requirements for future needs
  • Size of the data center
  • Effects from other heat sources such as walls, roofs, and other safety installations
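
Summing these factors, plus headroom, is the essence of sizing. Here is a toy tally with purely illustrative numbers, not a substitute for a proper heat-load survey:

```python
# Rough cooling-load tally in watts; every figure here is illustrative.
it_equipment = 80_000    # derated IT load
lighting = 3_000
people = 4 * 100         # ~100 W of sensible heat per occupant
envelope = 5_000         # walls, roof, solar gain from the survey
growth_headroom = 0.30   # reserve 30% for future needs

total = (it_equipment + lighting + people + envelope) * (1 + growth_headroom)
print(f"Design cooling capacity: {total / 1000:.1f} kW")  # ~114.9 kW
```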

 

B. Liquid cooling: Liquid cooling involves the use of liquid-cooled pipes or coils that pass between the servers and racks. The liquid in these pipes acts as a coolant, the movement of air acts as a heat carrier, and the heat is finally dissipated into the atmosphere.

Liquid cooling via racks can be done using the methods given below:

  • Row contained
  • Column oriented
  • Raised floor
  • Liquid in closed rack
  • Liquid in open doors
  • Liquid pipe sections through servers

 

Liquid cooling also has certain inherent flaws that need to be addressed for an efficient cooling system. The flow of liquids close to electrical circuits and servers could pose a serious threat, and the air distribution system as a whole could be endangered if the liquid and the heat carrier are not of the right composition.

 

Heat Removal Type: This is the process of moving heat from the indoors to the outdoors, accomplished by using heat exchangers to transfer heat from one fluid to another.

The different heat exchange systems include:

  • Pumped refrigerant heat exchange systems
  • Chilled water exchange systems
  • Glycol-cooled systems
  • Air-cooled self-contained systems
  • Air ducts

 

Location of the cooling unit: Cooling units can be installed in different places. Sometimes they are installed away from other heat removal devices; otherwise they are contained in one room placed adjacent to the data center. The location of the cooling unit plays a very important role in the design of the data center and the overall cooling efficiency.

Cooling is one parameter that can never be ruled out of a data center’s design. Amongst several other factors that contribute to the efficient maintenance of data centers, cooling is of prime importance.

 


Windows Web Hosting: How Do You Choose?

There are virtually thousands of Windows web hosting companies online today. Most of the time, in order to choose the right one, we must depend on word of mouth, popular suggestions, or articles like this!

For review, a Windows web hosting company is a provider that supports the Windows operating system, mostly used for the ASP coding language. It's always best to get the techno-jargon out of the way before beginning.

You finally got a website! Welcome to the World Wide Web! Now it's time to decide on a web hosting company. First, if you have a small website dedicated to something more personal, like a blog, a hobby, or just something for family and close friends, you may want to consider free web hosting.

With that in mind, we can pretty much guess your next question.

Why would I even consider paying if free Windows hosting is an option? A great point, and here's why: free web hosting companies make their money from supported ads. Therefore, if you choose free hosting, your site will be bombarded with outside advertising. But, like we said, if your site is more personal in nature, free hosting is a great choice. You might even want to consider becoming an affiliate or signing up with Google AdSense.

But if you finally got a business website up and running and you're revving up to make some cash through your own product or service, paying for hosting is a must. Before choosing the first web hosting company you find, here is a simple list of questions you should ask.

Can the host provider meet all technical requirements?
Here are some important terms to remember: storage space and monthly bandwidth. You need to make sure your hosting company can handle the complexities of your site. This is especially important if you are running an e-commerce site with many products. Ask about costs, and what would happen if you went over your monthly allotment of storage or bandwidth.
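
A back-of-the-envelope estimate makes the bandwidth question concrete. All inputs below are made up; plug in your own traffic figures:

```python
# Rough monthly bandwidth estimate; all inputs are illustrative.
visits_per_month = 50_000
pages_per_visit = 4
avg_page_mb = 2.5  # HTML + images + scripts

monthly_gb = visits_per_month * pages_per_visit * avg_page_mb / 1024
print(f"Estimated transfer: {monthly_gb:.0f} GB/month")  # ~488 GB
# Compare this (plus headroom for growth) against the plan's allotment.
```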

Does the Web Hosting Company offer E-commerce solutions?
This question blends in with the first one. If your business sells products online, e-commerce capability is an important factor when choosing a host company. Even if you currently do not operate an online store, you may change your mind in the future.

Does the company offer tracking statistics?
Reporting is an important tool when you run a site. You need to fully understand conversions and keep track of your visitors. You want to be able to manage the tracking yourself and, of course, run it yourself.

Can you upload any changes yourself?
Another great term to remember is WYSIWYG, which stands for 'What You See Is What You Get'. You must be able to make content changes yourself; paying someone from the hosting company can be costly, and in most cases you can wait forever for the changes to take effect.

Is there a spam-filtering service included?
Ask if this is included in your hosting fees or if it costs extra.

Uptime Guarantee
If by some chance your website is down, it might as well not exist at all! The web host provider you choose must have minimal downtime. Most great hosting companies make sure to do maintenance well after hours.
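
Uptime guarantees are easier to compare once converted into allowable downtime. A quick conversion:

```python
# Convert an uptime percentage into allowed downtime per month and year.
MINUTES_PER_YEAR = 365 * 24 * 60

for uptime in (99.0, 99.9, 99.99):
    down_min = MINUTES_PER_YEAR * (1 - uptime / 100)
    print(f"{uptime}% uptime -> {down_min / 60:6.1f} h/yr "
          f"({down_min / 12:5.1f} min/month)")
# 99.0% -> 87.6 h/yr; 99.9% -> 8.8 h/yr; 99.99% -> 0.9 h/yr
```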

Customer Support
Choose a company that gets back to you within the same day, 24 hours at most. Even better, some hosting companies have live chat or 24-hour customer support lines.

Covering the Costs
Web hosting should be reasonable and not cost you an arm and a leg. Look for the deals they offer; in some cases they let you pay monthly or give you a discount for paying a year up front. Take all the discounts you can get!

In conclusion, you really want to make sure your Windows web hosting company grows with your business. You can't have a hosting company dragging down your site by not offering advanced solutions. At the end of the day this is your business, and some investments must be made to make it work.


Data Center Infrastructure Software: What’s Out There?

If ensuring near-perfect uptime is a goal for your data center, then monitoring and managing your infrastructure is an important capability. Even brief incidents of downtime can result in large business losses, especially because the damage is not limited to, say, the amount of time that power to the IT equipment is cut off: once power is restored, servers must be rebooted and other maintenance tasks may need to be performed, all of which takes time.
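
That point is easy to make concrete: the business impact includes the recovery tail, not just the outage itself. With hypothetical figures:

```python
# Total business impact of an outage = outage itself + recovery tail.
power_cut_min = 10      # time utility power was actually lost
recovery_min = 45       # reboots, filesystem checks, service validation
revenue_per_min = 300.0 # hypothetical loss rate in dollars

impact_min = power_cut_min + recovery_min
print(f"Effective downtime: {impact_min} min, "
      f"estimated loss: ${impact_min * revenue_per_min:,.0f}")  # $16,500
```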

To meet these needs, a number of companies offer software to enable data center infrastructure management (DCIM) deployments.

APC by Schneider Electric StruxureWare for Data Centers. Described as “a management software suite designed to collect and manage data about a data center’s assets, resource use and operation status,” (“StruxureWare for Data Centers”), this software offers a number of tools for collecting and processing data for energy efficiency, equipment usage and capacity, and operational expenses.

iTracs Converged Physical Infrastructure Management. This iTracs offering is a suite of DCIM software designed for managing enterprise-class data center infrastructure. The suite consists of two modules that can be deployed separately or as a unit: iTracs Converged Data Center, for managing a facility's internal physical infrastructure, and iTracs Converged Building Systems, for managing infrastructure beyond the boundaries of the data center. The Converged Physical Infrastructure Management (CPIM) package is designed around iTracs' unique Interactive 3D Visualization, which enables virtual navigation of the data center environment in three dimensions, providing a powerful view of the facility from a computer terminal.

Emerson Network Power Trellis. The Trellis platform employs Emerson Network Power’s various software offerings, including management of assets, cooling design, power and embedded server firmware. The integrated platform provides several applications, including Inventory Manager, which delivers fast details on currently deployed inventory and capacity as well as the ability to speed deployment of new equipment.

Rackwise Data Center Manager. Rackwise’s flagship Data Center Manager software package integrates its Data Center Essentials, Data Center Intelligence and Real World Integration. Data Center Essentials includes a number of modules, such as Visualization, which uses Internet Explorer and Visio-based interfaces to provide visibility into data center infrastructure status and deployments, as well as design capabilities. Other aptly named modules include Asset Management, Power Management, Cable Management and Virtual Machine Management. Rackwise’s Real Time Monitoring manages data for temperature, humidity and power collected from a variety of device types.

 

Nlyte Suite. Another DCIM offering is Nlyte's suite of software modules, which comes in editions starting at the entry-level Express Edition and including the Standard and Advanced editions, culminating with the full-feature Enterprise Edition. The flagship Enterprise Edition includes a number of modules, such as the Floor Planner Module, Control Module, Report Module, Open Web Services APIs, Dashboard Module, Predict Module, and Bulk Data Manager.

Data centers are complex facilities, integrating a variety of systems that must all function nearly flawlessly to enable delivery of IT services to customers, whether inside the company or outside it. This complexity makes management and monitoring of infrastructure an extremely difficult task, especially as companies pare down their staffs. Thus, the data center infrastructure management market is booming. Data center managers are looking for solutions that enable a consolidated view of the data center and that provide information on the various systems. The above DCIM packages are just a few of the available options, discussed briefly in no particular order; be sure to also consider other vendors, who may have just what you're looking for to manage and monitor your data center.
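
Whatever package you choose, the core idea is the same: one poller normalizing readings from many subsystems into a consolidated view. The sketch below is conceptual only; the device names and the random stand-in for sensor reads are invented, not any vendor's API:

```python
import random  # stand-in for real sensor reads
from dataclasses import dataclass

@dataclass
class Reading:
    device: str
    metric: str
    value: float
    unit: str

def poll_device(name: str) -> list[Reading]:
    """Placeholder for a real driver (SNMP, Modbus, vendor API, ...)."""
    return [
        Reading(name, "inlet_temp", random.uniform(18, 27), "C"),
        Reading(name, "power_draw", random.uniform(2, 8), "kW"),
    ]

# Hypothetical inventory spanning power, cooling, and IT equipment.
devices = ["pdu-a1", "crac-3", "rack-42"]
dashboard = [r for d in devices for r in poll_device(d)]
for r in dashboard:
    print(f"{r.device:8} {r.metric:11} {r.value:6.1f} {r.unit}")
```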


Free-air cooling optimized servers

Depending upon the choice of processors, the new Supermicro FatTwin 4U server can operate at temperatures up to 117°F (47°C), allowing the servers to take advantage of free-air cooling solutions in a wide range of environments.

 

Air cooling is a very old method employed in cooling systems; its use dates back to the fins used in modern-day vehicles, which transfer heat to the free air, mostly by convection with a smaller contribution from radiation, the electromagnetic form of heat transfer.

 

The new FatTwin is a high-density, eight-node server solution whose design goals focused on delivering a highly capable server while improving total cost of ownership through power efficiency. Both performance-per-watt and performance-per-dollar considerations were applied throughout the design process. This resulted in changes to motherboard designs, minimized power distribution losses, and minimization of parasitic loads, such as the power necessary to drive the server fans. Properly implemented, the FatTwin is capable of reaching a PUE of below 1.1, according to Supermicro.
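
For reference, PUE is total facility power divided by the power delivered to the IT equipment, so server-level savings such as lower fan power show up directly in the ratio. A quick illustration with made-up numbers:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: 1.0 is the theoretical ideal."""
    return total_facility_kw / it_load_kw

# Hypothetical room with 500 kW of IT load.
print(pue(800, 500))  # 1.60 - typical legacy facility
print(pue(545, 500))  # 1.09 - free-air cooling, efficient power path
```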

The FatTwin systems are highly configurable, fitting in a standard 19-inch rackmount. While there are standard configurations that optimize the PUE, energy efficiency, and density of the servers, the configuration options allow customers to customize the FatTwin to meet their specific needs.

The system nodes are all hot-swappable, can support up to 512 GB of memory, are available with both hardware and software RAID solutions, and have built-in management capabilities via an onboard dedicated LAN port that supports IPMI 2.0, plus a pair of GbE ports with an Intel i350 controller.

The standardized form factor and optimized components of the FatTwin provide additional flexibility for data center operators, especially smaller facilities that are not interested in, or appropriate for, other high-density solutions such as blade servers. Having some basic information about the efficiency potential of your servers simplifies energy-efficiency planning for potential customers.

About Supermicro

Super Micro Computer, Inc. or Supermicro® (NASDAQ: SMCI), a global leader in high-performance, high-efficiency server technology and innovation, is a premier provider of end-to-end green computing solutions for Enterprise IT, Datacenter, Cloud Computing, HPC and Embedded Systems worldwide. Supermicro's advanced server Building Block Solutions® offer a vast array of modular, interoperable components for building energy-efficient, application-optimized computing solutions. This broad line of products includes servers, blades, GPU systems, workstations, motherboards, chassis, power supplies, storage technologies, networking solutions and SuperRack® cabinets/accessories. Architecture innovations include Twin Architecture, SuperServer®, SuperBlade®, MicroCloud, Super Storage Bridge Bay (SBB), Double-Sided Storage™, Universal I/O (UIO) and WIO expansion technology, all of which deliver unrivaled performance and value.

KEY TECHNOLOGY SOLUTIONS SUPERMICRO PROVIDES INCLUDE:

  • Application Optimized Server Solutions
  • Economical Power Efficiency and Thermal Management Systems
  • Flexible Expansion Capabilities – Universal I/O
  • Hybrid CPU + GPU Supercomputing Systems

 
