AToM – Any Transport Over MPLS

AToM is one of those technologies that, once you know what it can do, lets you build solutions that can save thousands of dollars you might otherwise invest in additional links and network infrastructure. From an engineer's perspective, it is something you will love to know and admire. It is an application of MPLS, and it is evidence of how MPLS has revolutionized the networking world, providing solutions that will be used more and more in the coming years. The days are gone when service providers and telecoms sold pure Layer 2 dedicated leased lines and the customer paid heavily for an international leased circuit that was never used to its full capacity. So it is time to save your precious dollars by sharing a common infrastructure while enjoying the same service level agreement.

What is it?

-  Any Transport over MPLS (AToM) is a solution for transporting Layer 2 packets/frames over a Layer 3 MPLS backbone.

-  Think of it as a method of emulating a layer 2 circuit over an MPLS backbone similar to AAL1 on an ATM Backbone.

-  AToM supports the following services:

  • Frame Relay
  • ATM AAL5
  • ATM Port Mode
  • Ethernet VLAN
  • PPP
  • HDLC
  • Sonet/SDH

Imagine a scenario where you have a Layer 3 backbone and need to provide a Layer 2 circuit to your client over that Layer 3 infrastructure. The challenge is how to transport Ethernet frames received on one leg of a router to another router's leg on the far side, with multiple routers in between. Sounds interesting?

Why Use It?

PROs

-          Savings in transmission costs by consolidating multiple lower speed circuits into a few high speed circuits.

-          Flexibility with available capacity: by having all physical capacity on a single IP/MPLS backbone, we can use the available capacity for the services that require it.

CONs

-          Single point of failure (SPOF) on the shared backbone.

-          Additional encapsulation overhead.

-          Synchronization could be an issue.

How does it work?

-      AToM uses a two-level label stack (inner label for the service, outer label for transport), similar to an L3VPN.

-      PEs use targeted LDP sessions to exchange label information.

-      Traffic is received at the ingress PE (AToM start point) and the layer 2 headers are removed.

-      An MPLS label (the VC label) is added identifying the remote end of the pseudowire.

-      A second label may be added for the outbound interface (the transport/tunnel label).

-      For port-mode ATM without cell packing, the 53-byte ATM cell (minus the HEC) is encapsulated in a 64-byte AToM MPLS packet.

-      Cell packing is a feature used to conserve bandwidth on the backbone by sending multiple ATM cells in a single MPLS packet, as the sketch below illustrates.
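
To see why cell packing matters, here is a rough back-of-the-envelope sketch in Python. It assumes the figures above: 52 bytes of cell data (53 minus the HEC) carried in a 64-byte AToM packet, i.e. roughly 12 bytes of per-packet overhead. Real encapsulation details vary by platform and configuration, so treat this as an illustration rather than an exact accounting.

```python
# Rough overhead estimate for ATM port-mode AToM, with and without cell packing.
# Assumption (hedged): without packing, one 52-byte cell rides in a 64-byte AToM
# packet, i.e. about 12 bytes of MPLS/AToM overhead per packet, as implied above.
# With packing, that per-packet overhead is shared by N cells.

CELL_BYTES = 52          # 53-byte ATM cell minus the 1-byte HEC
PACKET_OVERHEAD = 12     # 64 - 52, the per-packet overhead implied by the figures above

def efficiency(cells_per_packet: int) -> float:
    """Fraction of each AToM packet that is actual ATM cell data."""
    payload = cells_per_packet * CELL_BYTES
    return payload / (payload + PACKET_OVERHEAD)

for n in (1, 2, 8, 28):
    print(f"{n:2d} cells/packet -> {efficiency(n):.1%} efficiency")
# One cell per packet is roughly 81% efficient; packing 28 cells pushes it above 99%.
```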

One challenge that arises here is the added overhead due to this encapsulation. Every service provider that has implemented AToM has faced the challenge of making sure that its core backbone supports MPLS packets with the increased MTU.

Let's see how much overhead is added.

Pitfall: Avoid exceeding the core MTU
 

Transport Type                            Header Size
ATM AAL5                                  0-32 bytes
Ethernet VLAN                             18 bytes
Ethernet Port                             14 bytes
Frame Relay DLCI (Cisco encapsulation)    2 bytes
Frame Relay DLCI (IETF encapsulation)     8 bytes
HDLC                                      4 bytes
PPP                                       4 bytes
  • The AToM header is 4 bytes; it is required for ATM AAL5 and Frame Relay (optional for Ethernet, PPP and HDLC).
  • The number of labels is 2 if the P routers are directly connected, 3 if not.
  • If FRR is requested, it adds another level of label.
  • Rule: always assume we need 4 labels.
  • The label size is 4 bytes.

e.g. for Frame Relay (IETF encapsulation): MTU = 4470 – 8 – 4 – (4 x 4) = 4442 bytes
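
The same arithmetic can be captured in a small Python sketch. The header sizes come from the table above, and the "always assume 4 labels" rule and 4-byte control word follow the bullets; treat it as a planning aid under those assumptions, not a definitive calculator for any particular platform.

```python
# Minimal sketch of the core-MTU check described above.
LABEL_BYTES = 4
ATOM_CONTROL_WORD = 4   # required for ATM AAL5 and Frame Relay, optional otherwise

TRANSPORT_HEADER = {    # Layer 2 header bytes carried across the core (from the table;
    "fr-ietf": 8,       # ATM AAL5 is omitted because its header varies from 0 to 32 bytes)
    "fr-cisco": 2,
    "ethernet-vlan": 18,
    "ethernet-port": 14,
    "hdlc": 4,
    "ppp": 4,
}

def max_customer_mtu(core_mtu: int, transport: str, labels: int = 4,
                     control_word: bool = True) -> int:
    """Largest customer frame that fits within the core MTU for a given transport."""
    overhead = (TRANSPORT_HEADER[transport]
                + (ATOM_CONTROL_WORD if control_word else 0)
                + labels * LABEL_BYTES)
    return core_mtu - overhead

print(max_customer_mtu(4470, "fr-ietf"))   # 4470 - 8 - 4 - 16 = 4442, matching the example
```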

What is needed?

-          An Operational MPLS network

-          A targeted LDP session between the PE endpoint routers (used for advertising VC labels).

-          TE tunnels between the PE endpoints. Question: do we need the tunnels? If so, why?

-          ESR, exception to use TE Tunnel.

-          Pseudowire configuration.

-          MTU considerations

 

 


 

The ingress PE router, PE1, receives the packets and attaches the VC label (label 33) to the frame first. Then it pushes the tunnel/transport label, which is 121. The tunnel/transport label is the one bound to the Interior Gateway Protocol (IGP) prefix of the remote PE; this prefix is specified in the AToM configuration. The MPLS packet is then forwarded to the connected P router and label-switched in the same way, hop by hop, until it reaches the egress PE, PE2.

Notice that when the packet reaches the egress PE, the tunnel label has already been removed. This is because of the penultimate hop popping (PHP) behavior between the last P router and the egress PE. The egress PE then looks up the VC label in the label forwarding information base (LFIB), strips off the VC label, and forwards the frame onto the correct attachment circuit (AC).

The P routers never need to look at the VC label; therefore, they do not need any intelligence to handle it. The best part is that the P routers are completely unaware of the AToM solution.

Because the tunnel label is simply the LDP or RSVP-learned label, no special label distribution protocol has to be set up for AToM on the P routers. The MPLS backbone normally is already using either label distribution protocol. The VC label, however, needs to be associated with a certain AC and advertised to the remote PE. A targeted LDP session performs this job.
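
The label operations described above can be modeled with a toy Python sketch. It is purely illustrative (real forwarding happens in router hardware, and the intermediate label value 119 is made up for the example), but it mirrors the sequence just described: PE1 pushes the VC and tunnel labels, P routers swap the tunnel label, the penultimate hop pops it (PHP), and PE2 maps the exposed VC label to the attachment circuit.

```python
# Toy model of the AToM label stack walkthrough above (labels 33 and 121 match the text;
# the swapped label 119 is a hypothetical value used only for illustration).
from typing import Optional

def ingress_pe(frame: bytes, vc_label: int, tunnel_label: int) -> list:
    """PE1: push the VC label first, then the tunnel/transport label on top."""
    return [tunnel_label, vc_label, frame]

def p_router(packet: list, out_label: Optional[int]) -> list:
    """P router: swap the top (tunnel) label; the penultimate hop pops it instead (PHP)."""
    _top, *rest = packet
    return rest if out_label is None else [out_label, *rest]

def egress_pe(packet: list, vc_to_ac: dict) -> tuple:
    """PE2: look up the VC label in the LFIB, strip it, forward onto the correct AC."""
    vc_label, frame = packet
    return vc_to_ac[vc_label], frame

pkt = ingress_pe(b"customer-frame", vc_label=33, tunnel_label=121)
pkt = p_router(pkt, out_label=119)   # ordinary hop: tunnel label swapped
pkt = p_router(pkt, out_label=None)  # penultimate hop: PHP pops the tunnel label
ac, frame = egress_pe(pkt, vc_to_ac={33: "attachment circuit toward CE2"})
print(ac, frame)
```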

It's fun and interesting once you know how it works, and the benefits of this technology are immense!

 



Proposed “Data Furnaces” Could Use Server Heat to Warm Homes

As winter approaches, could a warm server take out the chill as opposed to a radiator or fireplace?

A new paper from Microsoft Research and the University of Virginia makes the case that servers can be placed in homes and office buildings and used as a heat source. These household data centers, which Microsoft calls “Data Furnaces”, have three main advantages over traditional data centers: a smaller carbon footprint, reduced total cost of ownership per server, and closer proximity to users.

US Environment Protection Agency

According to figures from the US Environmental Protection Agency, the nation’s servers and data centers consumed around 61 billion kWh in 2006, or 1.5 percent of the country’s total electricity consumption. And as one of the fastest-growing sectors in the US, national energy consumption by servers and data centers was estimated to nearly double, exceeding 100 billion kWh, by the end of the year.

Exhaust Air Temperature

“The temperature of the exhaust air (usually around 40-50°C) is too low to regenerate electricity efficiently, but is perfect for heating purposes, including home/building space heating, clothes dryers, water heaters, and agriculture,” the study states.

While it’s most likely that early adopters will be office buildings and apartment complexes with mid-sized data centers heating them, micro-datacenters on the order of 40 to 400 CPUs could serve as the primary heat source for a single-family home. These Data Furnaces would be connected to a broader cloud via broadband, and connect to the home heating system just like any conventional electric furnace.

Microsoft is far from the only company looking to combine the cost of powering servers and heating buildings.

Reusing Data Center Heat in Offices

In 2007, for instance, Intel released a study on reusing data center heat in offices, and in 2010 it opened Israel’s first LEED-certified green building, which features a 700-square-meter (about 7,500-square-foot) server room where heat is recycled for hot water and winter heating.

In another interesting take on reusing data center heat, the Swiss Federal Institute of Technology Zurich and IBM built a water-cooled supercomputer that provides warmth to university buildings. Dubbed “Aquasar”, the system consumes as much as 40 percent less energy than a comparable air-cooled machine, IBM reckons.

Data Furnace ideas

Microsoft identifies some challenges to its Data Furnace idea, such as how to monitor and react to local changes in power and broadband usage, physical security, and the lack of dedicated system operators in a home. What is not discussed in the report is how servers would be cooled in warmer months, the risk of fire from overheating, and the potential noise that could come from so many servers.

While there’s still work to be done, the idea that electricity demand could be curbed by harnessing the heat from data centers and putting it to good use is exciting, and one that we’ll be following intently.

 



Smart Solutions with Emerson Network Power

Emerson Network Power has developed the Smart Solutions family of data center infrastructure for all data centers, regardless of their size or their operational and business objectives. Balancing data center best practices for capacity, space utilization, availability and efficiency has been difficult without making adjustments to the present infrastructure, hence these solutions.

 


How the Best Data Centers Ensure 24×7 Business Continuity

Continuous HADR monitoring means you don’t need to wait for a disaster recovery test to find out when your RPO and RTO goals are at risk.

View this 30-minute webinar and see how best-in-class data centers utilize a one-stop solution to manage their entire business continuity infrastructure:


Facebook’s new energy efficient data center

Facebook recently launched a green data center. Here are some of the important aspects of the design.


Best Practices for Data Center Cabling

Today’s data centers house a large number of diverse, bandwidth-intensive devices, including bladed servers, clustered storage systems, virtualization appliances, and backup devices, all interconnected by network infrastructure. These devices need physical cabling with a higher performance factor.

There are considerations outside of the cable plant and the number of connectors alone: usability, scalability, costs, and the ability to perform Moves, Adds and Changes (MACs). Additionally, some limitations exist based on the category of the cabling system. Copper and fiber distances may vary with the type of cabling system selected. We will discuss some of those parameters and their potential impact on data center designs.

This document will take you briefly through the process of effective cabling planning in the data center, which includes:

  • Understanding cabling requirements
  • Planning the cabling infrastructure
  • Selecting cabling components
  • Implementing and testing cabling
  • Managing the cabling infrastructure

Key Factors to be considered for Data Cabling Design:

Planning is key. You may be involved in wiring and cabling for a new data center or an upgrade of an existing one.

  • Review Common Media Types: Before you start, review the common media types used in the data center.
    A new media type may now be supported by your devices since the last time you did a similar project. For example, a data center you implemented a few years back may have had a 1 Gig backbone, while the current project needs a 10 Gig backbone, with servers that have 10 Gig NICs and switches that support 10 Gig connections.
  • Document the Current State: if you are upgrading an existing data center, document the topology and the various types of cables currently in use. Also document the proposed cable types, counts, distances, areas of concern, and any established internal cabling.
  • Plan to accommodate both copper and fiber: use copper and fiber as needed. Build in flexibility, so that the patching structure will allow a device to connect to any other device in the data center.
  • Use a structured approach: create a structure and implement it consistently. Plan a core-distribution, distribution-access, or MDF-IDF topology and follow it throughout.
  • Define a naming convention: define an easy-to-understand naming convention for all cabling, so that anyone can understand how the physical connections are made (see the sketch after the documentation example below).

For example:
The 10th port on patch panel 5 in rack 2 of the 3rd row could be

PC325-10      [PC{row}{rack}{patch panel}-{port}]

  • Document the cabling design: once cabling is done and servers are connected, it gets very difficult to trace cable connections, so it is always best to document and manage the cabling design.

For example:
Server1 is connected to port 4/5 on server_farm_sw1:

SS111-10>>SF124-5 (a path can have multiple levels of cable connections, but document it, and if it is too complicated, make a flow diagram for how to document it).
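
As a small illustration of the naming convention suggested above, the following Python sketch generates and parses labels of the form PC{row}{rack}{patch panel}-{port}. The single-digit row/rack/panel fields mirror the PC325-10 example; adjust the format string and pattern if your rows, racks or panels need more digits.

```python
# Generate and parse cable labels of the form PC{row}{rack}{patch panel}-{port}.
import re

def make_label(row: int, rack: int, panel: int, port: int) -> str:
    """Build a label such as PC325-10 (row 3, rack 2, patch panel 5, port 10)."""
    return f"PC{row}{rack}{panel}-{port}"

def parse_label(label: str) -> dict:
    """Split a label back into its fields; raises ValueError if it does not match."""
    m = re.fullmatch(r"PC(\d)(\d)(\d)-(\d+)", label)
    if not m:
        raise ValueError(f"not a valid cable label: {label}")
    row, rack, panel, port = map(int, m.groups())
    return {"row": row, "rack": rack, "panel": panel, "port": port}

print(make_label(row=3, rack=2, panel=5, port=10))  # PC325-10
print(parse_label("PC325-10"))                      # {'row': 3, 'rack': 2, 'panel': 5, 'port': 10}
```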

  • Modular Data Cabling: Modular cabling systems for fiber and copper connectivity are gaining in popularity. Modular cabling introduces the concept of plug-and-play, simplifying cable installation and reducing time and cost. In these systems, cables are normally pre-terminated and tested.
  • Trust the standards: Industry cabling standards are designed to protect the end user. Whether these standards are in draft or ratified state, they provide a firm foundation and guidance for establishing a coherent infrastructure.

There are a number of standards organizations and standards. The three best-known cabling standards are listed below:

United States           ANSI/TIA/EIA-568 from the Telecommunications Industry Association

International           ISO/IEC IS 11801 (Generic Customer Premises Cabling)

United States           TIA-942 from the TIA

  • Use Color to Identify Cables: Color provides quick visual identification. Color coding simplifies management and can save you hours when you need to trace cables. Color coding can be applied to ports on a patch panel.

[Figure: example color scheme for patch cables]

  •  General Media Standards: The following table lists widely used standard media types.

Application    Media                  Classification               Max. Distance   Wavelength
10GBASE-T      Twisted-pair copper    Category 6/Class E UTP       up to 55 m*     -
10GBASE-T      Twisted-pair copper    Category 6A/Class EA UTP     100 m           -
10GBASE-T      Twisted-pair copper    Category 6A/Class EA F/UTP   100 m           -
10GBASE-T      Twisted-pair copper    Class F/Class FA             100 m           -
10GBASE-CX4    Twinax (manufactured)  N/A                          10-15 m         -
10GBASE-SR     62.5 µm MMF            160/500 MHz·km               28 m            850 nm
10GBASE-SR     62.5 µm MMF            200/500 MHz·km               28 m            850 nm
10GBASE-SR     50 µm MMF              500/500 MHz·km               86 m            850 nm
10GBASE-SR     50 µm MMF              2000/500 MHz·km              300 m           850 nm
10GBASE-LR     SMF                    -                            10 km           1310 nm
10GBASE-ER     SMF                    -                            40 km           1550 nm
10GBASE-LRM    All MMF                -                            220 m           1300 nm
10GBASE-LX4    All MMF                -                            300 m           1310 nm
10GBASE-LX4    SMF                    -                            10 km           1310 nm

(A short media-selection sketch based on this table follows the cabling list below.)
  •  Horizontal Cabling: Use horizontal patch panels to accommodate cables in racks and manage them neatly. Cables should also be labeled and tied using cable ties or Velcro.
  • Vertical Cable Manager: A vertical cable manager is used to manage cables running between racks. Most vertical cable managers are covered and give a neat and clean view.
  • Overhead Cable Trays: Use overhead trays to manage cables. Separate power cables from network
    cables and do not keep them in the same area.
    As the diagram shows:
  1. Power cables
  2. Power cable trays
  3. Network cable trays
  4. Cable manager in rack
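
Building on the media table above, here is a rough Python helper that, given a required link distance, lists the media options whose quoted maximum reach covers it. The distances are the ones quoted in the table; always verify against current standards and the actual cable or fiber grade installed.

```python
# Pick candidate 10 Gigabit media for a required link distance, using the quoted
# maximum distances from the table above (simplified to one entry per media family).

MEDIA_MAX_M = {                            # media option -> quoted max distance in meters
    "10GBASE-T over Cat 6 UTP": 55,
    "10GBASE-T over Cat 6A UTP/F-UTP": 100,
    "10GBASE-CX4 twinax": 15,
    "10GBASE-SR over 62.5 um MMF": 28,
    "10GBASE-SR over OM3 50 um MMF": 300,
    "10GBASE-LRM over legacy MMF": 220,
    "10GBASE-LX4 over MMF": 300,
    "10GBASE-LR over SMF": 10_000,
    "10GBASE-ER over SMF": 40_000,
}

def options_for(distance_m: float) -> list:
    """Media whose quoted reach is at least the required distance, shortest reach first."""
    fits = [(reach, name) for name, reach in MEDIA_MAX_M.items() if reach >= distance_m]
    return [name for _reach, name in sorted(fits)]

print(options_for(90))    # Cat 6A copper, LRM/OM3/LX4 fiber, plus the single-mode options
print(options_for(500))   # only the single-mode options remain
```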


Conclusion

Data center cabling is a one-time project that lasts for the lifetime of the data center. Choose wisely and make it a masterpiece. If you design it right, it will definitely add to the success of your data center.

 



Enterprise: Multiple Data Centers, Global Presence

There are several reasons why a company may outgrow even the largest colocated environment. Most importantly, internet companies need to ensure availability – systems must be available 24 hours a day, and even a brief service outage can be disastrous. The customer must be protected against SPOF (“Single Point Of Failure”, or “all the eggs in one basket”) situations. While each discrete data center may incorporate redundant backup systems, each data center is also vulnerable to availability threats, no matter how large or well-designed the facility is. Potential threats to data centers include political issues, competition, financial matters such as taxation and government incentives, availability of sufficient raw resources such as electrical power and network capacity – and of course, “acts of god” such as broken fiber-optic cables, extended power outages or employee misconduct.

Redundancy & Fail-Over Capacity

At this point, the business must scale in such a way as to provide redundancy and fail-over capacity in order to maximize availability. Redundancy allows the enterprise to continue operations even if a major disaster occurs in one of its data centers by “failing over”: redirecting traffic from a failed facility to an operational one. An additional benefit of a global internet presence is the ability to provide localized services, for example to provide Google’s search engine service in China. Technologies such as anycast routing ensure the fastest possible service for users, wherever they are located.

Each Data Center has its own Unique Requirements

Organizations that require this level of infrastructure include Google, Amazon, Microsoft and IBM. The enterprise often must research, design, construct and maintain its own data centers from scratch. Each data center has its own unique requirements: a “peering hotel” requires a highly accessible location within a major city, whereas a “server farm” used for a company’s internal operations may be located in more remote areas. In this highly competitive industry, success is a matter of locating a suitable site (defunct factories and warehouses can be attractive options), ensuring sufficient power, bandwidth and human resources will remain available at a reasonable cost, and negotiating with local vendors and government agencies to foster the best possible business environment.

Nuts & Bolts Construction of Data Centers

In the current market, brisk trade occurs in “second-hand” data center properties. Specialized internet companies, in co-operation with property management and investment firms, often handle the nuts-and-bolts construction of data centers, which are then sold to larger, more established organizations. Or a failing internet company sells its assets when it goes out of business. In these cases, the critical metric for investors is revenue/costs per square foot. The average Canadian data center resells for between $1,500,000 and $100,000,000 depending on the facility and its associated resources; revenues of $20 to $200 per square foot can be expected.

 



Cage / Suite / Hotel / Warehouse colocation

“Fractional” colocation customers usually pay for “infrastructure”: space within rack cabinets, power and network bandwidth from a single provider. When a customer has populated four full racks, or is using substantial amounts of power or bandwidth resources, “cage” colocation becomes an attractive option. Cage colocation is suitable for very large installations, where the costs of purchasing and maintaining infrastructure such as backup power and air conditioning from a single source begin to become prohibitive. At that stage, “suite” colocation becomes a viable option. Suite colocation is generally sold by the square foot by property management companies, rather than hosting providers. Rather than obtaining power and network resources from a single source (the hosting company), the customer may opt to install their own resources. Bandwidth is generally available from many different providers within a facility, providing great flexibility to the customer. This type of colocation is particularly well suited to peering situations, where information is flowing to and from a large number of carriers. For example, a VOIP service provider would benefit from being able to negotiate multiple connectivity contracts which are used simultaneously: a cheap “short-hop” network like Cogent to service major North American cities, in combination with a higher-cost “long-haul” provider such as Global Crossing, Bell or Worldcom to provide overseas connectivity.

 



Colocation

The next logical stage of infrastructure development, after dedicated servers, is colocation. With this model, a customer is leased rack space, power and network bandwidth by the hosting provider. The provider also manages site security, air conditioning and other requirements of the data center. All servers and peripheral devices such as switches, routers, firewalls and load balancers are owned and managed by the customer.

Entry Level Colocation or Fractional Colocation

Entry-level colocation (often called “fractional colocation” when it is sold in units of less than a full rack cabinet) is available in various capacities based on rack space requirements, such as a single-U solution (for a single server), “quarter rack” (11 U, suitable for several servers plus peripheral devices), “half rack” (22 U) or “full rack” (44 U).

What All Does a Colocation Plan Include?

Colocation generally includes a basic amount of power service in the base price, for example “5 amps included with half rack plans”. Because the profit margins on power delivery tend to be high, customers need to consider their power requirements carefully when deploying colocated servers. If a single 1-U server burns 300 watts under maximum load, 5 amps of power (roughly 600 W at 120 V) is only sufficient for two such servers, leaving 20 units of space unusable.
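
A quick sketch of that power-budget arithmetic follows. The 120 V figure is an assumption (typical of North American colocation circuits); substitute 208/230 V if that is what your provider delivers, and remember that many providers also derate circuits to 80% of the breaker rating.

```python
# How many servers fit a colocation power allotment, under stated assumptions.

def servers_per_circuit(amps: float, volts: float = 120.0,
                        watts_per_server: float = 300.0,
                        derate: float = 1.0) -> int:
    """Whole servers that fit within amps * volts * derate watts of usable power."""
    usable_watts = amps * volts * derate
    return int(usable_watts // watts_per_server)

print(servers_per_circuit(5))                # 5 A at 120 V is about 600 W -> 2 servers, as above
print(servers_per_circuit(20, derate=0.8))   # a 20 A circuit derated to 80% -> 6 servers
```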

Network Bandwidth of Colocated Hosting Plan

Network bandwidth is also sold as a commodity item under colocated hosting plans. Bandwidth may be metered or “burstable” (a large amount of capacity is made available to you, and you pay for the actual amount of data transferred, allowing you to accommodate occasional “bursts” of utilization) or unmetered (you are guaranteed an amount of network capacity, and may utilize this full capacity at no extra cost).
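
To make the trade-off concrete, here is a sketch comparing the two billing models under purely hypothetical rates (a per-GB metered price and a flat fee for an unmetered commit); the break-even point illustrates why steady, heavy users tend to come out ahead on unmetered plans.

```python
# Compare metered/burstable billing against a flat unmetered plan.
# Both prices below are hypothetical placeholders, not real market rates.

METERED_PER_GB = 0.10       # hypothetical $ per GB transferred
UNMETERED_FLAT = 800.00     # hypothetical $ per month for an unmetered commit

def metered_cost(gb_transferred: float) -> float:
    """Monthly bill when you pay for the actual data transferred."""
    return gb_transferred * METERED_PER_GB

def break_even_gb() -> float:
    """Monthly transfer at which the metered bill matches the unmetered flat rate."""
    return UNMETERED_FLAT / METERED_PER_GB

print(metered_cost(2_000))   # a light month: $200, metered wins
print(break_even_gb())       # above about 8,000 GB/month the unmetered plan is cheaper
```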

Almost every business customer with a colocated hosting service will realize the best value from unmetered bandwidth. Although the unit costs of raw resources are much lower with colocated hosting, in most cases the customer is still “stuck” with a single provider of space, power, bandwidth and services.

 



Managed Dedicated Hosting

What is Managed Hosting?

A “managed” hosting plan offers customers additional support for their hosting infrastructure, at an additional cost. Managed services include monitoring service, maintenance of operating system updates, response to security threats and assistance in diagnosing performance anomalies. If a managed server is broken into by hackers or a service is degraded due to software bugs, it’s the hosting provider’s responsibility to fix the problem and restore service.

The “depth” and costs of available managed services vary widely by provider and customer.

Who Can Opt For Managed Hosting?

A simple, stable architecture using commodity open-source components like Apache, PHP and MySQL often has little need for managed services – the hosting customer can handle all maintenance internally, or hire their own resources at low cost when special needs arise.

On the other hand, customers with highly complex infrastructures will often benefit from the availability of managed services. For example, a large organization which uses expensive proprietary components such as SAP or Oracle (and in particular the requirement to comply with operational and security standards like ITIL or Sarbanes-Oxley) can select managed service options to maintain their infrastructure, reducing maintenance costs. Rather than paying internal specialists to develop and maintain systems, a managed services provider (MSP) can meet these needs on an “on-demand” basis, at much lower cost.

 

