CIM Large-Scale Data Centers Special Report

SPECIAL REPORT

Cabling and management considerations in large-scale data centers

Large-scale data centers, including hyperscale, co-location/multi-tenant, and some enterprise data centers, place significant demands on their network cabling systems. These facilities also place heavy demands on the management systems for cabling, networks, and airflow. This special report, compiled from material that has appeared in Cabling Installation & Maintenance magazine, examines some of the networking, cabling, and facilities-management considerations that currently face administrators of large-scale data centers.

REPRINTED WITH REVISIONS TO FORMAT FROM CABLING INSTALLATION & MAINTENANCE. COPYRIGHT 2016 BY PENNWELL CORPORATION. SOLUTIONS FOR PREMISES AND CAMPUS COMMUNICATION SYSTEMS WORLDWIDE.

In this report:
Past the horizon: Beyond 100G networking
Cable management's role in data center airflow efficiency
Established methods meet new tools for data center aisle containment



ORIGINALLY PUBLISHED MAY 2015

Past the horizon: Beyond 100G networking

Prominent developers of Ethernet technologies gaze into the future and

see 100-Gbit/sec transmission not as the endpoint, but as a building block.

By Patrick McLaughlin

WHEN THE ETHERNET ALLIANCE (www.ethernetalliance.org) debuted its 2015 Ethernet Roadmap at OFC in March, the organization also published a white paper that lends perspective to the roadmap document. Authored by John D'Ambrosia, chair of the Ethernet Alliance board of directors, and Scott G. Kipp, the alliance's president, the paper takes a moderately deep dive into some of what's in the planning stages for Ethernet standards and technologies.

“The Ethernet community is no longer locked into the notion of introducing new speeds in factors of 10; rather, the cast of Ethernet users has become so varied that no longer can such a diverse Ethernet ecosystem be expected to leap to any single, next given speed,” they wrote, after recalling that the “10x” path was broken in (perhaps ironically) 2010 when the IEEE completed specifications for 40- and 100-Gbit/sec Ethernet simultaneously. “Today the varied developers of IEEE 802.3—with projects based on specific use cases with clear objectives and solution spaces—is at work to deliver standards that meet the needs of well-defined users and applications,” the paper says.


400G and Tbit

400-Gbit Ethernet is one of four projects (along with 25, 5, and 2.5 Gbit) making their way through the IEEE 802.3 Working Group. About 400-GbE, D’Ambrosia and Kipp noted, “The Bandwidth Assessment Ad Hoc spent two years assessing the Ethernet market’s emerging application needs and concluded that 400-GbE would strike the correct balance among varied considerations including cost, power, density and bandwidth demand.” The development of 400-GbE was then officially launched in March 2013.

Later the paper explains, “The 400-GbE Task Force is using 16 lanes of 25-Gbit/sec technology in the CDFP form factor, but the industry also wants to use eight lanes of 50 Gbits/sec to create higher-density 400-GbE in the CFP2 form factor. 50-Gbit/sec lanes will enable 50-GbE in the SFP+ form factor and 200-GbE in the QSFP28 form factor. The speeds based on 50-Gbit/sec lanes should be available by 2020.”
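The form-factor figures in that passage reduce to simple lane arithmetic: the aggregate rate is the per-lane rate multiplied by the number of lanes. The short Python sketch below, which is purely illustrative and not part of the Ethernet Alliance material, works through the configurations the paper cites.

# Illustrative sketch only: aggregate Ethernet rate = lane rate x lane count,
# using the lane configurations cited in the Ethernet Alliance white paper.
lane_configs = {
    "400GbE in CDFP (16 x 25G lanes)":  (25, 16),
    "400GbE in CFP2 (8 x 50G lanes)":   (50, 8),
    "200GbE in QSFP28 (4 x 50G lanes)": (50, 4),
    "50GbE in SFP+ (1 x 50G lane)":     (50, 1),
}

for name, (lane_gbps, lanes) in lane_configs.items():
    print(f"{name}: {lanes * lane_gbps} Gbits/sec aggregate")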

Taken from the Ethernet Alliance’s 2015 Ethernet Roadmap white paper, this graph depicts the trend of increasing lane speeds. The bottom line shows how individual lanes are increasing from 10 to 25 to 50 Gbits/sec, and eventually 100 Gbits/sec in the 2020s.

[Figure: "The path to Terabit." Link speed (b/s) from 10G through 10T plotted against year, 2000 to 2030, distinguishing standards completed, speeds in development, and possible future speeds across serial speeds (SFP+), quad speeds (QSFP), and highly parallel speeds (e.g., CFP).]


That explanation was in the context of the Ethernet Alliance’s vision for a future Terabit-per-second specification. “The purpose of a roadmap is to show people where they can go. They want to go places they’ve never been,” the paper says, then: “The 2015 Ethernet Roadmap shows how the industry is progressing toward Terabit Ethernet (TbE). TbE is in the future and expected after 2020. Significant investment is needed to get to Terabit speeds.”

The post-2020 Tbit Ethernet will be enabled, in no small part, by the engineering feats that allow ever-higher data rates per lane. “Individual lanes are being increased from 10 Gbits/sec to 25 Gbits/sec to 50 Gbits/sec,” the paper says, referring to both current and future capabilities. “The first Terabit speeds could be 1-Tbit/sec (10 lanes of 100 Gbits/sec) or 1.6-Tbits/sec (16 lanes of 100 Gbits/sec). The 100-Gbit/sec lane technology is, thus, the building block for TbE.

“Technologies beyond 100-Gbit/sec lanes are very costly right now, and the industry will have to invest hundreds of millions of dollars before these technologies reach the cost points of Ethernet,” the paper asserts. “These technologies will become clear over time.”
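The same arithmetic shows why the paper calls 100-Gbit/sec lanes the building block for Terabit Ethernet. The sketch below, again purely illustrative, computes the two candidate first Terabit rates the paper mentions.

# Illustrative sketch only: candidate first Terabit Ethernet rates built
# from 100-Gbit/sec lanes, as described in the white paper.
LANE_GBPS = 100
for lanes in (10, 16):
    total_gbps = lanes * LANE_GBPS
    print(f"{lanes} lanes x {LANE_GBPS} Gbits/sec = {total_gbps / 1000:g} Tbits/sec")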

Fiber and connectivity

These excerpts paint only part of the comprehensive picture the Ethernet Alliance has produced with its 2015 Ethernet Roadmap and accompanying white paper. Both documents can be downloaded from the alliance’s website.

While speeds like 400 Gbits/sec and 1 or even 1.6 Tbits/sec are not in the short- or even mid-term plans for data center managers today, the concepts of parallel optics and multi-lane transmission are the here-and-now for many. As such, the general approach to cabling architecture being applied today may very well carry a data center through to some of these higher speeds. As far back as 2010, recommendations for array-style multi-fiber connectivity were being made


to data center managers who would one day want to migrate from 10-GbE to 40- or 100-GbE.

One article in particular prophetically advised, “The choice in physical connectivity is … important. Because parallel-optics technology requires data transmission across multiple fibers simultaneously, a multi-fiber (or array) connector is required. Using MPO-based connectivity in today’s installations provides the means to migrate to this multi-fiber parallel-optic interface when needed.” (“Migrating to 40 and 100G with OM3 and OM4 connectivity,” authored by David Kozischek and Doug Coleman, Corning Cable Systems [now Corning Optical Communications], November 2010)

That years-old advice remains relevant today and appears likely to hold for the future as well. Some much newer—in fact, still-in-development—fiber technology could be another puzzle piece. The Ethernet Alliance's D'Ambrosia and Kipp described 100-Gbit/sec lanes as the "building block" for future high speeds, and a multimode fiber type currently under development could be an essential piece of that building block. Dubbed wideband multimode fiber (WBMMF), the optical fiber will support wavelength-division multiplexing (WDM).

Paul Kolesar, engineering fellow in CommScope's (www.commscope.com) enterprise solutions division, introduced the WBMMF concept in a blog post in late 2014, saying the need for a WDM-capable multimode fiber has become self-evident. "Existing OM3 and OM4 multimode fibers have a rather limited ability to support high-speed transmission using wavelengths different than the 850-nm wavelength for which they are optimized," Kolesar explained. "WBMMF can support four or more wavelengths to significantly improve capacity. For example, this new fiber type could enable transmission of 100 Gbits/sec over a single pair of fibers, rather than the 4 or 10 pairs used today."


In October 2014, the Telecommunications Industry Association (TIA; www.tiaonline.org) accepted a project request to initiate a standard document specifying a multimode fiber that can support WDM. The multimode fiber specified will enable transmission of at least 28 Gbits/sec per wavelength, totaling at least 100-Gbit/sec transmission capability.
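The fiber-count savings behind WBMMF follow from multiplying the rate per wavelength by the number of wavelengths carried on a single fiber pair. The sketch below compares today's parallel-optic options with a WDM approach using the figures quoted above (four wavelengths at a minimum of 28 Gbits/sec each); it is an illustration of the arithmetic, not a product or standard specification.

# Illustrative sketch only: duplex-fiber counts for 100-GbE with parallel
# optics versus wavelength-division multiplexing, using figures quoted above.

# Parallel optics: each lane occupies its own fiber pair.
pairs_4x25 = 4      # 4 lanes of 25 Gbits/sec -> 4 fiber pairs, 100 Gbits/sec
pairs_10x10 = 10    # 10 lanes of 10 Gbits/sec -> 10 fiber pairs, 100 Gbits/sec

# WBMMF with WDM: four wavelengths of at least 28 Gbits/sec share one pair.
wavelengths = 4
gbps_per_wavelength = 28
wdm_capacity = wavelengths * gbps_per_wavelength   # 112 Gbits/sec >= 100 GbE

print(f"4 x 25G parallel optics: {pairs_4x25} fiber pairs")
print(f"10 x 10G parallel optics: {pairs_10x10} fiber pairs")
print(f"WDM over WBMMF: 1 fiber pair, {wdm_capacity} Gbits/sec")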

If 100-Gbit/sec transmission is indeed a building block rather than a finish line for high-speed data transmission, it appears as if the combination of established architectures and connectivity, along with emerging optical fiber capabilities, will enable cabling systems to support that building block.

PATRICK McLAUGHLIN is our chief editor.


ORIGINALLY PUBLISHED FEBRUARY 2015

Cable management’s role in data center airflow efficiency

It might not be the proverbial biggest fish, but the routing and maintenance of cables is a fish nonetheless in the ecosystem of data center energy efficiency.

By Patrick McLaughlin

IT HAS BECOME SOMEWHAT CLICHÉ to refer to a data center as an ecosystem. But as with many clichés, it got to be one because of the statement’s fundamental truth. “Ecosystem” is an appropriate term for a data center’s network and facilities systems because they all are interdependent at least to some extent. And a change, whether it is an improvement or an inefficiency, in one system very likely will affect multiple others. In that vein, the management of a data center network’s physical-layer cabling can and often does have an effect on the flow of cooling air in the facility. If cable management serves to improve airflow, the entire ecosystem—including the all-important cooling of network equipment—also improves. If cable management inhibits airflow, the opposite becomes true and the cabling then becomes an inefficiency in the ecosystem.

In perspective, cabling is by no means the proverbial “biggest fish” when it comes to data center network operations, their impact on airflow, and the consequent results related to energy efficiency. But it is a fish nonetheless. Some of that perspective was provided by Ian Seaton, a critical facilities consultant who was a long-time technical staff member with Chatsworth Products Inc. (CPI; www.chatsworth.com). Seaton now provides consulting services for firms including CPI, Upsite Technologies


(www.upsite.com) and others. Seaton delivered a presentation during a webinar hosted by Cabling Installation & Maintenance. His presentation, titled "Achieving effective airflow management in challenging networks," addressed cabling-related issues including cable distribution and management. The sheer number of cables used with some of today's large switches makes cable management a significant practical challenge. Challenging as it may be, though, one conclusion Seaton drew was that "Good cable management practices enhance airflow management strategies."

Cabinets grow up and out

The fact that massive amounts of cabling need to be managed in data centers is not breaking news. Nearly four years ago, analysis by what was then called IMS Research (which has since been acquired by IHS) examined drivers that have caused an increase in the market for taller-than-42U enclosures within data centers. Liz Cruz, a senior analyst for data centers, cloud and IT infrastructure with IHS, conducted the research and issued a report in spring 2012. At that time she cited "increasing server depths, more cabling within cabinets, the need for airflow management and the desire to maximize floor space within data centers" as primary drivers of taller cabinets.

Cruz forecasted shipments of 48U cabinets to grow an average of 15 percent annually over the following five years, with 42U-rack shipments growing at 5 percent. And while cabinets were predicted to get taller, they also were predicted to get wider. The analyst also explained in 2012 that the standard cabinet width was 600 mm but “going forward, shipments of 750- to 800-mm-wide cabinets will grow at nearly twice the rate of 600-mm cabinets. In terms of depth, the 1100-mm category currently accounts for the greatest share, but 1200-mm will grow faster than any other depth in percentage terms.”
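To put those forecast rates in rough perspective, 15 percent compounded annually roughly doubles shipments over five years, while 5 percent compounds to about a 28 percent increase. The quick calculation below assumes the rates apply uniformly each year, which is a simplification rather than anything stated in the report.

# Illustrative sketch only: compound growth implied by the forecast rates,
# assuming a uniform annual rate over five years.
for label, annual_rate in (("48U cabinet shipments", 0.15), ("42U rack shipments", 0.05)):
    factor = (1 + annual_rate) ** 5
    print(f"{label}: x{factor:.2f} over 5 years at {annual_rate:.0%} per year")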


Cabling was one of several factors influencing this anticipated change. The analyst explained that greater computing densities at the rack level were primary causes. These densities result in more cabling within cabinets and more heat generated in them as well. Those two realities were driving up the cabinets’ width and depth, to accommodate cable management and airflow. “Growth in power densities are not expected to level out anytime in the near future, which means neither will enclosure sizes,” Cruz stated then.

Dos and don’ts

Lars Strong, P.E., a senior engineer with Upsite Technologies, wrote in December 2014 about the impact of taller racks on data centers and airflow management in particular. Citing the logistical limitations that are likely to keep rack heights at around 48U instead of 51 or 52 U in many cases, Strong elaborated, “On top of the challenges that are accompanied with installing taller racks, cable management also becomes a significant problem. The taller the racks, the more servers can be deployed, and the more cables you have. Cable management must be done well and kept tight, and to the sides of enclosures to allow clearance for exhaust air to freely leave the cabinet.

The amount of cabling used with a large network switch makes the management of that cabling a significant challenge. Photo: Chatsworth Products Inc.


“However, even if cables are properly managed, sometimes there simply isn’t enough space in the back of the cabinet. This increases the demand for wider and deeper cabinets to accommodate more cables.”

That article from Strong appeared in Upsite's blog. He also wrote an article on the blog titled "10 tips to improve PUE through cable management." In that article he said, "If cables are improperly placed and block airflow, your cooling units are forced to work harder, albeit inefficiently, which negatively impacts your PUE [Power Usage Effectiveness]. How you manage your cables is an important part of your overall airflow management strategy, but one easily overlooked as the two are not often associated with each other."
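For context, PUE is the ratio of total facility energy to the energy delivered to IT equipment, so extra cooling work for the same IT load pushes the value further above the ideal of 1.0. The sketch below uses made-up numbers solely to show the direction of the effect Strong describes; the figures do not come from the article.

# Illustrative sketch only: PUE = total facility energy / IT equipment energy.
# The kWh figures are hypothetical, chosen to show how added cooling load
# (for example, from blocked airflow) raises PUE.
def pue(it_kwh, cooling_kwh, other_kwh):
    return (it_kwh + cooling_kwh + other_kwh) / it_kwh

print(f"Well-managed airflow:   PUE = {pue(1000, 400, 100):.2f}")  # 1.50
print(f"Cooling working harder: PUE = {pue(1000, 550, 100):.2f}")  # 1.65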

He divided his 10 tips into three areas: cable management in the raised floor, cable management in the rack, and cable management overhead. Strong characterized each tip as a “do” or “don’t.”

For underfloor cable management, he advised, "Do place cable trays under cabinets or hot aisles. This allows the raised floor space under the perforated tiles in the cold aisle to remain free. Do place cable management trays as high as possible, allowing air to flow underneath them. This is particularly important when running cable trays close to or in front of cooling units where most of the airflow movement is close to the floor … Do place cable trays at a consistent height as much as possible. This allows conditioned air to flow in a straight path. Don't place cable management trays underneath the cold aisle. They may end up under perforated tiles."

When cables are fed through the top of an enclosure or cabinet, a sealing device can help preserve airflow efficiency. Shown here is Upsite Technologies' four-inch HotLok round rack-mount grommet.

Within the rack, Strong says, “Do use wider cabinets with cable management built into the side and not right behind the exhaust ports. Do use deeper cabinets that allow the air more room to escape vertically. Do use blanking panels. When cables increase the pressure within the cabinet, blanking panels become especially important. Don’t block the exhaust from servers, particularly ones with high volume and velocity fans.”

Overhead cable management has a “do” and a “don’t.” According to Strong, “Don’t place cable management trays high above the cabinets. In rooms without a ceiling plenum return, it forces hot air returning to the unit to go under the cable trays and closer to IT intakes, which can cause hot spots. Do place cable management trays within a few inches of the top of an IT cabinet so that all exhaust air flows to the top of the room and over the top of the cable trays. This can actually improve the airflow management in the room.”

In an interview with Cabling Installation & Maintenance, Strong shared that, from what he sees, cable trays are most often installed without much regard for airflow management. Often they are installed as he recommends—at a decent height that is not too high in the room—but that decision was made without airflow in mind.

Strong advocates the convening of what he calls an ICE team—integrated critical environment team—to make decisions about data center and computer room spaces. “It’s a concept we’ve shared and was coined by the Uptime Institute,” he said. Members of the ICE team typically include personnel from corporate real estate, facilities, an IT executive and a data center manager from IT. “A couple people who are in the room [computer room or data center] every day, and a couple people


from the C-suite who don't walk into the room very often," he said. Quite often, Strong pointed out, organizations' conversations related to the data center focus on organization structure and on considerations that would prevent problems or otherwise allow operations to flow more smoothly.

Strong pointed out that issues can arise when cabling is fed through the top of a cabinet; when that happens, hot air escapes through the top of the cabinet. Two practical approaches can minimize or eliminate the effects of routing cables through the top of a cabinet. One is the use of a sealing mechanism such as a grommet. The other is, when aisle containment is being used, to ensure that the containment is placed at the cabinet's front edge, so the entire top of the cabinet, including the hole through which cables pass, is in the hot aisle.

Containment is one of airflow’s “big fish” in the data center ecosystem. But cable management, though a small fish, remains important.

PATRICK McLAUGHLIN is our chief editor.


ORIGINALLY PUBLISHED JUNE 2015

Established methods meet new tools for data center aisle containment

By Patrick McLaughlin, Chief Editor

Some aspects remain the same, such as whether to contain the hot or cold aisle, while others evolve, like how to deal with various cabinet heights.

IN SOME WAYS, the challenges of keeping data center equipment sufficiently cool are the same as they have been for some time. In other ways, the challenges evolve with overall data center trends. For example, discussing whether an isolation strategy should contain the hot aisles or the cold aisles is like debating the world's greatest baseball player or rock band: the conversation is timeless, and no matter when it takes place, there's a case to be made for either side.

While not coming down squarely on one side or the other, some providers of equipment used for aisle containment provide information acknowledging the benefits of each. Simplex Isolation Systems (www.simplexstripdoors.com) says, “There is a huge discussion in the industry about whether it makes more sense to isolate the hot aisle or the cold aisle. Different data center experts advocate different theories. The reality is your decision on this question, especially in a legacy data center, is largely decided by your existing infrastructure. Where is the cold air coming from? Where does the warm air have to go in order to be exhausted from the data center or rerouted back into the HVAC or CRAC units? Certain factors will dictate whether you isolate the hot aisle or the cold aisle. Every case is site-specific.”


Polargy (www.polargy.com) describes the options as follows: “Hot aisle containment focuses on isolating hot exhaust air on its return to the CRAC units or on its way out of the building. And this method of containment is clearly the trend for most new enterprise data center builds. Why do so many architects and engineers prefer and specify this method of air segregation? Three reasons. One, with the often-accompanying energy efficiency measure of airside economization, designs that flood the whole room with cold air are easy to build. Two, it is much easier to achieve airflow balance with a common cold zone. Three, a common cool area in the space is more comfortable for users. For these reasons, containment on the hot side sees wide adoption in new data center builds.”

Of cold aisle containment, the company says it “confines the cold supply air within the aisle so that it is only available to the equipment and cannot escape out the aisle ends or over the top of the aisle. The most common model of this containment approach is aisle end doors and roof panels on a raised floor with perimeter CRACs. Cold aisle containment is widely used on retrofits of existing sites because the roofing approach can avoid the need to modify fire suppression, and fits below existing cable trays and other obstructions. Colocation and wholesale providers like this model because it provides flexibility for layout changes. With individual contained cold aisles, it is particularly important to monitor and balance the airflow within the aisles. And in all cases, the roofing model needs to be approved by the local fire marshal.”

Simplex brings cabling into the conversation: “Data center managers can spend a lot of energy, time, and money making sure they have sealed off the wide open walls in a data center, but they will overlook the small area above a rack or wiring loom, or the hole in a Plexiglas wall through which wires and cables pass. But air is like fluid in a data center. It will take the path of least resistance and you will experience serious leaks in these overlooked areas as cool air comes rushing through to mix with the warm air. Specially designed short curtains and brush seals can be used to close off these areas.”


Simplex and Polargy both offer curtains as well as doors and other air-isolation products and systems for the purposes described here. Another provider of isolation products and systems, Upsite Technologies (www.upsite.com), recently added to its lineup of offerings to accommodate the fact that some data centers are installing racks taller than 42U. Upsite's AisleLok Modular Containment line of products includes bidirectional doors and adjustable rack-gap panels in 42, 45, and 48U heights. The company said it launched these new sizes in response to the growing demand for taller equipment cabinets.

Upsite’s senior engineer and science officer, Lars Strong, took to the company blog to explain the need for, and implications of, taller cabinets or racks. “On top of the challenges that are accompanied with installing taller racks, cable management also becomes a significant problem,” he explained. “The taller the racks, the more servers can be deployed, and the more cables you have. Cable management must be done well and kept tight, straight, and to the side of enclosures to allow clearance for exhaust air to freely leave the cabinet.

“However, even if cables are properly managed, sometimes there just simply is not enough space in the back of the cabinet. This increases the demand for wider and deeper cabinets to accommodate more cables … When taller racks are installed in a row, very rarely is the entire row changed out. Most of the time taller cabinets are added to existing rows of shorter cabinets, creating a ‘skyline’ effect. This creates challenges for the installation of containment systems.”

Strong continued, “At the aisle level, it’s common for there to be just enough conditioned airflow to meet the IT airflow demand. In these situations [taller cabinets containing more equipment] the volume of conditioned air delivered to the aisle will need to be increased before additional IT equipment can be added. This usually only requires better management of the open areas in the raised floor to direct the air where it’s needed.”


Chatsworth Products Inc. (CPI; www.chatsworth.com) offers the Build To Spec (BTS) Kit as part of its hot-aisle containment portfolio. The company offers separate hot-aisle and cold-aisle containment systems.

The BTS kit addresses the “skyline” situation in which a row contains cabinets of various heights. According to CPI, the BTS kit “includes all the components needed to construct a ceiling-supported or cabinet-supported duct to capture and direct airflow in the contained aisle. This design adapts to a mix of cabinets and allows cabinets to be changed when required.”

For facilities that use the BTS Hot Aisle Containment Solution, CPI advises, “The specific combination of components needed to create a complete solution will depend on a number of factors, including room layout, ceiling height, and what types of cabinet models are selected.” The company also offers a pre-installation site survey, during which a technical representative from CPI visits the deployment site to provide a detailed recommendation. CPI also offers supervision services for the on-site installation of the BTS Hot Aisle Containment Kit.

Upsite's Strong sums up: "The trend towards taller, deeper, and wider IT equipment racks continues to grow with increasing business and customer demands. Although this growing trend often allows for better space utilization, it doesn't come without its challenges. In order to ensure the effectiveness and efficiency of installing larger cabinets, a rigorous and holistic approach to layout and cable and airflow management must be taken."

Chatsworth Products Inc.'s Build To Spec Kit, a hot-aisle containment system, accommodates the "skyline" look of rows that contain cabinets of various heights.

Carrying out such an approach may incorporate some combinations of the tools discussed here.

PATRICK McLAUGHLIN is our chief editor.