
THE NEED FOR SPEED
How smart networks enable companies to succeed in the digital economy

Speeding up delivery

The more we are capable of doing, the more we will be expected to do as businesses

EDITORIAL
Global Managing Editor: Bill Boyle
Global Editor: Peter Judge
News Editor: Max Smolaks
Contributing Editor: Martin Courtney

SALES
Sales Manager: Yash Puwar
EMEA Sales Manager: Vanessa Smith

DESIGN
Head of Design: Ross Ellis
Designer: Jess Parker

PUBLISHING DIRECTOR
Jon McGowan

JUNIPER CONTACTS
Snr Director, Enterprise Marketing, EMEA: Lutz Klaus - [email protected]
Snr Director, Corporate Communications, EMEA: Penny Still - [email protected]

Find us online: datacenterdynamics.com | datacenterdynamics.es | datacenterdynamics.com.br | twitter.com/DCDFOCUS
Join the DatacenterDynamics Global Discussion group at linkedin.com

Subscriptions: www.datacenterdynamics.com/magazine
To email one of our team: firstname.surname@datacenterdynamics.com

Welcome to this DatacenterDynamics supplement – The Need for Speed – produced in partnership with Juniper Networks. The supplement gets its name from a by-product of the internet age: the ever-increasing pace of innovation that is sweeping us along in waves of new working practices, new technologies and new ways of doing business. This need for speed is not going to change – in fact, the world is almost certain to become even faster-paced. The more we are capable of doing, the more we will be expected to do as businesses. And the speed at which we will need to react will eventually become so immediate that only wholesale automation of our networks will give us an edge on the competition.

As that happens, a number of paradoxes are developing and fundamentally changing the way businesses run, interact and plan their IT strategies. One of them is that businesses with legacy IT kit now have the option to ditch most of it and go to the cloud for much of their operations, as long as they adopt the correct mixture of public and private offerings. As we point out in our article, Web Services – the Final Networking Frontier (page 4), the organizations using web services successfully are those that have chosen their networking infrastructure carefully and taken advice about future upgrade strategies.

One of the challenges many cloud service providers and large enterprises will face is managing the sheer volume of applications and services they have to host, and the speed of that growth. We look at whether software defined networking (SDN) and network functions virtualization (NFV) will solve this problem, and when they are likely to fulfil their promise. We also look at how the open source movement is shaping up. And, finally, we revisit that old favorite – the demise of colocation – in Long Live Colocation (page 14). According to the analysts, colocation is in good health: like Lazarus, it has returned from the dead and is growing very nicely, thank you, because of the rapid growth in cloud services. We hope you find this supplement on our increasing need for speed useful.

Bill Boyle, Global Managing Editor

Contents
Web services  4
SDN and NFV  6
Infographic  9
Internet of Things  10
Colocation  14

MPLS
Stands for multi-protocol label switching. It’s a protocol for speeding up and shaping network traffic flow.


Web services – the final networking frontier
Emerging technology makes web services a possibility for smaller enterprises. Bill Boyle looks at the trends

Web services are built on a set of emerging standards designed to enable integration between all IT processes and systems, and the data center and cloud applications. The idea is to create a new type of self-contained application that can work across all business environments – from the most basic to the most complex.

Until recently, the exchange of information between computer systems could become extremely complex. Before the concept of web services became a reality, integration between proprietary systems and incompatible data formats made the sharing of data very difficult.

At their most ambitious, web services applications may have the ability to provide a common standard for interoperability between disparate systems.

The emergence of web services can be likened to the process of creating a nation out of a number of warring city states. Only after the fighting factions agree a set of common standards – an agreed language, agreed weights and measures, a common currency, and so forth – can true unity be achieved. The same is true in breaking down the barriers to web communications.

Clive Longbottom, service director at Quocirca, says: “The first issue is helping a company understand what a web services company is. A web service is not just moving something from an on-premise environment to a hosted environment.”

Web services need robust networks to operate efficiently. However, one of their major advantages is their adherence to open standards. This means a web service written for one proprietary platform can communicate with another written on a different platform.

Many businesses are in the process of reinventing themselves to become web service businesses. However, they have a number of hurdles to negotiate before they can complete their journey. The first is to ensure their network infrastructure is robust enough. Many businesses have improved their networking infrastructures, only to fall foul of a subsequent lack of planning.

If you are transitioning to a web services model, you will have to ensure your data center provides a growth engine for driving next-generation automated service provision. Automation is the only way to ensure that complexity is squeezed out of the process and that human intervention is kept to a minimum. Humans mean risk, and risk means mistakes. Only by eliminating the human element can we stay on the road to full automation.

Tony Lock, distinguished analyst at Freeform Dynamics, says: “Some companies that started well with web services have taken it all back in-house.” In some cases, they have just overextended themselves, but those that have been successful are certainly reaping the benefits.

“Networking is key to web services,” says Lock, “particularly as companies become more centralized and users become further removed from the services they are consuming, as well as using resources that are hosted in different locations – your own or hosted service or cloud providers. Most networks still fail – you need the resilience to deal with those failures, and you need higher management on the planning side of things.”

As Lock says: “IT does not want to make these decisions; the business needs to make these decisions. IT can do the policing, but there have to be laws laid down by the business. All too often there is confusion.”

Lock says that networking is absolutely vital to the daily business function. “For the past 10 to 15 years, things network-wise have been good enough. But now the sheer volume of data that has to be moved, manipulated, shared and stored is growing enormously. Just look at how many video files are flying around now. And as the network evolves, we need better network management tools.”

“If this is going to be a relatively air-locked service, then the only real networking issue will be the connection between the people using the service and the service itself. Whereas in the past there were many such services (expense management, payroll, accounting, etc), this is far less the case now, as interoperability between different systems becomes more key.”

Longbottom says that any company wanting to offer web services needs to understand where their services fit within any composite application. “Is it a key, mission-critical part, or is it a peripheral, nice-to-have part? How does it need to interact with other parts of the process that the composite app is facilitating?”

He explains that by colocating systems in a large facility, there is the capability to integrate into other web services at data center speeds. This is currently being offered by Virtus and Equinix. And by having lots of web services all in one place, the root cause of any issue is easier to identify and fix, particularly if the colocation owner provides tools such as DCIM and response monitoring.

Colo owners should also offer multi-provider external connectivity with failover and peering to other systems, such as AWS, Azure and so on, making the integration of external web services easier. Many are offering specialized connectors to these third-party clouds in the way that Rackspace offers the AWS connectivity it manages.

When it comes to provisioning web services, automation has to be a priority. Organizations looking to change their business processes on a regular basis cannot afford to reprovision systems manually. There has to be both automated provisioning of the overall required platform and the flexibility and elasticity to ensure the web service performs as required at all times.

As the move to microservices quickens the use of containers, and intelligent workload automation tools enable containers to work automatically in real time, automation will define the winners and losers in the web services world.
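To make automated provisioning concrete, here is a minimal sketch, not drawn from the article, that uses the Docker SDK for Python to bring up a containerized service with no manual steps; the image name, port mapping and restart policy are illustrative placeholders.

# Minimal sketch of hands-off container provisioning using the Docker SDK
# for Python (pip install docker). Image, port mapping and restart policy
# are illustrative placeholders, not details from the article.
import docker

client = docker.from_env()  # connect to the local Docker daemon

container = client.containers.run(
    "nginx:alpine",          # example service image
    detach=True,             # return immediately; the container runs in the background
    ports={"80/tcp": 8080},  # expose the service on host port 8080
    restart_policy={"Name": "on-failure", "MaximumRetryCount": 3},
)
print("provisioned container:", container.short_id)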

“A web service is not just moving something from an on-premise environment to a hosted environment.” Clive Longbottom, Quocirca

The need for speed
SDN and NFV are delivering a more agile and future-proof network for service providers and enterprise. Martin Courtney reports

Cloud service providers and large enterprises need more responsive networks, and these are being offered by the new technologies of SDN (software defined networking) and NFV (network functions virtualization).

Among the benefits offered by these two developments are the ability to place applications dynamically in data centers, to implement microservice architecture and microsegmentation, to deal with traffic congestion and quality of service (QoS) demands dynamically, to improve network utilization, and to provide network functions in an agile manner. Between them they can allow lower operational expenditure (opex) and more innovation, as well as automation and simplicity.

SDN is often described as separating the control and data plane of a network. The control plane consists of the “signalling” functions of the network which make decisions about where traffic is sent, while the data plane is the underlying network that routes traffic to the correct destination. SDN defines protocols and standards so that these two can be decoupled.

93% of IHS survey respondents expect to be engaged in SDN by 2018

This allows for greater simplicity in building and managing networks. SDN controllers and managers run on servers, handling control plane tasks. They integrate “upwards” with higher-level service management functions, and “downwards” to manage multiple kinds of network equipment, including branded devices and “white label” original device manufacturer (ODM) equipment.
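As a concrete illustration of that split, the sketch below, which is not from the article, shows an application asking a controller’s northbound REST interface to install a forwarding rule that the switch’s data plane will then apply. The endpoint path and field names follow Floodlight’s static flow pusher and should be treated as assumptions; they vary by controller and version.

# Push a simple forwarding rule to an SDN controller's northbound REST API.
# The controller address, endpoint path and field names are assumptions
# modelled on Floodlight's static flow pusher; adjust for your controller.
import json
import requests  # third-party HTTP client

CONTROLLER = "http://127.0.0.1:8080"          # hypothetical controller address
FLOW = {
    "switch": "00:00:00:00:00:00:00:01",      # datapath ID of the target switch
    "name": "forward-port1-to-port2",
    "priority": "100",
    "in_port": "1",
    "active": "true",
    "actions": "output=2",                    # control plane decides, data plane forwards
}

resp = requests.post(CONTROLLER + "/wm/staticflowpusher/json",
                     data=json.dumps(FLOW), timeout=5)
print(resp.status_code, resp.text)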

Network functions virtualization (NFV) virtualizes network node functions so they can be put together as building blocks. This can be implemented on top of an SDN network or other architecture. The idea of NFV emerged from the telecommunications industry in 2012, and more than 38 telecoms players have signed up to the European Telecommunications Standards Institute Network Functions Virtualization Industry Specification Group (ETSI NFV ISG).

By the end of 2014, 34 NFV ISG proof-of-concept deployments had already been demonstrated or were in progress. These address a variety of NFV elements, including architectural frameworks for compute, hypervisor and network domains, service management and orchestration, security and trust, resilience and service quality metrics.

Participants include AT&T, NTT Com, BT, Telefónica, Orange and Deutsche Telekom. The next two years will see the focus move beyond requirements and towards industry transformation as telcos and service providers introduce NFV technology into live networks, with key topics including interoperability, operations and collaboration with other industry groups, including open source consortia.

The potential use cases of NFV are wide-ranging. Cloud radio-access networks (C-RAN) are being deployed by China Telecom to replace cellular base station hardware with centralized, cloud-based software functions.

Other examples of virtualized network functions (VNFs) include virtual CPE (customer premises equipment) which allow service providers to deliver services with less hardware placed on the user’s premises. BT has earmarked these as substitutes for devices such as firewalls, load balancers, WAN optimization appliances and other dedicated devices in locations that previously had to be serviced by on-site engineers.

LTE-based mobile networks operate with the evolved packet core (EPC) standard, which lends itself to virtualization, so that now most mobile operators are operating a virtual EPC as a VNF.

German cloud engineering and consultancy company CloudSeeds is one example, having started with SDN-enabled data center infrastructure from its inception in 2012, when founder Kevin Fibich realized that any requirement to scale its infrastructure as a service (IaaS) hosting business beyond a certain size would create problems within the Layer 2 part of the network.

“We saw that SDN would be a great help because it allowed us to build a pure Layer 3 network,” he says. “We had worked with the open shortest path first (OSPF) routing protocol and border gateway protocol (BGP) for years, and knew how flexible and scalable networks could be without Layer 2 problems. SDN gave us a scalable infrastructure that could just grow [with the business].”

CloudSeeds also wanted to automate its application delivery using standardized hardware and software infrastructure, and to support backup and disaster recovery services by building virtual networks that span multiple, geographically distant data centers.

6% of IHS survey respondents expected to be engaged in SDN by the end of 2015

A survey of US enterprises, published in February 2015 by research company IHS, suggests that the top SDN use cases in support of capex and opex savings are based on the same imperatives: automated application provisioning and disaster recovery, alongside hybrid clouds designed to span on-premise and hosted data center architecture.

After testing rival vendor equipment, the IaaS provider installed Juniper’s Contrail Networking automation solution for cloud SDN. Contrail Networking is developed by Juniper and a larger community of developers and users like CloudSeeds in the OpenContrail open source project. On the Layer 3 switching underlay fabric, Contrail Networking orchestrates each tenant’s private virtual overlay networks and creates NFV-style service chains for smaller virtual firewalls for CloudSeeds’ customers.

While this overlay solution supports a multi-vendor Layer 3 underlay, CloudSeeds also used Juniper’s QFX5100 Ethernet switch in a high-performance data center fabric, alongside the Juniper SRX1400 Service Gateway Firewall and Juniper’s MX80 3D Universal Edge Router for diverse wide area network (WAN) connectivity options.

Having worked with Juniper Networks in the past, the CloudSeeds technical team liked the vendor’s approach to open APIs and software management. This allowed it to install the Puppet configuration management agent directly onto switches running the Junos operating system, so the entire physical underlay architecture could be provisioned using its usual automation tooling.

“We had APIs where we could directly configure devices without having to do workarounds,” explains Fibich. “The configuration is completely versioned, which is a really great thing because, when we start to automate network device configuration, it has this history of pre-configuration, where we can see what the automation process did, what configuration changes were applied, and if something went wrong, why it did so.”
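CloudSeeds drove this through Puppet agents on the switches; purely to illustrate the same configuration-as-code idea, the sketch below uses Juniper’s PyEZ library (junos-eznc) to stage, diff and commit a change on a Junos device. The hostname, credentials and configuration line are placeholders, not details from the case study.

# Illustrative configuration-as-code against a Junos device using PyEZ
# (pip install junos-eznc). Host, credentials and the config line are
# placeholders; real deployments would pull these from automation tooling.
from jnpr.junos import Device
from jnpr.junos.utils.config import Config

with Device(host="switch1.example.net", user="automation", password="secret") as dev:
    cfg = Config(dev)
    # Stage a candidate change kept under version control elsewhere.
    cfg.load('set interfaces xe-0/0/1 description "tenant uplink"', format="set")
    print(cfg.diff())                          # review exactly what will change
    cfg.commit(comment="automated underlay change")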

Now CloudSeeds is experimenting further with SDN to see how it can be used to implement security applications such as firewalls or anti-virus on the fly without needing to make changes to the underlying network.

“Another interesting feature is the service chaining concept. We are seeing how we can use SDN in a dynamic fashion so that if a customer needs distributed denial-of-service (DDoS) protection, for example, we are able to put that in the system and switch it on without impacting service uptimes,” he says.

The fact that CloudSeeds is a startup is significant: the company was able to build its data center architecture from scratch, without having to think about any legacy server, storage or network equipment integration. Few other SDN/NFV deployments, whether enterprise or telco, have that luxury, and in most cases the complexity involved in configuring, managing and maintaining virtual and physical network hardware and software within the same network still causes apprehension among potential customers.

Clifford Grossner, research director for data center, cloud and SDN at IHS, points out that none of these issues are new, however, and that the slow pace of adoption is inevitable given the relative immaturity of SDN technology. While only six per cent of companies responding to the IHS survey expected to see their organizations engaged in live production SDN networks by the end of 2015, for example, that figure rises to 23 per cent by 2016 and 93 per cent by 2018.

“I don’t know of any technology – especially when you are talking about a network that is a complex animal – that would be any different. It takes time for everything to happen, because first versions are never mature, and you need to get to a point where people understand what they can do with it before it really starts to grow. And that can take years.”

SDN AND NFV PROJECTS

OPENFLOW
The first protocol defined to link the control and data planes of an SDN architecture, OpenFlow allows compatible switches to be manipulated and managed directly. It is managed by the Open Networking Foundation (ONF), whose members include Facebook, Google, Juniper Networks, Microsoft and Verizon. OpenFlow version 1.1 was published in 2011; it is now on v1.4.

OPENCONTRAIL
An open source project offering cloud networking automation, OpenContrail was seeded by Juniper Networks and is available under the Apache 2.0 license. It provides components for network virtualization, including an SDN controller, virtual router, analytics engine and published northbound APIs. It is configured, and exposes data, through a REST API or its GUI (see the sketch after this list).

OPENDAYLIGHT
Backed by the Linux Foundation, OpenDaylight is a collaborative, open source initiative with broad vendor support. It has released three versions of its standards-based SDN controller platform: Hydrogen, Helium and, in June 2015, Lithium – a ‘carrier-grade’ iteration that adds support for OpenFlow, OpenStack Neutron, and new security, monitoring and automation features, alongside more APIs, service function chaining and NFV.

OPEN NETWORK OS (ONOS)
The Open Networking Lab (ONLab), backed by the ONF, released the Open Network Operating System (ONOS) – an SDN/NFV-enabled open source switch/router OS designed for white box hardware in cloud service provider networks. The second version of ONOS, Blackbird, shipped in April 2015 with a ‘carrier-grade’ SDN tag affixed.

FLOODLIGHT
Project Floodlight is an open source, Apache-licensed, Java-based OpenFlow SDN controller, released by Big Switch Networks and managed by the ONF. It specifies a way to remotely control OpenFlow networking equipment.

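As referenced in the OpenContrail entry above, the sketch below shows roughly what reading data back from a Contrail-style configuration REST API might look like; the host, port (8082 is a common default) and resource path are assumptions that differ between deployments and versions.

# Query a Contrail-style configuration API for the virtual networks it manages.
# Endpoint, port and response shape are assumptions for illustration only.
import requests  # third-party HTTP client

API = "http://contrail-api.example.net:8082"

resp = requests.get(API + "/virtual-networks", timeout=5)
resp.raise_for_status()

for vn in resp.json().get("virtual-networks", []):
    # Each entry typically carries a UUID and a fully qualified name.
    print(vn.get("uuid"), ":".join(vn.get("fq_name", [])))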

See ‘Prepare for IoT’ (page 10) for more on the Internet of Things and its impact on storage.


[Infographic (page 9): cyber security]
Panel titles: Odds that an attack will succeed from cyber attacks, by company size; Worldwide annual spend on cyber security by company size; Factors reducing risk; CISO’s biggest cause for concern.
Company size bands: Small (a few hundred employees); Medium (a few thousand employees); Large (tens of thousands of employees); Very large (hundreds of thousands of employees).
Factors reducing risk: initial hardness (number of computers/devices, software quality); training; tools (security software); BYOD (restricting what users bring into the network); air-gapping (isolating sensitive sub-networks from the internet); end-user restrictions on device configuration and data access.
Worldwide annual spend on cyber security: $101bn in 2015, rising to $170bn by 2020 (9.8% CAGR).
Chart values for 2015 and 2025: 17.92%, 19.85%, 32.18%, 32.69%, 42.10%, 46.84%, 7.80%, 10.25%.
Sources: Gartner; Rand Corporation.


Prepare for IoT
The volume and diversity of information set to be generated by the Internet of Things will put considerable strain on data center storage, processing, network and security architecture. Martin Courtney investigates

Analyst company ABI Research has predicted that the Internet of Things (IoT) will generate 1.6 zettabytes of data (roughly 1.6 billion terabytes) by 2020 – a startling figure that will present both business opportunities and logistical challenges to data center owners and occupiers.

The IoT is in many ways an evolution of earlier machine-to-machine (M2M) networks, but it expands point-to-point communications between a machine or device and a remote computer backed by cloud storage, management and data collection processes to include a much broader range of devices, as well as software applications, industrial and agricultural control systems, and even people.

Potential IoT applications and use cases are as broad as they are long, with the automotive, consumer electronics, utility, manufacturing, travel and healthcare industries all featuring strongly. IoT-connected cars are expected to extend way beyond mobile entertainment, locational tracking and links to music, video, social media and other apps, to include mechanical operations such as service updates, component monitoring and failure notifications, alongside inter-car communications and autonomous or semi-autonomous driving.

Electricity, gas and water suppliers are expected to invest as much in IoT-supported supply chain operations as they do in smart meters, which feed back usage metrics from business and residential premises, while smart cities will look to further improve smart parking, connected waste, public transport and traffic management services through larger networks of sensors embedded in vehicles, bus stops, lamp posts and other street furniture.

Zettabyte: one zettabyte is equal to a billion terabytes. If a four-terabyte, 3.5-inch drive is roughly equivalent in size to 16 sq ft, a zettabyte hard drive would be about the size of Antarctica.

Elsewhere, kitchen appliances providing real-time inventory data collection and ordering of shopping lists may transform food supply chains and retail food services, and perhaps even deliver benefits to the healthcare and insurance industries by collecting information on people’s dietary habits. All of these examples represent just a small cross-section of the types of application the IoT will enable, with many more use cases waiting to be discovered once the technology is better established and understood.

While not all of the zettabytes of data generated by the billions of IoT devices expected to come online in the next five years will need to be stored, processed and analyzed, a significant percentage of the total will find its way into a data center somewhere. And whether those facilities are owned and operated by the data owners themselves, or run as third-party facilities by colocation companies and cloud service providers delivering infrastructure as a service (IaaS), database as a service and/or hosted analytics or business intelligence SaaS applications, the challenges presented by IoT convergence remain the same.

Research company IDC forecasts that by 2019 more than 90 per cent of all IoT data will be hosted on service provider platforms, as cloud computing reduces the complexity of supporting what the research company dubbed ‘data blending’. IDC also predicts that 10 per cent of sites will see their networks overwhelmed by data generated by the IoT.

Maintaining sufficient capacity to store all that data will be one crucial element, as will storage management frameworks that decide how long information is kept, when and where it is archived, and how quickly it needs to be accessed by end users. And the requirement to regularly back up and replicate huge file volumes across multiple hosting facilities will be equally demanding.

While some of the data generated by the IoT at collection point is likely to travel over cellular networks run by mobile operators, most of it will be transmitted on wireline or unlicensed wireless networks within homes, cars, offices and other private network environments, all of which will feed into telecommunications backbones that in turn transfer information en masse into data centers.

Inbound traffic is likely to increase exponentially, and will force many data center owners to upgrade their wide area network (WAN) performance to 40 Gigabit Ethernet (40GbE) or even 100 Gigabit Ethernet (100GbE).

It is a similar story for the local area network (LAN), tasked with meeting input/output requirements and the need to transfer huge volumes of information between servers and storage elements, where the bandwidth in backbones, edge and top of rack may simply not be enough to prevent bottlenecks that severely hamper application and service performance.

Data management and security policies will be key. Much of the information generated by the IoT will come from end-user devices – wearables, in-car GPS trackers and personal healthcare-monitoring devices such as fitness trackers, home blood pressure monitors or even MRI machines in hospitals and GP surgeries, for example. Much of this will inevitably be subject to strict national and/or regional data protection regulation. This means that data center owners hosting that data will have to be more attentive to relevant legislation than ever, and build data protection and information security certification guarantees into any service-level agreements provided to customers storing data within their facilities.

Processing all that data to derive meaningful insight – whether by determining usage and performance trends for business intelligence purposes, or building databases of critical information that, subject to data privacy laws and/or consumer opt-in consent, can be sold on to third parties – will also demand powerful ‘big data’ analytics capabilities. Only the latest generation of fast, energy-efficient servers, often grouped in high performance computing (HPC) clusters, is likely to be able to provide this number-crunching capacity without creating significant power or performance problems for data centers already struggling with electricity availability and cost issues. Owners may have to examine technologies such as InfiniBand and photonics to enable fast server and storage interconnects.


50% – bandwidth required to carry IoT traffic by 2020
16bn – active wireless connected devices by 2014
50bn – devices connected to the internet by 2020

Super-scale data centers are already gearing up to address the commercial opportunity that the IoT presents, with Microsoft more advanced than most in integrating IoT platforms with the data center capabilities associated with its Windows Azure cloud services.

Azure Stream Analytics is a service designed from the ground up to provide businesses with IoT data, storage, processing, analytics, data visualization and business intelligence capabilities, and it would be no surprise if the software giant was able to forge partnerships with the world’s telcos and mobile operators and network equipment providers to handle the connectivity piece too.

May 2015 saw Google outline its own approach to building a hardware and software ecosystem in support of the IoT, including an embedded operating system for devices beyond smartphones and tablets – industrial devices, sensors, appliances and lighting – complete with its own communication stack and device administration API, alongside a Google-developed chip that uses the 60GHz spectrum to support near-field communications on wearable devices such as smart watches.

Few hardware and software companies – beyond those such as IBM, HP and Oracle, which also happen to be some of the biggest cloud service providers – have hosting facilities of their own within which they can integrate broader IoT service packages. This opens up considerable options for partnerships for specialist data center hosting companies that can provide the infrastructure elements required to store and process large volumes of data.

Early examples of the type of relationships being forged around IoT come from Huawei and T-Systems, the systems integrator arm of German telco Deutsche Telekom, and Citrix partnering with Amazon to deliver Project Octoblu, a combination of cloud-hosted software and hardware dedicated to the workforce automation of IoT applications and services.

Demand for data center capacity – whether leased colocation space and/or cloud service hosting – is likely to grow in parallel with IoT expansion. As long as data center owners can adapt their infrastructure to handle the strain without extending capex/opex too far, the IoT is a phenomenon to be welcomed, not feared.

THE VALUE OF THE IoT
Research companies, analyst firms and equipment vendors alike are busy trying to predict the size and value of the IoT, spanning a broad range of possible scenarios. A small sample includes:

ABI RESEARCH
The research company put the installed base of active wireless connected devices in excess of 16 billion by 2014, with 41 billion forecast for 2020, by which time IoT-generated data will exceed 1.6 zettabytes (roughly equivalent to 1.6 billion terabytes). ABI also predicts that 75 per cent of that growth will come from ‘non-hub’ devices – sensor nodes and accessories rather than components designed to aggregate and transmit data collected from other IoT nodes.

GENERAL ELECTRIC
American conglomerate General Electric (GE), which in 2015 announced a public cloud service to capture and analyze IoT data, estimates that the “Industrial Internet” has the potential to add $10 to $15 trillion to global GDP over the next 20 years.

GARTNER
An estimate of 25 billion units installed by 2020 (up from 4.9 billion in 2014), 250 million of which will be connected vehicles with automated driving capabilities, and 13.2 billion consumer devices.

HUAWEI
The Chinese manufacturing giant launched a lightweight open source operating system for IoT devices that features a kernel just 10Kb in size. It forecast that the world will produce 100 billion connected devices by 2025, when two million new sensors will be deployed every hour and 50 per cent of all network bandwidth will carry traffic generated by the IoT.

ON WORLD
Research firm ON World predicts 100 million wireless connected LED light bulbs and lamps by 2020, based on a use case in which an app can turn lights on or off, change colour and brightness, and set alarms, and on industrial scenarios where smart street lighting reports maintenance metrics to local authority headquarters.


41bn – number of connected wireless devices by 2019

Long live colocation
The demise of colocation, cannibalized by growing demand for cloud services, has been much exaggerated, says Martin Courtney

The rise of the cloud has been unstoppable in the past five years, with companies gradually moving additional applications and workloads into cloud hosting environments as they grow more familiar with the pay-as-you-go, on-demand, cost-and-delivery model. Figures from the Cloud Industry Forum (CIF) published in May 2015 suggest that 70 per cent of public and private sector organizations that already use cloud services planned to increase their adoption over the next 12 months, with half saying they would move their entire infrastructure to the cloud at some point.

That increase in cloud usage has led many industry watchers to predict a parallel fall in demand for colocation services. But another argument says there is no reason why greater use of the cloud should damage the market for colocation – the two approaches are in some ways complementary and can accommodate a variety of customer requirements that are unlikely to disappear completely.

“If you want more capacity and you have not got the space, then your choices are to build a virtual infrastructure, or put it in a hosted colocation facility and just wheel your equipment in there,” says Ian Osborne, a member of CIF’s governance board and senior manager for ICT at the Knowledge Transfer Network.

A key selling point of wholesale colocation is that it often makes sense for companies with large-scale power, space and performance requirements that would otherwise need to spend millions building their own data centers. Instead, they can minimize their infrastructure-hosting costs by renting space in a third-party facility within which they can install, run and maintain their own servers, storage and network infrastructure, while sharing the cost of power, cooling, connectivity and floor space with other tenants. This approach gives them tighter control of their infrastructure, offers room for expansion within the same geographical facility, and retains some scope for customization of available server, storage and network configurations. And retail colocation can be a good fit for small companies that do not forecast any expansion in capacity requirements in the short term beyond a few servers or a single rack.

But unlike cloud, colocation still requires that IT departments buy and maintain servers, storage, network switches and routers, and software of their own. “That presumes you have got equipment,” says Osborne. “While that is not unusual today, I suspect in 10 years’ time it will be quite unusual for people to have compute equipment they use for running their own services.”

Companies using colocation also have to rely on their staff to monitor, maintain and back up that equipment, even though it is hosted off-premises, though in some cases the colocation provider will offer managed services that take over some admin tasks.

“The advantage of cloud services is that they can be accessed from anywhere.” Ian Osborne, Knowledge Transfer Network

70% – the proportion of public and private sector cloud users surveyed by CIF that planned to increase their adoption

In contrast, a company leasing cloud infrastructure, applications and services from a cloud service provider reduces its own capital expenditure, since responsibility for the day-to-day administration, maintenance and troubleshooting of that server, storage and network infrastructure also shifts to the cloud service provider, leaving internal IT staff free to focus elsewhere.

“The advantage of cloud services is that they can be accessed from anywhere, and if designed properly they can provide scalability alongside policies for reliability, resilience and robustness much more effectively than you could do it yourself, and you can bring services to market more quickly,” says Osborne.

In some ways, greater use of cloud services is also an opportunity for colocation providers because many of those clouds are housed within their data centers.

Given the costs involved, building their own mega data centers is usually only an option for super-scale public cloud service providers such as Amazon Web Services (AWS), Google and Microsoft.

Smaller cloud service providers are more likely to host a broad range of public, private and hybrid cloud services aimed at larger corporates and SMEs within third-party data centers.

“The fast growth of Interconnect can be explained by the fact that we do not believe in a single solution which suits the needs of all customers,” says Rob Stevens, CEO at the Netherlands-based ISP. “With a wide range of services (colocation, cloud, connectivity and hosted telecom) we always look for the right solution based on the specific requirements of individual companies.”

As an ISP that has evolved to offer a range of services to businesses – including colocation, cloud services, broadband and hosted communications – Interconnect is well placed to judge when and why its customers choose cloud over colo, or vice versa. But Said van de Klundert, network engineer at Interconnect, says that requirements vary widely from one to another. “Some want to rent a whole floor, or 600 racks; others just want one rack. Some want a temporary boost in capacity and others just want to drop all responsibility for hardware,” he says. That proliferation of services puts a strain on the data center infrastructure.

Interconnect operates two data centers located in Den Bosch and Eindhoven, which are roughly 50km apart. Having grown organically since 1995, the ISP has accumulated a large number of switches and routers from different vendors, and recently engaged in a consolidation exercise. “There came a point where we wanted to add more services and needed to add a lot of 10GbE ports,” explains van de Klundert. “Because that was going to cost a lot, we decided to investigate what we could do differently. By eventually purchasing the Juniper Networks virtual chassis fabric and MX Series routers, we can just copy-paste the deployment of one data center to another and provide all the services in exactly the same way, simply by managing one switch in each data center and four routers in total.”

Stevens adds: “Many cloud providers rely on the IT infrastructure of Interconnect. Initially they often house their own equipment inside our data centers. However, more and more cloud providers rely on our stable IaaS platform and use it as a solid foundation for their cloud services.”

COLO: IN RUDE HEALTH?

If greater use of cloud services by enterprise customers is causing the widely predicted cannibalization of the colocation business, analyst revenue forecasts do not support the theory.

While the size of the pot might be shrinking, figures from Allied Market Research (AMR) indicate that sales of colocation services are nevertheless on a healthy growth trajectory. AMR expects the global market to achieve a CAGR of 12.5 per cent over the next five years, to be worth $51.8bn by 2020.
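As a quick check on what that forecast implies, rather than a figure from the report itself, a 12.5 per cent CAGR compounds to roughly an 80 per cent rise over five years, which would put the implied 2015 starting point just under $29bn:

# Derived arithmetic from the AMR figures quoted above (12.5% CAGR for five
# years, reaching $51.8bn in 2020); the 2015 base is implied, not quoted.
cagr = 0.125
years = 5
value_2020_bn = 51.8

growth_multiple = (1 + cagr) ** years           # ~1.80x over five years
implied_2015_bn = value_2020_bn / growth_multiple

print(f"Five-year growth multiple: {growth_multiple:.2f}x")
print(f"Implied 2015 market size: ${implied_2015_bn:.1f}bn")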

The research company identifies the US as a hotspot of demand, with growth in the retail 500kW-1,000kW colocation market (estimated to represent around two thirds of the total) attributed to small-to-medium enterprises (SMEs) that are increasingly recognizing the value of the resilient power supplies, dedicated cooling systems and remote monitoring services that colocation companies are able to provide, in the face of shrinking internal IT budgets and the high costs associated with the construction of dedicated, on-premise data centers.

AMR predicts growth for the wholesale colocation market too, with larger organizations needing bigger, more powerful servers to handle heavier workloads, and threats from terrorism also driving heightened requirements around physical data center security.

Synergy Research, which tracks leading colo companies on a quarterly basis, also suggests ongoing annual double-digit market growth led by the US, Japan, the UK and China, with companies such as Equinix, NTT, Verizon, CenturyLink, TelecityGroup and China Telecom all continuing to make large profits following a spate of significant merger and acquisitions deals in recent years, and many securing new investment for further expansion.

Colocation companies are targeting smaller companies looking to lease one or two servers – 365 Data Centers has introduced a service that provides ‘micro-colocation’ options for start-ups short of capital investment looking to grow their business gradually.

