Software defined networking to support the software defined environment

C. Dixon, D. Olshefski, V. Jain, C. DeCusatis, W. Felter, J. Carter, M. Banikazemi, V. Mann, J. M. Tracey, R. Recio

Software defined networking (SDN) represents a new approach in which the decision-making process of the network is moved from distributed network devices to a logically centralized controller, implemented as software running on commodity servers. This enables more automation and optimization of the network and, when combined with software defined compute and software defined storage, forms one of the three pillars of IBM's software defined environment (SDE). This paper provides an overview of SDN, focusing on several technologies gaining attention and the benefits they provide for cloud-computing providers and end-users. These technologies include (i) logically centralized SDN controllers to manage virtual and physical networks, (ii) new abstractions for virtual networks and network virtualization, and (iii) new routing algorithms that eliminate limitations of traditional Ethernet routing and allow newer network topologies. Additionally, we present IBM's vision for SDN, describing how these technologies work together to virtualize the underlying physical network infrastructure and automate resource provisioning. The vision includes automated provisioning of multi-tier applications, application performance monitoring, and the enabling of dynamic adaptation of network resources to application workloads. Finally, we explore the implications of SDN on network topologies, quality of service, and middleboxes (e.g., network appliances).

© Copyright 2014 by International Business Machines Corporation. Copying in printed form for private use is permitted without payment of royalty provided that (1) each reproduction is done without alteration and (2) the Journal reference and IBM copyright notice are included on the first page. The title and abstract, but no other portions, of this paper may be copied by any means or distributed royalty free without further permission by computer-based and other information-service systems. Permission to republish any other portion of this paper must be obtained from the Editor.

Digital Object Identifier: 10.1147/JRD.2014.2300365

Introduction

Software defined networking (SDN) represents a fundamental advancement, revolutionizing the network industry [1]. SDN differs from traditional networking in three critical ways. First, SDN separates the data plane, which forwards traffic at full speed, from the control plane, which makes decisions about how to forward traffic at longer time scales. Second, SDN provides a well-defined interface between the now-separated control and data planes, including a set of abstractions for network devices that hide many of their details. Third, SDN migrates control plane logic to a logically centralized controller that exploits a global view of network resources and knowledge of application requirements to implement and optimize global policies. Figure 1 shows a high-level SDN architecture where the SDN controller provides the centralized control plane functionality for the switches and allows SDN applications to provide network functionality. As a result of these three seemingly simple changes to traditional network architectures, SDN should radically increase the pace of network innovation and improve the performance, scalability, cost, flexibility, security, and ease of management of networks. Along with software defined compute and software defined storage, IBM's software defined environment (SDE) allows for the automation and optimization of entire computing environments, providing similar benefits.

At its core, SDN allows a high-level abstract specification of network connectivity and services to be automatically and dynamically mapped to a set of underlying network resources.


C. DIXON ET AL. 3 : 1IBM J. RES. & DEV. VOL. 58 NO. 2/3 PAPER 3 MARCH/MAY 2014

0018-8646/14 B 2014 IBM

Page 2: Software defined networking to support the software defined

Figure 2 depicts an abstract representation of the network for a three-tier web workload. The thick lines represent connectivity between the tiers, whereas thin lines indicate policy and/or network services applied to connectivity. Figure 3 shows the underlying network resources for one potential realization of the abstract network depicted in Figure 2. The thick arrows correspond to the abstract connectivity in Figure 2, whereas thin lines indicate the physical links in the network. The underlying implementation is far more complicated than the abstract representation. Traditional networks are specified in terms of the underlying implementation; this is analogous to programming in assembly language. Software defined networks are specified in abstract terms that more closely resemble programming in a high-level language.

Virtualization and abstraction

SDN defines open, standard abstractions for networks that hide the details of the underlying infrastructure, similar to how an operating system abstracts the complexity of underlying hardware by exporting common application programming interfaces (APIs) to services such as file systems, virtual memory, sockets, and threads.

Figure 1

A high-level view of a standard SDN network architecture where a (logically) centralized SDN controller provides the control plane for all of the switches. The solid lines represent the physical connectivity between switches (either physical or virtual) and the dotted lines represent communication between the controller and the switches to install forwarding rules. The controller provides new network functionality by hosting SDN applications.

Figure 2

The abstract network specification for a three-tier web workload, including a firewall service and a load balancer service on the logical link between the Internet and the web servers. Further, connections between application servers and database servers are labeled with a higher quality of service.


These abstractions provide new tools to enable the richer networking functionalities demanded by recent industry trends including dynamic virtual server workloads, multi-tenant cloud computing, and warehouse-scale data centers. Existing standards and abstractions have proven inadequate for delivering this functionality "at scale"; for example, 12-bit virtual local area network (VLAN) identifiers [2] allow for up to 4,096 isolated tenants. To address this, networks have become increasingly complex, including proprietary routing, traffic engineering mechanisms, and labor-intensive configuration of network appliances used to secure and optimize multi-tier systems. SDN offers the potential to reverse this trend by addressing these problems in the controller software running on commodity servers that programs network hardware using open protocols.

The dominant use of SDN that enables solutions to these problems is network virtualization. Network virtualization involves abstracting the physical network in two ways: (i) isolating multiple tenants and giving them a "view" such that they are the only ones using the network and (ii) presenting an abstract topology that may differ from the physical topology, e.g., an abstract topology with all hosts attached to a single, large switch. A related concept is Network Functions Virtualization (NFV) [3], which replaces specialized appliances such as firewalls, load balancers, and intrusion detection systems with virtual machines (VMs) running on conventional servers [4–7] connected to the network. In the server world, virtualization has enabled new applications and revenue streams that would not have been technically possible or economically feasible otherwise. It is anticipated the same will be true for networking.

Splitting the data plane and the control plane

In conventional networks, each device implements both data and control plane functionality. For example, when a packet is received at a switch, the data plane matches fields in the packet against forwarding rules and performs specified actions such as changing the destination Internet Protocol (IP) address and forwarding the packet on a specific port. To instantiate these rules, various mechanisms are used. For basic forwarding, each device participates in distributed control plane logic, communicating with peers using protocols such as spanning tree [8] or Transparent Interconnection of Lots of Links (TRILL) [9] for Ethernet switches, and Border Gateway Protocol (BGP) [10] or Open Shortest Path First (OSPF) [11] for IP routers.

SDN combines these control channels into one mechanism that can control both basic forwarding and more sophisticated services. Each device continues to forward packets at full speed on the basis of currently installed forwarding rules, but the distributed control plane is replaced with a logically centralized controller that programs the forwarding rules of each device in the network.

Figure 3

The underlying network implementing the abstract network shown in Figure 2. The thick arrows represent the tunnels that provide the abstract connectivity from the Internet to the various services via the appropriate middleboxes. The hypervisor virtual switches (labeled vSwitch) are responsible for routing traffic in to and out of the tunnels. The quality of service requirements between the application servers and database servers are provided by configuration in the switches and are not shown.


The controller uses its global network view to create basic forwarding rules that are not limited to spanning trees and dovetail with higher-level functionalities such as Network Address Translation (NAT) and VLANs. The ability to control all aspects of the network results in flexibility and innovation.
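To make the data/control split concrete, here is a minimal sketch (plain Python, with invented field names such as dst_ip and a punt_to_controller miss action, not any particular switch's API) of a data plane that forwards purely from locally installed match-action rules, with rule installation as the separate, slower control-plane operation.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """One forwarding rule: wildcardable match fields plus an action."""
    priority: int
    match: dict    # e.g., {"dst_ip": "203.0.113.7", "tcp_dst": 80}
    action: str    # e.g., "output:3", "rewrite_dst:10.0.0.5", "drop"

class DataPlane:
    """Forwards packets using only locally installed rules (fast path)."""
    def __init__(self):
        self.rules = []

    def install(self, rule):
        # Called by the (logically centralized) control plane at
        # longer time scales; forwarding itself never waits on it.
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)

    def forward(self, packet):
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "punt_to_controller"   # table miss: ask the control plane

dp = DataPlane()
dp.install(Rule(200, {"dst_ip": "203.0.113.7", "tcp_dst": 80},
                "rewrite_dst:10.0.0.5,output:3"))   # NAT-like rewrite
dp.install(Rule(100, {}, "drop"))                   # default: drop
print(dp.forward({"dst_ip": "203.0.113.7", "tcp_dst": 80}))
```

The highest-priority matching rule wins, mirroring the way hardware rule tables are typically searched.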

Centralizing network control

Once the data and control planes are split, it is no longer necessary to have a distributed control plane. As a consequence, most realizations of SDN migrate a substantial portion of network control functionality to a logically centralized SDN controller. The controller connects to every switch in the network, typically through a separate control network, which allows it to monitor and control each device. A tightly coupled cluster of SDN controllers can be used for scale-out and fault-tolerance [12–14]. In such clusters, consistently distributing shared state can be problematic, and many recent efforts have explicitly distributed some tasks either to a subset of the cluster or to switches to alleviate these issues [15, 16]. Though less common, the distributed management plane can also be replaced with a logically centralized management point, possibly the same controller, to enable network-wide monitoring, management, and policy enforcement [17].

While there are well-recognized trade-offs between distributed and centralized control, the advantages of centralization appear to greatly outweigh the disadvantages in the context of SDN. Most of the problems described earlier can be solved using SDN technology. For example, an SDN controller has global visibility into the current state of the network, e.g., link and buffer utilization, device failures, and where hosts are located, so it can implement end-to-end quality of service (QoS) and respond rapidly to failures [18, 19]. However, SDN need not centralize control entirely. Internet-scale networks and the networks of large organizations will continue to consist of numerous independent administrative domains.

The rest of this paper is organized as follows. The next section describes several example SDN scenarios: network virtualization and abstraction, middlebox connectivity, fabrics, monitoring-control loops, and QoS. That is followed by a discussion of IBM's SDN vision and offerings, including the OpenDaylight** Project [20]. After that, examples of early adopters of SDN technology are presented. Two final sections briefly discuss migration to SDN technology and provide concluding remarks.

Example SDN scenarios

SDN provides a platform on which a wide variety of scenarios can be realized. Ultimately, the functionality delivered via this platform will be limited only by developers' ingenuity and imagination. Here, we describe six scenarios that have already been realized: network virtualization, network abstraction, middlebox insertion, fabrics, monitoring-control loops, and QoS. These examples are chosen both to give an idea of the breadth of SDN's uses and to illustrate some of the most prevalent current use cases.

Network virtualization

Perhaps the most common SDN scenario is network virtualization. In fact, people sometimes mistakenly equate SDN with virtualization. The Merriam-Webster dictionary defines virtual as "being such in essence or effect though not formally recognized or admitted." For example, a virtual Ethernet switch, such as Open vSwitch [21], behaves just like a physical Ethernet switch. It receives incoming Ethernet frames on its ports and forwards them, transmitting each on the appropriate port(s), based on fields in the frame header such as the destination address. Unlike a physical Ethernet switch, however, a virtual switch is realized in software, typically in a hypervisor running on a general-purpose computer. This enables virtual switches, and virtual networks, to be created quickly and easily, which is one of the key benefits of virtualization.

Virtual Ethernet switches forward Ethernet frames between virtual network interface cards (vNICs) associated with VMs running on a hypervisor and one or more of the hypervisor's physical Ethernet NICs. This allows a single physical Ethernet NIC to be shared by multiple VMs. By itself, interface sharing only allows vNICs to be connected to physical Ethernet switches. A typical physical Ethernet network consists of multiple interconnected switches. To construct a virtual network, the connections between switches must be virtualized as well. This raises some interesting challenges for which SDN is particularly well-suited.

Virtual networks can be implemented using a variety of means. One is by using VLAN tags. Such tags segment traffic flowing throughout a single Ethernet into multiple conceptual or "virtual" local area networks. Switches direct traffic tagged for a particular VLAN only to ports associated with the corresponding tag, thus enforcing the appearance of multiple, separate networks. At first glance, VLANs might appear to be an ideal mechanism for network virtualization, but their effectiveness for this purpose is limited by several factors. First, VLANs do not, by themselves, span multiple Ethernet networks, or even multiple Ethernet switches. Instead, they require careful coordination of the configuration at all switches that might need to handle traffic for that VLAN. This is a severe limitation for a virtual network, which can reasonably be expected to span multiple physical networks, potentially over geographically dispersed regions. Second, the total number of VLANs is limited to 4,096 because of the number of bits in the VLAN tag. This limit is smaller than the number of virtual networks that might be needed in even a small to medium sized data center. VLANs, though useful, do not provide an ideal solution for network virtualization.


A more effective approach to network virtualization is to use an overlay network. With this approach, the virtual switches encapsulate or "wrap" virtual Ethernet frames in one or more higher-layer network protocols, typically User Datagram Protocol (UDP) and/or IP. Virtual switches are connected to one another via IP "tunnels" for which there are two leading and competing standards: Virtual eXtensible Local Area Network (VXLAN) [22] and Network Virtualization using Generic Routing Encapsulation (NVGRE) [23].

The tunnels form a network that is overlaid on top of an IP network. This approach has several benefits. One is the lack of any need to configure or otherwise control physical switches. The only requirement overlays place on the underlying network is IP connectivity, which has long been ubiquitous. Another benefit is their significantly greater scalability relative to VLANs. One important consideration with overlays is the approach used to select the switches between which a tunnel is established. Tunnels could be established between every pair of switches, which generates on the order of n² tunnels, where n is the number of virtual switches. In another approach, a single virtual switch is selected with which all other switches establish a tunnel. Both approaches can run into scalability issues when there are very large numbers of virtual switches. Current solutions use traditional routing, e.g., BGP or OSPF, to interconnect different overlay networks when they reach these scaling limits.

Overlays also have some drawbacks. Perhaps the largest is that the mappings from destination addresses to corresponding tunnels have to be disseminated and kept up-to-date even as VMs join, leave, and move. This adds either increased latency to the first packet while the virtual switch pulls this information from a centralized service or extra communication and complexity overhead to keep correct entries pre-cached. Further, encapsulation adds additional headers, reducing the amount of space in each physical Ethernet frame available for useful data and (slightly) decreasing throughput. Encapsulation and de-encapsulation also entail processing and memory overhead. Finally, while building on top of IP eases deployment, it also means accepting IP's best-effort delivery. Alternately, the overlay controller can leverage QoS features in the underlying switches at the cost of the added complexity involved in configuring and using QoS queues in each switch.
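These trade-offs are easy to quantify. The sketch below assumes the commonly cited 50 bytes of VXLAN encapsulation headers (outer Ethernet + outer IP + UDP + VXLAN; NVGRE differs slightly) and counts directed point-to-point tunnels in a full mesh; it is a rough estimate, not a protocol reference.

```python
def full_mesh_tunnels(n: int) -> int:
    """Directed point-to-point tunnels in a full mesh of n virtual
    switches: n * (n - 1), i.e., growth on the order of n^2."""
    return n * (n - 1)

def vxlan_goodput_fraction(mtu: int = 1500) -> float:
    """Rough fraction of each wire frame left for tenant data,
    assuming ~50 bytes of VXLAN encapsulation: outer Ethernet (14)
    + outer IP (20) + UDP (8) + VXLAN (8). A coarse estimate that
    ignores the inner headers and any options."""
    overhead = 14 + 20 + 8 + 8   # = 50 bytes
    return (mtu - overhead) / mtu

for n in (10, 100, 1000):
    print(f"{n:5d} vSwitches -> {full_mesh_tunnels(n):7d} tunnels")
print(f"goodput at a 1500-byte MTU: {vxlan_goodput_fraction():.1%}")
```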

A third approach to network virtualization leverages OpenFlow** [24]. OpenFlow is a standard protocol by which an SDN controller can configure forwarding rules in Ethernet switches. The protocol allows packets to be identified by a comprehensive set of header fields such as source and destination Media Access Control (MAC) addresses, IP addresses, and Transmission Control Protocol (TCP) or UDP port numbers. OpenFlow can be used to implement a virtual network by programming virtual and physical switches to forward packets between network interfaces attached to the same virtual network while prohibiting flows between different networks. OpenFlow does not require encapsulation. Furthermore, once a switch is programmed for a particular flow, it forwards frames using the same mechanism, and with identical performance, as for a conventional nonvirtualized network.

A key consideration when leveraging OpenFlow is the approach used to program switch forwarding rules. Rules may be programmed reactively, where packets that do not match an already-installed rule are sent to the controller, which then installs an appropriate rule. They may also be programmed proactively, for example, when a new host is discovered or a virtual network is defined. This exposes a tradeoff between the added latency of contacting the controller on the first packet of each flow and needing to keep rules installed for all flows rather than only currently active flows. The reactive approach is more commonly used today because the number of fully flexible rules that can be simultaneously installed in physical switches is often quite limited, e.g., 1,000 rules. Increasing rule-table sizes and hybrid approaches promise to improve this state of affairs.

OpenFlow provides fine-grained control over a switch's forwarding behavior, including the ability to enforce QoS. This can be leveraged to limit or eliminate performance interference between distinct virtual networks. OpenFlow too has its limitations. Its primary drawback is that it requires OpenFlow-capable switches and the administrative authority to exploit that capability. OpenFlow is a new and still emerging technology, so its deployment is currently limited. Currently deployed OpenFlow controllers typically manage a limited set of switches such as those within a single data center. Thus, while highly effective for use within a data center, OpenFlow by itself does not currently represent a viable solution for network virtualization across geographically dispersed sites without using overlay tunnels between "islands" of OpenFlow switches.
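A minimal sketch of the reactive pattern, using stub Switch and controller classes of our own invention rather than a real OpenFlow library: the first packet of a flow misses and is punted to the controller, which installs a rule so subsequent packets stay on the fast path.

```python
class Switch:
    """Stub data-plane switch: consults local rules, punts misses."""
    def __init__(self, name, controller):
        self.name, self.controller, self.rules = name, controller, {}

    def receive(self, packet):
        dst = packet["dst_mac"]
        if dst in self.rules:
            print(f"{self.name}: fast path -> {self.rules[dst]}")
        else:
            self.controller.on_packet_in(self, packet)  # table miss

    def install_rule(self, dst_mac, action):
        self.rules[dst_mac] = action

class ReactiveController:
    """Computes a route on the first packet of a flow, then installs a
    rule so later packets never leave the data plane."""
    def __init__(self, host_ports):
        self.host_ports = host_ports   # assumed map: MAC -> output port

    def on_packet_in(self, switch, packet):
        port = self.host_ports[packet["dst_mac"]]
        switch.install_rule(packet["dst_mac"], f"output:{port}")
        print(f"controller: installed rule on {switch.name}")
        switch.receive(packet)   # re-inject; now hits the new rule

ctrl = ReactiveController({"aa:bb:cc:00:00:01": 3})
sw = Switch("s1", ctrl)
sw.receive({"dst_mac": "aa:bb:cc:00:00:01"})  # miss: punt, install
sw.receive({"dst_mac": "aa:bb:cc:00:00:01"})  # fast path only
```

A proactive controller would instead call install_rule for every known host up front, trading flow-table space for first-packet latency.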

The preceding discussion of virtualization options may seem to reflect increasing complexity rather than the simplification promised by SDN. However, the details of the implementation, and ultimately the selection of an implementation, are handled by the SDN controller. A virtual network user need only specify the required connectivity and desired network services. The controller takes these specifications and configures implementation artifacts such as tunnels or OpenFlow rules as needed to realize virtual networks on the available underlying network resources.

Network abstraction

Traditionally, one measure of virtualization quality is the extent to which virtual entities appear to be as real as possible. Fidelity between real and virtual entities allows the entire ecosystem of software and expertise developed in the real world to be leveraged in the virtual one.


It eliminates the need for solutions already perfected on real resources to be re-implemented. One benefit of virtualization, however, is the ability of virtual entities to offer enhanced functionality. SDN therefore provides not only network virtualization, but also abstraction. The Merriam-Webster dictionary defines the term abstract as "disassociated from any specific instance." An abstract network, therefore, is one that retains the fundamental capabilities of a general class of networks, for example packet forwarding, without exhibiting the specific behaviors of any one instance, such as a network of Ethernet switches.

An abstract network adheres to a given model. The model defines the entities supported by the network and their behaviors, including interactions among entities and with the external world. Many models are possible. Virtual Ethernet, described earlier, though not particularly abstract, represents one network model. Virtual Ethernet implementations typically provide some level of abstraction. One popular model is the "big switch," in which all virtual network interfaces appear to be plugged into a single switch regardless of the presence of multiple underlying physical and virtual Ethernet switches. A single link in a virtual Ethernet may actually correspond to multiple links or paths in the physical network. Higher levels of abstraction are possible. Ethernet follows a "default on" approach: in the absence of specific action to the contrary, all interfaces connected to the same network can communicate with one another.

An alternate model from IBM Research called Meridian [25] takes a "default off" approach. With Meridian, specifying that server A can communicate with both server B and server C does not imply B and C can communicate with each other. This facilitates handling real-world scenarios that are not as well suited to a default-on model. One example is where a set of web servers must communicate with a set of application servers but there is no requirement for communication among the web or application servers themselves.
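The default-off semantics are easy to illustrate. The sketch below is our own toy policy store, not Meridian's actual API: allowing A-B and A-C grants nothing about B-C.

```python
class DefaultOffPolicy:
    """Connectivity is denied unless a pair was explicitly allowed;
    permission is not transitive."""
    def __init__(self):
        self.allowed = set()

    def allow(self, a, b):
        self.allowed.add(frozenset((a, b)))

    def can_talk(self, a, b):
        return frozenset((a, b)) in self.allowed

policy = DefaultOffPolicy()
policy.allow("A", "B")   # e.g., web tier <-> application tier
policy.allow("A", "C")   # e.g., web tier <-> management host

print(policy.can_talk("A", "B"))  # True
print(policy.can_talk("B", "C"))  # False: never explicitly allowed
```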

As with high-level programming languages, numerous models are possible, and model development is expected to continue for the foreseeable future. A good model is generally easy to use, abstract, dynamic, comprehensive, and composable. A model should be intuitive to use and naturally fit typical scenarios. It should not mandate specification of details that may not always be necessary. A good model is adept at handling changes in both the abstract and physical networks. Strong models provide comprehensive network services, for example address assignment and name resolution, enabling but not requiring that essential services be provided externally. Finally, a good model facilitates composition of higher-level services from fundamental model entities.

An SDN controller may support multiple models concurrently. These models may exhibit equivalence, in which any scenario described in one can be identically represented in another. In this case, it may be possible to interact with a single abstract network instance using multiple models. Incompatibilities between models may also exist. In such cases, the controller may augment or adapt one or more models to enable better interoperation. Alternately, such incompatibilities may simply prohibit an abstract network specified using one model from being manipulated via another.

Middlebox insertion

A middlebox is a hardware or virtual appliance that provides functionality along a network path. Example middleboxes include firewalls, intrusion detection and/or prevention systems, proxies, caches, and wide area network (WAN) accelerators. Middlebox deployment is a significant pain point for data centers. Physical appliances are expensive to purchase and require a specialized skill set to configure, thus increasing both capital and operational expense. They often have to be cabled in-line at specific locations in the network, limiting flexibility. Firmware upgrades can be a risky undertaking. To date, the industry has failed to converge on a standard protocol for configuration of firewalls, load balancers, and other middleboxes. Data centers are left with a variety of heterogeneous appliances, each with its own method of configuration whose semantics differ from device to device.

An emerging trend is the replacement of physical middleboxes with virtual appliances, most recently termed NFV. Virtual appliances are highly flexible in that they can be placed on any host server. Being VMs, they can be copied and started throughout the data center, making them ideal for auto-scaling in response to traffic load. One approach taken in this direction is to create a distributed platform for middlebox appliances (e.g., vAMOUR [26] or Embrane** [27]). The middlebox platform controller is separate from the SDN controller and deploys the required middlebox functionality across the VMs it manages. The middlebox controller tunnels traffic between middlebox VMs as specified by a user-specified chain. A drawback to this approach is that, typically, a given middlebox controller only interoperates with middleboxes from that vendor, either limiting the set of available middleboxes or requiring multiple such controllers.

IBM's vision is for the SDN controller to manage middlebox deployment as an integral part of virtual network deployment, and to consider middleboxes as first-class entities within the network model. In actuality, we would like to manage network services, which might be middleboxes or other network functionality, such as higher QoS. The user can specify which types of cloud-provided network services need to service which flows. The user may accept the default rule set that is provided by a firewall or specify some custom changes in a manner that declares the desired semantics without having to delve into a low-level rule specification language.


To support middleboxes from different vendors, the SDN controller provides a plug-in mechanism that enables vendors to manage their company's middleboxes using their vendor-specific protocol. The SDN controller maps the virtual network model, including its network services, to an internal representation and calls the appropriate vendor plug-in to configure the appropriate middleboxes. The SDN controller manages the routing of all packet flows in the network, between hosts and network services. This integrated, global view and control allows for fast scale up/down and for insertion and removal of network services and servers.

Current virtualized middleboxes mimic their physical counterparts and thus inherit certain drawbacks. These include the need for network interfaces into each network that they service. Overlay-network-based service chaining can aid in removing these restrictions, thereby simplifying the deployment of these devices.
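One way to picture middleboxes as first-class model entities is a small, vendor-neutral chain specification that a controller could later map to concrete appliances through per-vendor plug-ins; the field names and service labels below are illustrative assumptions, not SDN-VE's actual data model.

```python
from dataclasses import dataclass

@dataclass
class ServiceChain:
    """An ordered list of network services applied to matching traffic."""
    name: str
    classifier: dict   # which flows the chain applies to
    services: tuple    # ordered, vendor-neutral service types

chain = ServiceChain(
    name="web-ingress",
    classifier={"src": "internet", "dst": "web-tier", "tcp_dst": 80},
    services=("firewall", "load_balancer"),
)

# A controller would resolve each abstract service to a concrete
# (physical or virtual) appliance via the vendor's plug-in, then
# steer matching flows through them in order.
for hop in chain.services:
    print(f"{chain.name}: steer matching traffic through {hop}")
```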

Network fabrics

A fabric is a set of network devices, such as switches, that provide connectivity as a holistically managed unit. Fabrics are a natural fit for SDN because a controller normally manages a set of switches in much the same way. Fabrics typically provide shortest-path forwarding and some degree of multipathing. They can thus offer significantly more bandwidth than traditional networks, especially at the Ethernet layer, where the spanning tree protocol typically restricts these features. Fabrics also usually allow for straightforward host mobility, a feature missing from traditional IP networks since moving between subnets requires changing a host's IP address.

While there are recent standards that build fabrics in a distributed manner [9, 28], with SDN, fabrics are constructed by having the logically centralized controller determine the network topology, e.g., via the Link Layer Discovery Protocol (LLDP) [29], and then directly compute and install routes. This centralized approach can offer faster convergence after network changes, e.g., failures, and globally optimized use of network resources. A variety of such SDN fabrics, offering different trade-offs, have been proposed and some built [30–33]. IBM's SDN controller implements a fabric called Scalable Per-Address RouTing Architecture (SPARTA) based on PAST (Per-Address Spanning Tree) [34]. A key benefit of SDN fabrics is the ability to improve, or replace, the fabric simply by upgrading the SDN controller.

As fabrics need not implement standard routing protocols internally, they open the possibility of using more exotic topologies than the typical folded-Clos-style fat trees used today. There are a variety of existing topologies optimized for different communication patterns, e.g., HyperX [35], the cabling-optimized Dragonfly [36], and the randomly wired Jellyfish [37]. These topologies provide multiple shortest paths between endpoints, so fabrics' support for multiple paths can exploit these unconventional topologies to increase performance and/or lower cost. These new topologies typically employ a single type of switch instead of the different core, aggregation, and edge switches commonly found in current networks. Thus, an entire fabric can be built out of a large number of relatively inexpensive top-of-rack switches.

When a fabric has multiple paths available, it must choose which path to use for each flow. Typically, switches hash packet header fields and use the hash value to choose the path; this spreads different flows onto different paths while preventing packets of a single flow from being reordered. Hashing works well when there are a large number of flows of roughly equal bandwidth, but high-bandwidth flows and random hash collisions cause load imbalances within fabrics. Advanced fabrics can monitor for such imbalances and reroute flows to spread load more evenly, reducing congestion and increasing overall network throughput. SDN makes such automatic traffic engineering easier to implement because the central controller can aggregate measurements of the whole network and perform global optimizations [18, 19].
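A sketch of the hashing step just described, assuming the usual five-tuple flow definition: every packet of a flow hashes to the same path (so ordering is preserved), while distinct flows spread across the equal-cost paths.

```python
import hashlib

def pick_path(packet: dict, paths: list) -> str:
    """Hash the flow's five-tuple to select one equal-cost path.
    Deterministic per flow, so packets of a flow are never reordered;
    a single high-bandwidth flow still lands on one path, which is
    the load-imbalance case advanced fabrics monitor for."""
    five_tuple = (packet["src_ip"], packet["dst_ip"], packet["proto"],
                  packet["src_port"], packet["dst_port"])
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

paths = ["via-spine-1", "via-spine-2", "via-spine-3", "via-spine-4"]
flow = {"src_ip": "10.0.0.1", "dst_ip": "10.0.1.9", "proto": 6,
        "src_port": 51514, "dst_port": 443}
print(pick_path(flow, paths))   # same path every time for this flow
```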

Monitoring-control loops

While the most significant work on SDN has come on the control side, the ability to program switches is worthless without the context of how they are connected and what traffic they must forward. To provide this context, SDN solutions typically provide three monitoring mechanisms:

1. Topology discovery: Mechanisms to discover links between network devices and thus deduce the topology of the network. This includes both physical links and multi-hop links, e.g., tunnels.

2. New flow detection: A way to get notifications of new flows, where a flow can be defined as a new host joining, a new host-pair communicating, or a new TCP/UDP connection, depending on the circumstances.

3. Counters: Counters track the number of bytes and packets handled at different granularities, e.g., per switch, per host, or per port.

These mechanisms provide an unprecedented level of automated monitoring. Rather than painstakingly maintaining an authoritative diagram of network topology, the network itself can produce the diagram, and more accurately. New flow detection and counters provide feedback about network utilization in near real time. This allows both human operators and automated tools to understand what their networks are doing, and why, at time scales and fidelities previously impossible.


Polling counters on all network devices from a centralized SDN controller may prove to be costly. Prior work has shown that to be effective, the polling should finish within a second or less [38, 39], which is difficult. Several researchers have explored using flow monitoring techniques such as NetFlow [40] and sFlow [41] in conjunction with topology discovery [42]. Nearly all current virtual and physical switches support NetFlow or sFlow. NetFlow-enabled switches send NetFlow records to a collector at periodic intervals; these contain details about all flows observed during the last monitoring interval. sFlow-enabled switches sample one out of every N packets (where N is a configured sampling rate) and send the packet header to a collector. These approaches offer potentially lower-latency and more scalable ways to monitor the current network state. Significant prior work has focused on how to use these measurements to effectively infer network conditions [43] and perform higher-layer functions such as network-aware VM management [44].

Combining accurate knowledge of the network topology, flows, and link utilization with the ability to change forwarding behavior illustrates the strength of SDN. This combination allows an adaptive control loop to monitor and react to network events in real time. For example, this can be used to redirect large flows away from congested links.

Over time, with proper annotation of particular workloads, the controller can even learn the network behavior of different workloads and adapt the forwarding rules in anticipation rather than merely as a reaction. In a virtualized environment, VM placement can be based on network conditions, and VMs can be migrated if beneficial. At even longer time scales, knowledge of fine-grained workload usage combined with big data analytics can inform decisions about how to reconfigure and upgrade the network. For example, simply adding an extra link between certain racks might have the same effect for certain workloads as upgrading the entire network.
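A toy iteration of such a monitoring-control loop, assuming the controller already exposes per-link utilization and per-flow byte counters; the 80% threshold and the data structures are arbitrary choices for illustration.

```python
def rebalance(link_utilization, flows_on_link, alternate_path,
              threshold=0.8):
    """One iteration of a monitor-react loop: find links above the
    utilization threshold and move their heaviest flow elsewhere."""
    for link, util in link_utilization.items():
        if util > threshold:
            # The heaviest flow is the usual culprit behind
            # hash-collision imbalance; reroute it.
            flow = max(flows_on_link[link], key=lambda f: f["mbps"])
            print(f"link {link} at {util:.0%}: moving flow {flow['id']} "
                  f"({flow['mbps']} Mb/s) to {alternate_path(link, flow)}")

rebalance(
    link_utilization={"s1-s2": 0.93, "s1-s3": 0.40},
    flows_on_link={"s1-s2": [{"id": "f1", "mbps": 800},
                             {"id": "f2", "mbps": 90}],
                   "s1-s3": [{"id": "f3", "mbps": 300}]},
    alternate_path=lambda link, flow: "s1-s3",
)
```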

Quality of service

SDN enables several types of network QoS. A coarse form of "best-effort" QoS can be achieved by quickly detecting congestion and rerouting traffic around congested links. This is an active area of research, and several researchers have looked at how to reduce the time required to detect congestion and reroute flows [45].

Providing stronger QoS guarantees typically requires two switch features: (i) the ability to create and configure queues with dedicated bandwidth at each switch port and (ii) the ability to place flows into these queues [46]. The latter is easily available through the OpenFlow enqueue action, whereas the former has been largely achieved out-of-band. Configuration of queues is an essential part of the OF-Config specification [15]. An alternate approach proposed recently is to combine rate control at end hosts with OpenFlow-based control of flows in the physical network to give end-to-end guarantees [47].

In an environment where SDN is supported only at the network edge, QoS options are limited. A controller can employ explicit rate control (supported in Open vSwitch), but without knowledge of the internal topology it is difficult or impossible to avoid oversubscribing links. Alternately, the controller can explicitly set the IP type-of-service [48] and/or Ethernet priority code point [49] packet header fields for downstream routers and switches to act on.
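At the edge, the marking option boils down to setting two small header fields. The sketch below computes the relevant byte values (the six-bit DSCP occupies the upper bits of the IP type-of-service octet, and the three-bit priority code point sits in the 802.1Q tag); the particular class assignments are assumptions.

```python
def tos_from_dscp(dscp: int) -> int:
    """IP type-of-service octet with the 6-bit DSCP in its upper bits."""
    assert 0 <= dscp < 64
    return dscp << 2

def vlan_tci(pcp: int, vlan_id: int) -> int:
    """802.1Q tag control info: 3-bit priority code point,
    1-bit drop-eligible indicator (0 here), 12-bit VLAN ID."""
    assert 0 <= pcp < 8 and 0 <= vlan_id < 4096
    return (pcp << 13) | vlan_id

# Assumed policy: mark database-tier traffic "expedited forwarding"
# (DSCP 46) and give it Ethernet priority 5 on VLAN 100.
print(hex(tos_from_dscp(46)))             # 0xb8
print(hex(vlan_tci(pcp=5, vlan_id=100)))  # 0xa064
```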

IBM's SDN approach, architecture, and offerings

IBM SDN for Virtual Environments (SDN-VE) is an SDN controller that can control an overlay network, a fabric, or both together. The SDN-VE controller is based on the Linux Foundation's OpenDaylight Project [20]. Alongside the controller, IBM provides virtual switches designed for specific hypervisor platforms, including a virtual switch for VMware (IBM System Networking's Distributed Virtual Switch 5000V [50]) and a virtual bridge for Kernel-based VM (KVM), as part of SDN-VE.

This section describes SDN-VE, including its OpenDaylight base and its architectural framework, and explains how that framework integrates with cloud infrastructure such as OpenStack and operates over various underlying protocols such as OpenFlow.

OpenDaylight

IBM was a founding member of the OpenDaylight Project, a Linux Foundation project chartered to create a unified, open source SDN controller and other technologies. IBM's most significant contribution to date is the Open Distributed Overlay Virtual Ethernet (Open DOVE) network virtualization component, based on earlier research at IBM [51]. The project's first release includes support for a wide range of network protocols, including OpenFlow 1.0, OpenFlow 1.3, the Path Computation Element Communication Protocol (PCEP) [52], the Border Gateway Protocol (BGP) [10], and the Locator/ID Separation Protocol (LISP) [53], and services including Distributed Denial of Service (DDoS) mitigation and network virtualization. At its core, the OpenDaylight controller consists of Java**-based OSGi bundles [54] that allow individual components of the controller to be easily swapped in and out, combined with a Service Abstraction Layer (SAL) that insulates applications or services running on top of the controller from the protocol details of "drivers" that logically sit at the bottom of the controller.

Architectural framework

The SDN-VE architecture consists of: (i) the controller core, northbound APIs, and southbound driver plug-in interfaces; (ii) integrated network services and applications, which can be hosted within or outside the SDN-VE platform; and (iii) the underlying physical network. The SDN-VE platform has a layered architecture. Network services and applications sit at the top layer and interact with the controller via an abstract network API. The middle layer is an orchestration layer that interprets the abstract network API calls, converts them into network-specific tasks, and implements the tasks through one or more drivers at the bottom layer. The bottom layer is composed of a set of protocol drivers or interfaces enabling the controller to communicate with different devices as well as deployed environments in the network. Figure 4 shows the architecture of the SDN-VE controller.

SDN-VE controller core: The core provides a set of common functionality essential for building network services and applications.

Figure 4

The IBM SDN-VE architecture. The core platform consists of three layers: (i) the abstract, northbound network API, which is compatible with OpenStack Neutron, (ii) an orchestration layer (represented by "Core Services and Apps" and the "OpenDaylight-based SDN controller"), which maps abstract API calls to actions on devices and vice versa, and (iii) the southbound drivers for devices, including OpenFlow and DOVE. Provisioning platforms make use of the northbound APIs to configure the network. The DOVE driver provides overlay virtual networks (shown on the middle right) across both OpenFlow and IP networks (shown at the bottom), while the OpenFlow driver allows for control of the underlying fabric if it supports OpenFlow.


It includes an object modeling infrastructure to represent network elements such as switches, routers, and gateways, along with their configuration and interconnections, as well as a data store for maintaining configurations, topology, state, policies, and other critical data needed for controller operation. Further, it provides synchronization between controller instances deployed in a cluster for fault tolerance and scale-out. In addition, the core provides methods for unified topology discovery and maintenance.

Northbound APIs: The SDN-VE northbound API is a superset of Neutron [55], the OpenStack networking API. This has been deliberately designed to ease the integration of SDN with SDE, which is based on OpenStack, as discussed in the SDN integration section below. Currently, the SDN-VE northbound API provides support for layer-2 virtualization using networks, subnets, and ports as primitives. Further, it can perform layer-3 routing between subnets it manages. It also supports several Neutron extensions such as the port binding and provider network [56] extensions.

Southbound drivers: The SDN-VE platform offers extensibility through southbound drivers that control network devices. In particular, it uses (i) OpenFlow 1.0 and 1.3 drivers to manage switches that support those protocols and (ii) drivers for virtual switches in hypervisors to implement overlay virtualization using Distributed Overlay Virtual Ethernet (DOVE), including support for KVM [57] and VMware environments. Over time, we expect that SDN-VE will add additional drivers from both OpenDaylight and IBM efforts.

Orchestration layer: The orchestration layer is responsible for mapping requests received from the northbound layer to the appropriate set of devices and, consequently, drivers. A similar mapping from devices to northbound API concepts translates events generated in the network into notifications for applications that have registered for them.

Network applications integrated with SDN-VE: The SDN-VE platform is designed to be extensible through the addition of network applications, possibly even by third-party developers, but a variety of core applications come pre-integrated with SDN-VE:

• Logical networks: This service enables the creation of disparate logical networks over the shared physical infrastructure, along with provisioning of the necessary physical and virtual resources. It supports multi-tenancy with per-tenant virtual network configuration, policies, traffic isolation, statistics, and topology views.

• Connectivity service: Based on the grouping of hosts into logical networks, the connectivity service determines whether hosts should be allowed to communicate and, if so, through what middleboxes and/or network services.

• Scalable Per-Address RouTing Architecture (SPARTA): SPARTA is a scalable layer-2 forwarding solution designed to make efficient use of network fabrics. SPARTA, which is based on PAST [34], computes per-MAC-address, destination-rooted spanning trees and programs switches to forward traffic to each host in the network along its associated tree (see the sketch after this list). SPARTA supports arbitrary (mesh) topologies and balances flows across multiple paths, thus doing away with the limitations of traditional Ethernet networks with a single spanning tree. Further, SPARTA leverages access to larger layer-2-specific forwarding tables to support more than 100,000 hosts on some switches.

• Flow replication and redirection: This allows a higher-layer application to logically tap into flows between a given source and destination by having the traffic replicated to another destination. This service can be used for pushing selected flows to diagnostic tools. It can also test new services on live traffic before deployment.
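A rough illustration of the per-address idea behind PAST/SPARTA, not IBM's implementation: run a breadth-first search rooted at the switch where a host attaches, and point every other switch's forwarding entry for that host's MAC along the resulting tree. Different hosts get different roots, so load spreads across links that a single spanning tree would leave idle.

```python
from collections import deque

def destination_rooted_tree(adj, root):
    """BFS outward from the destination's attachment switch; each
    discovered switch's parent is its next hop toward that host."""
    next_hop, seen, queue = {}, {root}, deque([root])
    while queue:
        sw = queue.popleft()
        for neighbor in adj[sw]:
            if neighbor not in seen:
                seen.add(neighbor)
                next_hop[neighbor] = sw   # forward toward the root
                queue.append(neighbor)
    return next_hop

# An arbitrary mesh: fine here, unlike single-spanning-tree Ethernet.
adj = {"s1": ["s2", "s3"], "s2": ["s1", "s3", "s4"],
       "s3": ["s1", "s2", "s4"], "s4": ["s2", "s3"]}

# Host aa:bb:cc:00:00:01 attaches at s4: one tree just for it.
for switch, hop in sorted(destination_rooted_tree(adj, "s4").items()):
    print(f"{switch}: aa:bb:cc:00:00:01 -> forward toward {hop}")
```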

Service plane integration: As SDN is deployed, a key aspect is how the SDN controller integrates network services and appliances. SDN-VE supports three different kinds of network services. First, the controller can route traffic through an existing physical appliance, but if the appliance is not SDN-aware, the traffic may need to be proxied by an SDN gateway that removes any SDN-specific encapsulation and headers. Second, virtual appliances behave similarly, but can be dynamically instantiated as VMs on commodity servers and proxied by the hypervisor virtual switch. Finally, some network services can be implemented purely via OpenFlow rules installed by a software module running in the controller.

To orchestrate this integration, the SDN-VE northbound API extends the Neutron API to allow for the specification of network service and middlebox insertion. In particular, when using overlay routing, SDN-VE provides the ability to define and configure special virtual networks that contain network services, either physical or virtual, such as firewalls and load balancers. Further, these services can be chained and applied to fine-grained subsets of traffic, for example, only TCP traffic on port 80 between two virtual networks. The application of these chains, and of the services in them, is determined solely by properties of the traffic and the grouping of the source and destination into virtual networks, rather than by hosts' physical locations. IBM is in the process of proposing these service chaining features and APIs as additions to OpenStack Neutron.

IBM SDN integration with SDE architectures

OpenStack [58], an open source platform for cloud computing, is emerging as the platform of choice for building open clouds and forms the core of IBM's SDE. The OpenStack networking component, Neutron, provides ways to create basic layer-2 and layer-3 connectivity and is quickly expanding to support higher-level network services.


In particular, Neutron currently defines standard ways to specify that firewall, load balancer, and Virtual Private Network (VPN) services should be applied to certain traffic. Both OpenDaylight and IBM's SDN-VE provide plug-ins to interface with Neutron. Further, the extensibility of Neutron allows the differentiating features of SDN-VE to be presented to OpenStack users through Neutron extensions.
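For a sense of what the tenant-facing side of this integration looks like, the sketch below creates a network and subnet through Neutron's v2.0 REST API using the generic requests library; the endpoint, token, and names are placeholders, and error handling is elided.

```python
import requests

NEUTRON = "http://controller:9696/v2.0"    # placeholder endpoint
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN"}  # placeholder Keystone token

# Create an isolated tenant network...
net = requests.post(f"{NEUTRON}/networks", headers=HEADERS, json={
    "network": {"name": "web-tier", "admin_state_up": True},
}).json()["network"]

# ...and attach a subnet. How this maps onto overlay tunnels or
# OpenFlow rules is the plug-in's business, not the tenant's.
subnet = requests.post(f"{NEUTRON}/subnets", headers=HEADERS, json={
    "subnet": {"network_id": net["id"], "ip_version": 4,
               "cidr": "10.0.0.0/24"},
}).json()["subnet"]

print(net["id"], subnet["cidr"])
```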

While overlay network virtualization merely uses the existing physical network for IP transport, SDN-VE is also able to control the underlying fabric and coordinate actions between the underlay and overlay. For example, when SDN-VE provides a SPARTA fabric, the overlay component can ask for link-disjoint paths to avoid correlated failures. When it comes to interoperability between the fabric and/or overlay and existing physical networks, SDN-VE ensures the behavior at the edge is compatible with the layer-2 and layer-3 expectations of existing physical networks.

Early adoption

This section describes several early SDN deployments to give an idea of what real-world use cases exist and thus how others might start to deploy SDN.

Financial data distribution in real time

A provider of financial information had specific service-level agreement (SLA) requirements to deliver financial information to different subscribers while ensuring fairness, so that each subscriber received information at the same time. They used IBM's OpenFlow solution, including IBM switches and IBM's Programmable Network Controller, to move forwarding decisions off servers and firewalls and into an OpenFlow network. Defining flow rules that rewrote packets, added them to multicast groups, and forwarded them to specified ports, all within a few microseconds, ensured that they were able to meet the SLA requirement.

Application-driven network bandwidth allocation

A large European university deployed an SDN-based solution using IBM switches that enables network administrators to easily configure and manage virtual networks that control traffic on a per-flow basis according to application requirements. This helps to ensure more predictable performance for large transfers of data in the university's complex environment.

On-demand network monitoring via virtual taps

A large managed service provider deployed an SDN-based solution using IBM switches to provide the ability to "tap" any location in its data center network and send packets from the tap to a centralized location for analysis. This gave the provider the flexibility to deploy virtual monitoring points throughout the network to diagnose performance and other network issues without having to dedicate physical resources to monitoring when it is not in use.

Careful management of expensive and critical network resources

One of the first high-profile displays of the current generation of SDN solutions was Google's announcement that it was using OpenFlow to operate its "G-Scale" private inter-data-center WAN. The SDN features of this network allow traffic over the WAN to be scheduled and to take non-default paths, enabling Google to use this expensive resource more efficiently. In particular, they are able to persistently drive their WAN links to near-maximum utilization without ill effects, giving a 2–3 times improvement over standard practices [59–61].

Pervasive security enforcement

One of the major changes between traditional networking and SDN is the notion that SDN holistically manages the network in a logically centralized manner. An obvious area where this can help is security. Security appliances such as firewalls and intrusion detection/prevention systems have typically been point solutions, and ensuring that they sit at choke points in the network has been error-prone and performance-limiting. Instead, SDN allows a controller to dictate logically pervasive security and route traffic through the appropriate appliances or services. HP has demonstrated this approach with clients in the enterprise space in their presentation at the 2013 Open Networking Summit [62]. IBM offers similar features in SDN-VE.

Migration

Given the magnitude of the shift between traditional networking and SDN, it is difficult not to wonder how to introduce SDN into an existing network, allow for interoperation with legacy networks, and eventually migrate more of the network to SDN. While this paper elides a fuller discussion of these issues due to space limitations, we highlight some of them here.

SDN deployments can involve an overlay, control of physical devices, or both. In general, deploying an overlay alone requires less effort, as it does not usually require new hardware and instead uses existing layer-3 connectivity. Overlays do generally require new software, often in the form of a specialized virtual switch for hypervisors and a controller, but this is still less invasive than deploying a full SDN solution that controls physical network devices as well.

Interoperation with existing networks requires some planning, and while there are a variety of strategies, a common one is to use existing internetworking approaches to stitch SDN-enabled portions of the network to legacy networks. For example, having a layer-3 gateway that handles traffic to and from the SDN-enabled network as though it were a normal IP prefix or subnet reduces, but does not eliminate, the integration issues.


Alternately, the SDN-enabled network can speak typical interoperability protocols such as OSPF, BGP, or Intermediate System to Intermediate System (IS-IS). Recent research efforts have also looked at how to maximally benefit from a limited deployment of SDN by using traditional network approaches to ensure that all paths traverse at least one SDN-enabled device [63].

Lastly, deploying SDN can change the set of risks a network operator or architect must take into consideration. The SDN controller and the channels between it and network devices are new elements that need to be considered from both a security and a stability standpoint. In terms of stability, a variety of recent work [64–66] has applied model checking and software verification to SDN controllers and applications to find bugs and instabilities. As for security, if the SDN controller is attached to an already-secured management network, it should offer little additional attack surface; but in some cases, especially when controlling hypervisor virtual switches, the controller must connect to the data network, which makes it more critical to secure both the controller and the switches. Fortunately, most SDN protocols, including OpenFlow, allow for authenticated and encrypted communication.

Conclusion
SDN promises to drastically increase the flexibility and efficiency of computer networks. By separating the forwarding plane from the control plane and moving the control plane to logically centralized software running on commodity hardware, we can rapidly innovate and provide new network features while globally optimizing network behavior. This can already be seen in current solutions, which offer the ability to easily deploy multiple virtual networks on the same physical hardware while also providing simpler, higher-level abstractions of the network to ease configuration and management. Going forward, SDN will enable more efficient use of network resources through tight monitoring-control loops that perform adaptive traffic engineering, intelligent workload placement, and QoS guarantees.

IBM's SDN efforts can be seen in the SDN-VE product, which allows abstract virtual networks to be rapidly instantiated. Further, SDN-VE interoperates with OpenStack and provides the networking features of IBM's broader SDE initiative via an OpenStack Neutron plugin, which offers virtual networking APIs. It is capable of providing overlay virtual networks, managing the underlying fabric, or both in concert. It also includes a suite of network applications including middlebox interposition, flow redirection/duplication, and network-aware VM placement. Combined, these enable SDN-VE to easily manage networks for multi-tenant clouds. Going forward, SDN-VE will serve as a platform for IBM's future SDN efforts.

SDN remains a fertile ground for research and development. Remaining challenges include (i) the design and implementation of abstract network models that both allow for expressive policy and are easy to understand, (ii) the development of frameworks that allow SDN controllers to be extended with third-party modules while avoiding conflicts, and (iii) more intelligent orchestration of network resources to provide self-managing and self-tuning networks.

Acknowledgments
We thank Amitabha Biswas and Uday Shankar Nagaraj for their contributions to this paper, as well as the development teams that have made IBM's SDN products a reality, including the SDN-VE team and the open-source community surrounding OpenDaylight. We also appreciate the constructive feedback from our anonymous reviewers.

**Trademark, service mark, or registered trademark of OpenDaylight Project, Inc., Open Networking Foundation, Embrane, Inc., Sun Microsystems, or InfiniBand Trade Association in the United States, other countries, or both.

References
1. Software Defined Networking Creates a New Approach to Delivering Business Agility. [Online]. Available: http://www.gartner.com/newsroom/id/2386215
2. IEEE 802.1Q Virtual LANs (VLANs). [Online]. Available: http://www.ieee802.org/1/pages/802.1Q.html
3. [Online]. Available: http://www.etsi.org/technologies-clusters/technologies/nfv
4. V. Sekar, N. Egi, S. Ratnasamy, M. Reiter, and G. Shi, "Design and implementation of a consolidated middlebox architecture," in Proc. NSDI, 2012, p. 24.
5. Middlebox Communication Architecture and Framework, IETF RFC 3303. [Online]. Available: http://www.ietf.org/rfc/rfc3303.txt
6. V. Sekar, S. Ratnasamy, M. Reiter, N. Egi, and G. Shi, "The middlebox manifesto: Enabling innovation in middlebox deployment," in Proc. HotNets, 2011, p. 21.
7. J. Sherry, S. Hasan, and C. Scott, "Making middleboxes someone else's problem: Network processing as a cloud service," in Proc. SIGCOMM, Helsinki, Finland, 2012.
8. Spanning Tree Protocol (STP). [Online]. Available: http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/spantree.html
9. Transparent Interconnection of Lots of Links (TRILL). [Online]. Available: http://tools.ietf.org/html/rfc5556
10. Border Gateway Protocol (BGP). [Online]. Available: http://www.ietf.org/rfc/rfc1771.txt
11. Open Shortest Path First (OSPF). [Online]. Available: http://www.ietf.org/rfc/rfc2328.txt
12. D. Levin, A. Wundsam, B. Heller, N. Handigol, and A. Feldmann, "Logically centralized? State distribution trade-offs in software defined networks," in Proc. HotSDN, 2012, pp. 1–6.
13. A. Panda, C. Scott, A. Ghodsi, T. Koponen, and S. Shenker, "CAP for networks," in Proc. HotSDN, 2013, pp. 91–96.
14. T. Koponen, M. Casado, N. Gude, J. Stribling, L. Poutievski, M. Zhu, R. Ramanathan, Y. Iwata, H. Inoue, T. Hama, and S. Shenker, "Onix: A distributed control platform for large-scale production networks," in Proc. OSDI, 2010, pp. 1–6.
15. S. H. Yeganeh and Y. Ganjali, "Kandoo: A framework for efficient and scalable offloading of control applications," in Proc. HotSDN, 2012, pp. 19–24.
16. S. Schmid and J. Suomela, "Exploiting locality in distributed SDN control," in Proc. HotSDN, 2013, pp. 121–126.
17. OpenFlow Configuration and Management Protocol (OF-Config). [Online]. Available: https://www.opennetworking.org/sdn-resources/onf-specifications/openflow-config


18. M. Al-Fares, S. Radhakrishnan, B. Raghavan, N. Huang, and A. Vahdat, "Hedera: Dynamic flow scheduling for data center networks," in Proc. NSDI, 2010, p. 19.
19. T. Benson, A. Anand, A. Akella, and M. Zhang, "MicroTE: Fine grained traffic engineering for data centers," in Proc. CoNEXT, 2011, p. 8.
20. OpenDaylight: A Linux Foundation Collaborative Project. [Online]. Available: http://www.opendaylight.org/
21. Open vSwitch: An Open Virtual Switch. [Online]. Available: http://www.openvswitch.org
22. VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks, IETF draft draft-mahalingam-dutt-dcops-vxlan-04.txt.
23. NVGRE: Network Virtualization Using Generic Routing Encapsulation, IETF draft draft-sridharan-virtualization-nvgre-00.txt.
24. OpenFlow Switch Specification V1.1.0, Feb. 2011. [Online]. Available: http://www.openflow.org
25. M. Banikazemi, D. Olshefski, A. Shaikh, J. Tracey, and G. Wang, "Meridian: An SDN platform for cloud network services," IEEE Commun. Mag., vol. 51, no. 2, pp. 120–127, Feb. 2013.
26. vARMOUR. [Online]. Available: http://www.varmour.com/
27. Embrane. [Online]. Available: http://www.embrane.com/
28. IEEE 802.1aq Shortest Path Bridging (SPB). [Online]. Available: http://www.ieee802.org/1/pages/802.1aq.html
29. Link Layer Discovery Protocol (LLDP). [Online]. Available: http://www.ieee802.org/1/pages/802.1ab.html
30. A. Greenberg, J. R. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. A. Maltz, P. Patel, and S. Sengupta, "VL2: A scalable and flexible data center network," in Proc. SIGCOMM, 2009, pp. 51–62.
31. J. Mudigonda, P. Yalagandula, M. Al-Fares, and J. C. Mogul, "SPAIN: COTS data-center Ethernet for multipathing over arbitrary topologies," in Proc. USENIX NSDI, 2010, p. 18.
32. R. Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat, "PortLand: A scalable fault-tolerant layer 2 data center network fabric," in Proc. SIGCOMM, 2009, pp. 39–50.
33. C. Kim, M. Caesar, and J. Rexford, "Floodless in SEATTLE: A scalable Ethernet architecture for large enterprises," in Proc. SIGCOMM, 2008, pp. 3–14.
34. B. Stephens, A. Cox, W. Felter, C. Dixon, and J. Carter, "PAST: Scalable Ethernet for data centers," in Proc. CoNEXT, 2012, pp. 49–60.
35. J. H. Ahn, N. Binkert, A. Davis, M. McLaren, and R. S. Schreiber, "HyperX: Topology, routing, and packaging of efficient large-scale networks," in Proc. SC, 2009, p. 41.
36. J. Kim, W. J. Dally, S. Scott, and D. Abts, "Technology-driven, highly-scalable dragonfly topology," in Proc. ISCA, Jun. 2008, pp. 77–88.
37. A. Singla, C. Hong, L. Popa, and P. Godfrey, "Jellyfish: Networking data centers randomly," in Proc. NSDI, 2012, p. 17.
38. C. Raiciu, C. Pluntke, S. Barre, A. Greenhalgh, D. Wischik, and M. Handley, "Data center networking with multipath TCP," in Proc. HotNets, 2010, p. 10.
39. A. R. Curtis, J. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, and S. Banerjee, "DevoFlow: Scaling flow management for high-performance networks," in Proc. SIGCOMM, 2011, pp. 254–265.
40. NetFlow. [Online]. Available: www.cisco.com/go/netflow
41. sFlow. [Online]. Available: http://www.sflow.org
42. V. Mann, A. Vishnoi, and S. Bidkar, "Living on the edge: Monitoring network flows at the edge in cloud data centers," in Proc. COMSNETS, 2013, pp. 1–9.
43. D. Arifler, G. de Veciana, and B. L. Evans, "A factor analytic approach to inferring congestion sharing based on flow level measurements," IEEE Trans. Netw., vol. 15, no. 1, pp. 67–79, Feb. 2007.
44. V. Mann, A. Gupta, P. Dutta, A. Vishnoi, P. Bhattacharya, R. Poddar, and A. Iyer, "Remedy: Network-aware steady state VM management for data centers," in Proc. IFIP Netw., 2012, pp. 190–204.
45. J. Su, T. Kwon, C. Dixon, W. Felter, and J. Carter, "OpenSample: A low-latency, sampling-based measurement platform for SDN," IBM Research Technical Report, IBM Corporation, 2014.
46. V. Mann, A. Vishnoi, A. Iyer, and P. Bhattacharya, "VMPatrol: Dynamic and automated QoS for virtual machine migrations," in Proc. CNSM, 2012, pp. 174–178.
47. M. Mishra, P. Dutta, P. Kumar, and V. Mann, "On managing network utility of tenants in oversubscribed clouds," in Proc. IEEE MASCOTS, 2013, p. 221.
48. Type of Service in the Internet Protocol Suite. [Online]. Available: http://tools.ietf.org/html/rfc1349
49. IEEE 802.1p Ethernet Priority Code Point. [Online]. Available: http://www.juniper.net/techpubs/en_US//junos/topics/concept/cos-qfx-series-lossless-ieee8021p-priority-config-understanding.html
50. IBM Corporation, IBM Distributed Virtual Switch 5000V. [Online]. Available: http://www-03.ibm.com/systems/networking/switches/virtual/dvs5000v/
51. K. Barabash, R. Cohen, D. Hadas, V. Jain, R. Recio, and B. Rochwerger, "A case for overlays in DCN virtualization," in Proc. 3rd Workshop DC CAVES, pp. 30–37. [Online]. Available: http://www.itc23.com/fileadmin/ITC23_files/papers/DC-CaVES-m1569472213.pdf
52. Path Computation Element Communication Protocol (PCEP). [Online]. Available: http://tools.ietf.org/html/rfc5440
53. Locator/ID Separation Protocol (LISP). [Online]. Available: http://www.lisp4.net/
54. OSGi. [Online]. Available: http://www.osgi.org/
55. Neutron, OpenStack. [Online]. Available: https://wiki.openstack.org/wiki/Neutron
56. Quantum Provider Network Extensions. [Online]. Available: https://review.openstack.org/#/c/25843/
57. KVM. [Online]. Available: www.linux-kvm.org/page/Main_Page
58. OpenStack. [Online]. Available: http://www.openstack.org/
59. S. Jain, A. Kumar, S. Mandal, J. Ong, L. Poutievski, A. Singh, S. Venkata, J. Wanderer, J. Zhou, M. Zhu, J. Zolla, U. Hölzle, S. Stuart, and A. Vahdat, "B4: Experience with a globally-deployed software defined WAN," in Proc. SIGCOMM Conf., Hong Kong, 2013, pp. 3–14.
60. A. Vahdat, "SDN@Google: Why and how," presented at the Open Networking Summit, Santa Clara, CA, USA, 2013. [Online]. Available: http://www.opennetsummit.org/archives-april2013/
61. U. Hoelzle, "OpenFlow@Google," presented at the Open Networking Summit, Santa Clara, CA, USA, 2012. [Online]. Available: http://www.opennetsummit.org/archives/apr12/hoelzle-tue-openflow.pdf
62. B. Mayer, "Virtual application networks innovations advance software-defined network leadership," presented at the Open Networking Summit, Santa Clara, CA, USA, 2013. [Online]. Available: http://www.opennetsummit.org/pdf/2013/presentations/bethany_mayer.pdf
63. D. Levin, M. Canini, S. Schmid, and A. Feldmann, "Toward transitional SDN deployment in enterprise networks," presented at the Open Networking Summit, Santa Clara, CA, USA, 2013. [Online]. Available: http://www.opennetsummit.org/pdf/2013/research_track/poster_papers/final/ons2013-final22.pdf
64. A. Khurshid, X. Zou, W. Zhou, M. Caesar, and P. B. Godfrey, "VeriFlow: Verifying network-wide invariants in real time," in Proc. 10th USENIX Symp. NSDI, Apr. 2013, pp. 15–28.
65. P. Kazemian, G. Varghese, and N. McKeown, "Header space analysis: Static checking for networks," in Proc. 9th USENIX Symp. NSDI, Apr. 2012, p. 9.
66. M. Canini, D. Venzano, P. Perešíni, D. Kostić, and J. Rexford, "A NICE way to test OpenFlow applications," in Proc. 9th USENIX Symp. NSDI, Apr. 2012, p. 10.

Received August 23, 2013; accepted for publication September 18, 2013


Colin Dixon IBM Research Division, Austin Research Lab, Austin, TX 78758 USA ([email protected]). Dr. Dixon is a Researcher in the Data Center Networking Group at the IBM Austin Research Lab. His recent work has focused on software-defined networking in data centers, but more generally, his research interests span networks, distributed systems, operating systems, and security, with an emphasis on building real, secure, reliable, and efficient computer systems. He earned his M.S. and Ph.D. degrees in computer science from the University of Washington in Seattle in 2007 and 2011, respectively. Previously, he received a B.S. degree in mathematics and a B.S. degree in computer science from the University of Maryland in College Park in 2005. He has coauthored more than a dozen technical papers and articles as well as several patents and patent applications.

David Olshefski IBM Research Division, Thomas J. Watson Research Center, Yorktown Heights, NY 10598 USA ([email protected]). Dr. Olshefski is a Research Staff Member in the Systems Department at the IBM T. J. Watson Research Center. He received a B.S. degree in computer science from SUNY Albany, an M.S. degree in computer science from RPI at Hartford, and a Ph.D. degree in computer science from Columbia University. He has coauthored more than a dozen technical papers and articles as well as several patents.

Vinit Jain IBM Systems and Technology Group, Austin, TX 78758 USA ([email protected]). Mr. Jain is the Chief SDN Solutions Engineer for IBM Systems Networking and currently leads the integration of IBM SDN products into solutions. During the past 15 years with IBM, he has led the delivery of several innovations in networking and virtualization products. He is a Master Inventor with more than 50 issued patents. Mr. Jain has a B.Tech. degree in computer science from the Indian Institute of Technology, New Delhi, India, and an M.S. degree in computer science from the University of Maryland, College Park.

Casimer DeCusatis IBM Systems and Technology Group, Poughkeepsie, NY 12603 USA ([email protected]). Dr. DeCusatis is an IBM Distinguished Engineer and CTO for Strategic Alliances, System Networking Division. He received the M.S. and Ph.D. degrees from Rensselaer Polytechnic Institute (Troy, NY) in 1988 and 1990, respectively, and the B.S. degree magna cum laude in the Engineering Science Honors Program from the Pennsylvania State University (University Park, PA) in 1986. He is an IBM Master Inventor with more than 120 patents and has published more than 100 technical papers. He has received the IEEE Kiyo Tomiyasu Award, the Electronic Device News Innovator of the Year Award, the Mensa Research Foundation Creative Achievement Award, the Sigma Xi Walston Chubb Award, the Penn State Outstanding Scholar Alumni Award, and the IEEE/Eta Kappa Nu Outstanding Young Electrical Engineer Award (including a citation from the President of the United States and an American flag flown in his honor over the U.S. Capitol). He is editor of the Handbook of Fiber Optic Data Communication (now in its fourth edition), a member of the Order of the Engineer, and a Fellow of the IEEE, the Optical Society of America, and SPIE, the International Society of Optical Engineering.

Wes Felter IBM Research Division, Austin Research Lab, Austin, TX 78758 USA ([email protected]). Mr. Felter is a Researcher in the Data Center Networking Group at the IBM Austin Research Lab. His current research focus involves building scalable, low-cost network fabrics; in the past, he worked extensively on server and storage power management. He received a B.S. degree in computer science from the University of Texas at Austin in 2001.

John Carter IBM Research Division, Austin Research Lab, Austin, TX 78758 USA ([email protected]). Dr. Carter leads the Future Systems department of IBM Research - Austin, which includes teams investigating next-generation data center network architectures, enterprise mobile and cloud runtimes and services, and energy-efficient server, storage, and memory system design. His personal research has spanned many areas of systems research, including distributed systems (Munin and Khazana), multiprocessor computer architecture (Avalanche and S-COMA), advanced memory system design (Ultraviolet and Impulse), and networking (SPARTA). Dr. Carter received his Ph.D. degree from Rice University in 1993. He spent 15 years on the faculty of the School of Computing at the University of Utah and several years as Chief Scientist of MangoSoft before joining IBM Research. He has published over 60 conference and journal articles, has written several dozen patents, and is a Senior Member of the IEEE and ACM.

Mohammad Banikazemi IBM Research Division, Thomas J. Watson Research Center, Yorktown Heights, NY 10598 USA ([email protected]). Dr. Banikazemi is a Research Staff Member in the Systems Department at the IBM T. J. Watson Research Center. His research interests include cloud computing and software-defined networking. He has an M.S. degree in electrical engineering and a Ph.D. degree in computer science, both from the Ohio State University, where he was an IBM Graduate Fellow. He is an author or coauthor of several technical papers and patents. He is a Senior Member of the IEEE and ACM.

Vijay Mann IBM Research - India, New Delhi, India ([email protected]). Mr. Mann is a Senior Software Engineer and manager at IBM Research - India in New Delhi, where he currently manages the data center networking team. This is his tenth year at IBM Research - India. In the past, he worked on systems for portfolio analytics at Morgan Stanley, New York. He has more than 12 years of experience in enterprise systems development and research. He has authored more than 20 publications in well-known conferences and journals and has several filed and issued patents to his credit. He holds an M.S. degree from Rutgers University, New Jersey, and a B.E. degree from Malaviya National Institute of Technology (MNIT), Jaipur, India.

John M. Tracey IBM Research Division, Thomas J. Watson Research Center, Yorktown Heights, NY 10598 USA ([email protected]). Dr. Tracey is a Senior Technical Staff Member in the Systems Networking Department at the IBM T. J. Watson Research Center. He received his B.S. and M.S. degrees, both in electrical engineering, from the University of Notre Dame in 1990 and 1992, respectively. In 1996, after receiving his Ph.D. degree in computer science, also from Notre Dame, he joined IBM's Research Division as an Advisory Software Engineer. His primary professional focus is on system software for networking. He has contributed to multiple technical publications, patents, and IBM products and has been named an IBM Master Inventor. He is a member of the ACM and IEEE.

Renato Recio IBM Systems and Technology Group, Austin, TX 78758 USA ([email protected]). Mr. Recio is an IBM Fellow and System Networking CTO, responsible for IBM's software defined networking strategy and roadmap. For the past 15 years, he has played a leadership role in the strategy, architecture, and design of future IBM system I/O and networks. His recent contributions include Ethernet Virtual Bridging (EVB), Distributed Overlay Virtual Ethernet (DOVE) networks, EVB/DOVE adapter offload, and SDN service chaining. He has been a founding engineer and author of several input/output and network industry standards, including InfiniBand** (cluster network), IETF Remote Direct Data Placement (over TCP/IP), PCI IO Virtualization, Convergence Enhanced Ethernet (CEE), RoCE (RDMA over CEE), Fibre Channel over CEE, and IEEE 802.1Qbg EVB. He has filed over 200 patents, of which more than 100 have already issued. He has published dozens of refereed technical conference (e.g., IEEE and ACM) papers. He is an IEEE member and chaired the IEEE Data Center Converged and Virtual Ethernet Switching (DC CAVES) workshops.
