
Page 1: Master Thesis “Tools for a Multi-Controller SDN Architecture”
eprints.networks.imdea.org/1142/1/Tools_for_a_Multi_Controller_SDN_Architecture_2015.pdf

Master on Telematics Engineering, Academic Course 2014-2015

Master Thesis

“Tools for a Multi-Controller SDN Architecture”

Author: Sergio N. Tamurejo Moreno
Director: Carmen Guerrero Lopez

Leganés, 10th of September 2015

Keywords: Software Defined Networking, controller lock-in, SDN architecture, Logger, Profiler, Model Checker.

Summary: Software Defined Networking (SDN) is a recent paradigm based on the separation of the data plane and the control plane, which allows network traffic to be handled by means of software. The SDN ecosystem is fragmented due to the multitude of different controller platforms. This creates a danger of controller lock-in for SDN application developers and for SDN network operators. To tackle this problem, an innovative SDN architecture is presented whose aim is to execute SDN applications written for different controllers in a single network. This architecture raises an important problem: how to debug and analyze the SDN network. Therefore, a set of tools is designed and developed to overcome this limitation and guarantee the proper operation of the network. Three of these tools are detailed in depth: a logger which displays the messages that cross the architecture, a profiler that shows information about the applications and parameters of the network, and a model checker whose main task is to validate important properties of the network, such as ensuring that there are no black holes.


Tools for a Multi-Controller SDN Architecture
Sergio N. Tamurejo Moreno

Universidad Carlos III de Madrid
IMDEA Networks Institute

Email:[email protected]

Abstract—Software Defined Networking (SDN) is a recent paradigm based on the separation of the data plane and the control plane, which allows network traffic to be handled by means of software. The SDN ecosystem is fragmented due to the multitude of different controller platforms. This creates a danger of controller lock-in for SDN application developers and for SDN network operators. To tackle this problem, an innovative architecture is presented whose aim is to execute SDN applications written for different controllers in a single network. This architecture raises an important problem: how to debug and analyze the SDN network. Therefore, a set of tools is designed and developed to overcome this limitation and guarantee the proper operation of the network. Three of these tools are detailed in depth: a logger which displays the messages that cross the architecture, a profiler that shows information about the applications and parameters of the network, and a model checker whose main task is to validate important properties of the network, such as ensuring that there are no black holes.

I. INTRODUCTION

Software Defined Networking (SDN) is an emerging paradigm that stands as the key to solving the limitations and difficulties of current networks, which have the data and control planes vertically integrated [1]. The control plane (responsible for the logic that handles the traffic) and the data plane (whose task is forwarding traffic according to the logic established by the former) are bundled into the same forwarding element. Therefore, to introduce a protocol in this type of network, each device must be configured individually.

SDN allows the separation of the control plane from the data plane, giving rise to a logical entity called the controller, which implements in the underlying forwarding elements the logic necessary to handle the traffic of the network. Hence, the switches forward incoming packets according to the rules installed by that controller. If a switch does not have a rule for a packet, the packet is sent to the controller, and this logical entity notifies the device what to do with it (drop the packet or forward it).
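The table-miss behaviour just described can be sketched in a few lines of Python. This is a minimal, hypothetical model (the class names and rule format are illustrative, not any controller's real API): a switch matches packets against installed rules and, on a miss, hands the packet to the controller, which decides and installs a rule.

```python
# Minimal sketch of the switch/controller interaction described above.
# All names here are illustrative, not a real controller API.

class Switch:
    def __init__(self, controller):
        self.flow_table = {}          # match field -> action
        self.controller = controller

    def receive(self, packet):
        action = self.flow_table.get(packet["dst"])
        if action is None:            # table miss: ask the controller
            action = self.controller.packet_in(self, packet)
        return action

class Controller:
    """Installs a forwarding rule for known hosts, drops the rest."""
    def __init__(self, host_ports):
        self.host_ports = host_ports  # dst -> output port

    def packet_in(self, switch, packet):
        port = self.host_ports.get(packet["dst"])
        action = f"output:{port}" if port is not None else "drop"
        switch.flow_table[packet["dst"]] = action   # install the rule
        return action

ctrl = Controller({"10.0.0.2": 2})
sw = Switch(ctrl)
print(sw.receive({"dst": "10.0.0.2"}))  # miss -> controller -> "output:2"
print(sw.receive({"dst": "10.0.0.2"}))  # hit: served from the flow table
print(sw.receive({"dst": "10.0.0.9"}))  # unknown host -> "drop"
```

After the first packet, subsequent packets to the same destination are handled entirely in the data plane, which is the point of rule installation.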

The separation between both planes can be realized by means of a well-defined programming interface between the forwarding elements and the SDN controller. The most notable example of such an API is OpenFlow [2]. The main advantages the SDN paradigm provides are flexibility, simpler network management, and faster evolution and innovation of the network [3]. Fig.1 represents the infrastructure of an SDN network.

Fig. 1. Software Defined Network Infrastructure

The impact of this new technology has been of great importance. It began as an academic experiment and has become (in just a few years) one of the paradigms best embraced by industry. In fact, some of the biggest companies, such as Google, Facebook, Yahoo, Microsoft, Verizon and Deutsche Telekom, founded the Open Networking Foundation (ONF) [11] with the aim of promoting and adopting SDN through open standards development.

Nowadays, the SDN paradigm is considered a research spotlight, and a lot of work is being carried out by industry and academia. This effort of the networking community has resulted in a wide variety of open and closed-source controllers, e.g., OpenDaylight [4], Ryu [5], POX [6], NOX [7], etc., each with its own programming model, execution model and capabilities. This has led to a fragmented control plane that forces network application developers, as well as network operators, to choose one control platform, which is a hard decision since all have their strengths and weaknesses.

Controller platform diversity would be an asset if porting SDN applications among different controllers were an easy task. However, this is not the case, and the danger of controller lock-in arises. To tackle the problem and to enrich the flexibility of using SDN in a network, a platform is presented whose main goal is to support the execution of SDN applications across different controllers in the same network. This platform allows multiple controllers, called 'Client Controllers', to be deployed inside a single network, each of them running its pertaining network applications. These Client


Controllers communicate with a unique controller (known as the 'Server Controller'), responsible for the communication with the underlying Network Elements. This platform is denominated the 'NetIDE architecture'; it has been designed by the consortium of the FP7 EU Project NetIDE [8] and is described in detail in [9].

Nevertheless, the NetIDE architecture presents one important issue that must be solved: how a network deployed using this platform can be debugged, since a bug in an SDN network application could be an Achilles heel for the performance of the whole network. Therefore, the main contribution of this paper is a set of tools which is completely integrated into the platform and whose purpose is to diagnose, debug and troubleshoot an SDN network, ensuring its proper operation.

The following section II introduces the design principles of the architecture. Section III presents a toolset for the proposed architecture. Section IV provides an overview of the state of the art. In section V, open issues and relevant future directions are discussed. Section VI summarizes the main conclusions.

II. NETIDE ARCHITECTURE

The NetIDE Architecture is depicted in Fig.2 and is broken down into three layers: (i) the Integrated Development Environment (IDE), (ii) the application repository, which stores simple modules and composed applications, and (iii) the Network Engine, where controllers and applications are executed.

Fig. 2. The NetIDE Architecture

The IDE is basically composed of several code editors that support different network programming languages, and a particular graphical editor to create topologies. It reads modules and applications from the repository and can also do the reverse path (writing applications and modules to it). Finally, the IDE is able to deploy those applications on the Network Engine to be executed. Similar to the architecture, the Network Engine is structured in layers, specifically two. In the upper layer are the Client Controllers, which run their respective modules or applications. In the bottom layer, a unique Server Controller is responsible for talking directly to the underlying data plane.

The main challenge lies in the communication between the Client Controller layer and the Server Controller layer. The upper one must be able to transmit all the messages generated by the network applications to the bottom layer and vice versa. This task is not trivial, because the clients' South-bound Interfaces (SBI) do not match the server's North-bound Interfaces (NBI). Hence, there is a need for an intermediate protocol which makes the intercommunication between the above layers feasible. To solve this issue, we develop two different modules: the former, called the 'Backend', integrated in each of the Client Controllers, and the latter, denominated the 'Shim layer', incorporated into the Server Controller. These modules speak the same protocol, and the interaction becomes possible.

A. Intermediate Protocol

The Intermediate Protocol covers three essential needs for the intercommunication between the Client Controller layer and the Server Controller layer: (i) transport control messages between the Backend and Shim layers, e.g., to start up/take down an application or a particular module, fixing a unique identifier for each of them; (ii) carry event and action messages between the Backend and Shim layers and properly demultiplex those messages to the right modules or applications according to the unique identifier fixed previously; and finally, (iii) encapsulate messages specific to a particular SBI protocol version (e.g., OF 1.X, NETCONF, etc.) towards the Client Controllers with proper information to recognize these messages as such.
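Need (ii), demultiplexing by module identifier, can be sketched as follows. This is an illustrative model only (the `Backend` class, registry and handler names are invented for this sketch and are not the NetIDE implementation):

```python
# Sketch of need (ii): a registry maps the unique module identifiers to
# the running application modules, and each incoming message is
# dispatched to the right one. Names are illustrative only.
class Backend:
    def __init__(self):
        self.modules = {}                  # module_id -> handler callable

    def register(self, module_id, handler):
        self.modules[module_id] = handler  # identifier fixed at start-up

    def demultiplex(self, module_id, message):
        handler = self.modules.get(module_id)
        if handler is None:
            raise KeyError(f"no module registered under id {module_id}")
        return handler(message)

backend = Backend()
backend.register(1, lambda msg: f"firewall handled {msg}")
backend.register(2, lambda msg: f"switch app handled {msg}")
print(backend.demultiplex(2, "packet_in"))  # routed by module_id
```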

In the first prototypes of the Network Engine, we leveraged the Pyretic protocol [10]. Although it was very useful to accomplish some preliminary proofs of concept, its current version limits the functionality of the network applications running on top of the Network Engine, because this protocol only supports a restricted subset of OpenFlow v1.0 messages. Therefore, a new intermediate protocol was implemented from scratch. This new protocol uses TCP as transport and encapsulates the payload with the following header:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  netide_ver   |     type      |            length             |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                              xid                              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                           module_id                           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+                          datapath_id                          +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


Where netide_ver is the version of the NetIDE protocol, length is the total length of the payload in bytes, and type indicates the type of the message (e.g., NETIDE_HELLO, NETIDE_OPENFLOW, etc.). datapath_id is a 64-bit field that uniquely identifies the Network Elements. module_id is a 32-bit field that uniquely identifies the application modules running on top of each Client Controller. The composition mechanism in the core leverages this field to implement the correct execution flow of these modules. Finally, xid is the transaction identifier associated with each message. Replies must use the same value to facilitate the pairing.
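A header of this shape can be packed and parsed with Python's struct module. The sketch below assumes 8-bit version and type fields and a 16-bit length, together with the 32-bit xid and module_id and the 64-bit datapath_id described above; network byte order and the helper names are assumptions of this sketch, not the NetIDE wire specification.

```python
# Packing/parsing a NetIDE-style header with the struct module.
# Field widths: 8-bit ver and type, 16-bit length, 32-bit xid and
# module_id, 64-bit datapath_id; '!' selects network byte order.
import struct

HEADER_FMT = "!BBHIIQ"                     # ver, type, length, xid, module_id, dpid
HEADER_LEN = struct.calcsize(HEADER_FMT)   # 20 bytes

def pack_msg(ver, msg_type, xid, module_id, datapath_id, payload):
    header = struct.pack(HEADER_FMT, ver, msg_type, len(payload),
                         xid, module_id, datapath_id)
    return header + payload

def unpack_msg(data):
    ver, msg_type, length, xid, module_id, dpid = struct.unpack(
        HEADER_FMT, data[:HEADER_LEN])
    return {"netide_ver": ver, "type": msg_type, "length": length,
            "xid": xid, "module_id": module_id, "datapath_id": dpid,
            "payload": data[HEADER_LEN:HEADER_LEN + length]}

msg = pack_msg(1, 0x11, xid=42, module_id=7,
               datapath_id=0xFE, payload=b"\x04\x00\x00\x08")
print(unpack_msg(msg)["module_id"])        # 7
```

A receiver would read HEADER_LEN bytes from the TCP stream, parse the header, and then read exactly `length` further bytes of payload, which is how the length field delimits messages on a stream transport.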

III. TOOLS

In the architecture exposed before, there exist two well-differentiated classes of tools. The first one groups the set of tools whose purpose is to provide resources and facilities to the user for designing, creating and deploying an SDN network, whilst the second group aims to debug, inspect and diagnose the deployed network. The first class encompasses three essential tools, which are summarized in Table I:

Tool                  Functionality
--------------------  ------------------------------------------------------
Topology Editor       Graphical editor to create and modify network
                      topologies.
Network Programming   The editors to implement network applications support
Language Editors      several programming languages: Python for the Pyretic,
                      POX and Ryu controllers; C/C++ for the NOX controller;
                      or Java for the Floodlight, Beacon and OpenDaylight
                      controllers.
Deployment            User interface in which developers can choose a
Configuration         previously specified topology model, and users can
                      choose a platform and the path of the network
                      application code for each controller in the topology
                      model.

TABLE I. TOOLS FOR DESIGNING AND DEPLOYING AN SDN NETWORK

The goal of the diagnosing set of tools is to provide a framework that enables the developer to systematically test, profile and troubleshoot the SDN network, besides tuning the network applications. The first efforts to achieve this objective in the project are focused on identifying the different tools needed in Software Defined Networking (SDN) environments. Currently, we have identified the following categories: Logger, Simulator, Model Checker, Resource Manager, Garbage Collector and Profiler. A taxonomy of the tools, according to their functionality and execution nature, is presented in Table II.

The Execution Nature field can only take two values, Runtime or Off-line, depending on the moment the tool is executed. If the tool is executed when the SDN network is deployed and the Network Engine is running, then we refer to it as a Runtime tool. However, if the tool needs to be executed without the Network Engine working, then the tool is categorized as Off-line. The Functionality field provides a brief description of the utility of each tool.

In subsections III-A, III-B and III-C, a detailed analysis is made of the Logger, Profiler and Model Checker tools, respectively.

Tool              Functionality                                    Execution Nature
----------------  -----------------------------------------------  ----------------
Logger            Captures the messages exchanged between the      Runtime
                  data plane and the control plane to print them
                  on a screen.
Simulator         Provides an environment to emulate network       Off-line
                  events to allow for quick prototyping and
                  performance prediction.
Model Checker     Proves the correctness of a network application  Off-line
                  for a given network topology and an arbitrarily
                  large number of packets.
Resource Manager  Handles and globally optimizes the usage of      Runtime
                  network resources that are under the control of
                  the Network Engine.
Garbage           Optimizes the performance of the network         Runtime
Collector         resources.
Profiler          Shows different statistics about the Network     Runtime
                  Elements, as well as the execution time of the
                  network applications and their memory
                  consumption.
Debugger          Provides wide visibility of the network in real  Runtime
                  time and also provides post-mortem analysis.

TABLE II. TOOLS FOR DIAGNOSING AND DEBUGGING AN SDN NETWORK

A. Logger

This tool captures the messages exchanged between the data plane (Network Elements) and the control plane (Client Controller, where a network application is executed) and prints these messages on a screen, allowing the user to visualize and filter them according to the type of message. It is based on RabbitMQ (a description of this technology is detailed in the Appendix). The Shim layer is modified to include a module that queues the messages required by the tool. In this specific case, those messages are all the ones that the Shim layer intercepts. The software architecture of the tool and its integration within the overall architecture are detailed as follows and illustrated in Fig.3.

Fig. 3. Logger integration into the NetIDE Architecture


The LogPub module, integrated inside the Shim layer, is responsible for intercepting all the messages that arrive at the Shim layer (i.e., the messages exchanged between the Network Elements and the Client Controller), replicating them and sending them to the broker element. The second module, called Logger, retrieves the messages kept in the broker and prints them in a terminal.

Fig.4 represents two instances of the second module, Logger (1) and Logger (2), which consume the messages kept in the broker and sent by the LogPub. Firstly, the first module establishes a connection with the RabbitMQ server (broker) and captures the messages in order to send them to the Exchange element (called X in Fig.4). This element is in charge of deciding what to do with the messages. In our case, it has been configured to send the messages to different queues depending on the type of message that the user wants to visualize. If there is no consumer (an instance of the Logger module running), there will not be any queue, hence the Exchange element will drop the messages received. Finally, the second module (Logger) retrieves the messages from the queues and prints them in a terminal. The user has the possibility to choose what kind of message to observe. There are two types of messages, in or out. The former are messages sent by the Network Elements to the Client Controller, and they are represented in green. The latter are the messages which do the reverse path (from the controller to the network) and are represented in yellow.

Fig. 4. Communication between LogPub and Logger
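The routing behaviour described above (per-type queues that exist only while a consumer is bound, with unrouted messages dropped) can be mimicked with a small in-process sketch. The real tool talks to a RabbitMQ broker over AMQP; this stand-in only mirrors the routing logic, and the `Exchange` class and method names are invented for illustration:

```python
# In-process stand-in for the exchange behaviour of the Logger: messages
# are routed to per-type queues only while a consumer is bound,
# otherwise they are dropped, as in the RabbitMQ setup described above.
from collections import deque

class Exchange:
    def __init__(self):
        self.queues = {}              # message type -> queue of messages

    def bind(self, msg_type):         # a Logger instance starts consuming
        return self.queues.setdefault(msg_type, deque())

    def publish(self, msg_type, message):
        queue = self.queues.get(msg_type)
        if queue is not None:         # no consumer bound -> message dropped
            queue.append(message)

ex = Exchange()
in_queue = ex.bind("in")              # consume switch-to-controller traffic
ex.publish("in", "packet_in from s1")
ex.publish("out", "flow_mod to s1")   # dropped: nobody consumes 'out'
print(list(in_queue))                 # ['packet_in from s1']
```

In RabbitMQ terms, this corresponds to a direct exchange with one queue bound per message type the user chose to visualize.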

To simplify the complexity and limit the development time of the tool, we started the practical work with a restricted version of the architecture. In particular, we made the following assumptions: (i) a single Client Controller scenario; (ii) the development has been performed for the Ryu Shim layer.

Fig.5 represents the outcomes obtained when the Logger tool is running. The image displays the messages sent from the data plane to the controller (in green) and from the controller to the data plane (coloured in yellow). The structure of the messages depicted in the image is the following: the first field is a timestamp to know when the packet was sent; the next piece of information provided in the message is the direction of the message (to the controller or to the data plane); the last one is the content of the message.

Fig. 5. Logger outcomes

B. Profiler

The Profiler tool is used to analyze programs, providing the user with relevant statistical information, such as the execution time of various parts of the program or the RAM memory consumed.

The Profiler follows two different approaches. The first one (Application Profiler) is based on the classical behavior of the tool (described above). It is able to provide the execution time of the functions of the analyzed software. Two options exist: analyze only one SDN application, or analyze all the applications being executed plus the controller framework. Moreover, the Application Profiler provides the memory consumption of the Client Controller and the executed applications. The second approach, called the Network Profiler, focuses on the network and offers statistics about the forwarding elements (number of rules installed by the controller and number of removed rules).

1) Application Profiler: To accomplish the development of the first approach, two assumptions have been established: (i) the Client Controller is a Ryu controller; (ii) only one application is executed. Moreover, we have used two different Python modules: one called cProfile for obtaining the execution times of the methods, and another denominated memprof for getting the memory usage of the profiled functions. The logical process for obtaining the execution times of the different methods is depicted below in Fig.6.

While the SDN application is being executed, we gather relevant data. Once the execution has stopped, the information contained in the binary file is dumped to a text file. This new file is processed and read with software for statistical computing and graphics. Finally, three graphics are generated to show the outcomes. The procedure to get the memory usage is identical.
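The cProfile part of this pipeline can be illustrated with a minimal, self-contained example (the profiled function `busy` and the file name `app.prof` are placeholders, not the thesis's actual application): profile a function, dump the raw stats to a binary file, then render them as text ready for further statistical processing.

```python
# Minimal example of the cProfile-based step described above: profile a
# function, dump the raw stats to a binary file, then render them as a
# human-readable text report with pstats.
import cProfile
import io
import pstats

def busy():
    # stand-in for the SDN application work being profiled
    return sum(i * i for i in range(100000))

profiler = cProfile.Profile()
profiler.enable()
busy()
profiler.disable()

profiler.dump_stats("app.prof")               # binary stats file, as in the text

stream = io.StringIO()
stats = pstats.Stats("app.prof", stream=stream)
stats.sort_stats("cumulative").print_stats()  # functions sorted by total time
report = stream.getvalue()
print("busy" in report)                       # the profiled function appears
```

The text report (per-function call counts and cumulative times) is what a statistics package would then turn into the histogram and per-function plots described below.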

The outcomes produced are the following: a text file where the execution time of each analyzed function is specified, a graphic which contains the same information but in a more visual way, and a histogram of the number of functions that have a specific execution time. In the case of memory consumption, a graphic is shown that illustrates the memory consumed by the controller plus the executed application.


Fig. 6. Application Profiler flowchart

Fig.7 and Fig.8 are graphics obtained with the Profiler tool after gathering data from the execution of the controller plus the SDN application.

Fig. 7. Function histogram (number of functions vs. execution time in seconds)

Fig.7 is a histogram and represents the number of functions that share the same execution time. For instance, there are approximately forty-two functions with execution times close to zero seconds, and about ten functions take longer than 300 seconds.

Fig.8 displays the execution time of the sixty functions that consume the most time when executed. The abscissa axis specifies the names of those functions.

Fig. 8. Execution time of functions (function name vs. execution time in seconds)

In both figures, there is a legend on the right side. It specifies the units and the range of values that the ordinate axis can take. With these outcomes, the user can know whether or not a code optimization is necessary in the SDN applications that are executed in the SDN network.

2) Network Profiler: In this approach, it is assumed that the Server Controller is a Ryu controller. Similar to the Logger, a module denominated ProfPub is added to the Shim layer, which captures the messages denominated flow mod. Those messages are sent by the controller to remove or install rules in the switches. Once captured, the messages are parsed, and it is possible to identify whether the content is for installing a rule or for removing it. Therefore, two counters are implemented: one for installed rules and the other for removed rules. With this procedure, we know the number of rules installed and removed in the switches.

A second module, called Network Profiler, accesses the aforementioned counters, reads them and prints the information on a screen. This method offers outcomes in real time while the SDN network is running.
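The counting logic just described can be sketched as follows. The `ProfPub` class and the dictionary-based message format are a simplified stand-in for the real module and the OpenFlow wire format; the two command codes are the OpenFlow 1.0 values for adding and deleting a flow.

```python
# Sketch of the ProfPub counting logic described above: each captured
# flow_mod is classified by its command field and one of two counters
# is incremented. The message format is a simplified stand-in.
OFPFC_ADD, OFPFC_DELETE = 0, 3        # OpenFlow 1.0 flow_mod command codes

class ProfPub:
    def __init__(self):
        self.installed = 0
        self.removed = 0

    def capture(self, flow_mod):
        if flow_mod["command"] == OFPFC_ADD:
            self.installed += 1       # rule installed in a switch
        elif flow_mod["command"] == OFPFC_DELETE:
            self.removed += 1         # rule removed from a switch

prof = ProfPub()
for cmd in (OFPFC_ADD, OFPFC_ADD, OFPFC_DELETE):
    prof.capture({"command": cmd})
print(prof.installed, prof.removed)   # 2 1
```

The second module would simply read `prof.installed` and `prof.removed` periodically and print them, which is why the outcomes are available in real time.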

C. Model Checker

Model checking is an automatic verification technique for finite-state concurrent systems. It automatically provides complete proofs of correctness. A model checker systematically explores all possible ways to execute a program (as opposed to testing, which only executes one path depending on the input data).

The process for model-checking software consists of three main steps:

• Modelling: converts the system into a formalism, for example, a state diagram. It is necessary to identify all the system states and the transitions from one to another.

• Specification: determines the correctness properties to be checked by the tool. For instance: no loops in an SDN network, or the absence of black holes.


• Verification: validates the correctness properties in the modelled system and reports the outcomes obtained (whether or not the property is fulfilled in the system).
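The three steps above can be illustrated on a toy example. The model is a static forwarding table per switch (modelling), the property is "no black hole", i.e. every packet injected at any switch eventually reaches a host (specification), and an exhaustive walk over all destinations and injection points checks it (verification). The topology, tables and function names are invented for this sketch; a real checker like NICE works on the application code itself, not a fixed table.

```python
# Toy illustration of the modelling/specification/verification steps:
# exhaustively check that no (injection point, destination) pair leads
# to a black hole (missing rule) or a forwarding loop.
def check_no_black_hole(forwarding, hosts, destinations):
    """Return a counterexample (switch, dst), or None if the property holds."""
    for dst in destinations:
        for start in forwarding:                 # every injection point
            node, visited = start, set()
            while node not in hosts:
                if node in visited:              # loop: packet never delivered
                    return (start, dst)
                visited.add(node)
                node = forwarding[node].get(dst) # next hop, or None
                if node is None:                 # black hole: no rule for dst
                    return (start, dst)
    return None                                  # property verified

forwarding = {"s1": {"h1": "h1", "h2": "s2"},
              "s2": {"h1": "s1", "h2": "h2"}}
print(check_no_black_hole(forwarding, {"h1", "h2"}, ["h1", "h2"]))  # None

forwarding["s2"] = {"h1": "s1"}                  # s2 loses its rule for h2
print(check_no_black_hole(forwarding, {"h1", "h2"}, ["h1", "h2"]))  # ('s1', 'h2')
```

Because every case is enumerated, a `None` result is an absolute guarantee for this model, which is exactly the advantage (and, as the state space grows, the cost) discussed next.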

The main advantage of this tool is that all possible cases are contemplated, so the outcomes obtained are absolute: if the property "no loops in the system" has been validated, the network will never have a loop. Therefore, we consider it a necessary tool for an SDN network, because it allows important properties of a network to be validated before deploying it.

The most pronounced disadvantage of this technique is the State Space Explosion problem: the state space can be huge, so validating a property involves high complexity and is inordinately costly in computational terms.

This issue always appears in SDN networks, because the behavior of an SDN network application depends on many factors: the end-host applications sending and receiving traffic, and the switches handling the incoming packets, installing rules and generating events (which are sent to the controller). All of them affect the program running on top of the controller, leading to endless possibilities (states with transitions). Consequently, we consider that developing a Model Checker from scratch for validating properties in SDN networks is not feasible.

The solution adopted was to study in depth an existing Model Checker and integrate the tool into this project. Very few such tools have been developed. Nevertheless, there is an interesting open-source project called NICE which provides a model checker for the old NOX controller [12].

NICE addresses the State Space Explosion issue by means of a Symbolic Execution Engine. Its authors consider that the event handlers of the SDN application are the key to exploring the whole state space. These handlers must be triggered in order to exercise all the different code paths. Hence, the first option is to explore all possible inputs for triggering them (which does not scale), and the second option is to identify the packets that trigger these events (symbolic packets) and feed the network with them. This task is performed by the Symbolic Execution Engine. Fig.9 represents the structure of the NICE tool.

Fig. 9. NICE tool structure

Given an SDN network application, a network topology and the correctness properties to be validated, ‘NICE performs a state space search and outputs traces of property violations’ [12].

Research is being carried out to adapt NICE to other controller platforms. The work accomplished so far has been the creation of a simple SDN application for the old NOX controller and the execution of the NICE tool with this input. The future work is to create a UML diagram of the entire tool in order to understand properly how it works and find out what the next step is to migrate NICE to the Ryu controller.

IV. RELATED WORK

One of the main purposes of this project is to allow the composition of SDN network applications with software modules implemented for different SDN controllers. In this context, few solutions have been proposed. Two of the most relevant contributions in this field are FlowVisor and OpenVirteX [13], [14], which are network hypervisors that split the traffic into slices. Each slice manages its traffic by means of a network application. Both of them represent an important advance, improving the security and flexibility of the network due to the integration of various controllers operating at the same time. Moreover, these projects allow multiple tenants to share the network. However, the network applications of each slice cannot cooperate to process the same traffic, so it is similar to having different networks, each managed by a unique controller.

Flowbricks [15] is another SDN hypervisor that integrates heterogeneous controllers using only the standardized controller-to-switch communication protocol. The most flagrant shortcoming of this project is that it only runs on emulated environments and not over standard network hardware.

Debugging and troubleshooting have been important subjects in computing infrastructures, parallel and distributed systems, embedded systems, and desktop applications [1]. The two predominant strategies applied to debug and troubleshoot are runtime debugging (e.g., gdb-like tools) and post-mortem analysis (e.g., tracing, replay and visualization). Despite the constant evolution and the emergence of new techniques to improve debugging and troubleshooting, there are still several open avenues and research questions [16]. Debugging and troubleshooting in networking is at a very primitive stage. In traditional networks, engineers and developers have to use tools such as ping, traceroute, tcpdump, nmap, netflow, and SNMP [17] statistics for debugging and troubleshooting. Debugging a complex network with such primitive tools is very hard. SDN offers some hope in this respect. The hardware-agnostic, software-based control capabilities and the use of open standards for control communication can potentially make debugging and troubleshooting easier. The flexibility and programmability introduced by SDN are indeed opening new avenues for developing better tools to debug, troubleshoot, verify and test networks.



Therefore, another of the main objectives of this project is the provision of a varied toolset for debugging, diagnosing and troubleshooting the SDN network. We briefly describe and analyse similar efforts in this area, such as NetSight [18], which includes several tools: an interactive network debugger, a live network monitor, a logger and a hierarchical network profiler. NetSight assembles packet histories using postcards, event records sent out whenever a packet traverses a switch. This approach decouples the fate of the postcard from the original packet, helping to troubleshoot packets lost down the road, unlike approaches that append to the original packet. Each postcard contains the packet header, switch ID, output port, and current version of the switch state. Combining topology information with the postcards generated by a packet, NetSight can reconstruct the complete packet history: the exact path taken by the packet along with the state and header modifications encountered at each hop along the path. NetSight is developed in C++ and focuses on the following controller frameworks: NOX [7], POX [6] and RipL-POX [19], but it does not specify the OF versions supported.
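The postcard-based reconstruction described above can be illustrated with a minimal sketch (this is not NetSight's actual code; the record fields, topology map and function names are hypothetical): given a set of out-of-order postcards for one packet and a topology that maps each (switch, output port) to the next switch, the path is recovered by finding the hop that no link points to and then following the links.

```python
# Hypothetical sketch of packet-history reconstruction from postcards.
# Field names and the topology encoding are illustrative assumptions.
from collections import namedtuple

Postcard = namedtuple("Postcard", ["switch_id", "out_port", "header", "version"])

# Assumed topology: (switch_id, out_port) -> next switch on that link.
TOPOLOGY = {
    ("s1", 2): "s2",
    ("s2", 3): "s3",
}

def reconstruct_path(postcards):
    """Order one packet's postcards into the exact path it took."""
    by_switch = {p.switch_id: p for p in postcards}
    # The first hop is the switch that no other postcard's link points to.
    downstream = {TOPOLOGY.get((p.switch_id, p.out_port)) for p in postcards}
    start = next(p for p in postcards if p.switch_id not in downstream)
    path, current = [], start
    while current is not None:
        path.append(current)
        nxt = TOPOLOGY.get((current.switch_id, current.out_port))
        current = by_switch.get(nxt)
    return [p.switch_id for p in path]

# Postcards arrive in arbitrary order; the path is still recovered.
cards = [
    Postcard("s2", 3, "h1->h2", 7),
    Postcard("s1", 2, "h1->h2", 4),
    Postcard("s3", 1, "h1->h2", 2),
]
print(reconstruct_path(cards))  # ['s1', 's2', 's3']
```

Note that each postcard also carries the switch-state version, so the same ordering lets a debugger replay which flow-table state each hop applied to the packet.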

NICE [12] is a powerful model checker whose objective is to uncover bugs in SDN network applications. To reach this goal, the authors propose a tool that combines model checking and symbolic execution. Given an SDN network application, an SDN network topology and the properties to be checked (e.g., ensuring there are no loops in the network), NICE performs a state-space search and shows traces of property violations.

The work in [20] proposes another model checker for SDN networks. The correctness properties it aims to validate are: loop freedom (a packet does not loop back to a switch it has already visited) and no invalid drop (a packet is not dropped due to an invalid controller update to some switch). The authors use an abstraction method to address the scalability problem. This technique keeps only one packet in the system and provokes network updates, injecting packets (whose header values are arbitrary) directly into the ports of the switches.
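The loop-freedom property checked in [20] reduces, for a fixed packet class, to detecting whether the switches' forwarding rules send a packet back to a switch it has already visited. The following sketch (our own illustration, not the tool from [20]; the rule encoding is a simplifying assumption) shows the check for a single packet class:

```python
# Illustrative loop-freedom check: rules maps each switch to the next
# switch for one packet class (None means deliver/drop at that switch).
def has_forwarding_loop(rules, start):
    """Return True if a packet injected at `start` revisits a switch."""
    visited, current = set(), start
    while current is not None:
        if current in visited:
            return True  # packet looped back to an already-visited switch
        visited.add(current)
        current = rules.get(current)
    return False

loop_free = {"s1": "s2", "s2": "s3", "s3": None}
looping = {"s1": "s2", "s2": "s1"}
print(has_forwarding_loop(loop_free, "s1"))  # False
print(has_forwarding_loop(looping, "s1"))    # True
```

The hard part the abstraction method addresses is that rules change under controller updates, so the checker must explore this walk across every reachable rule configuration, not just one.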

V. FUTURE WORK

The preliminary tests performed have produced positive outcomes, suggesting that the architecture proposed in this paper can be a viable solution for tackling the issue of application portability across different SDN control platforms. Nevertheless, there is great room for enhancement. One of the main problems found in the architecture is guaranteeing that the applications running on top of the controller do not collide with one another: a mechanism should prevent the installation of incompatible rules in the switches. To address this problem, a new architectural element, the Core Layer, has been proposed for the overall architecture. This layer is placed between the Backend and the Shim layer, and its main task is to provide the necessary logic for the coexistence of the SDN applications.

An SDN network deployed with our architecture could have a higher delay in the communication between the data plane and the control plane than a conventional SDN network, due to the integration of heterogeneous controllers which must be interconnected. Therefore, performance tests are being carried out to verify that this extra delay is insignificant and does not harm the behavior of the SDN network. Another important drawback is the possible incompatibility between some Client Controllers and some Server Controllers. If the Client Controller uses a different version of the OpenFlow protocol than the one supported by the Server Controller, a conflict arises and it is impossible to interconnect them. To solve this last problem, we are trying to find the commonalities between different versions of OpenFlow.

The impact of all future modifications in the NetIDE Architecture will be analyzed, and the tools will be adapted and integrated into the new architecture.

The provided toolset has some shortcomings that must be solved. For instance, the ‘Runtime’ tools detailed in depth, the Logger and the Profiler, have been developed for the Ryu controller platform, so both tools have to be adapted for the rest of the controllers. With the Core element described previously, this adaptation would be much easier.

All the tools that are fed with the messages exchanged between the two planes should capture those messages from the Core layer instead of from the Shim layer module of each Server Controller. This way, only one development would be needed instead of one per controller.

Regarding the tools presented in subsections III-A, III-B and III-C, some improvements have been proposed.

For the Logger, a new filter will be implemented. This filter allows the user to choose the type of OpenFlow message to visualize (flow mod, packet out, port mod, etc.). Regarding the Profiler, the graphic it generates to show the memory consumption is not updated while the tool is gathering the data, so the next step is to provide a ‘real-time’ graphic that is updated as the data is collected. The model checker raises several open issues. This tool can guarantee that one SDN application does not provoke loops or black holes in a given network, but it cannot assure that a set of applications fulfils that property. Another issue is the high execution time of the model checker when the topology provided to the tool contains many elements. Besides, some controllers already have mechanisms to avoid loops in the network, so one of the main utilities of the model checker tool is already covered.
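The message-type filter proposed for the Logger could look like the following sketch (a hypothetical illustration, not the Logger's actual API; the record format and message-type strings are assumptions based on standard OpenFlow message names):

```python
# Hypothetical Logger filter: keep only the OpenFlow message types the
# user asks for. Records are assumed to be dicts with a 'type' field.
def filter_messages(records, wanted_types):
    """Return only the log records whose type the user selected."""
    wanted = {t.lower() for t in wanted_types}
    return [r for r in records if r["type"].lower() in wanted]

log = [
    {"type": "FLOW_MOD", "xid": 1},
    {"type": "PACKET_OUT", "xid": 2},
    {"type": "PORT_MOD", "xid": 3},
]
print(filter_messages(log, ["flow_mod", "port_mod"]))
# [{'type': 'FLOW_MOD', 'xid': 1}, {'type': 'PORT_MOD', 'xid': 3}]
```

Since the Logger already parses each message crossing the architecture, such a filter would sit between the capture point and the display, discarding unselected types before rendering.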

VI. CONCLUSIONS

Given that this architecture has been implemented in a real prototype, the work presented in this paper is a feasible solution to the problem of supporting SDN applications written for different controllers within the same SDN network. Moreover, the set of tools provided allows debugging, diagnosing and troubleshooting the network.

In summary, the presented architecture and the provided tools are valid for practical deployment and solve heterogeneity, migration and debugging issues of SDN networks.



REFERENCES

[1] Kreutz, D., Ramos, F. M., Esteves Verissimo, P., Esteve Rothenberg, C., Azodolmolky, S., Uhlig, S. (2015). Software-defined networking: A comprehensive survey. Proceedings of the IEEE, 103(1), 14-76.

[2] McKeown, N., Anderson, T., Balakrishnan, H., Parulkar, G., Peterson, L., Rexford, J., ... Turner, J. (2008). OpenFlow: enabling innovation in campus networks. ACM SIGCOMM Computer Communication Review, 38(2), 69-74.

[3] Kim, H., Feamster, N. (2013). Improving network management with software defined networking. Communications Magazine, IEEE, 51(2), 114-119.

[4] OpenDaylight controller. https://www.opendaylight.org/
[5] Ryu controller. http://osrg.github.io/ryu/
[6] POX controller. http://www.noxrepo.org/pox/about-pox/
[7] NOX controller. http://www.noxrepo.org/nox/about-nox/
[8] NetIDE Project source code. https://github.com/fp7-netide/Engine
[9] Aranda Gutierrez, P. A., Karl, H., Rojas, E., Leckey, A. (2015, June). On Network Application representation and controller independence in SDN. In Networks and Communications (EuCNC), 2015 European Conference on (pp. 429-433). IEEE.

[10] Pyretic programming language. http://frenetic-lang.org/pyretic/
[11] Open Networking Foundation (ONF), 2014. [Online]. Available: https://www.opennetworking.org/
[12] Canini, M., Venzano, D., Peresini, P., Kostic, D., Rexford, J. (2012, April). A NICE Way to Test OpenFlow Applications. In NSDI (Vol. 12, pp. 127-140).

[13] Sherwood, R., Chan, M., Covington, A., Gibb, G., Flajslik, M., Handigol, N., ... Parulkar, G. (2010). Carving research slices out of your production networks with OpenFlow. ACM SIGCOMM Computer Communication Review, 40(1), 129-130.

[14] Al-Shabibi, A., De Leenheer, M., Gerola, M., Koshibe, A., Parulkar, G., Salvadori, E., Snow, B. (2014, August). OpenVirteX: Make your virtual SDNs programmable. In Proceedings of the third workshop on Hot topics in software defined networking (pp. 25-30). ACM.

[15] Dixit, A., Kogan, K., Eugster, P. (2014, October). Composing heterogeneous SDN controllers with flowbricks. In Network Protocols (ICNP), 2014 IEEE 22nd International Conference on (pp. 287-292). IEEE.

[16] Layman, L., Diep, M., Nagappan, M., Singer, J., DeLine, R., Venolia, G. (2013, October). Debugging revisited: Toward understanding the debugging needs of contemporary software developers. In Empirical Software Engineering and Measurement, 2013 ACM/IEEE International Symposium on (pp. 383-392). IEEE.

[17] J.D. Case, M. Fedor, M.L. Schoffstall, and J. Davin. Simple Network Management Protocol (SNMP). RFC 1098, April 1989. Obsoleted by RFC 1157.

[18] Handigol, N., Heller, B., Jeyakumar, V., Mazières, D., McKeown, N. (2014, April). I know what your packet did last hop: Using packet histories to troubleshoot networks. In Proc. NSDI.

[19] Ripcord-Lite for POX (RipL-POX): A simple network controller for OpenFlow-based data centers. https://github.com/brandonheller/riplpox

[20] Sethi, D., Narayana, S., Malik, S. (2013, October). Abstractions for model checking SDN controllers. In Formal Methods in Computer-Aided Design (FMCAD), 2013 (pp. 145-148). IEEE.

APPENDIX

In this appendix, an overview of the RabbitMQ technology is presented, describing its operating principles and its main advantages.

RabbitMQ is one of the technologies that implement AMQP, an open standard application-layer protocol for exchanging messages between applications; the protocol defines the behavior of both the server that provides the messages and the client. RabbitMQ is based on a broker architecture, which means that every message generated by one application and addressed to another is intercepted by a central element, the broker, which is responsible for distributing those messages to the consumer application or deleting them. In other words, the communication between two applications is never direct: it always passes first through this middle element.

On one hand, several advantages are found in the broker-architecture approach:

• Complete independence between applications. The client application does not need to know where the consuming application is; in fact, there may be no message consumer at all. It only needs to know how to send messages to the broker, which routes them to the right applications based on business criteria (queue name, routing key, topic, message properties, etc.) rather than on physical topology (IP addresses, host names).

• It is not necessary that the lifetimes of the applications overlap, because the messages sent by the first application are kept in the broker and the receiving application can retrieve them at any later time.

• Even if the receiving application fails, messages sent to the broker will be buffered there.

On the other hand, there are some drawbacks which should be taken into account:

• Since all messages must pass through the broker, this element introduces some overhead and can become a bottleneck, deteriorating the performance between applications (e.g., increasing latency).

• There is a single point of failure in the broker.

Apart from all the advantages offered by a broker architecture, this software also provides other features that make it even more flexible:

• Libraries are available for almost all programming languages.

• Several RabbitMQ servers on a local network can be clustered together, forming a single logical broker in order to avoid queuing-system failures (one of the most valuable properties of the broker architecture).
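The decoupling that the broker provides can be sketched with a minimal in-memory model (this is our own illustration of the pattern, not RabbitMQ itself; a real deployment would use an AMQP client library against a running broker). Producers publish to named queues on the central element, and consumers retrieve the buffered messages later, so the two applications never communicate directly and need not be alive at the same time:

```python
# Minimal in-memory sketch of the broker pattern (illustrative only).
from collections import defaultdict, deque

class Broker:
    """Central element: buffers messages per named queue."""

    def __init__(self):
        self.queues = defaultdict(deque)

    def publish(self, queue_name, message):
        # Messages are kept even if no consumer exists yet.
        self.queues[queue_name].append(message)

    def consume(self, queue_name):
        """Return the oldest buffered message, or None if the queue is empty."""
        q = self.queues[queue_name]
        return q.popleft() if q else None

broker = Broker()
broker.publish("stats", "flow_count=42")  # producer runs first...
broker.publish("stats", "port_drops=0")
print(broker.consume("stats"))  # flow_count=42  (...consumer reads later)
```

Routing here is by queue name only; a real broker additionally routes on routing keys, topics and message properties, as described in the advantages above.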
