
CCN emulation with Mininet

Taekyoung Kwon, Myungchul Kwak

{tk,mckwak}@mmlab.snu.ac.kr

Seoul National University

2014. 03

*Some slides are from the official site's slides


CCN

Content-centric networking
• aka Named Data Networking (NDN)

A new network architecture
• the content name is the key element (no locator like an IP address)
• in-network caching
• mobility, multicast, and security are supported inherently


IP networking vs. CCN

IP forwarding is keyed by the network prefix; CCN forwarding is keyed by the content name.

IP forwarding table:
  Destination        Next Hop
  192.168.0.0/16     Router C

CCN forwarding table:
  Content Name       Next Hop
  /a.com/b.jpg       Router C

Example content name: /a.com/b.jpg
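To make the contrast concrete, here is a small illustrative sketch (not from the slides) of a next-hop lookup keyed by content-name prefix instead of by IP prefix; the first entry mirrors the /a.com/b.jpg example above, the second is hypothetical.

```python
# Illustrative sketch: longest-prefix match on content-name components instead of IP prefixes
NAME_FIB = {
    ("a.com",): "Router C",              # mirrors the table above
    ("a.com", "videos"): "Router D",     # hypothetical extra entry
}

def lookup(content_name):
    """Return the next hop for a name such as '/a.com/b.jpg'."""
    components = tuple(c for c in content_name.split("/") if c)
    for length in range(len(components), 0, -1):
        next_hop = NAME_FIB.get(components[:length])
        if next_hop:
            return next_hop
    return None

print(lookup("/a.com/b.jpg"))   # -> Router C
```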


CCN basics

Content name
• hierarchical, variable-length, carries semantics
• no IP address

Consumers send Interest packets; content holders send back Data packets

Source: Van Jacobson@PARC


A user wants some content

Source: Van Jacobson@PARC


Content is downloaded

Content is cached!

In-network caching

Source: Van Jacobson@PARC


Another user requests the same content

Source: Van Jacobson@PARC


CCN forwarding

Source: Van Jacobson@PARC


OpenFlow (OF)

Separation between the control plane and the data plane
• programmability and controllability
• rule, action, statistics
• control messages between the controller and OF switches
• actions executed in OF switches

(figure: an OpenFlow switch with a hardware data path and an OpenFlow control path, connected to an OpenFlow controller via the OpenFlow protocol over SSL/TCP)

What to extend?


What to extend?

A flow entry consists of a Rule, Actions, and Stats.

Rule (match fields, plus a mask selecting which fields to match):
  Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport

Actions:
  1. Forward packet to port(s)
  2. Encapsulate and forward to controller
  3. Drop packet
  4. Send to normal processing pipeline
  5. Modify fields

Stats:
  Packet and byte counters
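As a concrete illustration (a toy model, not tied to any particular OpenFlow library), a flow entry can be represented as a match over the ten header fields above plus an action list and counters:

```python
# Toy model of an OpenFlow 1.0-style flow entry: rule (match + mask), actions, stats
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Match:                       # a field set to None is wildcarded (masked out)
    in_port: Optional[int] = None
    dl_src: Optional[str] = None   # MAC src
    dl_dst: Optional[str] = None   # MAC dst
    dl_type: Optional[int] = None  # Eth type
    vlan_id: Optional[int] = None
    nw_src: Optional[str] = None   # IP src
    nw_dst: Optional[str] = None   # IP dst
    nw_proto: Optional[int] = None # IP protocol
    tp_src: Optional[int] = None   # TCP sport
    tp_dst: Optional[int] = None   # TCP dport

@dataclass
class FlowEntry:
    rule: Match
    actions: List[str] = field(default_factory=list)  # e.g. ["output:3"], ["controller"], [] = drop
    packet_count: int = 0
    byte_count: int = 0

# Example: forward all HTTP traffic (IPv4 / TCP / dst port 80) to port 3
entry = FlowEntry(rule=Match(dl_type=0x0800, nw_proto=6, tp_dst=80), actions=["output:3"])
```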


Testing an OF network

Testbed prototyping

Emulation
• Mininet-HiFi


What is network emulation?

The act of introducing a device into a test network (typically in a lab environment) that alters packet flow in such a way as to mimic the behavior of a production network.

Emulation devices incorporate a varying number of standard network attributes into their designs, including RTT, bandwidth, packet loss, ...

Validation approaches compared:

Simulation
• cheap, details at an arbitrary level, scalable
• but: slow evaluation, reality issue

Emulation
• close to reality, functional correctness, system dynamics
• but: some cost, H/W constraints

Prototyping
• reality
• but: scalability issues, cost and time


MININET AND C-FLOW


Introduction

Network experiment papers are hard to validate
• in an ideal world, all papers would be runnable
• emulation can be a solution for validation

What is a network emulator?
• implemented on one machine
• an emulated network that runs real code


Why is emulation good?

                       Simulators   Emulators   Testbeds (shared)   Testbeds (custom)
Functional Realism                  O           O                   O
Timing Realism                      O (?)       O                   O
Traffic Realism                     O           O                   O
Topology Flexibility   O            O           LIMITED
Easy Replication       O            O           O
Low Cost               O            O


Mininet

A tool for rapid prototyping of SDN

Creates a realistic virtual network with real, working components

Runs on a single machine for ease of testing and flexible topologies

Provides the ability to emulate hosts, switches, and controllers via:
• CLI (Command Line Interface)
• interactive user interface
• Python application


Getting started

Example command:

• sudo mn --test pingall --topo single,3

This command will:

• Create a single network with 3 hosts connected to a single switch

• Perform a ping from all hosts to all others

h1 h2 h3

s1
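The same experiment can also be scripted with Mininet's Python API; a minimal sketch:

```python
# Minimal sketch: the Python-API equivalent of "sudo mn --test pingall --topo single,3"
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo

net = Mininet(topo=SingleSwitchTopo(k=3))  # 3 hosts attached to one switch
net.start()
net.pingAll()                              # ping between every pair of hosts
net.stop()
```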


Topology setting

The previous slide used a pre-defined topology, but a topology can also be defined by a Python script.

Simple example: a single switch and 2 hosts (see the sketch below)
• sudo mn --topo mytopo --custom mytopo.py

(figure: hosts h1 and h2 attached to switch s1, as defined in mytopo.py)
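A plausible mytopo.py for this example, following the custom-topology pattern used by recent Mininet versions (the class name is illustrative; only the 'mytopo' key has to match the --topo argument above):

```python
"""mytopo.py: a single switch with two hosts, loadable via
   sudo mn --custom mytopo.py --topo mytopo"""
from mininet.topo import Topo

class MyTopo(Topo):
    def build(self):
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        s1 = self.addSwitch('s1')
        self.addLink(h1, s1)
        self.addLink(h2, s1)

# the 'mytopo' key is what --topo mytopo refers to
topos = {'mytopo': (lambda: MyTopo())}
```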


Switch & controller

Ability to customize the controller or the switch

Set up at boot time with options (see the sketch below)
• sudo mn --switch=user --controller=remote
• selects the user-space switch instead of the default kernel switch
• connects to an independent remote controller

Applicable options
• controller: none | nox | ovsc | ref | remote
• switch: ovsk | ovs | user

This (remote) controller runs independently
• it controls the switches' forwarding tables: assign route entries, drop flows, etc.
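The equivalent setup from a Python script might look roughly like the following (the controller address and port are assumptions):

```python
# Sketch: user-space switches plus an external (remote) OpenFlow controller
from mininet.net import Mininet
from mininet.node import RemoteController, UserSwitch
from mininet.topo import SingleSwitchTopo

net = Mininet(topo=SingleSwitchTopo(k=2),
              switch=UserSwitch,
              controller=lambda name: RemoteController(name, ip='127.0.0.1', port=6633))
net.start()   # the switches now connect to the controller listening on 127.0.0.1:6633
```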


Host emulation (1/3)

Each host's terminal can be reached via the CLI or a Python script
• mininet> xterm h1 opens an emulated terminal for host 1, in which we can use ordinary shell commands or applications (e.g. Wireshark)

Each host has its own networking resources: link, CPU
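From a Python script the same thing can be done without an xterm, by running commands directly inside a host's namespace; a minimal sketch (assuming net is a started Mininet instance):

```python
# Sketch: run an ordinary shell command inside host h1's network namespace
h1 = net.get('h1')
print(h1.cmd('ifconfig h1-eth0'))   # executed inside h1, not on the host machine
```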


Host emulation (2/3)

Example 1) send content via HTTP
• we used an open-source HTTP-based application
• you can execute Linux applications inside an emulated node
• each node shares the host machine's hard disk

(screenshot: the controller's console and HTTP traffic between the emulated hosts)
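A generic way to reproduce this kind of experiment from the Python API (the web server and fetch commands below are illustrative; they are not necessarily the application the authors used):

```python
# Sketch: serve content from h1 over HTTP and fetch it from h2
h1, h2 = net.get('h1', 'h2')
h1.cmd('python -m SimpleHTTPServer 80 &')            # Python 2 era; "python3 -m http.server 80" today
output = h2.cmd('wget -O - http://%s/' % h1.IP())    # fetch from h1 inside the emulated network
h1.cmd('kill %python')                               # stop the temporary web server
```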


Host emulation (3/3)

Example 2) generate traffic on a link with iperf
• put heavy traffic on an emulated link with the ordinary network tool iperf
• heavy UDP traffic then flows over the emulated link

(figure: heavy UDP traffic on an emulated link)
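For instance, a sketch along these lines (the rate and duration values are arbitrary choices):

```python
# Sketch: heavy UDP background traffic between two emulated hosts
h2, h3 = net.get('h2', 'h3')
h2.cmd('iperf -s -u &')                               # UDP server on h2
h3.cmd('iperf -c %s -u -b 100M -t 30 &' % h2.IP())    # 100 Mbit/s UDP stream from h3 for 30 s
```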

C-flow: An efficient Content Delivery Framework with OpenFlow


Outline

Background

C-flow : overview

• Name based content delivery

• In-network caching

SDN based networking schemes

• Dynamic re-routing

• Parallel Transmission

• Centralized Caching

Experiment

Conclusion


C-flow

C-flow: an OpenFlow-based content delivery framework
• OpenFlow: a universal protocol for the SDN paradigm

Supports ICN operations
• name-based content delivery
• in-network caching

Exploits characteristics of SDN that benefit content delivery
• dynamic re-routing
• parallel transmission
• centralized caching

(figure: ICN contributes name-based content delivery and in-network caching; SDN contributes dynamic re-routing, parallel transmission, and centralized caching)


Name-based content delivery (1/2)

Exploits NDN's approach to content delivery
• users request content in the form of an Interest packet
• content providers send the content in the form of a Data packet

Our approach
• map the content name to an IP address
• DPI (Deep Packet Inspection) on HTTP request/content packets to identify the content name
• convert the content flow's IP addresses to a private IP address used only inside the C-flow network
• routing & in-network caching are done with this private IP

This allows incremental deployment of C-flow.

(figure) DPI on the HTTP header & resolution:

Original flow's IP header (GET http://youtube.com/a.avi HTTP/1.1):
  HTTP request:   IP src 147.46.210.1, IP dst 210.12.80.37
  HTTP response:  IP src 210.12.80.37, IP dst 147.46.210.1

Resolution table in the controller (content name -> private IP):
  a.avi   10.0.0.1
  b.avi   10.0.0.2
  ...     ...

C-flow's IP header:
  Interest packet:  IP src 147.46.210.1, IP dst 10.0.0.1
  Data packet:      IP src 10.0.0.1, IP dst 147.46.210.1
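A rough sketch of the controller-side resolution step described above (purely illustrative; the function and table names are not taken from the C-flow code):

```python
# Sketch: identify a content name from an HTTP GET and resolve it to a C-flow private IP
import re
from itertools import count

resolution_table = {}        # content name -> private IP (the controller's table above)
_ip_suffix = count(1)

def resolve(http_payload):
    """DPI on the HTTP request: extract the content name and map it to a private IP."""
    m = re.match(rb'GET\s+\S*/(\S+)\s+HTTP/1\.[01]', http_payload)
    if m is None:
        return None
    name = m.group(1).decode()
    if name not in resolution_table:
        resolution_table[name] = '10.0.0.%d' % next(_ip_suffix)
    return resolution_table[name]

print(resolve(b'GET http://youtube.com/a.avi HTTP/1.1'))   # -> 10.0.0.1
```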


Name-based content delivery (2/2)

(figure: end user A1, switches R1-R4, server S1, and the controller; the src/dst IP addresses of the Interest and Data packets are rewritten between A1/S1 and the private IP P1 as they cross the C-flow network)

• User A1 requests "a.avi"
• The request is reported to the controller
• The controller sends routing entries & IP-header-changing actions to the switches
• The Interest flows: its dst IP is changed to the private IP "P1" inside the network, then back to the original IP "S1" before reaching the server
• The server sends the content: the Data packet's src IP is changed to the private IP "P1" inside the network, then back to the original IP "S1" before reaching the user

The original IP is needed for end-user transparency -> incremental deployment of ICN


In-network caching (1/2)

Inherent cache on the switch/router
• a representative advantage of ICN approaches
• users can retrieve the requested content from anywhere among the networking entities

Specifications
• cache in the in-memory space of the OpenFlow switch
• cache popular content only, based on the number of requests
• chunk granularity: chunk size = 1K~64K
• replacement policy: decentralized LRU or centralized LRU
  - decentralized LRU: LRU at each switch level
  - centralized LRU: LRU based on the entire network's request history -> SDN based


In-network caching (2/2)

(figure: users 1 and 2, switches R1-R4 with in-network storage, a server, and the SDN centralized network controller; legend: content request flow, content data flow, FIB setup)

• CACHE action = a.avi will be cached; RETRIEVE action = a.avi is cached
• after caching, the CACHE action is deleted & a RETRIEVE action is added
• when user 2 requests a.avi, switch R2 sends the cached a.avi


On implementation

Define new OpenFlow actions in the OpenFlow interface
• OFPAT_CACHE: cache chunks in the switch's memory
• OFPAT_RETRIEVE: retrieve chunks from the switch's memory

Modify the OpenFlow software switch's source code (a behavioral sketch follows)
• implement the detailed processing of the CACHE & RETRIEVE actions
  - CACHE: cache matched packets in switch memory -> add an entry to the cache table
  - RETRIEVE: look up the cache table -> send matched packets back out the in_port
• identification of a chunk: exploit the IP option header
  - parse the packet's IP option header & treat it as the chunk number
  - the entire process works at chunk granularity
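The intended switch-side behavior, sketched here in Python rather than the actual dp_act.c code (all names are illustrative):

```python
# Behavioral sketch of the CACHE / RETRIEVE actions (illustrative, not the dp_act.c code)
cache_table = {}    # (content private IP, chunk number) -> raw packet bytes

def do_cache(pkt, private_ip, chunk_no):
    """OFPAT_CACHE: store the matched packet in switch memory, keyed by chunk."""
    cache_table[(private_ip, chunk_no)] = pkt
    forward_normally(pkt)                   # the data still flows on toward the user

def do_retrieve(interest_pkt, private_ip, chunk_no, in_port):
    """OFPAT_RETRIEVE: answer the request from the local cache if possible."""
    cached = cache_table.get((private_ip, chunk_no))
    if cached is not None:
        send_out_port(cached, in_port)      # send the cached chunk back where the request came from
    else:
        forward_normally(interest_pkt)      # cache miss: let the request continue upstream

def forward_normally(pkt): ...              # placeholders standing in for datapath primitives
def send_out_port(pkt, port): ...
```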


Schemes based on SDN

Dynamic re-routing (Dynamic RR)
• dynamically reselects the destination server and the path to it
• exploits SDN's network monitoring function

Parallel transmission (Parallel Tx)
• supports multi-source & multipath routing via the controller

Centralized caching
• the controller decides where to cache & which content to replace


Dynamic re-routing

(figure: end user, switches R1-R4 with in-network storage, an original server and a replication server, and the controller; legend: content request flow, content data flow, FIB setup)

• Congestion occurs on the current path and is reported to the controller
• The controller re-calculates the route to another server & adds the routing entries to the switches
• The flow's routing path is changed dynamically


Dynamic re-routing detail

Monitoring link statistics
• a pre-defined function in a NOX controller module
• sends a periodic port-stats probe (every 1 sec)
• measures the ratio of link bandwidth used
• if the link load is higher than some threshold -> recalculate the route to use another link (see the sketch below)

Link load (%) = (tx_load + rx_load) / (link speed × 2)

(figure: the controller's Link Stat Monitor and Routing Manager modules and the switch's forwarding table; the monitor gives a re-route signal when the link load exceeds the threshold, and the re-calculated routing path is installed in the switch)
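A hedged sketch of such a monitoring loop in generic Python (this is not the NOX module's actual API; the threshold value is an assumption):

```python
# Sketch: periodic port-stats polling with a threshold-based re-route trigger
import time

THRESHOLD = 0.7        # re-route when a link is more than 70% loaded (illustrative value)

def link_load(tx_load, rx_load, link_speed):
    """Link load as defined above: (tx + rx) / (link speed * 2)."""
    return (tx_load + rx_load) / (link_speed * 2.0)

def monitor(links, get_port_stats, recalculate_route):
    """links: link ids; the two callbacks stand in for controller functions."""
    while True:
        for link in links:
            tx, rx, speed = get_port_stats(link)     # counters from the periodic port-stat probe
            if link_load(tx, rx, speed) > THRESHOLD:
                recalculate_route(link)               # steer affected flows onto another link
        time.sleep(1)                                 # 1-second probe interval, as on the slide
```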


Load balancing by multicast

Calculate multiple routes to the replication servers
• each chunk selects its own route based on a modulo operation (see the sketch below)

Parallel route -> 5:{mod:3121} 6:{mod:3102} 7:{mod:3021} 8:{output:1} 9:{output:1}

(figure: a host attached to switch 5; servers 1, 2, and 3 reached through switches 8, 9, and 7 respectively;
  route to server 1: 5:1 -> 6:1 -> 8:1
  route to server 2: 5:2 -> 7:2 -> 9:1
  route to server 3: 5:1 -> 6:2 -> 7:1)

Parallel transmission
• each switch with a PARALLEL action forwards flows by the mod operation
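The chunk-to-route mapping can be sketched as follows (illustrative only; the route lists mirror the three routes in the figure above):

```python
# Sketch: each chunk picks one of the parallel routes by chunk_number mod (number of routes)
ROUTES = [
    ["5:1", "6:1", "8:1"],    # route to server 1
    ["5:2", "7:2", "9:1"],    # route to server 2
    ["5:1", "6:2", "7:1"],    # route to server 3
]

def route_for_chunk(chunk_no):
    return ROUTES[chunk_no % len(ROUTES)]

for n in range(5):
    print(n, route_for_chunk(n))   # chunks 0 and 3 -> server 1, 1 and 4 -> server 2, 2 -> server 3
```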


Centralized caching

The controller manages caching information
• it decides where to cache & which content to replace, in a centralized manner

Where to cache
• select the cache location based on cache history
• distribute the cache locations

What content to replace
• LRU cache replacement in a centralized manner
• replace unpopular content based on the entire network's request information

The whole process runs as a controller module, building on SDN's inherent centralized view (a sketch follows).
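A minimal sketch of such a controller-side module (the class, method names, and placement rule are assumptions beyond what the slides state):

```python
# Sketch: controller-side centralized LRU over the whole network's request history
from collections import OrderedDict

class CentralizedCache:
    def __init__(self, total_capacity):
        self.capacity = total_capacity     # number of contents cached network-wide
        self.cached = OrderedDict()        # content name -> switch id, ordered by recency

    def on_request(self, content, candidate_switches):
        """Called for every request the controller sees; returns the caching decision."""
        if content in self.cached:
            self.cached.move_to_end(content)          # refresh recency
            return ('retrieve', self.cached[content])
        if len(self.cached) >= self.capacity:         # evict the network-wide LRU content
            self.cached.popitem(last=False)
        # distribute: place on the candidate switch currently holding the fewest contents
        loads = {sw: 0 for sw in candidate_switches}
        for sw in self.cached.values():
            if sw in loads:
                loads[sw] += 1
        switch = min(loads, key=loads.get)
        self.cached[content] = switch
        return ('cache', switch)
```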


Modified OpenFlow interface

openflow.h (the OpenFlow spec's interface)
• OFPAT_CACHE -> cache chunks in the switch's memory
• OFPAT_RETRIEVE -> retrieve chunks from the switch's memory
• OFPAT_PARALLEL -> parallel tx by exploiting the mod operation

dp_act.c (the switch's source code for OpenFlow actions)
• the custom actions' detailed processing is implemented here

(screenshot: the actions defined in openflow.h)


Modules on controller


Experimental environment

Emulation
• host machine hardware spec
  - Intel i5-2500K 3.30 GHz × 4 (quad core) CPU
  - 16 GB RAM
• software tools
  - Mininet: software-defined network emulator
  - Open vSwitch: virtual switch with OpenFlow support
  - NOX: OpenFlow controller

Topologies
• two types: Simple and GEANT topology


Dynamic RR & Parallel Tx

Dynamic RR can avoid the congestion efficiently

Parallel Tx shows worse performance than Dynamic RR
• because some of the routes are still affected by background traffic
• it could do better when the replication servers have different capacities

Setup (Simple topology):
- # of hosts = 1 (h1)
- content size = 50 MB
- h2~h4 generate background traffic toward the server, at random


GEANT topology

Used for the centralized caching experiment


Centralized caching

Compared with decentralized caching
• LRU at the switch level
• cache location selected randomly

In-switch caching can achieve efficient content delivery
• centralized caching is better than the decentralized way
• an SDN synergy effect with ICN

Setup (GEANT topology):
- # of contents = 2000, Zipf distribution (α = 1)
- # of hosts = 10
- content size = 1 MB


Conclusion

C-flow: an OpenFlow-based ICN implementation
• name-based routing
• in-network caching

Provides more efficient network functionalities
• dynamic re-routing
• parallel Tx
• centralized caching

On-going work
• support wireless traffic
• federation with an EU testbed
• provide C-flow's functionalities across inter-level networks


Appendix A: Dynamic re-routing Snapshot

(figure: two packet-trace snapshots over roughly 0~35 seconds, one with re-routing off and one with re-routing on; the re-routing-on trace is annotated at the point where background traffic occurs and at the point where the flow is re-routed)


Appendix B: Caching process timeline


Appendix C: How to Realize Chunks

Chunk number
• additionally specified in the IP option header (see the sketch below)

Identification of a chunk
• IP option header: the OpenFlow protocol is extended to match on the chunk number

Unit of chunk
• size of an IP packet: ~65 Kbytes
• MTU size of Ethernet: ~1500 bytes
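One way to picture the encoding (a sketch; the option type, length, and byte layout are assumptions, not taken from the C-flow implementation):

```python
# Sketch: pack a chunk number into an IP option field (layout is illustrative only)
import struct

OPT_TYPE = 0x88     # assumed experimental option type
OPT_LEN = 6         # 1 byte type + 1 byte length + 4-byte chunk number

def chunk_ip_option(chunk_no):
    """Return the raw bytes of an IP option carrying the chunk number."""
    return struct.pack('!BBI', OPT_TYPE, OPT_LEN, chunk_no)

def chunk_no_from_option(option_bytes):
    """Inverse used by the switch: read the chunk number back out of the option."""
    _opt_type, _opt_len, chunk_no = struct.unpack('!BBI', option_bytes)
    return chunk_no

assert chunk_no_from_option(chunk_ip_option(42)) == 42
```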


Appendix D: Effect of In-network caching

(plot: packet delay (s) versus chunk number, 0~975, for two series NC_R and C_R; chunks #0~#99 are the cached chunks)

Measure the transfer delay of each chunk
• content size: 1 MB
• chunk size: 1 KB; 100 chunks are cached

(figure: a host and a server, with the point where the 100 chunks are cached marked on the path)


Appendix E: Mininet codes for topology

(screenshot of the topology script: node definitions and edge definitions)
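The screenshot itself is not reproduced here; a generic Mininet topology script of the same shape (a list of nodes plus a list of edges) would look roughly like this (the topology details are illustrative, not the authors' exact code):

```python
# Generic nodes-and-edges topology script for Mininet (illustrative, not the authors' code)
from mininet.topo import Topo

class ExperimentTopo(Topo):
    def build(self):
        # nodes
        hosts = {name: self.addHost(name) for name in ('h1', 'h2')}
        switches = {i: self.addSwitch('s%d' % i) for i in (5, 6, 7, 8, 9)}
        # edges
        edges = [('h1', 5), (5, 6), (5, 7), (6, 7), (6, 8), (7, 9), (8, 'h2'), (9, 'h2')]
        for a, b in edges:
            node_a = hosts.get(a, switches.get(a))
            node_b = hosts.get(b, switches.get(b))
            self.addLink(node_a, node_b)

topos = {'experiment': (lambda: ExperimentTopo())}
```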


In-switch cache (1/3)

A cache entry is made in switch #10

Route computed by the controller: 5 -> 6 -> 10 -> 12 -> 16 -> 18


In-switch cache (2/3)

A RETRIEVE entry is inserted after the first request

(screenshot: client (host 1) and server (host 4) applications)


In-switch cache (3/3)

Wireshark capture at port 5 of switch #10, showing the first and the second request

The second request for chunk #0 is not forwarded to this port, because the chunk is cached in this switch