Orchestration in a real network: a case study

Fulvio Risso, Politecnico di Torino, Italy

Posted on 13-Feb-2017

TRANSCRIPT

Page 1: Orchestration in a real network: a case study

Orchestration in a real network: a case study

Fulvio Risso, Politecnico di Torino, Italy

Page 2: Orchestration in a real network: a case study


Outline

• The multidomain orchestration ride @ POLITO/TIM

• Steps 1…2…3…4…5

• What we learned

• What to do next?

Page 3: Orchestration in a real network: a case study

OpenStack overview

[Architecture diagram: the Controller Node hosts the Compute API (Nova) with the Nova scheduler, the Network API (Neutron) with the Modular Layer 2 (ML2) plugin and its OVS, Linux-bridge, and ODL drivers, the authentication server with user profiles (Keystone) and its Auth DB, the VM image manager (Glance), the generic object store (Swift), the Dashboard (Horizon), and the service-layer orchestrator (Heat). Each compute node runs a Nova compute agent, libvirt, the VMs, and a software switch (OVS); all nodes are connected through the network infrastructure. Note: arrows show the most important communication paths among components.]

Page 4: Orchestration in a real network: a case study

Step 1: Pure OpenStack

[Diagram: the same OpenStack deployment as in the overview: the Controller Node (Nova, Neutron/ML2, Horizon, Heat) plus compute nodes (Nova compute agent, libvirt, VMs, OVS) over the network infrastructure.]

OpenStack is a solution for data centers:
• No traffic steering
• VMs scheduled based on availability of compute resources
• Cannot schedule VMs based on network constraints
• Does not consider network topology/status
• Not able to control/get feedback from the network (overlay model, tunnels in the vSwitch)
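The compute-only scheduling behavior can be sketched as follows; this is a minimal illustration of the limitation, not actual Nova code, and the `Host` class and the capacity figures are invented for the example:

```python
# Minimal sketch of compute-only VM scheduling: the host is chosen by free
# compute capacity alone, so network topology, link load, and traffic-steering
# needs never influence the placement decision.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_vcpus: int
    free_ram_mb: int

def schedule_compute_only(hosts, vcpus, ram_mb):
    """Return the least-loaded host with enough free CPU and RAM, or None."""
    candidates = [h for h in hosts
                  if h.free_vcpus >= vcpus and h.free_ram_mb >= ram_mb]
    # No network information is consulted anywhere in this decision.
    return max(candidates, key=lambda h: h.free_vcpus, default=None)

hosts = [Host("compute-1", 2, 4096), Host("compute-2", 8, 16384)]
print(schedule_compute_only(hosts, 4, 8192).name)  # compute-2
```

Even if compute-2 sits behind a congested link, it still wins: that is exactly the gap the following steps try to close.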

Page 5: Orchestration in a real network: a case study

Step 2: OpenStack + Network Controller

[Diagram: the same OpenStack deployment, with a network controller (OpenDaylight) now sitting between the Controller Node and the network infrastructure, reached through the ML2 ODL driver.]

• No traffic steering
• VMs scheduled based on availability of compute resources
• Cannot schedule VMs based on network constraints
• Does not consider network topology/status
• Not able to control/get feedback from the network (overlay model, tunnels in the vSwitch)
• The network controller can do its best to implement the service requested by OpenStack
• Better than nothing, but it looks like a “slave” of OpenStack
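The one-way, "slave" relation can be sketched like this; the class, the request shape, and the link figures are invented for illustration and are not the ML2/ODL interface:

```python
# Sketch of the Step 2 relation: the network controller only *reacts* to what
# OpenStack asks and implements it as well as it can; it has no way to push
# back and influence where OpenStack places the VMs.

class NetworkController:
    def __init__(self, links_mbps):
        self.links_mbps = links_mbps  # residual bandwidth per link
        self.installed = []

    def realize(self, request):
        """Best-effort: pick the least-loaded link for the requested
        connectivity. There is no channel to say 'place the VM elsewhere'."""
        link = max(self.links_mbps, key=self.links_mbps.get)
        self.installed.append((request, link))
        return link

nc = NetworkController({"link-a": 100, "link-b": 700})
# OpenStack has already decided the placement; the controller just follows.
print(nc.realize("network for VM on compute-1"))  # link-b
```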

Page 6: Orchestration in a real network: a case study

Step 2b: Deeply modified OS + NC

[Diagram: the same architecture; the main modified points are highlighted across the Nova scheduler, Neutron/ML2, and the network controller (OpenDaylight).]

• Deep modifications across the whole software stack
• Our “OpenStack+” working in TIM/POLITO, then abandoned
• Traffic steering
• VMs scheduled based on availability of joint network and compute resources
• Capability to react to compute/network events and re-schedule the service
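The joint compute/network scheduling and the event-driven re-scheduling can be sketched as below; this is the idea behind "OpenStack+", not its actual code, and all field names and numbers are hypothetical:

```python
# Sketch of joint compute/network scheduling: a host qualifies only if it has
# both the compute capacity and a path with enough residual bandwidth, and a
# network event can trigger a re-scheduling of the service.

def schedule_joint(hosts, vcpus, bw_needed_mbps):
    """hosts: list of dicts with 'name', 'free_vcpus', 'path_bw_mbps'."""
    candidates = [h for h in hosts
                  if h["free_vcpus"] >= vcpus
                  and h["path_bw_mbps"] >= bw_needed_mbps]
    # Prefer the host whose network path has the most headroom.
    return max(candidates, key=lambda h: h["path_bw_mbps"], default=None)

def on_network_event(hosts, placed_on, vcpus, bw_needed_mbps):
    """React to a network event (e.g., congestion): re-run the scheduler and
    return a new host if the current one no longer meets the constraint."""
    current = next(h for h in hosts if h["name"] == placed_on)
    if current["path_bw_mbps"] >= bw_needed_mbps:
        return placed_on  # still fine, no migration needed
    better = schedule_joint(hosts, vcpus, bw_needed_mbps)
    return better["name"] if better else placed_on

hosts = [
    {"name": "compute-1", "free_vcpus": 8, "path_bw_mbps": 600},
    {"name": "compute-2", "free_vcpus": 8, "path_bw_mbps": 900},
]
print(schedule_joint(hosts, 4, 500)["name"])  # compute-2
hosts[1]["path_bw_mbps"] = 200  # link towards compute-2 gets congested
print(on_network_event(hosts, "compute-2", 4, 500))  # compute-1
```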

Page 7: Orchestration in a real network: a case study

Step 3: An overarching Orchestrator

[Diagram: an orchestrator now sits above both the OpenStack Controller Node and the network controller (OpenDaylight), which jointly manage the compute nodes and the network infrastructure.]

• The (extended) network controller can be “remotely controlled” to “interpret” the service coming from OpenStack
• Complex service logic can be implemented in the orchestrator, such as:
  • Suggesting the proper VM scheduling (e.g., through availability zones) to OpenStack
  • Relocating the service based on network feedback (network-aware scheduling in the orchestrator)
• Now OpenStack becomes the “slave” of the orchestrator
• Rather complex: e.g., traffic steering has to be added by the Network Controller
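The availability-zone trick can be sketched as follows; the zone names, the bandwidth figures, and the request dict are illustrative (not the exact Nova payload), though `availability_zone` is indeed the field Nova accepts at boot time:

```python
# Sketch of network-aware placement steering without modifying OpenStack:
# the orchestrator makes its own network-aware decision, then maps it onto
# an availability-zone hint in the VM boot request.

def pick_zone(zones, bw_needed_mbps):
    """zones: dict zone-name -> residual bandwidth towards the user (Mbps)."""
    ok = {z: bw for z, bw in zones.items() if bw >= bw_needed_mbps}
    return max(ok, key=ok.get) if ok else None

def boot_request(image, flavor, zone):
    # The orchestrator only *suggests* placement; Nova still schedules
    # freely among the hosts of the chosen availability zone.
    return {"server": {"imageRef": image, "flavorRef": flavor,
                       "availability_zone": zone}}

zones = {"az-pop-turin": 800, "az-central-dc": 300}
zone = pick_zone(zones, 500)
print(zone)  # az-pop-turin
req = boot_request("fw-image", "m1.small", zone)
print(req["server"]["availability_zone"])  # az-pop-turin
```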

Page 8: Orchestration in a real network: a case study

Step 3: other problems arose

• OpenStack probably not appropriate to control the entire infrastructure of a telco
  • Probably OK for the central datacenter and the POP mini-datacenters
• CPEs do not fit well in the picture
  • Either domestic CPEs (almost no compute capabilities) or business CPEs (some compute capabilities may be available)
• The network infrastructure may need a different controller
  • The Telecom Italia experimental network (JOLnet) may be controlled better by defining multiple domains
• To understand why, let’s have a look at the JOLnet infrastructure

Page 9: Orchestration in a real network: a case study

The JOLnet SDN infrastructure

[Map: JOLnet sites in Turin (University and TILab), Milan, Trento, Pisa, and Catania; the controllers are hosted in Turin (TILab). Each site includes core nodes and CPEs, e.g., Core/CPE at TurinU and at Pisa. Note: some links have been omitted for the sake of clarity.]

Page 10: Orchestration in a real network: a case study

Step 3: overarching orchestrator (in JOLnet)

[Diagram: the orchestrator drives the OpenStack controller through an OpenStack control adapter and the network controller through an OpenDaylight control adapter; together they manage the JOLnet core nodes and CPEs (TurinU, Pisa). Note: some links have been omitted for the sake of clarity.]

Page 11: Orchestration in a real network: a case study

Step 4: multi-domain orchestrator

[Diagram: an overarching orchestrator coordinates three domain orchestrators: an OpenStack Domain Orchestrator (with the OpenStack control adapter), an OpenFlow Domain Orchestrator (with the OpenDaylight control adapter), and a Universal Node orchestrator on the CPEs; underneath, the JOLnet core nodes and CPEs (TurinU, Pisa). Note: some links have been omitted for the sake of clarity.]
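The multi-domain split can be sketched as below; the graph encoding, the class, and the VNF/domain names are invented for illustration and do not reflect the actual domain-orchestrator interfaces:

```python
# Sketch of the multi-domain model: the overarching orchestrator cuts the
# service into per-domain portions and hands each portion to the matching
# domain orchestrator, which speaks its own southbound technology.

class DomainOrchestrator:
    def __init__(self, name):
        self.name = name
        self.deployed = []

    def deploy(self, portion):
        # A real domain orchestrator would translate the portion into
        # OpenStack, OpenFlow/OpenDaylight, or Universal Node primitives.
        self.deployed.append(portion)
        return f"{self.name}: deployed {portion}"

def split_and_deploy(service_graph, domains):
    """service_graph: list of (vnf, domain) pairs; domains: name -> orchestrator."""
    return [domains[domain].deploy(vnf) for vnf, domain in service_graph]

domains = {
    "openstack": DomainOrchestrator("OpenStack DO"),
    "openflow": DomainOrchestrator("OpenFlow DO"),
    "un-cpe": DomainOrchestrator("Universal Node DO"),
}
graph = [("firewall", "openstack"),
         ("steering-rules", "openflow"),
         ("nat", "un-cpe")]
for line in split_and_deploy(graph, domains):
    print(line)
```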

Page 12: Orchestration in a real network: a case study

Step 4b: where’s the Service Layer?

[Diagram: a Service layer sits on top of the overarching orchestrator, which drives the Universal Node, OpenStack, and OpenFlow Domain Orchestrators; the example service connects a user, entering through the CPE domain, to a BitTorrent client and a firewall in the OpenStack domain, and then to the Internet through the OpenFlow domain.]

How can the Service Layer know which port of the CPE the user is connected to, and the user MAC address?
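One possible answer, sketched below: the CPE domain exports user attachment information (port, MAC) as events, and the service layer keeps a table it consults when instantiating a per-user service graph. The event format is invented; the sample values echo the "user Green connected to port 1/0" example used later in the deck:

```python
# Sketch: the service layer learns where each user is attached from events
# published by the CPE domain, instead of having to guess port and MAC.

attachments = {}  # user -> {"port": ..., "mac": ...}

def on_cpe_attachment(event):
    """event: {'user': ..., 'port': ..., 'mac': ...} reported by the CPE domain."""
    attachments[event["user"]] = {"port": event["port"], "mac": event["mac"]}

def service_entry_point(user):
    """Where the service layer should attach the user's service graph."""
    info = attachments.get(user)
    return (info["port"], info["mac"]) if info else None

on_cpe_attachment({"user": "green", "port": "1/0", "mac": "aa:bb:cc:dd:ee:ff"})
print(service_entry_point("green"))  # ('1/0', 'aa:bb:cc:dd:ee:ff')
```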

Page 13: Orchestration in a real network: a case study

Step 5: bus-based architecture

[Diagram: the Service layer, the Global Orchestrator, and the domain orchestrators (Universal Node (CPE), OpenStack, OpenFlow) all attach to a message bus (pub/sub model with YANG/OpenConfig description). The service graph to be deployed (BitTorrent client, firewall, Internet) is split, and the portion associated to the selected domain (firewall, bridge, endpoints ep1, ep3, ep_2, ep_2_1, ep_2_2, VLANs 7 and 12) is handed to that domain.]

Over the bus, each domain advertises:
(1) Domain capabilities (e.g., support for VLANs, for GRE tunnels)
(2) Domain resources (e.g., VLAN IDs 7-12 available, GRE available on IP address 10.1.1.1)
(3) Domain external topology (e.g., direct Ethernet link with Domain 2)
(4) Service layer information (e.g., user Green connected to port 1/0)
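The pub/sub model can be sketched as follows; the `Bus` class, topic names, and advertisement fields are invented for illustration (a real deployment would use a message broker and YANG/OpenConfig-modeled payloads), while the VLAN and GRE example values come from the slide above:

```python
# Sketch of the bus-based model: domains publish capability/resource
# advertisements on a bus; the global orchestrator subscribes, then picks a
# domain whose capabilities and free resources satisfy a service-graph portion.

class Bus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers.get(topic, []):
            cb(message)

bus = Bus()
known_domains = {}

# Global orchestrator side: collect the domain advertisements.
bus.subscribe("domain-info", lambda ad: known_domains.update({ad["name"]: ad}))

def select_domain(requirement):
    """Pick a domain advertising the required capability with free resources."""
    for name, ad in known_domains.items():
        if requirement in ad["capabilities"] and ad["resources"].get(requirement):
            return name
    return None

# Domain orchestrator side: each domain advertises itself on the bus.
bus.publish("domain-info", {
    "name": "cpe-turin",
    "capabilities": ["vlan"],
    "resources": {"vlan": list(range(7, 13))},  # VLAN IDs 7-12 available
})
bus.publish("domain-info", {
    "name": "core-pisa",
    "capabilities": ["vlan", "gre"],
    "resources": {"gre": ["10.1.1.1"], "vlan": []},  # no free VLAN IDs left
})

print(select_domain("gre"))   # core-pisa
print(select_domain("vlan"))  # cpe-turin
```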

Page 14: Orchestration in a real network: a case study

What we learned

We’re only seeing the tip of the iceberg.

Orchestration is very hard.

Orchestration is not just optimized scheduling.

Orchestration is: scalability, multitenancy, security, isolation, multiple technological domains, multiple administrative domains, support for the Internet of Things.

Anything else?

Page 15: Orchestration in a real network: a case study

What to do next

• How many orchestrators do we have to design and engineer?
  • One that fits all (hence one winner and many losers), or should we design domain-specific orchestrators?
  • OpenStack, network-only, …
• Some possible technical actions:
  • Define a detailed list of technical requirements?
  • Define a common language between orchestrators?
• More collaboration among the partners would be helpful
  • The orchestration space is so big!

Page 16: Orchestration in a real network: a case study

Credits

• The people at TIM, whom we work with
  • Special thanks to Antonio Manzalini, Mario Ullio, Vinicio Vercellone, Matteo D’Ambrosio, and many others
• The EU FP7 UNIFY project and all the people there
  • The Universal Node was born there: http://github.com/netgroup-polito/un-orchestrator
• The crew at POLITO
  • … for their effort on the FROG: http://github.com/netgroup-polito/frog4
  • Ivano Cerrato, Stefano Petrangeli, Roberto Bonafiglia, Sebastiano Miano, Gabriele Castellano, Francesco Benforte, Francesco Lucrezia, Mauricio Vasquez
  • Our past students, of course: Fabio Mignini, Alex Palesandro, Matteo Tiengo, Giacomo Ratta, Patrick Tomassone, Andrea Vida, Marco Migliore, Alessio Berrone, Sergio Nuccio… and many others
• The people at EIT, who are pushing hard for multi-domain orchestration

Page 17: Orchestration in a real network: a case study

eitdigital.eu

Thanks for your attention!