BTH’s Research in NV, NFV and Cloud Networking
GENI Nordic Meeting, Stockholm, 2014-09-15
Kurt Tutschku ([email protected])
With Patrik Arlos, Anders Carlsson, Dragos Ilie and Markus Fiedler
Blekinge Institute of Technology (BTH), Faculty of Computing
Department of Communication Systems (DIKO)


Page 1: BTH’s Research in NV, NFV and Cloud Networking

BTH’s Research in NV, NFV and Cloud Networking

GENI Nordic Meeting, Stockholm, 2014-09-15

Kurt Tutschku ([email protected])
With Patrik Arlos, Anders Carlsson, Dragos Ilie and Markus Fiedler
Blekinge Institute of Technology (BTH), Faculty of Computing
Department of Communication Systems (DIKO)

Page 2: BTH’s Research in NV, NFV and Cloud Networking

Capacities at BTH

• Blekinge Institute of Technology
– 7,200 students; 500 staff
– emphasis on applied information technology and innovation for sustainable growth in industry and society
– strong industrial environment in the communication industry, both legacy (Ericsson, Telenor Sverige) and startups (CompuVerde, HyperIsland, CityNetwork, …)
– network: 1 Gbit/s; upgrade to full 10 Gbit/s (in 2015)

Page 3: BTH’s Research in NV, NFV and Cloud Networking

Capacities at BTH’s DIKO Department

• Department of Communication Systems (DIKO)
– focus on future network/FI architectures and technologies, Quality of Experience, Cloud computing, performance evaluation, wireless communications, Internet of Things and security
– currently four professors, four senior lecturers, two university adjuncts and 10 Ph.D. students

• Current and past involvements in Future Internet projects
– Current FI projects: XIFI (eXperimental Infrastructures for the Future Internet, EU), Queen (EU, Celtic plus), ETSI’s Industry Specification Group (ISG) for Network Function Virtualization (NFV), FI-PPP FI-STAR (EU), ENGENSEC (EU), BTH’s CloudLab
– Selected past contributions to FI projects: Akari (J), G-Lab (Ger), Mevico (Celtic, EU), PlanetLab Europe, Future Internet Assembly, FI-PPP setup (AT representative)

Page 4: BTH’s Research in NV, NFV and Cloud Networking

BTH CloudLab

• Started in early 2014
• Integration effort for BTH’s FI, NV, NFV, Cloud and SDN research
• Integrated projects and labs: XIFI, DIKO’s Network Performance Lab (NPL), ENGENSEC

• Hardware:
– XIFI: 4x Dell PowerEdge 715 (AMD, 128 cores, 512 GB RAM, 5 TB disk)
– ENGENSEC: 48 cores (8 boxes; Intel i7); future AMD Opteron, 128 cores
– NPL: e.g. Endace DAG 4.3GE x4, DAG 4.2GE x2, DAG 3.5 x4, DAG 3.6 x4

• Software:
– OpenStack (XIFI; ENGENSEC: Havana)

Page 5: BTH’s Research in NV, NFV and Cloud Networking

XIFI (eXperimental Infrastructures for the Future Internet, EU)

Page 6: BTH’s Research in NV, NFV and Cloud Networking

BTH’s XIFI Testbed

[Diagram: BTH’s XIFI-enhanced CloudLab running Generic Enablers (GEs), connected via XIFI adapters to DPMI and NTAS. Front-end monitoring in the NPL on the network layer (measurement points, MPs); back-end monitoring in the CloudLab. A UE executes FI-PPP applications, with monitoring on the user layer and client control; GEs may be used between the FI-PPP Cloud environment and the UE. Link to the SUNET/GÉANT network.]

Page 7: BTH’s Research in NV, NFV and Cloud Networking

Educating the Next Generation Experts in Cyber Security (ENGENSEC)

• Objective: create a new Master’s program in IT Security as a response to current and emerging cybersecurity threats by educating next-generation experts
• Funding organization: EU Tempus program
• Number of participants: 21
• Participating countries: Sweden (coordinator), Poland, Latvia, Greece, Germany, Ukraine, Russia

• Project activities:
– Defining the framework of the joint Master’s program, Cloud-based security lab development, development of the joint course curriculum, developing new and further developing existing courses, teacher training, effective quality control and project management, dissemination of the new Master’s program’s benefits, giving pilot courses in a summer school, preparing participating universities to launch the new Master’s program

Page 8: BTH’s Research in NV, NFV and Cloud Networking

Direct Involvement of BTH in FI-PPP

• FI-STAR = one out of five Call-2 FI-PPP use cases
• BTH’s role
– Major Swedish participant (with significant labs)
– Requirements engineering (co-chair of FI-STAR WP1)
– Validation (co-chair of FI-STAR WP6)
• Functional testing
• Quality of Service (QoS) measurements
• Quality of Experience (QoE) assessment
• Health Technology Assessment

• BTH’s work is strongly focused on Generic Enablers (GEs) and their performance

Synergy with XIFI: hosting would provide full control and unique QoS measurement facilities

Page 9: BTH’s Research in NV, NFV and Cloud Networking

A Very Brief View on Network Function Virtualization (NFV)

Kurt Tutschku
Blekinge Institute of Technology (BTH), Faculty of Computing
Department of Communication Systems (DIKO)

Page 10: BTH’s Research in NV, NFV and Cloud Networking

What is Network Function Virtualization (NFV)?

• Aims at network operators!

• Transform network architecture and operation by applying standard IT virtualization technology

• Members: >250 companies; only a few academics (5); BTH a member since Jan. 2013

Amongst others: work on future curricula

Page 11: BTH’s Research in NV, NFV and Cloud Networking

Example: BRAS – Broadband Remote Access Server

Move this box into the cloud!

Page 12: BTH’s Research in NV, NFV and Cloud Networking

Example: Service Chaining in NFV for Video Acceleration

• Suggested PoC by SK Telecom

Page 13: BTH’s Research in NV, NFV and Cloud Networking

(More) Detailed Architectural Framework

Page 14: BTH’s Research in NV, NFV and Cloud Networking

Initial Evaluation: Virtualization Concepts and Their Rough Performance

Rules of Thumb, Educated Guesses or Scientific Results?

Page 15: BTH’s Research in NV, NFV and Cloud Networking

A Metric for Isolation and Transparency of Virtual Elements

Kurt Tutschku
Blekinge Institute of Technology (BTH), Faculty of Computing
Department of Communication Systems (DIKO)

With acknowledgements to the definitions and descriptions of M. Fiedler (BTH) and D. Stezenbach (University of Vienna)

Page 16: BTH’s Research in NV, NFV and Cloud Networking

Scope and Causes of Reduced Virtualization Features?

The main cause of reduced virtualization quality is resource sharing!

(Typically) “atomic” resources: only a single request can be served at a time. However, requests might arrive in parallel (from other VEs, due to sharing).

Concurrency is resolved by serialization. But this may introduce additional delay (jitter) for the deferred request.

Thought experiment: two virtual appliances, arbitrary scheduling.

The severity depends on the “tolerable” delay and, in particular, on the delay variance.

Does this happen in real life?

[Diagram: a server (host machine) with shared CPU, memory and I/O resources; a Virtual Machine Monitor hosts virtual machines, each with a guest OS, virtual CPU, virtual memory, virtual I/O and a virtual appliance.]
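The thought experiment above can be sketched as a minimal discrete-event simulation. All parameters here (service time, request period, random phases) are illustrative assumptions for the sketch, not values measured in the slides:

```python
# Minimal sketch of the thought experiment: two virtual appliances issue
# periodic requests to one "atomic" resource that serves them one at a time.
# Concurrency is resolved by serialization; deferred requests see extra delay.
import random
import statistics

random.seed(1)
SERVICE_TIME = 10e-6   # time to serve one request (assumed)
PERIOD = 25e-6         # request period per appliance (assumed)
N = 2000               # requests per appliance

# A random phase per request models the "arbitrary scheduling" assumption.
events = sorted((i * PERIOD + random.uniform(0, PERIOD), who)
                for who in ("A", "B") for i in range(N))

delays = {"A": [], "B": []}
busy_until = 0.0
for t, who in events:
    start = max(t, busy_until)            # wait if the resource is busy
    busy_until = start + SERVICE_TIME
    delays[who].append(busy_until - t)    # total delay seen by this request

for who in ("A", "B"):
    d = delays[who]
    print(f"{who}: mean delay {statistics.mean(d)*1e6:.1f} us, "
          f"stdev (jitter) {statistics.stdev(d)*1e6:.1f} us")
```

Under sharing, each appliance sees not just a longer mean delay but a nonzero delay variance; that jitter is exactly what the metric on the following slides tries to capture.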

Page 17: BTH’s Research in NV, NFV and Cloud Networking

Experiment: Sharing among Virtual Routers

Set-up:
– Server: consumer hardware (Intel Core 2 Duo E8500, 4 GB RAM, Ubuntu 12.10); network interfaces: 2x 1 Gbit/s operating @ 100 Mbit/s
– Virtual router appliances: Ubuntu 12.10, Xen 3.5.0 or VirtualBox; packet forwarding using vSwitch; four appliances used
– Measurement traffic: 4 parallel UDP streams; 120 B frame size (Ethernet); CBR traffic: inter-packet time 61 µs (per stream), 15.65 Mbit/s per flow, 62.62 Mbit/s total

Be aware: these are data packets, but in general this can be extended to signaling/control requests.
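As a sanity check, the per-flow rate follows directly from the frame size and the inter-packet time; the stated 15.65 Mbit/s corresponds to a spacing of about 61.3 µs, so the 61 µs figure is rounded:

```python
# Back-of-the-envelope check of the CBR parameters from the set-up above.
FRAME_BYTES = 120      # Ethernet frame size used in the experiment
IPT = 61e-6            # inter-packet time per stream, as stated (seconds)
FLOWS = 4

rate_per_flow = FRAME_BYTES * 8 / IPT        # bit/s
total_rate = FLOWS * rate_per_flow
print(f"per flow: {rate_per_flow / 1e6:.2f} Mbit/s")   # ~15.74 Mbit/s
print(f"total:    {total_rate / 1e6:.2f} Mbit/s")      # ~62.95 Mbit/s
# The slide's 15.65 Mbit/s per flow implies an inter-packet time of
# 120 * 8 / 15.65e6 ~= 61.3 us, i.e. the stated 61 us is rounded.
```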

Page 18: BTH’s Research in NV, NFV and Cloud Networking

Experiment: Comparison of Ingress and Egress

Packet sequence and average throughput, ingress (all flows) vs. egress (all flows).

Observations:
– Ingress: strict round-robin
– Egress: arbitrary packet order

Throughput variation is an indicator of reduced isolation and transparency!
Methodology: comparison of ingress with egress (independent of traffic type)
Implementation: compare the coefficient of variation at ingress and egress
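The comparison can be sketched as follows. The timestamps here are synthetic stand-ins for captured traces, and the egress delay model (exponential serialization delay in the hypervisor) is an assumption made for illustration only:

```python
# Sketch of the slide's metric: compare the coefficient of variation (CoV)
# of interval throughput at ingress and egress.
import random
import statistics

random.seed(42)
FRAME_BITS = 120 * 8   # frame size from the experiment, in bits
IPT = 61e-6            # CBR inter-packet time at ingress (seconds)
N = 20000

ingress_ts = [i * IPT for i in range(N)]
# Egress: serialization in the virtualization layer adds a variable delay
# per packet (assumed exponential with 20 us mean for this sketch).
egress_ts = sorted(t + random.expovariate(1 / 20e-6) for t in ingress_ts)

def throughput_cov(timestamps, interval=1e-3):
    """CoV of throughput over fixed intervals (trailing partial bin dropped)."""
    last_full = int(max(timestamps) / interval)
    counts = [0] * last_full
    for t in timestamps:
        b = int(t / interval)
        if b < last_full:
            counts[b] += 1
    rates = [c * FRAME_BITS / interval for c in counts]
    return statistics.stdev(rates) / statistics.mean(rates)

cov_in = throughput_cov(ingress_ts)
cov_out = throughput_cov(egress_ts)
print(f"ingress CoV: {cov_in:.4f}")
print(f"egress  CoV: {cov_out:.4f}")
```

A larger CoV at egress than at ingress indicates reduced isolation/transparency; as the next slide stresses, the metric only enables the comparison, it does not explain the cause.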

Page 19: BTH’s Research in NV, NFV and Cloud Networking

Power of the Metric: Comparison of Virtualization Technologies – Use of VirtualBox instead of Xen

VirtualBox introduces less variation than Xen (our current assumption: this is because VirtualBox does not use the complex vSwitch).

☝ Attention: the metric does not analyze why a specific virtualization technology has better isolation/transparency!

☝ The focus of the metric is on enabling a comparison!

Page 20: BTH’s Research in NV, NFV and Cloud Networking

Tack så mycket! Frågor? (Thank you very much! Questions?)