OpenStack & OVS: From Love-Hate Relationship to Match Made in Heaven - Erez Cohen - OpenStack Israel 2016
TRANSCRIPT
From Love-Hate Relationship to Match Made in Heaven
OpenStack Israel, June 2016
OpenStack and OVS
What Are We Covering in this Session
• Challenges with using OVS for OpenStack networking
• Mellanox OVS Offload / ASAP2 Direct overview
• Initial performance results
OpenStack and OVS: A Love-Hate Relationship
Man, It is SLOW!
• It burns CPU like there is no tomorrow!
• What do you mean it drops my packets?
Comparison of Existing I/O Virtualization Solutions
• Paravirt - Control
• SR-IOV - Performance
What If We Could Enjoy the Best of Both Worlds?
What Are We Covering in this Session
• Challenges with using OVS for OpenStack networking
• Mellanox OVS Offload / ASAP2 Direct overview
• Initial performance results
Mellanox ASAP2 Direct
Accelerated Switch And Packet Processing (ASAP2)
• NIC HW data path acceleration
  - Based on Mellanox embedded switch (eSwitch)
• Advanced flow-based switch
• Sophisticated classification engines
• Multiple actions supported, including (a toy flow-entry sketch follows the diagram):
  - Steering and forwarding
  - Drop / allow
  - Encap / decap
ASAP2 Direct
• Full data path offload (a.k.a. full vSwitch offload)
ASAP2 Flex
• Accelerated virtual switch
• Accelerated VNF
[Diagram: ASAP2 - SR-IOV VMs attached to the NIC eSwitch, which applies classify/action stages in hardware]
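To make the flow-based classify/action model concrete, here is a toy Python sketch of what an eSwitch flow entry conceptually carries. The field and action names are illustrative assumptions, not an actual Mellanox or OVS API.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Match:
    # Classification keys (illustrative subset of what a flow-based switch matches on)
    in_port: Optional[str] = None    # e.g. the uplink or a VF port
    eth_type: Optional[int] = None   # 0x0800 = IPv4
    vlan_id: Optional[int] = None
    ipv4_dst: Optional[str] = None

@dataclass
class FlowEntry:
    match: Match
    actions: List[str] = field(default_factory=list)  # e.g. steer, drop, encap/decap

# A flow that strips the VLAN tag and steers matching traffic to an SR-IOV VF
flow = FlowEntry(
    match=Match(in_port="uplink0", eth_type=0x0800, vlan_id=100, ipv4_dst="10.0.0.5"),
    actions=["pop_vlan", "output:vf3"],
)
print(flow)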
OVS Architecture and Operations
Forwarding
• Flow-based forwarding
• The first packet of a new flow (match miss) is directed to user space (ovs-vswitchd)
• ovs-vswitchd determines flow handling and programs the kernel (fast path)
• Following packets hit kernel flow entries and are executed in the fast path (a toy sketch of this cycle follows)
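A rough Python model of that miss/upcall/install cycle; this is purely illustrative and not the actual OVS code:

kernel_flow_table = {}        # the kernel "fast path": flow key -> actions

def vswitchd_resolve(flow_key):
    # Slow path (user space): ovs-vswitchd consults its OpenFlow tables,
    # decides how to handle the flow, and programs the kernel fast path.
    actions = ["output:2"]    # placeholder decision
    kernel_flow_table[flow_key] = actions
    return actions

def datapath_receive(packet):
    flow_key = (packet["src"], packet["dst"], packet["proto"])
    actions = kernel_flow_table.get(flow_key)
    if actions is None:                        # match miss: first packet of the flow
        actions = vswitchd_resolve(flow_key)   # upcall to user space
    print(packet["src"], "->", packet["dst"], actions)

datapath_receive({"src": "10.0.0.1", "dst": "10.0.0.5", "proto": "tcp"})
datapath_receive({"src": "10.0.0.1", "dst": "10.0.0.5", "proto": "tcp"})  # fast-path hit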
OVS Offload – Let the Hardware Do the Heavy-lifting
New Flow
• A new flow results in a 'miss' action in the eSwitch and is directed to the OVS kernel module
• A miss in the kernel punts the packet to ovs-vswitchd in user space
Configuration
• ovs-vswitchd resolves the flow entry and, based on a policy decision to offload, propagates it to the corresponding eSwitch tables for offload-enabled flows
Fast Forwarding
• Subsequent frames of offload-enabled flows are processed and forwarded by the eSwitch (see the sketch below)
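The offload case adds one decision on the slow path: whether to push the resolved flow into the eSwitch tables or leave it in the kernel. A hedged sketch with hypothetical names:

eswitch_flow_table = {}   # flows handled entirely by the NIC eSwitch
kernel_flow_table = {}    # software fast path for flows that stay in the kernel

def offload_policy(actions):
    # Example policy only: offload plain steering/VLAN actions, keep the rest in software.
    return all(a.startswith("output:") or a in ("pop_vlan", "drop") for a in actions)

def handle_miss(flow_key):
    actions = ["pop_vlan", "output:vf3"]        # placeholder slow-path decision
    if offload_policy(actions):
        eswitch_flow_table[flow_key] = actions  # subsequent frames never leave the NIC
    else:
        kernel_flow_table[flow_key] = actions   # fall back to the kernel fast path
    return actions

handle_miss(("10.0.0.1", "10.0.0.5", "tcp"))
print(eswitch_flow_table)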
OVS and SR-IOV: Isn't It Oil and Water?
Representor ports enable OVS to "know" and service those VMs that use SR-IOV
Representor ports are used for eSwitch / OVS communication (miss flows and para-virtual to SR-IOV communication); a provisioning sketch follows the diagram
[Diagram: the NIC eSwitch exposes representor netdevs that plug into OVS alongside regular netdevs; VMs using OVS offload (SR-IOV) and VMs using para-virtualization attach side by side, with policy-based flow sync between OVS and the eSwitch]
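For orientation, a hedged Python sketch of how representor ports are typically wired into OVS on current kernels. The device names, PCI address, and VF count are assumptions, and the exact provisioning commands (sysfs, devlink switchdev mode, ovs-vsctl) may differ from the tooling available at the time of this talk.

import subprocess

PF = "enp3s0f0"            # assumed physical function netdev name
PCI = "0000:03:00.0"       # assumed PCI address of the PF
BRIDGE = "br0"

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# 1. Create VFs on the PF (standard sysfs interface).
sh(f"echo 2 > /sys/class/net/{PF}/device/sriov_numvfs")

# 2. Put the NIC eSwitch into switchdev mode so VF representor netdevs appear.
sh(f"devlink dev eswitch set pci/{PCI} mode switchdev")

# 3. Add each representor to the OVS bridge; OVS now "sees" the SR-IOV VMs
#    through these ports and receives their miss traffic on them.
for rep in (f"{PF}_0", f"{PF}_1"):   # representor naming varies by driver/kernel
    sh(f"ovs-vsctl add-port {BRIDGE} {rep}")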
Software Defined Networking, at Full Speed
Leverage the Open vSwitch control plane and Software Defined Networking (SDN) capabilities to control the eSwitch forwarding plane
Enhance forwarding performance while maintaining network programmability
Benefits:
• Open vSwitch interfaces to the user remain untouched
  - The hardware offloads are transparent to the user
• Users do not need to change their environment
OVS-Offload with OpenStack - Nova & Neutron Flows
1. Nova compute sends a port create with the hybrid VNIC_TYPE and the PCI address to the Neutron server.
2. The mechanism driver in the Neutron server binds the VNIC_TYPE and returns the VIF_TYPE.
3. The Neutron server sends the VIF_TYPE and segmentation ID (VLAN) to Nova compute.
4. The Neutron server sends an RPC call to the Neutron agent to apply the VLAN tagging rule on the representor (a simplified mechanism-driver sketch follows the diagram).
[Diagram: message flow 1-4 between Nova compute, the Neutron server (ML2 plugin and mechanism driver), and the Neutron agent]
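For readers less familiar with ML2, a much-simplified Python sketch of the shape such a mechanism driver takes. This is not the actual Mellanox driver; the import path, the 'hybrid' vnic_type, and the returned VIF details are assumptions based on the flow described above and the Neutron ML2 API of that era.

# Simplified ML2 mechanism driver sketch; a real driver also validates the
# host's offload capability and the port's PCI address before binding.
from neutron.plugins.ml2 import driver_api as api   # assumed import path (Mitaka-era Neutron)

VNIC_TYPE_HYBRID = 'hybrid'   # vnic_type requested by Nova compute in step 1
VIF_TYPE_OVS = 'ovs'          # VIF type returned to Nova in step 3

class OvsOffloadMechanismDriver(api.MechanismDriver):

    def initialize(self):
        pass

    def bind_port(self, context):
        # Step 2: only bind ports that ask for the hybrid (offloaded) vnic_type.
        if context.current.get('binding:vnic_type') != VNIC_TYPE_HYBRID:
            return
        for segment in context.segments_to_bind:
            # Hand the VLAN segment back; the Neutron agent later applies the
            # VLAN tagging rule on the representor port (step 4).
            context.set_binding(segment[api.ID], VIF_TYPE_OVS, {'port_filter': False})
            return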
What Are We Covering in this Session
• Challenges with using OVS for OpenStack networking
• Mellanox OVS Offload / ASAP2 Direct overview
• Initial performance results
Benchmark Targets
Metrics
• Message rate (PPS)
• Network-related CPU load
Environment
• 25Gbps network
• Extreme performance
• Open source
• Free
Standard benchmark
• RFC 2544
Benchmark Topology
[Diagram: two setups compared side by side on a Mellanox NIC with 25GE ports. Left, OVS over DPDK: the VM runs testpmd on DPDK in user space; the hypervisor runs OVS over DPDK in user space. Right, OVS Offload: the VM runs the same testpmd/DPDK stack; the hypervisor runs OVS in user space and offloads flows to the NIC eSwitch.]
Benchmark Topology – Traffic Flow
[Diagram: same topology as above, annotated with the traffic flow through each setup]
Benchmark Results
RFC 2544. 64 Bytes. Zero Packet Loss
Max Theoretical PPS for 25G is 37.2M PPS
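That 37.2M figure follows from Ethernet framing overhead on minimum-size packets; a quick sanity check in Python:

# 64B frame + 8B preamble/SFD + 12B inter-frame gap = 84B per packet on the wire
FRAME_BYTES = 64
OVERHEAD_BYTES = 8 + 12
LINK_BPS = 25e9

pps = LINK_BPS / ((FRAME_BYTES + OVERHEAD_BYTES) * 8)
print(f"{pps / 1e6:.1f} Mpps")   # -> 37.2 Mpps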
[Bar chart: message rate and dedicated hypervisor cores]
• OVS over DPDK: 7.6 MPPS, 4 dedicated hypervisor cores
• OVS Offload: 33 MPPS, 0 dedicated hypervisor cores
Conclusions – OVS Offload Advantages
330% higher message rate compared to OVS over DPDK
• 33M PPS vs. 7.6M PPS
• OVS Offload reaches near line rate at 25G (37.2M PPS theoretical max)
Zero CPU utilization on the hypervisor, compared to 4 cores with OVS over DPDK
• This delta will grow further with packet rate and link speed
Same CPU load on VM
Thank You