Taking Security Groups to Ludicrous Speed with OVS (OpenStack Summit 2015)
Post on 28-Jul-2015
TRANSCRIPT
Taking Security Groups to Ludicrous Speed with Open vSwitch
OpenStack Summit, Vancouver, 2015
Miguel Angel Ajo @mangel_ajo
Ivar Lazzaro @ivarlazzaro
Thomas Graf @tgraf__
Justin Pettit @Justin_D_Pettit
Agenda
● Problem Statement
  – Status Quo, a.k.a. "The Bridge Mess"
● Possible Solution
  – OVS + stateful services (+ OVN)
● Results
  – Performance Numbers
● Q&A
Status Quo
Mess of Bridges. Why?

VM (or VM lxc)
  → tap
  → qbr (Linux Bridge, with iptables rules)
  → veth
  → br-int (Open vSwitch, OpenFlow table)
  → br-eth1 (Open vSwitch)

4-5 network devices per guest in host! (iptables rules cannot be applied directly to an OVS port, so each guest needs its own intermediate Linux bridge.)
Possible Solution: Stacking Things Properly
(c) Karen Sagovac

Can we have a pure OVS Model?

VM (or VM lxc)
  → tap, veth, or internal port
  → br-int (Open vSwitch, OpenFlow table with security groups)
  → br-eth1 (Open vSwitch)

1 network device per guest in host!
Makes VMs and containers equally happy.
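In this model the guest's tap device (or a container's veth or internal port) plugs straight into the integration bridge. A minimal sketch with made-up port names:

```shell
# Attach a VM's tap device directly to the integration bridge
# (no per-guest qbr Linux bridge or veth pair needed).
ovs-vsctl add-port br-int tap-vm1

# A container can use an OVS internal port instead of a veth pair.
ovs-vsctl add-port br-int vm2-port -- set Interface vm2-port type=internal
```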
Some Background (OVS, OVN, Kernel CT)

Open vSwitch
● Highly scalable multi-layer virtual switch for hypervisors
  – Apache License (user space), GPL (kernel)
● Extensive flow table programming capabilities
  – OpenFlow 1.0 – 1.5 (some partial)
  – Vendor extensions
● Designed to manage overlay networks
  – VXLAN (+ extensions), GRE, Geneve, LISP, STT, VLAN, ...
● Remote management protocol (OVSDB)
● Monitoring capabilities
OVN
● Virtual networking for OVS
  – Developed by the same team that made OVS
  – Works on the same platforms (Linux, containers, Hyper-V)
● Provides L2/L3 virtual networking
  – Logical switches and routers
  – Conntrack-based security groups
  – L2/L3/L4 ACLs
  – Physical and DPDK-based logical-physical gateways
● Integrated with OpenStack and other CMSs
Implementing a Firewall with OVS
● OVS has traditionally supported only stateless matches
● Currently there are two ways to implement a firewall in OVS:
  – Match on TCP flags (enforce policy on SYN, allow ACK|RST)
    ● Pro: fast
    ● Con: lets non-established flows through if ACK or RST is set; TCP only
  – Use the "learn" action to set up the new flow in the reverse direction
    ● Pro: more "correct"
    ● Con: forces every new flow to OVS userspace, reducing flow setup rate by orders of magnitude
● Neither approach supports "related" flows or TCP window enforcement
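To make the first (stateless) approach concrete, here is an illustrative sketch in ovs-ofctl syntax; the bridge name and port number are made up. Note the loophole the slide calls out: any inbound segment carrying ACK or RST is accepted, established or not.

```shell
# Guest may initiate outbound TCP connections.
ovs-ofctl add-flow br-int "priority=10,tcp,in_port=1,actions=NORMAL"
# Inbound: with stateless matches there is no way to know whether a flow
# is established, so anything with ACK or RST set must be let through.
ovs-ofctl add-flow br-int "priority=10,tcp,tcp_flags=+ack,actions=NORMAL"
ovs-ofctl add-flow br-int "priority=10,tcp,tcp_flags=+rst,actions=NORMAL"
# Everything else (i.e. inbound SYN) is dropped.
ovs-ofctl add-flow br-int "priority=1,tcp,actions=drop"
```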
Connection Tracking
● We are adding the ability to use the conntrack module from Linux
  – Stateful tracking of flows
  – Supports ALGs to punch holes for related "data" channels
    ● FTP, TFTP, SIP
● Implement a distributed firewall with enforcement at the edge
  – Better performance
  – Better visibility
● Introduce new OpenFlow extensions:
  – Action to send to conntrack
  – Match fields on the state of the connection
● Have a prototype working. Expect to ship as part of OVS in the next release.
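With these extensions, a stateful firewall needs only a handful of flows. The syntax below is the form that eventually shipped in OVS (2.5) and may differ slightly from the prototype described here; the bridge name is illustrative.

```shell
# Untracked IP packets: run through conntrack and recirculate.
ovs-ofctl add-flow br-int "priority=10,ip,ct_state=-trk,actions=ct(table=0)"
# Packets on established or related connections pass.
ovs-ofctl add-flow br-int "priority=10,ip,ct_state=+trk+est,actions=NORMAL"
ovs-ofctl add-flow br-int "priority=10,ip,ct_state=+trk+rel,actions=NORMAL"
# New HTTP connections are committed to the tracker, then forwarded.
ovs-ofctl add-flow br-int "priority=5,tcp,tp_dst=80,ct_state=+trk+new,actions=ct(commit),NORMAL"
# Anything else is dropped.
ovs-ofctl add-flow br-int "priority=1,ip,actions=drop"
```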
Netfilter Conntrack Integration

1. The OVS flow table sends a packet to the Netfilter connection tracker with the conntrack() action.
2. The connection tracker creates and updates entries in its CT table (also visible through the userspace Netlink API).
3. The connection state is returned to the flow table (conn_state=).
4. The packet is recirculated through the OVS flow table with its connection state set.
Connection Tracking Zones

The OVS flow table can address multiple independent CT tables in the Netfilter connection tracker: zone 1 and zone 2 each get their own CT table.
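Zones let unrelated ports or tenants use independent CT tables, so a committed entry in one zone never matches traffic in another. A sketch (port numbers and zone ids are made up, using the OVS 2.5-era syntax):

```shell
# Track traffic from each port in its own conntrack zone.
ovs-ofctl add-flow br-int "table=0,ip,in_port=1,ct_state=-trk,actions=ct(zone=1,table=1)"
ovs-ofctl add-flow br-int "table=0,ip,in_port=2,ct_state=-trk,actions=ct(zone=2,table=1)"
# State checks then also name the zone they refer to.
ovs-ofctl add-flow br-int "table=1,ip,ct_zone=1,ct_state=+trk+est,actions=NORMAL"
```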
OVSFirewallDriver
● Original proposal from Amir Sadoughi
  – https://review.openstack.org/#/c/89712
● stable/kilo (just a PoC)
  – https://review.openstack.org/#/c/183725/
Example HTTP Request

VM 1 → HTTP request → VM 2; VM 2 → response → VM 1

GLOSSARY of OF actions
NORMAL = "behave like a normal switch"
ct(commit) = "push this packet to CT"
ct(recirc) = "grab any CT info we have, set +trk, and send to T0"
SG OpenFlow Table structure (packet path VM1 → VM2)

Input T0 (from VM(n), MAC+in_port, -trk):
  -trk → ct(recirc)

Egress T1:
  +trk (+est/+rel) → NORMAL
  ARP (with filters) → NORMAL
  SG rules in OF: ip → ct(commit,recirc)
  (…)

Ingress T2 (to VM(n), MAC):
  SG rules: tp_dst=80 → match → ct(commit), NORMAL
  (…)
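The T0/T1/T2 structure above can be sketched as concrete flows. This is an illustrative reconstruction rather than the driver's actual rules, using the OVS 2.5-era syntax in which "recirc" is expressed as ct(table=N):

```shell
# T0 (input): untracked IP traffic goes through conntrack, recirculating to T1;
# ARP skips conntrack and is resubmitted directly.
ovs-ofctl add-flow br-int "table=0,ip,ct_state=-trk,actions=ct(table=1)"
ovs-ofctl add-flow br-int "table=0,arp,actions=resubmit(,1)"
# T1 (egress): established/related flows and (filtered) ARP take the NORMAL path.
ovs-ofctl add-flow br-int "table=1,ip,ct_state=+trk+est,actions=NORMAL"
ovs-ofctl add-flow br-int "table=1,ip,ct_state=+trk+rel,actions=NORMAL"
ovs-ofctl add-flow br-int "table=1,arp,actions=NORMAL"
# T1 egress SG rule: commit the new flow and recirculate toward ingress.
ovs-ofctl add-flow br-int "table=1,tcp,ct_state=+trk+new,actions=ct(commit,table=2)"
# T2 (ingress): the slide's SG rule admitting HTTP to the destination VM.
ovs-ofctl add-flow br-int "table=2,tcp,tp_dst=80,actions=ct(commit),NORMAL"
```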
openvswitch_firewall.py
● update_security_group_{rules, members}
● prepare_port_filter
● update_port_filter
● remove_port_filter
● filter_defer_apply_{on,off}
neutron.agent.linux.firewall.FirewallDriver (base class)
neutron.agent.linux.openvswitch_firewall.OVSFirewallDriver (new driver)
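A minimal, self-contained sketch of what such a driver might look like. The method names come from the slide; the bridge object and the flows it programs are simplified stand-ins for illustration, not Neutron's actual implementation.

```python
class MockBridge:
    """Stand-in for the OVS integration bridge: records OpenFlow rules."""

    def __init__(self):
        self.flows = []

    def add_flow(self, **kw):
        self.flows.append(kw)

    def delete_flows(self, **kw):
        # Drop every flow whose fields match all given criteria.
        self.flows = [f for f in self.flows
                      if not all(f.get(k) == v for k, v in kw.items())]


class OVSFirewallDriver:
    """Security groups expressed as OpenFlow rules on br-int (sketch)."""

    def __init__(self, bridge):
        self.br = bridge
        self.sg_rules = {}        # sg_id -> list of rule dicts
        self.filtered_ports = {}  # port_id -> port dict
        self._deferred = False

    def update_security_group_rules(self, sg_id, rules):
        self.sg_rules[sg_id] = rules

    def prepare_port_filter(self, port):
        self.filtered_ports[port['id']] = port
        # T0: send untracked traffic from this port through conntrack,
        # recirculating into the SG rules table.
        self.br.add_flow(table=0, in_port=port['ofport'], ct_state='-trk',
                         actions='ct(table=1)')
        # One flow per SG rule: commit new connections and forward.
        for sg_id in port.get('security_groups', []):
            for rule in self.sg_rules.get(sg_id, []):
                self.br.add_flow(table=1, in_port=port['ofport'],
                                 ct_state='+trk+new', tp_dst=rule['port'],
                                 actions='ct(commit),NORMAL')

    def update_port_filter(self, port):
        self.remove_port_filter(port)
        self.prepare_port_filter(port)

    def remove_port_filter(self, port):
        self.filtered_ports.pop(port['id'], None)
        self.br.delete_flows(in_port=port['ofport'])

    def filter_defer_apply_on(self):
        self._deferred = True

    def filter_defer_apply_off(self):
        self._deferred = False
```

For example, preparing a port in a security group with a single TCP/80 rule programs one T0 conntrack flow and one T1 rule flow; removing the port deletes both.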
Performance Numbers
Test Setup Explained
System: 2-socket, 24-core Ivy Bridge
CPU: Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.50GHz
Kernel: 3.10.0-229.1.2.el7.x86_64
Test: netperf with TCP_STREAM and TCP_RR
Notes: virt overhead eliminated; netperf/netserver run bare metal
Local: netperf and netserver both on Compute 1
Multi node: netperf on Compute 1, netserver on Compute 2, over a 10G link
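The slides do not show the exact netperf invocations; a typical pair of runs matching this matrix would look like the following (hostname and run durations are assumptions):

```shell
# Bulk throughput (TCP_STREAM) at a given message size, e.g. 1024 bytes.
netperf -H compute2 -t TCP_STREAM -l 30 -- -m 1024
# Transaction rate (TCP_RR) with 64K request/response payloads.
netperf -H compute2 -t TCP_RR -l 30 -- -r 65536,65536
```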
TCP Stream, Local, 1 netperf thread
[Chart: iptables vs. OVS, throughput (Mbit) and CPU megacycles per Mbit, by packet size]
TCP Stream, Local, 16 netperf threads
[Chart: iptables vs. OVS, throughput (Mbit) and CPU megacycles per Mbit, by packet size]
TCP Stream, Multi Node, 8 netperf threads
[Chart: iptables vs. OVS, throughput (Mbit) and CPU megacycles per Mbit, by packet size]
TCP Requests, Local, 1 netperf thread
[Chart: iptables vs. OVS, throughput and CPU megacycles per Mbit, by packet size]
TCP Requests, Local, 64K packets
[Chart: iptables vs. OVS, requests/s and CPU megacycles per Mbit, by number of netperf threads]
TCP Requests, Multi Node, 1 netperf thread
[Chart: iptables vs. OVS, requests/s and CPU megacycles per Mbit, by packet size]
TCP Requests, Multi Node, 64K packets
[Chart: iptables vs. OVS, requests/s and CPU megacycles per Mbit, by number of netperf threads]
Conclusion
● Both throughput and latency are considerably improved (up to 6x in some situations).
● When limited by wire speed, the pure OVS approach generally consumes fewer CPU cycles for the same result, leaving more resources for the actual workload.
● An issue with specific packet sizes is to be investigated and resolved before merge.
Next Steps
● Convert the ML2 PoC to a patch that can be merged
  – Write functional tests
  – Optimize OF rules/manipulation
● Complete the upstream merge of connection tracking support in Open vSwitch in the Linux kernel
● Consider and realize OVN integration of this work
● Hopefully ready for Liberty
Q&A
● OVS w/ CT Neutron ML2 plugin
  – https://github.com/mangelajo/vagrant-rdo-juno-ovs-ct-firewall
● Open vSwitch
  – http://openvswitch.org/
● Conntrack code on GitHub
  – https://github.com/justinpettit/ovs/tree/conntrack
● Stateful Connection Tracking & Stateful NAT (OVS conference)
  – http://www.openvswitch.org/support/ovscon2014/17/1030-conntrack_nat.pdf
Thank You!