OpenStack Networking – 2014 Update
Yves Fauser, Salvatore Orlando – 8/28/2014

DESCRIPTION

This presentation was shown at the OpenStack Online Meetup session on August 28, 2014. It is an update to the 2013 sessions and adds content on service plugins and modular plugins (ML2), as well as an outlook on some Juno features such as DVR, L3 high availability, and IPv6 support.

TRANSCRIPT

Page 1: Title

© 2014 VMware Inc. All rights reserved.

OpenStack Networking – 2014 Update
Yves Fauser, Salvatore Orlando – 8/28/2014

Page 2: Agenda

•  Nova-Networking vs. Neutron refresher
  –  Nova-Networking quick overview
  –  Nova-Networking multi-host mode
  –  Nova-Networking vs. Neutron at a glance
•  Neutron plugin concept refresher
•  Service plugins
•  ML2 plugin vs. monolithic plugins
•  Plugins and mechanism drivers added in the Icehouse release (incomplete list)
•  Outlook to Juno
  –  Distributed Virtual Router (DVR) for the OVS mechanism driver
  –  Neutron L3 high availability for virtual routers
  –  Neutron IPv6 support

Page 3: Nova-Networking quick overview

[Diagram: Nova component architecture, inspired by Ken Pepple — nova-api (OS, EC2, Admin), nova-scheduler, nova-cert, nova-consoleauth, nova-console (vnc/vmrc), nova-metadata, nova-network, nova-volume and nova-compute, connected through the queue and the Nova DB; nova-compute drives the hypervisor (KVM, Xen, etc.) via libvirt, XenAPI, etc.; nova-network drives the network providers (Linux Bridge or OVS with brcompat, dnsmasq, iptables); nova-volume drives the volume provider (iSCSI, LVM, etc.)]

•  Nova-Networking was OpenStack’s first network implementation

•  Nova-network is still present today, and can be used instead of Neutron

•  No new features have been added since Folsom, but bug fixing is done frequently

•  Nova-network only knows three basic network models (a minimal configuration sketch follows below):

  –  Flat & Flat DHCP: direct bridging of instances to an external Ethernet interface, without or with DHCP

  –  VLAN based: every tenant gets a VLAN, DHCP enabled

•  Watch our first session for more details: https://www.youtube.com/watch?v=ascEICz_WUY
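
The network model is selected with the network_manager flag in nova.conf. A minimal sketch, assuming the Folsom-era flag names; the bridge and interface values (br100, eth1) are placeholders, so check them against your deployment:

# nova.conf — pick ONE network manager

# Flat DHCP: bridge all instances onto br100 over eth1, dnsmasq serves DHCP
network_manager = nova.network.manager.FlatDHCPManager
flat_network_bridge = br100
flat_interface = eth1

# VLAN based: one VLAN and bridge per tenant, VLAN IDs allocated from 100 upwards
# network_manager = nova.network.manager.VlanManager
# vlan_interface = eth1
# vlan_start = 100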

Page 4: Nova-Networking multi-host mode 1/2

[Diagram: a single "Compute Node + Networking" runs nova-compute plus nova-network (dnsmasq, iptables/routing, NAT & floating IPs) and bridges the per-tenant VLANs (Bridge 30/VLAN 30, Bridge 40/VLAN 40) between the internal VLAN trunk and the external network (or VLAN) towards WAN/Internet; the remaining compute nodes run only nova-compute with their local bridges and reach the networking node over the VLAN trunk]

•  In Nova-Networking, the node holding the nova-network role is:
  –  A single point of failure
  –  A choke point for both east-west and north-south traffic
     (traffic staying in the DC between nodes, and traffic leaving/entering the DC at the perimeter)

•  Nova-Networking has a “multi-host mode” to address this

Page 5: Nova-Networking multi-host mode 2/2

[Diagram: every compute node is now a "Compute Node + Networking" running nova-compute plus nova-network (dnsmasq, iptables/routing, NAT & floating IPs), with per-tenant bridges (Bridge 30/VLAN 30, Bridge 40/VLAN 40) attached to the internal VLAN trunk and directly to the external network (or VLAN) towards WAN/Internet]

•  With nova-networking “multi-host”, each compute node runs nova-network and provides routing, SNAT and floating IPs (DNAT) for its local instances (see the enablement sketch below)
  –  Pros: inherently highly available; scales routing and NAT out to all compute nodes
  –  Cons: IP address sprawl — each compute node needs one external IP for SNAT, plus one internal IP in each project network
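
A sketch of how multi-host mode was typically enabled; the flag names are from the nova-network documentation of that era, so treat the exact spelling as an assumption:

# nova.conf on every compute node — run the network (and metadata) service locally
multi_host = True

# Create the network as multi-host (label and range are placeholder values)
nova-manage network create --label=private --fixed_range_v4=10.0.0.0/24 --multi_host=T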


Page 6: Nova-Networking vs. Neutron at a glance

•  Watch our first session for more details: https://www.youtube.com/watch?v=ascEICz_WUY

•  Neutron pros
  –  More network implementation options
  –  Dynamic creation of networks, virtual routers, load balancers and VPNs under the tenant’s control, instead of a fixed per-project allocation
  –  Pluggable architecture allows vendors to integrate their network solutions into OpenStack and innovate independently (e.g. using network virtualization, SDN concepts, etc.)
  –  Well-defined, tenant-accessible API for consuming network services

•  Nova-Networking pros
  –  Simple models with fewer moving parts
  –  “Compute centric” networking model; easier to understand than the complex options and “networking speak” in Neutron
  –  Code base has been in “bug-fixing only” mode for a long time now; less friction
  –  HA and scale-out through the “multi-host” option (addressed in Neutron by DVR and L3 HA in the Juno timeframe)

Page 7: OpenStack Neutron – plugin concept refresher

[Diagram: the Neutron service (server) exposes the Neutron Core API northbound and drives a vendor/user plugin through the Plugin API; Neutron API extensions sit alongside the Core API, and implementing the extension API is optional]

•  Neutron Service (Server)
  –  L2 network abstraction definition and management, IP address management
  –  Device and service attachment framework
  –  Does NOT do any actual implementation of the abstractions

•  Vendor/User Plugin
  –  Maps the abstractions to an implementation on the network (an overlay, e.g. NSX, or the physical network)
  –  Makes all decisions about *how* a network is to be implemented
  –  Can provide additional features through API extensions
  –  Extensions can either be generic (e.g. L3 router / NAT) or vendor specific (a short CLI example follows below)
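
To make the split concrete, this is roughly how a tenant consumed the core API and the L3 router extension with the CLI of that era (resource names are placeholders):

# Core API: L2 network + IPAM
neutron net-create demo-net
neutron subnet-create demo-net 10.10.0.0/24 --name demo-subnet

# API extension: L3 router / NAT
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet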

Page 8: Core and service plugins

•  The core plugin implements the “core” Neutron API functions (L2 networking, IPAM, …)

•  Service plugins implement additional network services (L3 routing, load balancing, firewall, VPN)

•  Implementations might choose to implement the relevant extensions in the core plugin itself (see the configuration sketch below)
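
A sketch of what stacking several service plugins looks like in neutron.conf; the FWaaS and LBaaS class paths are plausible Icehouse-era examples (they match the services tree listed on page 9), but verify them against your installed release:

# One core plugin, several service plugins (paths illustrative)
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin,neutron.services.loadbalancer.plugin.LoadBalancerPlugin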

[Diagram: three ways to map the Neutron API functions (core, L3, FW) onto plugins — a single core plugin implementing all three; a core plugin implementing core + L3 with a separate FW service plugin; or a core plugin implementing only the core functions, with separate L3 and FW service plugins]

Page 9: OpenStack Neutron – plugin locations

# cat /etc/neutron/neutron.conf | grep "core_plugin"
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

# cat /etc/neutron/neutron.conf | grep "service_plugins"
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin

# ls /usr/share/pyshared/neutron/plugins/
bigswitch  brocade  cisco  common  embrane  hyperv  __init__.py  linuxbridge
metaplugin  midonet  ml2  mlnx  nec  nicira  openvswitch  plumgrid  ryu

# ls /usr/share/pyshared/neutron/services/
firewall  __init__.py  l3_router  loadbalancer  metering
provider_configuration.py  service_base.py  vpn

Page 10: OpenStack Neutron – modular plugin

•  Before the modular plugin (ML2), every team or vendor had to implement a complete plugin, including IPAM, DB access, etc.

•  The ML2 plugin separates core functions like IPAM, virtual network ID management, etc. from the vendor/implementation specific functions, and therefore makes it easier for vendors not to reinvent the wheel with regard to ID management, DB access, …

•  Existing and future non-modular plugins are called “monolithic” plugins

•  ML2 calls the management of network types “type drivers”, and the implementation specific part “mechanism drivers”

[Diagram: the ML2 plugin & API extensions sit on top of a Type Manager and a Mechanism Manager; type drivers handle GRE, VLAN, VXLAN, etc., while mechanism drivers handle Arista, Cisco, Linux Bridge, OVS, etc.]

Page 11: OpenStack Neutron ML2 – locations

# cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep type_drivers
# the neutron.ml2.type_drivers namespace.
# Example: type_drivers = flat,vlan,gre,vxlan
type_drivers = gre

# cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep mechanism_drivers
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
mechanism_drivers = openvswitch,linuxbridge

# ls /usr/share/pyshared/neutron/plugins/ml2/drivers/
cisco  __init__.py  l2pop  mech_agent.py  mech_arista  mech_hyperv.py
mechanism_ncs.py  mech_linuxbridge.py  mech_openvswitch.py  type_flat.py
type_gre.py  type_local.py  type_tunnel.py  type_vlan.py  type_vxlan.py

Page 12: OpenStack Neutron – modular plugin vs. monolithic plugins

•  A vendor is free to choose between developing a monolithic plugin or an ML2 mechanism driver
  –  A vendor might want to use its own integrated IPAM / DB access, or may already have a stable and proven code base for it
  –  Timing: development of a monolithic plugin might have started long before ML2 emerged

•  Contrary to a common misunderstanding, monolithic plugins are not deprecated; only the existing OVS and Linux Bridge plugins were deprecated in Icehouse, in favor of the OVS / Linux Bridge mechanism drivers (see the configuration sketch below)

•  ML2 re-uses the monolithic OVS and Linux Bridge code for its mechanism drivers and agents (e.g. L3 agent, DHCP agent, OVS agent, etc.)
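
A sketch of the migration this deprecation implies in neutron.conf; the monolithic class path is inferred from the openvswitch plugin tree listed on page 9, so verify it against your release:

# Before (monolithic OVS plugin, deprecated in Icehouse):
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

# After (ML2 with the OVS mechanism driver):
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
# plus, in ml2_conf.ini:
mechanism_drivers = openvswitch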

Page 13: Plugins and mechanism drivers added in the Icehouse release (incomplete list)

•  New ML2 mechanism drivers:
  –  Mechanism driver for the OpenDaylight controller
  –  Brocade ML2 mechanism driver for VDX switch clusters

•  New Neutron plugins
  –  IBM SDN-VE controller plugin, Nuage Networks controller plugin

•  Service plugins
  –  Embrane and Radware LBaaS drivers
  –  Cisco VPNaaS driver for CSR routers

•  Various
  –  Support for virtual networks plugged into Docker containers

! This list is incomplete by design, please see here for more details: https://blueprints.launchpad.net/neutron/icehouse

Page 14: Juno outlook – Distributed Virtual Router for OVS – 1/5

•  There is no equivalent of nova-network “multi-host” mode in Neutron today (as of Icehouse)

•  In the OVS and Linux Bridge implementations, the L3 agent node is a single point of failure

•  Scaling out is done by deploying multiple network nodes, but even then east-west traffic needs to go through an L3 agent node, which can become a choke point

•  Some vendor implementations already have distributed routing and HA today (e.g. VMware’s NSX)

[Diagram: Icehouse OVS reference layout — the Neutron network node runs neutron-server + OVS plugin, the L3 agent, DHCP agent and OVS agent, with dnsmasq, iptables/routing and NAT & floating IPs behind br-ex towards the external network (or VLAN) and WAN/Internet; compute nodes run only nova-compute and the OVS agent (br-int, br-tun, ovsdb-server/ovs-vswitchd) and reach the network node over L2-in-L3 tunnels across the layer 3 transport network]

Page 15: Juno outlook – Distributed Virtual Router for OVS – 2/5

•  Similar to “multi-host” mode in nova-network, each compute node will have its own routing and NAT service (internal router namespaces, ‘IR’); a configuration sketch follows below

•  In contrast to nova-network “multi-host” mode:
  –  SNAT will be done on a centralized network node to avoid IP address sprawl on the external network (introducing a single point of failure that needs to be addressed through virtual router HA)
  –  All IRs use a single logical internal IP in the tenant networks, but have separate MAC addresses
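
A sketch of the configuration this was expected to take per the DVR blueprint; the option names were still in flux at the time, so treat them as an assumption:

# neutron.conf on the controller — create new routers as distributed by default
router_distributed = True

# l3_agent.ini on each compute node — host the internal router (IR) namespaces
agent_mode = dvr

# l3_agent.ini on the centralized network node — additionally host the SNAT function
agent_mode = dvr_snat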

[Diagram: DVR layout — each compute node now additionally runs an L3 (DVR) agent with its own iptables/routing and NAT for floating IPs behind a local br-ex to the external network (or VLAN); the network node keeps dnsmasq and a centralized iptables/routing namespace for the SNAT IPs; all nodes remain connected through br-int/br-tun over L2-in-L3 tunnels on the layer 3 transport network]

Page 16: Juno outlook – Distributed Virtual Router for OVS – 3/5

•  East-west traffic that is routed within a tenant’s distributed virtual router is sent directly between compute nodes on the transport network (e.g. using overlay networks); see the CLI sketch below

•  Traffic can also stay within a compute node, if the source and destination are on the same compute node

•  For more details see the DVR blueprint: https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
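
For illustration, creating such a router was expected to look like this with the CLI of that era; the --distributed flag is taken from the blueprint, so consider it an assumption (resource names are placeholders):

# Create a distributed router instead of a centralized one
neutron router-create --distributed True tenant-router
neutron router-interface-add tenant-router demo-subnet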

[Diagram: two compute nodes and one network node, each hosting internal router namespaces IR1 and IR2 next to the local VMs (br-int/br-tun/br-ex); east-west traffic between VMs on different compute nodes flows IR-to-IR over the transport network (e.g. used for tunnels), while the network node hosts the centralized R1/R2 SNAT namespaces towards WAN/Internet over the external network]

Page 17: Juno outlook – Distributed Virtual Router for OVS – 4/5

•  SNAT traffic from the tenant instances to the internet/WAN (north-south) is routed through a centralized network node

•  This avoids IP address sprawl on the external network

•  For more details see the DVR blueprint: https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr

[Diagram: same topology as 3/5, highlighting the north-south SNAT path — traffic from a VM passes through its local IR namespace, crosses the transport network to the R1/R2 SNAT namespaces (SNAT router IP) on the network node, and leaves through br-ex to the external network and WAN/Internet]

Page 18: Juno outlook – Distributed Virtual Router for OVS – 5/5

•  Floating IP traffic between the tenant instances and the internet/WAN (north-south) is routed and NATed directly on the compute nodes (in the IR namespace); see the CLI sketch below

•  For more details see the DVR blueprint: https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
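
Floating IPs are consumed exactly as before; only the place where the DNAT happens moves to the compute node. An illustrative sequence with the CLI of that era (the network name and the placeholder IDs are hypothetical):

# Allocate a floating IP from the external network and attach it to an instance port
neutron floatingip-create ext-net
neutron floatingip-associate <FLOATINGIP_ID> <VM_PORT_ID>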

[Diagram: same topology as 3/5 and 4/5, highlighting the floating IP path — north-south traffic for a VM with a floating IP is NATed in the local IR namespace and leaves directly through the compute node’s br-ex to the external network, bypassing the network node]

Page 19: Juno outlook – HA for virtual routers

•  In the Juno timeframe there is a plan to add native HA support using ‘keepalived’ for the centralized L3 agent nodes (including the SNAT nodes of the DVR); a configuration sketch follows below

•  If configured for HA, one active and one standby router will be deployed on two different Neutron L3 gateway network nodes. Both will share virtual IPs internally and externally, and will sync NAT connection state over an HA network connection

•  For more details see the HA for virtual routers blueprint: https://github.com/openstack/neutron-specs/blob/master/specs/juno/l3-high-availability.rst
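
A sketch of how this was expected to be enabled, based on the blueprint; option and flag names were not final at the time, so treat them as assumptions:

# neutron.conf — create new routers as HA pairs by default
l3_ha = True
max_l3_agents_per_router = 2

# or per router, via the CLI
neutron router-create --ha True tenant-router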

        +----+                          +----+
        |    |                          |    |
+-------+ QG +------+           +-------+ QG +------+
|       |    |      |           |       |    |      |
|       +-+--+      |           |       +-+--+      |
|     VIPs|         |           |         |VIPs     |
|         |      +--+-+      +--+-+       |         |
|         +      |    |      |    |       +         |
|  KEEPALIVED+---+ HA +------+ HA +----+KEEPALIVED  |
|         +      |    |      |    |       +         |
|         |      +--+-+      +--+-+       |         |
|     VIPs|         |           |         |VIPs     |
|       +-+--+      |           |       +-+--+      |
|       |    |      |           |       |    |      |
+-------+ QR +------+           +-------+ QR +------+
        |    |                          |    |
        +----+                          +----+

Page 20: Juno outlook – IPv6 support

•  IPv6 is dysfunctional at multiple implementation points in Neutron today
  –  No support for stateless address autoconfiguration (SLAAC) in the OpenStack security model / IPAM, so even when one uses an external IPv6 router, security groups and port security will prevent the instance from working correctly
  –  Dnsmasq support for DHCPv6 was problematic and “broken”
  –  No IPv6 routing support in the L3 agent, metadata service, etc.

•  A new IPv6 Neutron subteam was founded to address the multiple IPv6 requirements

•  Expected critical IPv6 features in the Juno timeframe (see the subnet creation sketch below)
  –  Provider networking – upstream SLAAC support
  –  Support for DHCPv6 stateless and stateful modes in dnsmasq
  –  Support for the router advertisement daemon (radvd) for IPv6

•  See more details here: https://wiki.openstack.org/wiki/Neutron/IPv6
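
For illustration, the two new subnet attributes that carry these modes were expected to be consumed roughly like this; the mode values come from the IPv6 wiki page, but treat the exact CLI flags and the example prefixes as assumptions:

# An IPv6 subnet whose addresses and routes both come via SLAAC
neutron subnet-create --ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode slaac demo-net 2001:db8::/64

# Or stateful DHCPv6 for addresses, with router advertisements for the default route
# neutron subnet-create --ip-version 6 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful demo-net 2001:db8:1::/64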

Page 21: Juno outlook – more information

•  A large number of new vendor plugins, enhancements to existing plugins and mechanism drivers, service plugins, etc. are being developed for the Juno timeframe right now

•  It is too early to say today what is going to be in or out in Juno

•  See here for a list of Juno specs (linking to the blueprints): https://github.com/openstack/neutron-specs/tree/master/specs/juno

•  See here for a list of blueprints: https://blueprints.launchpad.net/neutron/juno

Page 22:

Questions?