TRANSCRIPT
Secure Your Containers!
What Network Admins Should Know When Moving Into Production
Cynthia Thomas, Systems Engineer, @_techcet_
Why is networking an afterthought?
Containers, Containers, Containers!
Why Containers?
• Much lighter weight and less overhead than virtual machines
• Don’t need to copy entire OS or libraries – keep track of deltas
• More efficient unit of work for cloud-native apps
• Crucial tools for rapid-scale application development
• Increase density on a physical host
• Portable container image for moving/migrating resources
Containers: Old and New
• LXC: operating system-level virtualization through a virtual environment that has its own process and network space
• 8-year-old technology
• Leverages Linux kernel cgroups
• Also other namespaces for isolation
• Focus on System Containers
• Security:
  • Previously possible to run code on the host system as root from the guest system
  • LXC 1.0 brought “unprivileged containers” for hardware accessibility restrictions
• Ecosystem:
  • Vendor neutral; evolving: LXD, CGManager, LXCFS
Containers: Old and New
• Explosive growth: Docker created a de-facto standard image format and API for defining and interacting with containers
• Docker: also operating system-level virtualization through a virtual environment
• 3-year-old technology
• Application-centric API
• Also leverages Linux kernel cgroups and kernel namespaces
• Moved from LXC to the libcontainer implementation
• Portable deployment across machines
• Brings image management and more seamless updates through versioning
• Security:
  • Networking: linuxbridge, iptables
• Ecosystem:
  • CoreOS, Rancher, Kubernetes
Container Orchestration Engines
• Enter the management of containers for application deployment!
• Scale applications with clusters where the underlying deployment unit is a container
• Examples include Docker Swarm, Kubernetes, Apache Mesos
Today’s COEs have vulnerabilities
What’s the problem? Why are containers insecure?
• They weren’t designed with full isolation like VMs
• Not everything in Linux is namespaced
• What do they do to the network?
COEs help container orchestration! …but what about networking?
• Ad-hoc security implementations face scaling issues and security/policy complexity
• Which networking model to choose? CNM? CNI?
• Why is network security always seemingly considered last?
Who’s going to care?
Your Network Security team! And you should too.
Containers add network complexity!!!
• More components = more endpoints
• Network Scaling Issues
• Security/Policy complexity
Perimeter Security approach is not enough
• Legacy architectures tended to put higher-layer services like security and firewalls at the core
• Perimeter protection is useful for north-south flows, but what about east-west?
• More = better? How to manage more pinch points?
#ThrowbackThursday: What did OpenStack do?
• Started in 2010 as an open source community for cloud compute
• Gained a huge following and became production ready
• Enabled collaboration amongst engineers for technology advancement
#ThrowbackThursday: Neutron came late in the game!
• Took 3 years before a dedicated project formed
• Neutron enabled third party plugin solutions
• Formed advanced networking framework via community
What is Neutron?
• Production-grade open framework for networking:
• Multi-tenancy
• Scalable, fault-tolerant devices (or device-agnostic network services)
• L2 isolation
• L3 routing isolation
  • VPC
  • Like VRF (virtual routing and forwarding)
• Scalable gateways
• Scalable control plane
  • ARP, DHCP, ICMP
• Floating/Elastic IPs
• Decoupled from the physical network
• Stateful NAT
  • Port masquerading
  • DNAT
• ACLs
• Stateful (L4) firewalls
  • Security Groups
• Load balancing with health checks
• Single pane of glass (API, CLI, GUI)
• Integration with COEs & management platforms
  • Docker Swarm, K8S
  • OpenStack, CloudStack
  • vSphere, RHEV, System Center
Hardened Neutron Plugins
Leverage Neutron
Kuryr Can Deliver Networking to Containers
Bridging the container networking framework with OpenStack network abstractions
The Kuryr Mission
What is Kuryr?
Kuryr has become a collection of projects and repositories:
- kuryr-lib: common libraries (neutron-client, keystone-client)
- kuryr-libnetwork: Docker networking plugin
- kuryr-kubernetes: K8S API watcher and CNI driver
- fuxi: Docker Cinder driver
Project Kuryr Contributions
As of Oct. 18th, 2016: http://stackalytics.com/?release=all&module=kuryr-group&metric=commits
Some previous* networking options with Docker. With security?
STOP
iptables, maybe?
Done with Neutron? Tell me more, please!
• libnetwork drivers:
  • Null (with nothing in its networking namespace)
  • Bridge
  • Overlay
  • Remote
Kuryr: Docker (1.9+)’s remote driver for Neutron networking
Kuryr implements a libnetwork remote network driver and maps its calls to OpenStack Neutron.
It translates between libnetwork's Container Network Model (CNM) and Neutron's networking model.
Kuryr also acts as a libnetwork IPAM driver.
libnetwork implements CNM
• CNM has 3 main networking components: sandbox, endpoint, and network
Kuryr translation, please!
• Docker uses a PUSH model to call a service for libnetwork
• Kuryr maps the 3 main CNM components to Neutron networking constructs
• Ability to attach to existing Neutron networks with host isolation (the container cannot see the host network)

libnetwork → Neutron
Network  → Network
Sandbox  → Subnet, Ports, netns
Endpoint → Port
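The remote-driver translation described above can be sketched in a few lines of Python. This is a simplified, hypothetical illustration of the mapping, not Kuryr's actual code: the real plugin answers libnetwork's JSON-over-HTTP calls (e.g. /NetworkDriver.CreateNetwork) and issues REST requests to Neutron.

```python
# Sketch of how a libnetwork remote driver like Kuryr might map CNM
# objects onto Neutron constructs. All names here are hypothetical.

def create_network(neutron_state, docker_network_id, cidr):
    """Handle a /NetworkDriver.CreateNetwork-style call: a CNM Network
    becomes a Neutron network plus a subnet for IPAM."""
    neutron_state["networks"][docker_network_id] = {"id": docker_network_id}
    neutron_state["subnets"][docker_network_id] = {"cidr": cidr}
    return docker_network_id

def create_endpoint(neutron_state, docker_network_id, endpoint_id, ip):
    """Handle a /NetworkDriver.CreateEndpoint-style call: each CNM
    Endpoint becomes a Neutron port on the mapped network."""
    port = {"network_id": docker_network_id, "id": endpoint_id, "ip": ip}
    neutron_state["ports"][endpoint_id] = port
    return port

# In-memory stand-in for Neutron's database.
state = {"networks": {}, "subnets": {}, "ports": {}}
create_network(state, "net-1", "10.0.0.0/24")
port = create_endpoint(state, "net-1", "ep-1", "10.0.0.5")
```

The sandbox side of the mapping (Subnet, Ports, netns) is then realized on the host when the container's network namespace is bound to the Neutron port.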
Networking services from Neutron, for containers!
Distributed Layer 2 Switching
Distributed Layer 3 Gateways
Floating IPs
Service Insertion
Layer 4 Distributed Stateful NAT
Distributed Firewall
VTEP Gateways
Distributed DHCP
Layer 4 Load Balancer-as-a-Service (with Health Checks)
Policy without the need for iptables
Distributed Metadata
TAP-as-a-Service
Launching a Container in Docker with Kuryr/MidoNet
It’s an enabler for existing, well-defined networking plugins for containers
Kuryr delivers for CNM, but what about CNI?
Kubernetes Presence in Container Orchestration
• Open sourced from production-grade, scalable technology used by Borg & Omega at Google for over 10 years
• Explosive use over the last 12 months, including users like eBay and Lithium Technologies
• Portable, extensible, self-healing
• Impressive automated rollouts & rollbacks with one command
• Growing ecosystem supporting Kubernetes:
  • CoreOS, RH OpenShift, Platform9, Weaveworks, Midokura!
Kubernetes Architecture
• Uses PULL model architecture for config changes
• Meaning K8S emits events on its API server
• etcd
  • All persistent master state is stored in an instance of etcd
  • To date, runs as a single instance; HA clusters in the future
  • Provides a “great” way to store configuration data reliably
  • With watch support, coordinating components can be notified very quickly of changes
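The watch pattern described above, where components are notified of changes instead of polling, can be sketched with a toy in-memory store. This is purely illustrative and is not the etcd client API:

```python
# Toy key-value store with watch support, illustrating the
# notification pattern etcd provides to Kubernetes components.
# This is an in-memory sketch, not the real etcd API.

class WatchableStore:
    def __init__(self):
        self._data = {}
        self._watchers = {}  # key -> list of callbacks

    def watch(self, key, callback):
        """Register a callback fired whenever `key` changes."""
        self._watchers.setdefault(key, []).append(callback)

    def put(self, key, value):
        """Write a value and notify every watcher of that key."""
        self._data[key] = value
        for cb in self._watchers.get(key, []):
            cb(key, value)

store = WatchableStore()
seen = []
# A coordinating component (say, a controller) registers interest
# and reacts as soon as the state changes.
store.watch("/registry/pods/web-1", lambda k, v: seen.append((k, v)))
store.put("/registry/pods/web-1", {"phase": "Running"})
```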
Kubernetes Control Plane
• K8S API Server
  • Serves up the Kubernetes API
  • Intended to be a CRUD-y server, with logic implemented in separate components or plug-ins
  • Processes REST operations, validates them, and updates the corresponding objects in etcd
• Scheduler
  • Binds unscheduled pods to nodes
  • Pluggable, to allow multiple cluster schedulers and even user-provided schedulers in the future
• K8S Controller Manager Server
  • All other cluster-level functions are currently performed by the Controller Manager
  • E.g., Endpoints objects are created and updated by the endpoints controller; nodes are discovered, managed, and monitored by the node controller
  • The replication controller is a mechanism layered on top of the simple pod API
  • Planned to be a pluggable mechanism
Kubernetes Control Plane Continued
• kubelet
  • Manages pods and their containers, their images, their volumes, etc.
• kube-proxy
  • Runs on each node to provide a simple network proxy and load balancer
  • Reflects services as defined in the Kubernetes API on each node and can do simple TCP and UDP stream forwarding (round robin) across a set of backends
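The round-robin backend selection kube-proxy performs can be sketched as follows. The class and addresses are hypothetical; real kube-proxy forwards actual TCP/UDP streams (or programs iptables rules) rather than just picking addresses:

```python
import itertools

class RoundRobinProxy:
    """Sketch of kube-proxy's simple round-robin selection: a service
    fronts a set of pod backends, and successive connections cycle
    through them."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick_backend(self):
        """Return the backend the next connection should be forwarded to."""
        return next(self._cycle)

proxy = RoundRobinProxy(["10.0.0.5:8080", "10.0.0.6:8080", "10.0.0.7:8080"])
picks = [proxy.pick_backend() for _ in range(4)]
# cycles through .5, .6, .7, then wraps back to .5
```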
Kubernetes Worker Node
Kubernetes Networking Model
There are 4 distinct networking problems to solve:
1. Highly-coupled container-to-container communications
2. Pod-to-Pod communications
3. Pod-to-Service communications
4. External-to-internal communications
Kubernetes Networking Options
Flannel provides an overlay to enable cross-host communication
- IP per POD
- VXLAN tunneling between hosts
- IPtables for NAT
- Multi-tenancy?
  - Host per tenant?
  - Cluster per tenant?
- How to share VMs and containers on the same network for the same tenant?
- Security risk on the Docker bridge? Shared networking stack
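Flannel's IP-per-pod scheme carves a per-host subnet out of a cluster-wide network. The allocation can be sketched with the standard `ipaddress` module; the CIDR values are illustrative, and real flannel stores its subnet leases in etcd:

```python
import ipaddress

# Flannel-style allocation sketch: the cluster network is split into
# one /24 per host, and each pod on a host gets an IP from that /24.
# Cross-host pod traffic is then VXLAN-encapsulated between hosts,
# with iptables providing NAT for traffic leaving the cluster.
cluster_net = ipaddress.ip_network("10.244.0.0/16")
host_subnets = list(cluster_net.subnets(new_prefix=24))

host_a = host_subnets[0]   # first host's lease: 10.244.0.0/24
host_b = host_subnets[1]   # second host's lease: 10.244.1.0/24

# First usable pod IP on each host (hosts() skips the network address).
pod_on_a = next(host_a.hosts())
pod_on_b = next(host_b.hosts())
```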
MidoNet Integration with Kubernetes using Kuryr
MidoNet: 6+ years of steady growth
Security at the edge
1. vPort1 initiates a packet flow through the virtual network
2. MN Agent fetches the virtual topology/state
3. MN simulates the packet through the virtual network
4. MN installs a flow in the kernel at the ingress host
5. Packet is sent in a tunnel to the egress host
Kubernetes Integration: How with Kuryr?
Kubernetes 1.2+
Two integration components:
CNI driver
• Standard container networking: preferred K8S network extension point
• Can serve rkt, appc, Docker
• Uses the Kuryr port-binding library to bind the local pod using metadata
Raven (part of the Kuryr project)
• Python 3
• AsyncIO
• Extensible API watcher
• Drives the K8S API to Neutron API translation
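Raven's watcher pattern, consuming a stream of K8S API events and driving Neutron-side actions, can be sketched with asyncio. The event shapes and the translation rules here are hypothetical; the real Raven watches the K8S API server over HTTP and calls the actual Neutron API:

```python
import asyncio

# Sketch of an API-watcher loop in the style of Raven: consume K8S
# events from a stream and translate each into a Neutron-side action.

async def watch_and_translate(events, neutron_calls):
    """Drain a queue of K8S events, recording the Neutron call each
    one would trigger. A None event marks the end of the stream."""
    while True:
        event = await events.get()
        if event is None:
            break
        if event["kind"] == "Namespace" and event["type"] == "ADDED":
            neutron_calls.append(("create_network", event["name"]))
        elif event["kind"] == "Pod" and event["type"] == "ADDED":
            neutron_calls.append(("create_port", event["name"]))

async def main():
    events = asyncio.Queue()
    calls = []
    # Simulated watch stream: a namespace, then a pod, then close.
    for e in ({"type": "ADDED", "kind": "Namespace", "name": "default"},
              {"type": "ADDED", "kind": "Pod", "name": "web-1"},
              None):
        await events.put(e)
    await watch_and_translate(events, calls)
    return calls

calls = asyncio.run(main())
```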
Kubernetes Integration: How with Kuryr+MidoNet?
Defaults:
kube-proxy: generates iptables rules which map portal_ips so that traffic gets to the local kube-proxy daemon; does the equivalent of a NAT to the actual pod address
flannel: default networking integration in CoreOS
Enhanced by:
Kuryr CNI driver: enables the host binding
Raven: process used to proxy the K8S API to the Neutron API
MidoNet agent: provides higher-layer services to the pods
Kubernetes Integration: How with Kuryr?
Raven: used to proxy the K8S API to the Neutron API + IPAM
- focuses only on building the virtual network topology translated from the events of the internal state changes of K8S through its API server
Kuryr CNI driver: takes care of binding virtual ports to physical interfaces on worker nodes for deployed pods
Kubernetes API → Neutron API
Namespace      → Network
Cluster Subnet → Subnet
Pod            → Port
Service        → LBaaS Pool, LBaaS VIP (FIP)
Endpoint       → LBaaS Pool Member
Kubernetes Integration: How with Kuryr+MidoNet?
Raven: used to proxy K8S API to Neutron API
Kuryr CNI driver: takes care of binding virtual ports to physical interfaces on worker nodes for deployed pods
Kubernetes Integration: Where are we now with MidoNet?
Completed integration components:
- CNI driver
- Raven
- Namespace implementation (a mechanism to partition resources created by users into a logically named group):
  - each namespace gets its own router
  - all pods driven by the RC should be on the same logical network
- CoreOS support
- Containerized MidoNet services
Where will Kuryr go next?
• Bring container and VM networking under one API
• Multi-tenancy
• Advanced networking services/map Network Policies
• QoS
• Adapt implementation to work with other COEs
• kuryr-mesos• kuryr-cloudfoundry• kuryr-openshift
• Magnum Support (containers in VMs) in OpenStack
Kuryr Project Launchpad
https://launchpad.net/kuryr
Project Git Repository
https://github.com/openstack/kuryr
Weekly IRC Meeting
http://eavesdrop.openstack.org/#Kuryr_Project_Meeting
IRC #openstack-neutron @ Freenode
MidoNet Community Site
www.midonet.org
Project Git Repository
https://github.com/midonet/midonet
Try MidoNet with one command:
$> curl -sL quickstart.midonet.org | sudo bash
Join Slack: slack.midonet.org
Get Involved!
Cynthia Thomas, Systems Engineer, @_techcet_
Thank you!