Running Production-Grade Kubernetes on AWS

Uploaded by doit-international on 07-Jan-2017


TRANSCRIPT

Page 1: Running Production-Grade Kubernetes on AWS


Running Production-Grade Kubernetes on AWS

Page 3: Running Production-Grade Kubernetes on AWS


Let’s Play

Join at kahoot.it with Game PIN: 728274

Page 4: Running Production-Grade Kubernetes on AWS


Agenda

● What’s new in Kubernetes v1.3
● Bootstrapping a K8s cluster on AWS
● Watchouts & Limitations!

Page 5: Running Production-Grade Kubernetes on AWS

Copyright 2015 Google Inc

Kubernetes 101

Replication Controllers

Replication controllers create new pod "replicas" from a template and ensure that a configurable number of those pods are running.

Services

Services provide a bridge based on an IP and port pair for client applications to access backends without needing to write code that is Kubernetes-specific.

Labels

Labels are metadata attached to objects, such as pods. They enable organization and selection of subsets of objects within a cluster.

Pods

Pods are ephemeral units that are used to manage one or more tightly coupled containers. They enable data sharing and communication among their constituent components.
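
Tying the four concepts together, a minimal sketch of a ReplicationController whose pod template carries an app label, plus a Service that selects pods by that label (object names such as web and the nginx image are purely illustrative):

apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3                # keep three pod replicas running at all times
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web             # label used by both the RC selector and the Service
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                 # routes traffic to any pod carrying this label
  ports:
  - port: 80
    targetPort: 80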

Page 6: Running Production-Grade Kubernetes on AWS


What's new in Kubernetes 1.3

Page 7: Running Production-Grade Kubernetes on AWS


Release Highlights

● Init Containers (alpha)
● Fixed PDs
● Cluster Federation (alpha)
● Optional HTTP2
● Pod Level QoS Policy
● TLS Secrets
● kubectl set command
● UI
● Jobs
● RBAC (alpha, experimental)
● Garbage Collector (alpha)
● Pet Sets
● rkt runtime
● Network Policies
● kubectl auto-complete

Page 8: Running Production-Grade Kubernetes on AWS


Init Containers

Page 9: Running Production-Grade Kubernetes on AWS


Init Container: register pod to external service
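
The YAML for this example did not survive the transcript; below is a rough reconstruction of the idea, with a hypothetical registration endpoint (registration.example.com). Note that in Kubernetes 1.3 init containers were declared via the alpha annotation pod.alpha.kubernetes.io/init-containers rather than the later spec.initContainers field shown here for readability:

apiVersion: v1
kind: Pod
metadata:
  name: registered-app
spec:
  initContainers:
  - name: register
    image: busybox
    # announce this pod to an external registry before the main container starts
    command: ["sh", "-c", "wget -qO- http://registration.example.com/register?pod=$(hostname)"]
  containers:
  - name: app
    image: nginx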

Page 10: Running Production-Grade Kubernetes on AWS


Init Container: clone a git repo into a volume
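
Again a reconstruction sketch rather than the original slide's manifest: an init container clones a repository into an emptyDir volume that the main container then serves (the repository URL and the alpine/git image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: git-demo
spec:
  volumes:
  - name: repo
    emptyDir: {}               # shared scratch volume, lives as long as the pod
  initContainers:
  - name: clone
    image: alpine/git
    args: ["clone", "https://github.com/example/site.git", "/repo"]
    volumeMounts:
    - name: repo
      mountPath: /repo
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: repo
      mountPath: /usr/share/nginx/html   # serve the cloned content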

Page 11: Running Production-Grade Kubernetes on AWS


Jobs (pods are *expected* to terminate)

Creates 1...n pods and ensures that a certain number of them run to completion.

3 job types:

● Non-Parallel (normally only one pod is started, unless the pod fails)

● Parallel with fixed count (complete when there is one successful pod for each value in range 1 to .spec.completions)

● Parallel with a work queue
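
As a sketch of the second type (parallel with a fixed completion count), the familiar pi example from the Kubernetes docs: the Job is complete once five pods have succeeded, with at most two running in parallel:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 5        # five successful pods mark the Job as complete
  parallelism: 2        # run at most two pods at a time
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]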

Page 12: Running Production-Grade Kubernetes on AWS


Job: Work Queue with Pod Per Work Item
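
The original slide's manifest is not in the transcript. In the pod-per-work-item pattern, .spec.completions is set to the number of items in the queue and each pod consumes exactly one item; the queue endpoint and worker image below are hypothetical:

apiVersion: batch/v1
kind: Job
metadata:
  name: process-queue
spec:
  completions: 8        # one successful pod per work item in the queue
  parallelism: 4        # up to four workers draining the queue at once
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: example/queue-worker            # hypothetical worker image
        env:
        - name: QUEUE_URL
          value: amqp://rabbitmq-service:5672  # hypothetical queue endpoint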

Page 13: Running Production-Grade Kubernetes on AWS


Increased Scale

● Up to 2,000 nodes per cluster
● Up to 60,000 pods per cluster

Under the bonnet, the biggest change that has resulted in the improvements in scalability is to use Protocol Buffer-based serialization in the API instead of JSON.
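
Clients can ask for the binary encoding per request via the Accept header; in 1.3 it is mainly used between cluster components, but a quick way to see it from the outside (assuming kubectl proxy is running on its default port) is:

$ kubectl proxy &
$ curl -s -H "Accept: application/vnd.kubernetes.protobuf" \
    http://127.0.0.1:8001/api/v1/namespaces/default/pods -o pods.pb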

Page 14: Running Production-Grade Kubernetes on AWS


Multi-Zone Clusters

Deploy clusters to multiple availability zones to increase availability:

● Multiple zones can be configured at cluster creation or can be added to a cluster after the fact.
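
Once nodes from several zones have joined, the zones appear as standard node labels, which the scheduler uses to spread pods and to keep volumes and pods in the same zone (label values below are examples):

$ kubectl get nodes --show-labels
# each node carries labels such as
#   failure-domain.beta.kubernetes.io/region=us-west-2
#   failure-domain.beta.kubernetes.io/zone=us-west-2a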

Page 15: Running Production-Grade Kubernetes on AWS


Heterogeneous Clusters

Customers can now add different types of nodes to the same cluster.

● NodePools allow for different types of nodes to be joined to a single master, minimizing administrative overhead

● Built-in scheduler changes to allow scheduling to node types with only a configuration change
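
One common form of that configuration change is a label on the nodes plus a nodeSelector in the pod spec; a minimal sketch (the node name, the nodetype label and the image are made up for the example):

$ kubectl label node ip-10-0-0-12.ec2.internal nodetype=highmem

apiVersion: v1
kind: Pod
metadata:
  name: analytics
spec:
  nodeSelector:
    nodetype: highmem          # only schedules onto nodes carrying this label
  containers:
  - name: app
    image: example/analytics   # hypothetical image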

Page 16: Running Production-Grade Kubernetes on AWS


Cluster Federation

Deploy a service to multiple clusters simultaneously (including external load balancer configuration) via a single Federated API.

● Federated Services span multiple clusters (possibly running on different cloud providers, or on premise), and are created with a single API call.

● The federation service automatically:
  ○ deploys the service across multiple clusters in the federation
  ○ monitors the health of these services
  ○ manages DNS records to ensure that clients are always directed to the closest healthy instance of the federated service.

More info:

● Sneak peek video
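
With the federation control plane configured as its own kubectl context (the context name below follows the 1.3 federated-services guide and may differ in your setup), creating and inspecting a federated service looks like working with an ordinary Service:

$ kubectl --context=federation-cluster create -f services/nginx.yaml
$ kubectl --context=federation-cluster describe services nginx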

Page 17: Running Production-Grade Kubernetes on AWS


New kubectl commands

A new command kubectl set now allows the container image to be set in a single one-line command.

$ kubectl set image deployment/web nginx=nginx:1.9.1

To watch the update roll out and verify that it succeeds, there is now a convenient new command, rollout status. For example, to follow the rollout from nginx:1.7.9 to nginx:1.9.1:

$ kubectl rollout status deployment/web

Waiting for rollout to finish: 2 out of 4 new replicas have been updated...
Waiting for rollout to finish: 3 out of 4 new replicas have been updated...
deployment "web" successfully rolled out

Page 18: Running Production-Grade Kubernetes on AWS


Cluster Autoscaling (alpha)

Clusters can now automatically request more compute when they have scheduled more work than there is CPU or memory available.

● If there are no resources in the cluster to schedule a recently created pod, a new node is added.

● If a node is underutilized and all pods running on it can easily be moved elsewhere, the node can be drained and deleted.

● Pay only for resources that are actually needed, and get new resources when demand increases.
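
On AWS this is handled by the cluster-autoscaler addon watching the worker Auto Scaling group. A rough sketch of its container arguments; the image tag, flag set and ASG name vary by release and are placeholders here:

      containers:
      - name: cluster-autoscaler
        image: gcr.io/google_containers/cluster-autoscaler:v0.3.0   # version placeholder
        command:
        - ./cluster-autoscaler
        - --cloud-provider=aws
        - --nodes=1:10:my-cluster-workers   # min:max:ASG name (placeholder)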

Page 19: Running Production-Grade Kubernetes on AWS


Improved dashboard

Manage Kubernetes almost entirely through a web browser.

● All workload types are now supported, including DaemonSets, Deployments and Rolling updates
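
The dashboard ships as a cluster addon; the usual way in is through the API server proxy:

$ kubectl proxy
# then browse to http://127.0.0.1:8001/ui
# which redirects to the kubernetes-dashboard service in kube-system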

Page 20: Running Production-Grade Kubernetes on AWS


Minikube

Minikube is a new local development platform for Kubernetes, so customers can begin developing on their desktop or laptop.

● Packages and configures a Linux VM, Docker and all Kubernetes components, optimized for local development

● Can be installed with a single command
● Alongside the regular pods, services and controllers, supports advanced Kubernetes features:
  ○ DNS
  ○ NodePorts
  ○ ConfigMaps and Secrets
  ○ Dashboards
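
A minimal session (VM driver and versions vary by platform):

$ minikube start        # downloads the ISO and boots the local VM; points kubectl at the minikube context
$ kubectl get nodes     # the single minikube node should report Ready
$ minikube dashboard    # opens the bundled dashboard in a browser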

Page 21: Running Production-Grade Kubernetes on AWS


The new "PetSet" object provides a raft of features for supporting containers that run stateful workloads (such as databases or key value stores), including:

● Permanent hostnames, that persist across restarts

● Automatically provisioned Persistent Disks per-container, that live beyond the life of a container

● Unique identities in a group, to allow for clustering and leader election

● Initialization containers, which are critical for starting up clustered applications

Stateful workload support (Pet Sets)In Alpha in Kubernetes 1.3
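
A trimmed-down sketch along the lines of the 1.3 PetSet documentation (the nginx example; the storage-class annotation value depends on your provisioner):

apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: web
spec:
  serviceName: "nginx"         # headless Service that gives each pet a stable DNS name
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.alpha.kubernetes.io/storage-class: default   # provisioner-dependent
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi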

Page 22: Running Production-Grade Kubernetes on AWS


What's coming next

Page 23: Running Production-Grade Kubernetes on AWS


New features for Kubernetes in 1.4:

● Full cross-cluster federation, including:

○ Single universal API

○ Global load balancer

○ Replica sets that span multiple clusters

● Granular permissions for clusters

● Simplified installation for common applications: one-line install for simple applications in fully tested configurations

● Universal setup: greatly simplified on-prem and complex cloud deployments

● Integrated external DNS (including Route53): simplified integration with external DNS providers

Expected release date for 1.4 is 16 September

Page 24: Running Production-Grade Kubernetes on AWS


Deploying K8s to Amazon AWS

Page 25: Running Production-Grade Kubernetes on AWS


What we wanted to achieve...

Page 26: Running Production-Grade Kubernetes on AWS


4.5 Step Deployment into existing VPC

Based on the CoreOS kube-aws project:

$ kube-aws init       # then adjust the generated cluster.yaml

$ kube-aws render     # generates the CloudFormation stack template

$ kube-aws validate

$ kube-aws up         # deploys the CloudFormation stack
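
For reference, step 1 with the flags kube-aws asks for; all values below are placeholders and the exact flag set depends on the kube-aws release:

$ kube-aws init \
    --cluster-name=my-cluster \
    --external-dns-name=k8s.example.com \
    --region=eu-west-1 \
    --availability-zone=eu-west-1a \
    --key-name=my-ec2-keypair \
    --kms-key-arn="arn:aws:kms:eu-west-1:123456789012:key/xxxxxxxx"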

Page 27: Running Production-Grade Kubernetes on AWS


What you get...

CloudFormation Stack w/:

● Controller (master) node with EIP

● Autoscaling Group/Launch Config for Worker Nodes (fixed scaling)

● A Record in Route53 for Controller

● Security Groups to allow traffic between controller and workers

● IAM Roles for both Controller and Workers

● AWS Addons (ELB, EBS integration)

Page 28: Running Production-Grade Kubernetes on AWS


Watchouts!

etcd high availability - build your own etcd cluster and expose it via an internal ELB (CloudFormation stack)

default TLS keys expire after 90 days - replace the generated TLS assets with your own

master/controller sizing:
 - m3.xlarge for < 100 nodes
 - m3.2xlarge for < 250 nodes
 - c4.4xlarge for > 500 nodes

Page 29: Running Production-Grade Kubernetes on AWS


Limitations

can’t deploy the cluster into existing subnets - a fix is on the way in kube-aws 0.9

PVs/PVCs are usable only in the same zone as the pod - because EBS volumes are available only within a single AZ

Page 30: Running Production-Grade Kubernetes on AWS


Scaling the cluster
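
This slide's details did not make it into the transcript. Since the workers live in an Auto Scaling group created by the CloudFormation stack (see above), one straightforward way to scale them is to raise the group's desired capacity, for example with the AWS CLI (the group name is a placeholder):

$ aws autoscaling set-desired-capacity \
    --auto-scaling-group-name my-cluster-kube-aws-worker \
    --desired-capacity 5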

Page 31: Running Production-Grade Kubernetes on AWS


Exposing Services

Externally with an ELB (NodePort-based implementation):

$ kubectl expose deployment nginx --port=80 --type=LoadBalancer

Internally with an ELB, by annotating the Service:

kind: Service
apiVersion: v1
metadata:
  name: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0

Page 32: Running Production-Grade Kubernetes on AWS


Persistent Volumes/Claims

EBS Volumes (available in single AZ)

EFS Volumes (multi-AZ, but require manual recovery)
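
A minimal sketch of wiring an existing EBS volume in as a PersistentVolume plus a matching claim (the volume ID and sizes are placeholders); remember that the pod mounting the claim must be scheduled in the volume's AZ:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # placeholder EBS volume ID
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi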

Page 33: Running Production-Grade Kubernetes on AWS


Spot Instances

Import ASG to Spotinst’s Elastigroup

Page 34: Running Production-Grade Kubernetes on AWS


Next meetups:

● meetup.com/multicloud
● meetup.com/Kubernetes-Tel-Aviv