
F5 101 Lab Kubernetes, Release 0.1

    Nicolas Menant

    Aug 01, 2018

Getting Started

1 Introduction
  1.1 Introduction
  1.2 Kubernetes overview
  1.3 Kubernetes networking
  1.4 Kubernetes services overview
  1.5 Labs setup/access
  1.6 Connecting to UDF
  1.7 Automated F5 solutions deployment via CLI
  1.8 Automated F5 solutions deployment via Jenkins
  1.9 Overview of F5® Container Connector (CC)
  1.10 Container Connector (CC) Setup
  1.11 Container Connector Usage
  1.12 Cluster installation
  1.13 Setup master
  1.14 Node setup
  1.15 Test our setup


CHAPTER 1

    Introduction

Introduction to Kubernetes and F5 solutions for Kubernetes

The purpose of this lab is to give you more visibility on:

    • Overview of Kubernetes and its key components

• How to install Kubernetes in different flavors: all-in-one, or a Kubernetes cluster (1 master and 2 minions)

• How to launch applications in Kubernetes

• How to install and use the F5 containerized solutions (Container Connector, Application Services Proxy and F5 kube-proxy)

We consider that you have valid UDF access (the private cloud of F5 Networks) to do this lab. If not, you may review the prerequisites about our lab setup.

    Contents:

    1.1 Introduction

The purpose of this lab is to give you more visibility on:

    • Overview of Kubernetes and its key components

    • How to install Kubernetes on Ubuntu

• How to launch applications in Kubernetes

• How to install and use the F5 containerized solutions (Container Connector, Application Services Proxy and F5 kube-proxy)

To focus on the F5 components, you have the opportunity to leverage an existing environment. To do so, we consider that you have valid UDF access (the private cloud of F5 Networks) to do this lab. If not, you may review the prerequisites about our lab setup to build your own.

You also have the option to use a script to automatically deploy the F5 solutions.


    1.2 Kubernetes overview

    Kubernetes has a lot of documentation available at this location: Kubernetes docs

On this page, we will try to provide all the relevant information needed to successfully deploy a cluster (master + nodes).

    1.2.1 Overview

    Extract from: Kubernetes Cluster intro

Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines. To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized.

Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host. Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way. Kubernetes is an open-source platform and is production-ready.

    A Kubernetes cluster consists of two types of resources:

    • The Master coordinates the cluster

    • Nodes are the workers that run applications


The Master is responsible for managing the cluster. The master coordinates all activity in your cluster, such as scheduling applications, maintaining applications’ desired state, scaling applications, and rolling out new updates.

A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes master. The node should also have tools for handling container operations, such as Docker or rkt. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.

    Masters manage the cluster and the nodes are used to host the running applications.

When you deploy applications on Kubernetes, you tell the master to start the application containers. The master schedules the containers to run on the cluster’s nodes. The nodes communicate with the master using the Kubernetes API, which the master exposes. End users can also use the Kubernetes API directly to interact with the cluster.

    1.2.2 Kubernetes concepts

    Extract from Kubernetes concepts

Cluster: Kubernetes Cluster A cluster is a set of physical or virtual machines and other infrastructure resources used by Kubernetes to run your applications.

Namespace: Kubernetes Namespace Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. Namespaces are intended for use in environments with many users spread across multiple teams or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. Start using namespaces when you need the features they provide. Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces. Namespaces are a way to divide cluster resources between multiple uses.

Node: Kubernetes Node A node is a physical or virtual machine running Kubernetes, onto which pods can be scheduled. It was previously known as a Minion.

Pod: Kubernetes Pods A pod is a co-located group of containers and volumes. The applications in a pod all use the same network namespace (same IP and port space), and can thus find each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports. Each pod has an IP address in a flat shared networking space that has full communication with other physical computers and pods across the network. In addition to defining the application containers that run in the pod, the pod specifies a set of shared storage volumes. Volumes enable data to survive container restarts and to be shared among the applications within the pod.

Label: Kubernetes Label and Selector A label is a key/value pair that is attached to a resource, such as a pod, to convey a user-defined identifying attribute. Labels can be used to organize and to select subsets of resources.

Selector: Kubernetes Label and Selector A selector is an expression that matches labels in order to identify related resources, such as which pods are targeted by a load-balanced service.

Deployments: Kubernetes Deployments A Deployment provides declarative updates for Pods and Replica Sets (the next-generation Replication Controller). You only need to describe the desired state in a Deployment object, and the Deployment controller will change the actual state to the desired state at a controlled rate for you. You can define Deployments to create new resources, or replace existing ones by new ones. A typical use case is:

    • Create a Deployment to bring up a Replica Set and Pods.

    • Check the status of a Deployment to see if it succeeds or not.

    • Later, update that Deployment to recreate the Pods (for example, to use a new image).

    • Rollback to an earlier Deployment revision if the current Deployment isn’t stable.

    • Pause and resume a Deployment

ConfigMap: Kubernetes ConfigMap Many applications require configuration via some combination of config files, command line arguments, and environment variables. These configuration artifacts should be decoupled from image content in order to keep containerized applications portable. The ConfigMap API resource provides mechanisms to inject containers with configuration data while keeping containers agnostic of Kubernetes. A ConfigMap can be used to store fine-grained information like individual properties or coarse-grained information like entire config files or JSON blobs.

Replication Controller: Kubernetes replication controller A replication controller ensures that a specified number of pod replicas are running at any one time. It both allows for easy scaling of replicated systems and handles re-creation of a pod when the machine it is on reboots or otherwise fails.

Service: Kubernetes Services A service defines a set of pods and a means by which to access them, such as a single stable IP address and corresponding DNS name. Kubernetes pods are mortal. They are born and they die, and they are not resurrected. Replication Controllers in particular create and destroy pods dynamically (e.g. when scaling up or down or when doing rolling updates). While each pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. This leads to a problem: if some set of pods (let’s call them backends) provides functionality to other pods (let’s call them frontends) inside the Kubernetes cluster, how do those frontends find out and keep track of which backends are in that set? Enter Services.

A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them - sometimes called a micro-service. The set of pods targeted by a service is (usually) determined by a label selector.

Volume: Kubernetes Volume A volume is a directory, possibly with some data in it, which is accessible to a container as part of its filesystem. Kubernetes volumes build upon Docker volumes, adding provisioning of the volume directory and/or device.


    1.3 Kubernetes networking

    This is an extract from Networking in Kubernetes

    1.3.1 Summary

Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own IP address so you do not need to explicitly create links between pods and you almost never need to deal with mapping container ports to host ports. This creates a clean, backwards-compatible model where pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.

    1.3.2 Docker model

Before discussing the Kubernetes approach to networking, it is worthwhile to review the “normal” way that networking works with Docker.

By default, Docker uses host-private networking. It creates a virtual bridge, called docker0 by default, and allocates a subnet from one of the private address blocks defined in RFC1918 for that bridge. For each container that Docker creates, it allocates a virtual ethernet device (called veth) which is attached to the bridge. The veth is mapped to appear as eth0 in the container, using Linux namespaces. The in-container eth0 interface is given an IP address from the bridge’s address range. The result is that Docker containers can talk to other containers only if they are on the same machine (and thus the same virtual bridge). Containers on different machines can not reach each other - in fact they may end up with the exact same network ranges and IP addresses. In order for Docker containers to communicate across nodes, they must be allocated ports on the machine’s own IP address, which are then forwarded or proxied to the containers. This obviously means that containers must either coordinate which ports they use very carefully or else be allocated ports dynamically.

    1.3.3 Kubernetes model

Coordinating ports across multiple developers is very difficult to do at scale and exposes users to cluster-level issues outside of their control. Dynamic port allocation brings a lot of complications to the system - every application has to take ports as flags, the API servers have to know how to insert dynamic port numbers into configuration blocks, services have to know how to find each other, etc. Rather than deal with this, Kubernetes takes a different approach.

Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):

    • All containers can communicate with all other containers without NAT

    • All nodes can communicate with all containers (and vice-versa) without NAT

    • The IP that a container sees itself as is the same IP that others see it as

What this means in practice is that you cannot just take two computers running Docker and expect Kubernetes to work. You must ensure that the fundamental requirements are met.

Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespace - including their IP address. This means that containers within a Pod can all reach each other’s ports on localhost. This does imply that containers within a Pod must coordinate port usage, but this is no different than processes in a VM. We call this the IP-per-pod model. This is implemented in Docker as a “pod container” which holds the network namespace open while “app containers” (the things the user specified) join that namespace with Docker’s --net=container:<id> function.
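To make the IP-per-pod model concrete, here is a minimal sketch of a pod with two containers that reach each other over localhost. The pod name, images and command are our own illustration, not part of the lab:

apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # both containers share the pod's network namespace, so the sidecar
    # can reach the nginx container on 127.0.0.1:80
    command: ["sh", "-c", "while true; do wget -q -O- http://127.0.0.1:80 >/dev/null; sleep 5; done"]

After a kubectl create -f on this file, kubectl logs shared-net-demo -c sidecar should stay free of connection errors.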


    1.3.4 How to achieve this

    There are a number of ways that this network model can be implemented. Here is a list of possible options:

    • Contiv

    • Flannel

    • Open vswitch

    • L2 networks and linux bridging. You have a tutorial here

    • Project Calico

    • Romana

    • Weave net

    For this lab, we will use Flannel.

    1.4 Kubernetes services overview

    Refer to Kubernetes services for more information

    A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them. Theset of pods targeted by a service is (usually) determined by a label selector.

As an example, consider an image-processing backend which is running with 3 replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual pods that compose the backend set may change, the frontend clients should not need to be aware of that or keep track of the list of backends themselves. The service abstraction enables this decoupling.

For Kubernetes-native applications, Kubernetes offers a simple Endpoints API that is updated whenever the set of pods in a service changes. For non-native applications, Kubernetes offers a virtual-IP-based bridge to services which redirects to the backend pods.

    1.4.1 Defining a service

A service in Kubernetes is a REST object, similar to a pod. Like all of the REST objects, a service definition can be POSTed to the apiserver to create a new instance. For example, suppose you have a set of pods that each expose port 9376 and carry a label “app=MyApp”.

    {"kind": "Service","apiVersion": "v1","metadata": {

    "name": "my-service"},"spec": {

    "selector": {"app": "MyApp"

    },"ports": [

    {"protocol": "TCP","port": 80,"targetPort": 9376

    }(continues on next page)

    6 Chapter 1. Introduction

    https://github.com/contiv/netpluginhttps://github.com/coreos/flannel#flannelhttp://kubernetes.io/docs/admin/ovs-networkinghttp://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/http://docs.projectcalico.org/http://romana.io/https://www.weave.works/products/weave-net/http://kubernetes.io/docs/user-guide/services/

  • F5 101 Lab Kubernetes, Release 0.1

    (continued from previous page)

    ]}

    }

This specification will create a new service object named “my-service” which targets TCP port 9376 on any pod with the “app=MyApp” label.

This service will also be assigned an IP address (sometimes called the cluster IP), which is used by the service proxies. The service’s selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named “my-service”.

If the service is not fronting a native Kubernetes app, you can write a service definition without the selector field. In such a case you’ll have to specify the endpoints yourself:

    {"kind": "Service","apiVersion": "v1","metadata": {

    "name": "my-service"},"spec": {

    "ports": [{

    "protocol": "TCP","port": 80,"targetPort": 9376

    }]

    }}

    {"kind": "Endpoints","apiVersion": "v1","metadata": {

    "name": "my-service"},"subsets": [

    {"addresses": [

    { "ip": "1.2.3.4" }],"ports": [{ "port": 9376 }]

    }]}

Note that a service can map an incoming port to any targetPort. By default the targetPort will be set to the same value as the port field. In the example above, the port for the service is 80 (HTTP) and will redirect traffic to port 9376 on the Pods.

You can specify multiple ports if needed (like HTTP/HTTPS for an app).

Kubernetes services support TCP (the default) and UDP.
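As a small sketch of the multi-port case (the port names and the 9377 target port are our own illustration), each port must be given a name when more than one is defined:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
  - name: https
    protocol: TCP
    port: 443
    targetPort: 9377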


    1.4.2 Publishing services - service types

For some parts of your application (e.g. frontends) you may want to expose a Service onto an external (outside of your cluster, maybe public internet) IP address; other services should be visible only from inside of the cluster.

Kubernetes ServiceTypes allow you to specify what kind of service you want. The default and base type is ClusterIP, which exposes a service to connections from inside the cluster. NodePort and LoadBalancer are two types that expose services to external traffic.

    Valid values for the ServiceType field are:

• ExternalName: map the service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. This requires version 1.7 or higher of kube-dns.

• ClusterIP: use a cluster-internal IP only - this is the default and is discussed above. Choosing this value means that you want this service to be reachable only from inside of the cluster.

• NodePort: on top of having a cluster-internal IP, expose the service on a port on each node of the cluster (the same port on each node). You’ll be able to contact the service on any <NodeIP>:NodePort address. If you set the type field to “NodePort”, the Kubernetes master will allocate a port from a flag-configured range (default: 30000-32767), and each Node will proxy that port (the same port number on every Node) into your Service. That port will be reported in your Service’s spec.ports[*].nodePort field (see the sketch after this list).

If you want a specific port number, you can specify a value in the nodePort field, and the system will allocate you that port or else the API transaction will fail (i.e. you need to take care about possible port collisions yourself). The value you specify must be in the configured range for node ports.

• LoadBalancer: on top of having a cluster-internal IP and exposing the service on a NodePort as well, ask the cloud provider for a load balancer which forwards to the Service, exposed as a <NodeIP>:NodePort on each Node.
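Here is the sketch referenced in the NodePort bullet above; the nodePort value 30080 is our own choice within the default range:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80          # cluster-internal service port
    targetPort: 9376  # port the pods listen on
    nodePort: 30080   # requested port on every node (must fall in 30000-32767)

With this in place the service is reachable from outside the cluster at <any NodeIP>:30080.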

    1.4.3 Service type: LoadBalancer

On cloud providers which support external load balancers, setting the type field to “LoadBalancer” will provision a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer will be published in the Service’s status.loadBalancer field. For example:

    {"kind": "Service","apiVersion": "v1","metadata": {

    "name": "my-service"},"spec": {

    "selector": {"app": "MyApp"

    },"ports": [

    {"protocol": "TCP","port": 80,"targetPort": 9376,"nodePort": 30061

    }

    (continues on next page)

    8 Chapter 1. Introduction

  • F5 101 Lab Kubernetes, Release 0.1

    (continued from previous page)

    ],"clusterIP": "10.0.171.239","loadBalancerIP": "78.11.24.19","type": "LoadBalancer"

    },"status": {

    "loadBalancer": {"ingress": [

    {"ip": "146.148.47.155"

    }]

    }}

    }

Traffic from the external load balancer will be directed at the backend Pods, though exactly how that works depends on the cloud provider (AWS, GCE, ...). Some cloud providers allow the loadBalancerIP to be specified. In those cases, the load balancer will be created with the user-specified loadBalancerIP. If the loadBalancerIP field is not specified, an ephemeral IP will be assigned to the loadBalancer. If the loadBalancerIP is specified, but the cloud provider does not support the feature, the field will be ignored.

    1.4.4 Service proxies

Every node in a Kubernetes cluster runs a kube-proxy. kube-proxy is responsible for implementing a form of virtual IP for Services.

Since Kubernetes 1.2, the iptables proxy is the default behavior (another implementation of kube-proxy is the userspace implementation).

In this mode, kube-proxy watches the Kubernetes master for the addition and removal of Service and Endpoints objects. For each Service, it installs iptables rules which capture traffic to the Service’s cluster IP (which is virtual) and Port and redirects that traffic to one of the Service’s backend sets. For each Endpoints object, it installs iptables rules which select a backend Pod.

By default, the choice of backend is random. Client-IP based session affinity can be selected by setting service.spec.sessionAffinity to “ClientIP” (the default is “None”).

As with the userspace proxy, the net result is that any traffic bound for the Service’s IP:Port is proxied to an appropriate backend without the clients knowing anything about Kubernetes or Services or Pods. This should be faster and more reliable than the userspace proxy. However, unlike the userspace proxier, the iptables proxier cannot automatically retry another Pod if the one it initially selects does not respond, so it depends on having working readiness probes. A readiness probe gives you the capability to monitor the status of a pod via health-checks.
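For reference, a minimal sketch of turning on client-IP session affinity for the earlier my-service example (our own illustration):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  # pin each client IP to the backend pod it first reached
  sessionAffinity: ClientIP
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376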

    1.4.5 Service discovery

    The recommended way to implement Service discovery with Kubernetes is the same as with Mesos: DNS

When building a cluster, you can add add-ons to it. One of the available add-ons is a DNS server.

The DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all Pods should be able to do name resolution of Services automatically.


For example, if you have a Service called “my-service” in Kubernetes Namespace “my-ns”, a DNS record for “my-service.my-ns” is created. Pods which exist in the “my-ns” Namespace should be able to find it by simply doing a name lookup for “my-service”. Pods which exist in other Namespaces must qualify the name as “my-service.my-ns”. The result of these name lookups is the cluster IP.

Kubernetes also supports DNS SRV (service) records for named ports. If the “my-service.my-ns” Service has a port named “http” with protocol TCP, you can do a DNS SRV query for “_http._tcp.my-service.my-ns” to discover the port number for “http”.
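You can verify both records from inside the cluster; the throwaway pod names and images below are our own suggestion (any image that ships nslookup/dig will do):

kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup my-service.my-ns
kubectl run -it --rm dns-srv-test --image=tutum/dnsutils --restart=Never -- dig SRV _http._tcp.my-service.my-ns

The first lookup should return the cluster IP, the second the SRV record carrying the port number for “http”.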

    1.5 Labs setup/access

    In this section, we will cover our setup:

    • 1 basic cluster:

    – 1 master (no master HA)

    – 2 nodes

Here is the setup we will leverage to either create a new environment or to connect to an existing environment (F5 UDF - blueprint called [Kubernetes] how to setup ASP and CC).

    In the existing environment, here is the setup you’ll get:

Hostname     Management IP   Kubernetes IP   Role      Login/Password
ip-10-1-1-4  10.1.1.4        10.1.10.11      Master    ssh: ubuntu/ - su: root/default
ip-10-1-1-5  10.1.1.5        10.1.10.21      node      ssh: ubuntu/ - su: root/default
ip-10-1-1-6  10.1.1.6        10.1.10.22      node      ssh: ubuntu/ - su: root/default
Windows      10.1.1.7        10.1.10.50      Jumpbox   administrator / &NUyBADsdo

    In case you don’t use UDF and an existing blueprint, here are a few things to know that could be useful (if you wantto reproduce this in another environment)

Here are the different things to take into account during this installation guide:

    • We use Ubuntu xenial in the UDF blueprints

• We updated the /etc/hosts file on all the nodes so that each node is reachable via its name

#master and nodes host file
$ cat /etc/hosts
127.0.0.1  localhost
10.1.10.11 ip-10-1-1-4 master1 master1.my-lab
10.1.10.21 ip-10-1-1-5 node1 node1.my-lab
10.1.10.22 ip-10-1-1-6 node2 node2.my-lab

Many guides are available to explain how to install Kubernetes. If you don’t use Ubuntu, you can refer to this page to find the appropriate guide: Getting started guides - bare metal

    Here you’ll find guides for:

• Fedora

• CentOS

    • Ubuntu

    • CoreOS

and some other guides for non-bare-metal deployments (AWS, Google Compute Engine, Rackspace, ...)


    1.6 Connecting to UDF

    We consider that you have access to UDF for the different labs.

This guide will help you to either set up your own environment or leverage F5 UDF (you must be an F5 employee) to learn about this.

    1.6.1 Create your environment

If you want to set up your own Kubernetes environment, you need to create your own deployment reflecting what has been explained in the previous section. Please go to the cluster setup guide to do this: Cluster installation

    1.6.2 Start your UDF environment

    Connect to UDF and go to deployment.

Select the relevant blueprint: F5 BIG-IP Controller for Kubernetes blueprint and deploy it. This blueprint contains a standalone Kubernetes deployment (1 master, 2 nodes).

    Warning: With this blueprint, you don’t have to do the cluster setup guide

Hostname   Management IP   Kubernetes IP   Role             Login/Password
Master 1   10.1.1.10       10.1.10.11      Master           ssh: centos/ - su: root/default
node 1     10.1.1.11       10.1.10.21      node             ssh: centos/ - su: root/default
node 2     10.1.1.12       10.1.10.22      node             ssh: centos/ - su: root/default
Windows    10.1.1.7        10.1.10.50      Jumpbox          administrator / &NUyBADsdo
splunk     10.1.1.8        10.1.10.100     jenkins/splunk   ssh: ubuntu/ - su: root/default

    1.6.3 Access your environment

If you deployed the existing blueprint mentioned above: once your environment is started, find the ‘Jumpbox’ component under ‘Components’ and launch RDP (in the ACCESS menu).

Click on the shortcut that got downloaded and it should open your RDP session. The credentials to use are administrator/&NUyBADsdo

    If you have trouble reading the text please see optional directions for changing text size in the Appendix.

Warning: For Mac users, it is recommended to use Microsoft Remote Desktop. You may not be able to access your jumpbox otherwise. It is available in the App Store (free).


    Change keyboard input

The default keyboard mapping is set to English. If you need to change it, here is the method:

    • Click on the start menu button and type ‘Language’ in the search field.

    • Click on ‘Language’ option in the search list

    • Click on ‘Add a language’

    • Add the language you want to have for your keyboard mapping.

Once you have access to your environment, you can go directly to the container connector section: Overview of F5® Container Connector (CC)

    1.7 Automated F5 solutions deployment via CLI

This is only available for users having access to F5 UDF. You need to use the blueprint called F5 BIG-IP Controller for Kubernetes.

You may just need access to a Kubernetes environment with the F5 solutions already deployed on top of it (customer demo, troubleshooting, testing).

    If this is the case, you can just leverage a script to automatically deploy everything mentioned in the later sections:

    • F5 container connector

    • F5 ASP and kube-proxy

    Here is how to use the script:

1. Connect to the Splunk node as user ubuntu (either from the UDF interface via SSH, or from the Jumpbox with PuTTY)


2. Go to the f5-demo directory (/home/ubuntu/f5-demo)

    3. Execute the script setup_demo.sh

    ./setup_demo.sh

    This will do the following:

1. Deploy the F5 BIG-IP Controller

    2. Deploy a Frontend Application that will use our BIG-IP (BIG-IP VS will be 10.1.10.80)

    3. Deploy F5 ASP

    4. Replace Kube proxy with the F5 Kube Proxy

5. Deploy a Backend Application that is accessible from the frontend (you have a backend link on the frontend app webpage)

Once the script is done you may need to wait a bit for all the containers to be up and running. This is something you can monitor with the command:

    kubectl get pods --all-namespaces

Once all the containers are in a running state, you can try to access the Frontend application through the BIG-IP on http://10.1.10.80 (you’ll need to do this from the JumpBox).

    Note:

    • Sometimes a kube discovery service may be in an error state. It shouldn’t create any issue in the lab

    • If you access the BIG-IP to check its configuration, the setup is done in the kubernetes partition

    1.8 Automated F5 solutions deployment via Jenkins

This is only available for users having access to F5 UDF. You need to use the blueprint called F5 BIG-IP Controller for Kubernetes.

You may just need access to a Kubernetes environment with the F5 solutions already deployed on top of it (customer demo, troubleshooting, testing).

    The following will perform the same steps as the previous CLI example, but use Jenkins as the tool to launch the script.

    From Google Chrome click on the Jenkins bookmark and login with username: admin, password: admin

    Click on the “1. setup_demo” item.

    Click on “Build now”


You should see the build being executed.

    1.9 Overview of F5® Container Connector (CC)

    1.9.1 Overview

The Container Connector makes L4-L7 services available to users deploying microservices-based applications in a containerized infrastructure. The CC - Kubernetes allows you to expose a Kubernetes Service outside the cluster as a virtual server on a BIG-IP® device entirely through the Kubernetes API.

The official F5 documentation is here: F5 Kubernetes Container Integration

    1.9.2 Architecture

The Container Connector for Kubernetes comprises the f5-k8s-controller and user-defined “F5 resources”. The f5-k8s-controller is a Docker container that can run in a Kubernetes Pod. The “F5 resources” are Kubernetes ConfigMap resources that pass encoded data to the f5-k8s-controller. These resources tell the f5-k8s-controller:

    • What objects to configure on your BIG-IP

• What Kubernetes Service the BIG-IP objects belong to (the frontend and backend properties in the ConfigMap, respectively).

The f5-k8s-controller watches for the creation and modification of F5 resources in Kubernetes. When it discovers changes, it modifies the BIG-IP accordingly. For example, for an F5 virtualServer resource, the CC - Kubernetes does the following:

    • creates objects to represent the virtual server on the BIG-IP in the specified partition;

• creates pool members for each node in the Kubernetes cluster, using the NodePort assigned to the service port by Kubernetes;

    • monitors the F5 resources and linked Kubernetes resources for changes and reconfigures the BIG-IP accordingly.

• the BIG-IP then handles traffic for the Service on the specified virtual address and load-balances to all nodes in the cluster.

    • within the cluster, the allocated NodePort is load-balanced to all pods for the Service.

Before being able to use the Container Connector, you need to handle some prerequisites.


    1.9.3 Prerequisites

    • You must have a fully active/licensed BIG-IP

• A BIG-IP partition needs to be set up for the Container Connector.

    • You need a user with administrative access to this partition

    • Your kubernetes environment must be up and running already

    1.10 Container Connector(CC) Setup

The official CC documentation is here: Install the F5 Kubernetes BIG-IP Controller

    1.10.1 BIG-IP setup

    To use F5 Container connector, you’ll need a BIG-IP up and running first.

    In the UDF blueprint, you should have a BIG-IP available at the following URL: https://10.1.10.60

Warning: if you use UDF, it’s recommended to connect to your BIG-IP from the RDP session instead of going to it directly from the UDF portal via the TMUI option.

    Connect to your BIG-IP and check it is active and licensed. Its login and password are: admin/admin

Note: if your BIG-IP has no license or its license expired, renew the license. You just need an LTM VE license for this lab. No specific add-ons are required.

    You need to setup a partition that will be used by F5 Container Connector.

To do so, go to: System > Users > Partition List. Create a new partition called “kubernetes”.
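If you prefer the BIG-IP CLI over the GUI, an equivalent (our suggestion, not part of the lab steps) is to create and verify the partition from tmsh:

create auth partition kubernetes
list auth partition kubernetes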


Once your partition is created, we can go back to Kubernetes to set up the F5 Container Connector.

    1.10.2 Container Connector deployment

    Here we consider you have already retrieved the F5 container connector image and loaded it in the environment.

Now that our container is loaded, we need to define a deployment (Kubernetes deployments) and create a secret to hide our BIG-IP credentials (Kubernetes secrets).

On the master, we need to set up a deployment file to load our container and also set up a secret for our BIG-IP credentials.

Note: You can access the master by running PuTTY in the RDP session; a session is already set up there.


You will need to create a serviceaccount for the controller to be able to access the Kubernetes API when RBAC is enabled in Kubernetes. To create a serviceaccount run:

    kubectl create serviceaccount bigip-ctlr -n kube-system

You will also need to apply an RBAC policy. Create a file called f5-k8s-sample-rbac.yaml:

# for use in k8s clusters using RBAC
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: bigip-ctlr-clusterrole
rules:
- apiGroups:
  - ""
  - "extensions"
  resources:
  - nodes
  - services
  - endpoints
  - namespaces
  - ingresses
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  - "extensions"
  resources:
  - configmaps
  - events
  - ingresses/status
  verbs:
  - get
  - list
  - watch
  - update
  - create
  - patch

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: bigip-ctlr-clusterrole-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: bigip-ctlr-clusterrole
subjects:
- kind: ServiceAccount
  name: bigip-ctlr
  namespace: kube-system

To apply the RBAC policy, run:

    kubectl create -f f5-k8s-sample-rbac.yaml
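To confirm the objects were created (a quick check we suggest, not part of the original steps), you can run:

kubectl get serviceaccount bigip-ctlr -n kube-system
kubectl get clusterrole bigip-ctlr-clusterrole
kubectl get clusterrolebinding bigip-ctlr-clusterrole-binding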

To set up the secret containing your BIG-IP login and password, you can run the following command:

kubectl create secret generic bigip-login --namespace kube-system --from-literal=username=admin --from-literal=password=admin

You should see confirmation that the secret was created.
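You can double-check it without printing the values (our suggestion):

kubectl get secret bigip-login -n kube-system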

Create a file called f5-cc-deployment.yaml. Here is its content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-deployment
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      name: k8s-bigip-ctlr
      labels:
        app: k8s-bigip-ctlr
    spec:
      serviceAccountName: bigip-ctlr
      containers:
      - name: k8s-bigip-ctlr
        # replace the version as needed
        image: "f5networks/k8s-bigip-ctlr:1.3.0"
        env:
        - name: BIGIP_USERNAME
          valueFrom:
            secretKeyRef:
              name: bigip-login
              key: username
        - name: BIGIP_PASSWORD
          valueFrom:
            secretKeyRef:
              name: bigip-login
              key: password
        command: ["/app/bin/k8s-bigip-ctlr"]
        args: [
          "--bigip-username=$(BIGIP_USERNAME)",
          "--bigip-password=$(BIGIP_PASSWORD)",
          "--bigip-url=10.1.10.60",
          "--bigip-partition=kubernetes",
          # The Controller can use local DNS to resolve hostnames;
          # defaults to LOOKUP; can be replaced with custom DNS server IP
          # or left blank (introduced in v1.3.0)
          "--resolve-ingress-names=LOOKUP"
          # The Controller can access Secrets by default;
          # set to "false" if you only want to use preconfigured
          # BIG-IP SSL profiles
          #"--use-secrets=false",
          # The Controller watches all namespaces by default.
          # To manage a single namespace, or multiple namespaces, provide a
          # single entry for each. For example:
          # "--namespace=test",
          # "--namespace=prod"
          ]
      imagePullSecrets:
      - name: f5-docker-images
      - name: bigip-login

Note: If you use UDF, you have templates you can use on your jumpbox. They are in the Desktop > F5 > kubernetes-demo folder. If you use those files, you’ll need to:

    • check the container image path in the deployment file is accurate

    • Update the “bindAddr” in the configMap for an IP you want to use in this blueprint.

    if you don’t use the UDF blueprint, you need to update the field image with the appropriate path to your image.

If you have issues with your yaml and syntax (indentation MATTERS), you can try to use an online parser to help you: Yaml parser

Once you have your yaml file set up, you can try to launch your deployment. It will start our f5-k8s-controller container on one of our nodes (it may take around 30 seconds to reach a running state):

    kubectl create -f f5-cc-deployment.yaml

    kubectl get deployment k8s-bigip-ctlr-deployment --namespace kube-system


FYI, to locate on which node the container connector is running, you can use the following command:

    kubectl get pods -o wide -n kube-system

    We can see that our container is running on ip-10-1-1-5 (Agent1)

    If you need to troubleshoot your container, you have two different ways to check the logs of your container:

    1. via kubectl command (recommended - easier)

2. by connecting to the relevant node and using the docker command. Here you’ll need to identify on which node it runs and use the docker logs command:

If you want to use the kubectl command: you need to use the full name of your pod as shown in the previous image, and run the command kubectl logs k8s-bigip-ctlr-deployment- -n kube-system

    kubectl logs k8s-bigip-ctlr-deployment-710074254-b9dr8 -n kube-system

    If you want to use docker logs command

    On ip-10-1-1-5 which is Node1 (or another node depending on the previous command):

    sudo docker ps

    Here we can see our container ID: 7a774293230b

    Now we can check our container logs:

    sudo docker logs 7a774293230b


    You can connect to your container with kubectl also:

    kubectl exec -it k8s-bigip-ctlr-deployment-710074254-b9dr8 -n kube-system -- /bin/sh

    cd /app

    ls -lR

    exit

    1.11 Container Connector Usage

    Now that our container connector is up and running, let’s deploy an application and leverage our CC.

    if you don’t use UDF, you can deploy any application you want. In UDF, the blueprint already has a container calledf5-demo-app already loaded as an image (Application provided by Eric Chen - F5 Cloud SA). It is loaded in DockerHub chen23/f5-demo-app

    To deploy our front-end application, we will need to do the following:

    1. Define a deployment: this will launch our application running in a container

2. Define a ConfigMap: a ConfigMap can be used to store fine-grained information like individual properties or coarse-grained information like entire config files or JSON blobs. It will contain the BIG-IP configuration we need to push.

3. Define a Service: a Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them. Here we expose the service on a port on each node of the cluster (the same port on each node), so you’ll be able to contact the service on any <NodeIP>:NodePort address. If you set the type field to “NodePort”, the Kubernetes master will allocate a port from a flag-configured range (default: 30000-32767), and each Node will proxy that port (the same port number on every Node) into your Service.

    1.11.1 App Deployment

    On the master we will create all the required files:

    Create a file called my-frontend-deployment.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-frontend
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-frontend
    spec:
      containers:
      - image: "chen23/f5-demo-app"
        env:
        - name: F5DEMO_APP
          value: "frontend"
        - name: F5DEMO_BACKEND_URL
          value: "http://my-backend/"
        imagePullPolicy: IfNotPresent
        name: my-frontend
        ports:
        - containerPort: 80
          protocol: TCP

    Create a file called my-frontend-configmap.yaml:

kind: ConfigMap
apiVersion: v1
metadata:
  name: my-frontend
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.3.json"
  data: |-
    {
      "virtualServer": {
        "frontend": {
          "balance": "round-robin",
          "mode": "http",
          "partition": "kubernetes",
          "virtualAddress": {
            "bindAddr": "10.1.10.80",
            "port": 80
          }
        },
        "backend": {
          "serviceName": "my-frontend",
          "servicePort": 80
        }
      }
    }

    Create a file called my-frontend-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: my-frontend
  labels:
    run: my-frontend
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
  selector:
    run: my-frontend

Note: If you use UDF, you have templates you can use on your jumpbox. They are in the Desktop > F5 > kubernetes-demo folder. If you use those files, you’ll need to:

• check the container image path in the deployment file is accurate

• update the “bindAddr” in the configMap for an IP you want to use in this blueprint.

We can now launch our application:

    kubectl create -f my-frontend-deployment.yaml

    kubectl create -f my-frontend-configmap.yaml

    kubectl create -f my-frontend-service.yaml

To check the status of our deployment, you can run the following commands:

    kubectl get pods -n default

    kubectl describe svc -n default

    Here you need to pay attention to:

• the NodePort value. That is the port used by Kubernetes to give you access to the app from the outside. Here it’s 32402

• the endpoints. That’s our 2 instances (defined as replicas in our deployment file) and the port assigned to the service: port 80

Now that we have deployed our application successfully, we can check our BIG-IP configuration.

    Warning: Don’t forget to select the “kubernetes” partition or you’ll see nothing


    Here you can see that the pool members listed are all the kubernetes nodes.

    Now you can try to access your application via your BIG-IP VIP: 10.1.10.80:


Hit refresh many times, then in your BIG-IP UI go to Local Traffic > Pools > Pool List > my-frontend_10.1.10.80_80 > Statistics to see that traffic is distributed as expected.

How is traffic forwarded in Kubernetes from <NodeIP>:32402 to port 80 on the pods? This is done via iptables rules that are managed by the kube-proxy instances:

On any node (master or worker), run the following command:

    sudo iptables-save | grep my-frontend

This will list the different iptables rules that were created for our frontend service.

    1.11.2 Ingress Resources

The Container Connector also supports Kubernetes Ingress Resources as of version 1.1.0 of the F5 Container Connector.

Ingress Resources provide L7 path/hostname routing of HTTP requests.

    On the master we will create a file called “node-blue.yaml”.


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node-blue
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: node-blue
    spec:
      containers:
      - image: "chen23/f5-demo-app"
        env:
        - name: F5DEMO_APP
          value: "website"
        - name: F5DEMO_NODENAME
          value: "Node Blue"
        - name: F5DEMO_COLOR
          value: 0000FF
        imagePullPolicy: IfNotPresent
        name: node-blue
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: node-blue
  labels:
    run: node-blue
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
  selector:
    run: node-blue

    and another file “node-green.yaml”.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node-green
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: node-green
    spec:
      containers:
      - image: "chen23/f5-demo-app"
        env:
        - name: F5DEMO_APP
          value: "website"
        - name: F5DEMO_NODENAME
          value: "Node Green"
        - name: F5DEMO_COLOR
          value: 00FF00
        imagePullPolicy: IfNotPresent
        name: node-green
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: node-green
  labels:
    run: node-green
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
  selector:
    run: node-green

    These will represent two resources that we want to place behind a single F5 Virtual Server.

    Next we will create a file “blue-green-ingress.yaml”.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blue-green-ingress
  annotations:
    virtual-server.f5.com/ip: "10.1.10.82"
    virtual-server.f5.com/http-port: "80"
    virtual-server.f5.com/partition: "kubernetes"
    virtual-server.f5.com/health: |
      [
        {
          "path": "blue.f5demo.com/",
          "send": "HTTP GET /",
          "interval": 5,
          "timeout": 15
        }, {
          "path": "green.f5demo.com/",
          "send": "HTTP GET /",
          "interval": 5,
          "timeout": 15
        }
      ]
    kubernetes.io/ingress.class: "f5"
spec:
  rules:
  - host: blue.f5demo.com
    http:
      paths:
      - backend:
          serviceName: node-blue
          servicePort: 80
  - host: green.f5demo.com
    http:
      paths:
      - backend:
          serviceName: node-green
          servicePort: 80

    We can now deploy our ingress resources.

kubectl create -f node-blue.yaml

kubectl create -f node-green.yaml

kubectl create -f blue-green-ingress.yaml
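Once the BIG-IP picks up the ingress, a quick way to check the host-based routing (our suggestion; run it from the jumpbox or any host that can reach 10.1.10.82) is:

curl -H "Host: blue.f5demo.com" http://10.1.10.82/
curl -H "Host: green.f5demo.com" http://10.1.10.82/

Each request should land on the matching deployment (blue vs green).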

    1.12 Cluster installation

    1.12.1 Overview

    As a reminder, in this example, this is our cluster setup:

Hostname   Management IP   Kubernetes IP   Role
Master 1   10.1.1.4        10.1.10.11      Master
node 1     10.1.1.5        10.1.10.21      node
node 2     10.1.1.6        10.1.10.22      node

    Warning: This guide is for Kubernetes version 1.7.11

For this setup we will leverage kubeadm to install Kubernetes on your own Ubuntu Server 16.04 machines; the steps we are going to use are specified in detail here: Ubuntu getting started guide 16.04

If you think you’ll need a custom solution installed on on-premises VMs, please refer to this documentation: Kubernetes on Ubuntu

    Here are the steps that are involved (detailed later):

    1. make sure that firewalld is disabled (not supported today with kubeadm)

    2. disable Apparmor

    3. make sure all systems are up to date

4. install Docker if not already done (many Kubernetes services will run in containers for reliability)

    5. install kubernetes packages
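A minimal sketch of steps 1 and 2 on Ubuntu 16.04 (our own commands; the firewalld part only applies if firewalld is actually installed on your image):

# step 1: make sure firewalld is not running (not supported by kubeadm today)
sudo systemctl stop firewalld || true
sudo systemctl disable firewalld || true

# step 2: disable AppArmor
sudo systemctl stop apparmor
sudo systemctl disable apparmor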


    To make sure the systems are up to date, run these commands on all systems:

    Warning: Make sure that your /etc/hosts files on master and nodes resolve your hostnames with 10.1.10.X IPs

1.12.2 Installation

We need to be sure the Ubuntu OS is up to date, we add the Kubernetes repository to the list of available Ubuntu package sources, and we install the Kubernetes packages for version 1.7.x, making sure to hold this version even when upgrading the OS. This procedure will install Docker on all systems because most of the components of Kubernetes leverage this container technology.

As previously said, you need root privileges for this section; either use sudo or su to gain the required privileges. Moreover, be sure to execute this procedure on all systems.

sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get install -y apt-transport-https
sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
sudo cat
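The last command above is truncated in this document. A typical continuation for a kubeadm-based install on Ubuntu 16.04 (our assumption, based on the standard upstream procedure rather than the lab's exact script) adds the Kubernetes apt repository, installs Docker and the 1.7.11 packages, and holds the version:

# add the Kubernetes apt repository (assumed continuation of the truncated command)
sudo bash -c 'cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF'
sudo apt-get update
sudo apt-get install -y docker.io kubelet=1.7.11-00 kubeadm=1.7.11-00 kubectl=1.7.11-00 kubernetes-cni
sudo apt-mark hold kubelet kubeadm kubectl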

1.13 Setup master

To set up the master, we initialize it with kubeadm. kubeadm needs to know on which IP address the master is going to listen for API requests. We will specify the CIDR for our PODs network and the CIDR for our SERVICEs network; moreover, for lab purposes, we will ask for a token which will never expire.

    To setup master as a Kubernetes master, run the following command:

sudo kubeadm init --apiserver-advertise-address 10.1.10.11 --pod-network-cidr 10.244.0.0/16 --service-cidr 10.166.0.0/16 --token-ttl 0 --node-name [DNS node name]

    Here we specify:

• The IP address that should be used to advertise the master. 10.1.10.0/24 is the network for our control plane. If you don’t specify the --apiserver-advertise-address argument, kubeadm will pick the first interface with a default gateway (because it needs internet access).

    When running the command you should see something like this:

    The initialization is successful if you see “Your Kubernetes master has initialized successfully!”


    In the output of the kubeadm command you should see a line like this:

    sudo kubeadm join --token=62468f.9dfb3fc97a985cf9 10.1.10.11

This is the command to run on the node so that it registers itself with the master. Keep the secret safe, since anyone with this token can add an authenticated node to your cluster. This is used for mutual auth between the master and the nodes.

    Warning: save this command somewhere since you’ll need it later

To start using your cluster, you need to:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You can monitor that the services start to run by using the command:

    kubectl get pods --all-namespaces -o wide

    You’ll see a number of PODs running plus kube-dns in pending state because it won’t start until the network pod issetup.

    1.13.2 Network pod

    You must install a pod network add-on so that your pods can communicate with each other.

It is necessary to do this before you try to deploy any applications to your cluster, and before kube-dns will start up. Note also that kubeadm only supports CNI-based networks and therefore kubenet-based networks will not work.

    Here is the list of add-ons available:

    • Calico

    • Canal

    • Flannel

    • Romana

    • Weave net

We will use Flannel as mentioned previously. To set Flannel as the network pod, we first need to modify the flannel deployment. First download the YAML deployment file:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    Change “vxlan” to “host-gw” for Type and delete the “portmap” plugin; you shoud have a json section which reads:


data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }

If we are using multiple interfaces in Ubuntu Server, we need to specify the “--iface” argument to the flanneld command:

containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.9.1-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1

    Now deploy flannel.

    kubectl apply -f ./kube-flannel.yml
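To check that a flannel pod comes up on the master (and later on every node), you can filter on the label used by the upstream kube-flannel.yml manifest (our suggestion; adjust the label if your copy differs):

kubectl get pods -n kube-system -l app=flannel -o wide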

    1.13.3 Final checks on Kubernetes Master Server state

If everything runs as expected you should have kube-dns started successfully. To check the status of the different services, you can run the command:

    watch kubectl get pods --all-namespaces

    The output should show all services as running


    kubectl get cs

    kubectl cluster-info

In AWS you may need to modify kubelet to ensure that the correct name is used by the node and that it can join correctly. Example in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. This attempts to work around: https://github.com/kubernetes/kubernetes/issues/47695

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --hostname-override=ip-10-1-1-11.us-west-2.compute.internal --node-ip=10.1.10.21"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS

You will also want to remove the line "- --admission-control=..." from /etc/kubernetes/manifests/kube-apiserver.yaml

    The next step will be to have our nodes join the master


    1.14 Node setup

Once the master is set up and running, we need to connect our nodes to it. Please be sure you did all the steps reported in the cluster installation section on each node of the cluster.

    1.14.1 Join the master

To join the master, we need to run the command highlighted during the master initialization. In our setup it was:

    sudo kubeadm join --token=62468f.9dfb3fc97a985cf9 10.1.10.11 --node-name [DNS name]

The output should look like this:

To make sure that your nodes have joined, you can run this command on the master:

    kubectl get nodes -o wide

You should see your cluster (i.e. master + nodes).

Check that all the services have started as expected and everything is in a stable “running” state by running this command on the master:

    kubectl get pods --all-namespaces -o wide


In AWS you may need to modify kubelet to ensure that the correct name is used by the node and that it can join correctly. Example in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. This attempts to work around: https://github.com/kubernetes/kubernetes/issues/47695

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --hostname-override=ip-10-1-1-11.us-west-2.compute.internal --node-ip=10.1.10.21"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS

    1.15 Test our setup

Our environment is set up. We can try it out by deploying a larger application built as microservices.

    We will use this application: Micro services demo

    Connect to the master and run the following command:

    git clone https://github.com/microservices-demo/microservices-demo

    kubectl create namespace sock-shop

    kubectl apply -f microservices-demo/deploy/kubernetes/manifests

    You can monitor the deployment of the application with the command:

    kubectl get pods -n sock-shop


    Warning: The deployment of this application may take quite some time (10-20min)

Once all the containers are in a “Running” state, we can try to access our application. To do so, we need to identify on which port our application is listening. We can do so with the following command:

    kubectl describe svc front-end -n sock-shop
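If you prefer a one-liner (our suggestion), the NodePort can be extracted directly with jsonpath:

kubectl get svc front-end -n sock-shop -o jsonpath='{.spec.ports[0].nodePort}'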

You can now access your application with the following URL: http://<node IP>:<NodePort>


You can also try to access it with the same <NodePort> via the other nodes: http://<other node IP>:<NodePort>, http://<master IP>:<NodePort>
