

Deployment, Administration, and User Guides

SUSE Cloud Application Platform 1.5.1


by Carla Schroder, Billy Tat, Claudia-Amelia Marin, and Lukas Kucharczyk

Introducing SUSE Cloud Application Platform, a software platform for cloud-native application deployment based on SUSE Cloud Foundry and Kubernetes.

Publication Date: May 18, 2020

SUSE LLC
1800 South Novell Place
Provo, UT 84606
USA

https://documentation.suse.com

Copyright © 2006–2020 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.


Contents

About This Guide

I OVERVIEW OF SUSE CLOUD APPLICATION PLATFORM

1 About SUSE Cloud Application Platform
1.1 New in Version 1.5.1
1.2 SUSE Cloud Application Platform Overview
1.3 Minimum Requirements
1.4 SUSE Cloud Application Platform Architecture
SUSE Cloud Foundry Components • SUSE Cloud Foundry Containers • SUSE Cloud Foundry Service Diagram • Detailed Services Diagram

2 Other Kubernetes Systems
2.1 Kubernetes Requirements

II DEPLOYING SUSE CLOUD APPLICATION PLATFORM

3 Deployment and Administration Notes
3.1 README First
3.2 Important Changes
3.3 Usage of Helm Chart Fields in Cloud Application Platform
3.4 Helm Values in scf-config-values.yaml
3.5 Status of Pods during Deployment
3.6 Namespaces
3.7 DNS Management

3.8 Releases and Associated Versions

4 Using an Ingress Controller with Cloud Application Platform
4.1 Deploying NGINX Ingress Controller
4.2 Changing the Max Body Size

5 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform
5.1 Prerequisites
5.2 Creating a SUSE CaaS Platform Cluster
5.3 Installing Helm Client and Tiller
5.4 Storage Class
5.5 Deployment Configuration
5.6 Add the Kubernetes Charts Repository
5.7 Deploying SUSE Cloud Application Platform
Deploy uaa • Deploy scf
5.8 Expanding Capacity of a Cloud Application Platform Deployment on SUSE® CaaS Platform

6 Installing the Stratos Web Console
6.1 Deploy Stratos on SUSE® CaaS Platform
Connecting SUSE® CaaS Platform to Stratos
6.2 Deploy Stratos on Amazon EKS
Connecting Amazon EKS to Stratos
6.3 Deploy Stratos on Microsoft AKS
Connecting Microsoft AKS to Stratos
6.4 Deploy Stratos on Google GKE
Connecting Google GKE to Stratos
6.5 Upgrading Stratos

6.6 Stratos Metrics
Exporter Configuration • Install Stratos Metrics with Helm • Connecting Stratos Metrics

7 SUSE Cloud Application Platform High Availability
7.1 Configuring Cloud Application Platform for High Availability
Prerequisites • Finding Default and Allowable Sizing Values • Simple High Availability Configuration • Example Custom High Availability Configurations
7.2 Availability Zones
Availability Zone Information Handling

8 LDAP Integration
8.1 Prerequisites
8.2 Example LDAP Integration

9 Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)
9.1 Prerequisites
9.2 Create Resource Group and AKS Instance
9.3 Install Helm Client and Tiller
9.4 Pod Security Policies
9.5 Default Storage Class
9.6 DNS Configuration
9.7 Deployment Configuration
9.8 Add the Kubernetes Charts Repository
9.9 Deploying SUSE Cloud Application Platform
Deploy uaa • Deploy scf
9.10 Configuring and Testing the Native Microsoft AKS Service Broker

9.11 Resizing Persistent Volumes
Prerequisites • Example Procedure
9.12 Expanding Capacity of a Cloud Application Platform Deployment on Microsoft AKS

10 Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS)
10.1 Prerequisites
10.2 IAM Requirements for EKS
Unscoped Operations • Scoped Operations
10.3 Install Helm Client and Tiller
10.4 Default Storage Class
10.5 DNS Configuration
10.6 Deployment Configuration
10.7 Deploying Cloud Application Platform
10.8 Add the Kubernetes Charts Repository
10.9 Deploy uaa
10.10 Deploy scf
10.11 Deploying and Using the AWS Service Broker
Prerequisites • Setup • Cleanup
10.12 Resizing Persistent Volumes
Prerequisites • Example Procedure

11 Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)
11.1 Prerequisites
11.2 Creating a GKE Cluster
11.3 Get kubeconfig File

11.4 Install Helm Client and Tiller
11.5 Default Storage Class
11.6 DNS Configuration
11.7 Deployment Configuration
11.8 Add the Kubernetes Charts Repository
11.9 Deploying SUSE Cloud Application Platform
Deploy uaa • Deploy scf
11.10 Deploying and Using the Google Cloud Platform Service Broker
Enable APIs • Create a Service Account • Create a Database for the GCP Service Broker • Deploy the Service Broker
11.11 Resizing Persistent Volumes
Prerequisites • Example Procedure
11.12 Expanding Capacity of a Cloud Application Platform Deployment on Google GKE

12 Eirini
12.1 Enabling Eirini

13 Setting Up a Registry for an Air Gapped Environment
13.1 Prerequisites
13.2 Mirror Images to Registry

III SUSE CLOUD APPLICATION PLATFORM ADMINISTRATION

14 Upgrading SUSE Cloud Application Platform
14.1 Important Considerations
14.2 Upgrading SUSE Cloud Application Platform
High Availability mysql through config.HA • High Availability mysql through Custom Sizing Values • Single Availability mysql • Change in URL of Internal cf-usb Broker Endpoint

14.3 Installing Skipped Releases

15 External Database for uaa and scf
15.1 Configuration

16 Configuration Changes
16.1 Configuration Change Example
16.2 Other Examples

17 Creating Admin Users
17.1 Prerequisites
17.2 Creating an Example Cloud Application Platform Cluster Administrator

18 Managing Passwords
18.1 Password Management with the Cloud Foundry Client
18.2 Changing User Passwords with Stratos

19 Accessing the UAA User Interface
19.1 Prerequisites
19.2 Procedure

20 Cloud Controller Database Secret Rotation
20.1 Tables with Encrypted Information
Update Existing Data with New Encryption Key

21 Backup and Restore
21.1 Backup and Restore Using cf-plugin-backup
Installing the cf-plugin-backup • Using cf-plugin-backup • Scope of Backup

21.2 Disaster Recovery through Raw Data Backup and Restore
Prerequisites • Scope of Raw Data Backup and Restore • Performing a Raw Data Backup • Performing a Raw Data Restore

22 Service Brokers
22.1 cf-usb
Enabling and Disabling Service Brokers • Prerequisites • Deploying on SUSE CaaS Platform • Configuring the MySQL Deployment • Deploying the MySQL Chart • Create and Bind a MySQL Service • Deploying the PostgreSQL Chart • Removing Service Broker Sidecar Deployments • Change in URL of Internal cf-usb Broker Endpoint
22.2 Provisioning Services with Minibroker
Deploy Minibroker • Setting Up the Environment for Minibroker Usage • Using Minibroker with Applications

23 App-AutoScaler
23.1 Prerequisites
23.2 Enabling and Disabling the App-AutoScaler Service
23.3 Upgrade Considerations
23.4 Using the App-AutoScaler Service
The App-AutoScaler cf CLI Plugin • App-AutoScaler API
23.5 Policies
Scaling Types

24 Logging
24.1 Logging to an External Syslog Server
Configuring Cloud Application Platform • Example using the ELK Stack
24.2 Log Levels

25 Managing Certificates
25.1 Certificate Characteristics
25.2 Deployment Configuration
Configuring Multiple Certificates
25.3 Deploying SUSE Cloud Application Platform with Certificates
25.4 Rotating Automatically Generated Secrets
25.5 Difference between TRUSTED_CERTS and ROOTFS_TRUSTED_CERTS

26 Integrating CredHub with SUSE Cloud Application Platform
26.1 Installing the CredHub Client
26.2 Enabling and Disabling CredHub
26.3 Upgrade Considerations
26.4 Connecting to the CredHub Service

27 Buildpacks
27.1 System Buildpacks
27.2 Using Buildpacks
27.3 Adding Buildpacks
27.4 Updating Buildpacks
27.5 Offline Buildpacks
Creating an Offline Buildpack

28 Custom Application Domains
28.1 Customizing Application Domains

29 Managing Nproc Limits of Pods
29.1 Configuring and Applying Nproc Limits
New Deployments • Existing Deployments

IV SUSE CLOUD APPLICATION PLATFORM USER GUIDE

30 Deploying and Managing Applications with the Cloud Foundry Client
30.1 Using the cf CLI with SUSE Cloud Application Platform

V TROUBLESHOOTING

31 Troubleshooting
31.1 Logging
31.2 Using Supportconfig
31.3 Deployment Is Taking Too Long
31.4 Deleting and Rebuilding a Deployment
31.5 Querying with Kubectl
31.6 autoscaler-metrics Pod Liquibase Error
31.7 api-group Pod Not Ready after Buildpack Update
Recovery and Prevention
31.8 mysql Pods Fail to Start
Data initialization or incomplete transactions • File ownership issues

A Appendix
A.1 Manual Configuration of Pod Security Policies
Using Custom Pod Security Policies
A.2 Complete suse/uaa values.yaml File
A.3 Complete suse/scf values.yaml File

B GNU Licenses
B.1 GNU Free Documentation License

About This Guide

SUSE Cloud Application Platform is a software platform for cloud-native application development, based on Cloud Foundry, with additional supporting services and components. The core of the platform is SUSE Cloud Foundry, a Cloud Foundry distribution for Kubernetes which runs on SUSE Linux Enterprise containers.

Cloud Application Platform is designed to run on any Kubernetes cluster. This guide describes how to deploy it on:

For SUSE® CaaS Platform, see Chapter 5, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform.

For Microsoft Azure Kubernetes Service, see Chapter 9, Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS).

For Amazon Elastic Kubernetes Service, see Chapter 10, Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS).

For Google Kubernetes Engine, see Chapter 11, Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE).

1 Required Background

To keep the scope of these guidelines manageable, certain technical assumptions have been made:

You have some computer experience and are familiar with common technical terms.

You are familiar with the documentation for your system and the network on which it runs.

You have a basic understanding of Linux systems.

2 Available Documentation

We provide HTML and PDF versions of our books in different languages. Documentation for our products is available at http://documentation.suse.com/, where you can also find the latest updates and browse or download the documentation in various formats.



The following documentation is available for this product:

Deployment, Administration, and User Guides

The SUSE Cloud Application Platform guide is a comprehensive guide covering deployment, administration, and usage, as well as architecture and minimum system requirements.

3 Feedback

Several feedback channels are available:

Bugs and Enhancement Requests

For services and support options available for your product, refer to http://www.suse.com/support/. To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click Create New.

User Comments

We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation, or go to http://documentation.suse.com/feedback.html and enter your comments there.

Mail

For feedback on the documentation of this product, you can also send a mail to [email protected]. Make sure to include the document title, the product version, and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

4 Documentation Conventions

The following notices and typographical conventions are used in this documentation:

/etc/passwd : directory names and file names

PLACEHOLDER : replace PLACEHOLDER with the actual value



PATH : the environment variable PATH

ls , --help : commands, options, and parameters

user : users or groups

package name : name of a package

Alt , Alt – F1 : a key to press or a key combination; keys are shown in uppercase as on a keyboard

File, File Save As: menu items, buttons

AMD/Intel This paragraph is only relevant for the AMD64/Intel 64 architecture. The arrows mark the beginning and the end of the text block.

IBM Z, POWER This paragraph is only relevant for the architectures z Systems and POWER. The arrows mark the beginning and the end of the text block.

Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.

root # command
tux > sudo command

Commands that can be run by non-privileged users.

tux > command

Notices

Warning: Warning Notice
Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

Important: Important Notice
Important information you should be aware of before proceeding.



Note: Note Notice
Additional information, for example about differences in software versions.

Tip: Tip Notice
Helpful information, like a guideline or a piece of practical advice.

5 About the Making of This Documentation

This documentation is written in GeekoDoc (https://github.com/openSUSE/geekodoc), a subset of DocBook 5 (http://www.docbook.org). The XML source files were validated by jing (see https://code.google.com/p/jing-trang/), processed by xsltproc, and converted into XSL-FO using a customized version of Norman Walsh's stylesheets. The final PDF is formatted through FOP from the Apache Software Foundation (https://xmlgraphics.apache.org/fop). The open source tools and the environment used to build this documentation are provided by the DocBook Authoring and Publishing Suite (DAPS). The project's home page can be found at https://github.com/openSUSE/daps.

The XML source code of this documentation can be found at https://github.com/SUSE/doc-cap.



I Overview of SUSE Cloud Application Platform

1 About SUSE Cloud Application Platform

2 Other Kubernetes Systems


1 About SUSE Cloud Application Platform

1.1 New in Version 1.5.1

SUSE Cloud Foundry has been updated to version 2.19.1:

Support for Eirini SSH feature

Support for external Cloud Controller and UAA database configuration

Ingress controller now available for UAA embedded in SCF

AUDIT_WRITE capabilities added for CRI-O

The Stratos UI has been updated to version 2.6.0:

For a full list of features and fixes see https://github.com/SUSE/stratos/releases/tag/2.6.0

Stratos Metrics has been updated to version 1.1.1:

For a full list of updates see https://github.com/SUSE/stratos-metrics/blob/master/CHANGELOG.md#111

See all product manuals for SUSE Cloud Application Platform 1.x at SUSE Cloud Application Platform 1 (https://documentation.suse.com/suse-cap/1/).

Tip: Read the Release Notes
Make sure to review the release notes for SUSE Cloud Application Platform published at https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/.

1.2 SUSE Cloud Application Platform Overview

SUSE Cloud Application Platform is a software platform for cloud-native application deployment based on SUSE Cloud Foundry and Kubernetes.

SUSE Cloud Application Platform describes the complete software stack, including the operating system, Kubernetes, and SUSE Cloud Foundry.



SUSE Cloud Application Platform comprises the SUSE Linux Enterprise builds of the User Account and Authentication ( uaa ) Server, SUSE Cloud Foundry ( scf ), the Stratos Web user interface, and Stratos Metrics.

The Cloud Foundry code base provides the basic functionality. SUSE Cloud Foundry differentiates itself from other Cloud Foundry distributions by running in Linux containers managed by Kubernetes, rather than virtual machines managed with BOSH, for greater fault tolerance and lower memory use.

All Docker images for the SUSE Linux Enterprise builds are hosted on registry.suse.com. These are the commercially-supported images. (Community-supported images for openSUSE are hosted on Docker Hub (https://hub.docker.com).) Product manuals on SUSE Doc: SUSE Cloud Application Platform 1 (https://documentation.suse.com/suse-cap/1/) refer to the commercially-supported SUSE Linux Enterprise version.

Cloud Application Platform is designed to run on any Kubernetes cluster. This guide describes how to deploy it on:

For SUSE® CaaS Platform, see Chapter 5, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform.

For Microsoft Azure Kubernetes Service, see Chapter 9, Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS).

For Amazon Elastic Kubernetes Service, see Chapter 10, Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS).

For Google Kubernetes Engine, see Chapter 11, Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE).

SUSE Cloud Application Platform serves different but complementary purposes for operators and application developers.

For operators, the platform is:

Easy to install, manage, and maintain

Secure by design

Fault tolerant and self-healing

Offers high availability for critical components



Uses industry-standard components

Avoids single vendor lock-in

For developers, the platform:

Allocates computing resources on demand via API or Web interface

Offers users a choice of language and Web framework

Gives access to databases and other data services

Emits and aggregates application log streams

Tracks resource usage for users and groups

Makes the software development workflow more efficient

The principal interface and API for deploying applications to SUSE Cloud Application Platform is SUSE Cloud Foundry. Most Cloud Foundry distributions run on virtual machines managed by BOSH. SUSE Cloud Foundry runs in SUSE Linux Enterprise containers managed by Kubernetes. Containerizing the components of the platform itself has these advantages:

Improves fault tolerance. Kubernetes monitors the health of all containers, and automatically restarts faulty containers faster than virtual machines can be restarted or replaced.

Reduces physical memory overhead. SUSE Cloud Foundry components deployed in containers consume substantially less memory, as host-level operations are shared between containers by Kubernetes.

SUSE Cloud Foundry packages upstream Cloud Foundry BOSH releases to produce containers and configurations which are deployed to Kubernetes clusters using Helm.
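The Helm-based deployment described above can be sketched as follows. This is a minimal illustration only: the domain, passwords, and storage class name written into scf-config-values.yaml are placeholders, and the repository URL and Helm 2 commands shown in the comments follow the deployment chapters of this guide rather than being executed here.

```shell
# Minimal sketch: generate a starter scf-config-values.yaml.
# All values below (domain, passwords, storage class name) are placeholders
# that must be replaced with real values for your cluster.
cat > scf-config-values.yaml <<'EOF'
env:
  DOMAIN: example.com
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
kube:
  storage_class:
    persistent: "persistent"
secrets:
  CLUSTER_ADMIN_PASSWORD: changeme
  UAA_ADMIN_CLIENT_SECRET: changeme
EOF

# With Helm 2 and Tiller installed, the charts would then be deployed
# roughly like this (shown as comments, not executed here):
#   helm repo add suse https://kubernetes-charts.suse.com/
#   helm install suse/uaa --namespace uaa --values scf-config-values.yaml
#   helm install suse/scf --namespace scf --values scf-config-values.yaml
echo "wrote scf-config-values.yaml"
```

The deployment chapters for each Kubernetes platform cover the real values required for each field.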

1.3 Minimum Requirements

This guide details the steps for deploying SUSE Cloud Foundry on supported Kubernetes environments, namely SUSE CaaS Platform, Microsoft Azure Kubernetes Service, Google Kubernetes Engine, and Amazon Elastic Kubernetes Service.



Important: Required Knowledge
Installing and administering SUSE Cloud Application Platform requires knowledge of Linux, Docker, Kubernetes, and your Kubernetes platform. You must plan resource allocation and network architecture by taking into account the requirements of your Kubernetes platform in addition to SUSE Cloud Foundry requirements. SUSE Cloud Foundry is a discrete component in your cloud stack, but it still requires knowledge of administering and troubleshooting the underlying stack.

You may create a minimal deployment on four Kubernetes nodes for testing. However, this is insufficient for a production deployment. A supported deployment includes SUSE Cloud Foundry installed on SUSE CaaS Platform, Amazon EKS, Google GKE, or Microsoft AKS. You also need a storage back-end such as SUSE Enterprise Storage or NFS, a DNS/DHCP server, and an Internet connection to download additional packages during installation and ~10 GB of Docker images. Figure 1.1, “Minimal Example Production Deployment” describes a minimal production deployment with SUSE Cloud Foundry deployed on a Kubernetes cluster containing three Kubernetes masters and three workers, plus an ingress controller, administration workstation, DNS/DHCP server, and a SUSE Enterprise Storage cluster.



FIGURE 1.1: MINIMAL EXAMPLE PRODUCTION DEPLOYMENT

(Diagram: three SUSE CaaS Platform masters, each running the API server, etcd, controller manager, and scheduler; three SUSE CaaS Platform workers, each running SUSE Cloud Foundry and customer workloads; plus an admin user workstation, a DNS/DHCP server, and a SES cluster.)

Note that after you have deployed your cluster and start building and running applications, your applications may depend on buildpacks that are not bundled in the container images that ship with SUSE Cloud Foundry. These will be downloaded at runtime, when you are pushing applications to the platform. Some of these buildpacks may include components with proprietary licenses. (See Customizing and Developing Buildpacks (https://docs.cloudfoundry.org/buildpacks/developing-buildpacks.html) to learn more about buildpacks, and creating and managing your own.)



1.4 SUSE Cloud Application Platform Architecture

The following figures illustrate the main structural concepts of SUSE Cloud Application Platform. Figure 1.2, “Cloud Platform Comparisons” shows a comparison of the basic cloud platforms:

Infrastructure as a Service (IaaS)

Container as a Service (CaaS)

Platform as a Service (PaaS)

Software as a Service (SaaS)

SUSE CaaS Platform is a Container as a Service platform, and SUSE Cloud Application Platform is a PaaS.

FIGURE 1.2: CLOUD PLATFORM COMPARISONS

(Diagram: for each of Infrastructure (IaaS), Container (CaaS), Platform (PaaS), and Software (SaaS), the stack of Application, Runtime, Userland, Kernel, Virtualization, and Hardware is shown, with progressively more of the stack managed by the provider rather than by the user.)

Figure 1.3, “Containerized Platforms” illustrates how SUSE Cloud Application Platform containerizes the platform itself on top of a cloud provider.



FIGURE 1.3: CONTAINERIZED PLATFORMS

(Diagram: the Platform (PaaS) layers, including Application and Runtime, run inside containers hosted on the Container (CaaS) stack of Userland, Kernel, Virtualization, and Hardware.)

Figure 1.4, “SUSE Cloud Application Platform Stack” shows the relationships of the major components of the software stack. SUSE Cloud Application Platform runs on Kubernetes, which in turn runs on multiple platforms, from bare metal to various cloud stacks. Your applications run on SUSE Cloud Application Platform and provide services.

FIGURE 1.4: SUSE CLOUD APPLICATION PLATFORM STACK



1.4.1 SUSE Cloud Foundry Components

SUSE Cloud Foundry comprises developer and administrator clients, trusted download sites, transient and long-running components, APIs, and authentication:

Clients for developers and admins to interact with SUSE Cloud Foundry: the cf CLI, which provides the cf command; the Stratos Web interface; and IDE plugins.

Docker Trusted Registry owned by SUSE.

SUSE Helm chart repository.

Helm, the Kubernetes package manager, which includes Tiller, the Helm server, and the helm command line client.

kubectl , the command line client for Kubernetes.

Long-running SUSE Cloud Foundry components.

SUSE Cloud Foundry post-deployment components: transient SUSE Cloud Foundry components that start after all SUSE Cloud Foundry components are started, perform their tasks, and then exit.

SUSE Cloud Foundry Linux cell, an elastic runtime component that runs Linux applications.

uaa , a Cloud Application Platform service for authentication and authorization.

The Kubernetes API.
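As a small, hedged illustration of the client-side tooling named in this list, the check below only looks for the cf, kubectl, and helm binaries on the local PATH. It does not contact a cluster or Tiller, and finding a tool missing is a normal result on a fresh workstation.

```shell
# Check which of the client tools listed above are installed locally.
# This inspects PATH only; it makes no assumptions about versions and
# does not talk to Tiller, Kubernetes, or Cloud Foundry.
missing=0
for tool in cf kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: not found"
    missing=$((missing + 1))
  fi
done
echo "missing tools: $missing"
```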

1.4.2 SUSE Cloud Foundry Containers

Figure 1.5, “SUSE Cloud Foundry Containers, Grouped by Function” provides a look at SUSE Cloud Foundry's containers.



FIGURE 1.5: SUSE CLOUD FOUNDRY CONTAINERS, GROUPED BY FUNCTION

LIST OF SUSE CLOUD FOUNDRY CONTAINERS

adapter

Part of the logging system, manages connections to user application syslog drains.

api-group

Contains the SUSE Cloud Foundry Cloud Controller, which implements the CF API. It is exposed via the router.

blobstore

A WebDAV blobstore for storing application bits, buildpacks, and stacks.

cc-clock

Sidekick to the Cloud Controller, periodically performing maintenance tasks such as resource cleanup.

cc-uploader

Assists droplet upload from Diego.

cc-worker

Sidekick to the Cloud Controller, processes background tasks.

cf-usb

Universal Service Broker; SUSE's own component for managing and publishing service brokers.


diego-api

API for the Diego scheduler.

diego-brain

Contains the Diego auctioning system that schedules user applications across the elastic layer.

diego-cell (privileged)

The elastic layer of SUSE Cloud Foundry, where applications live.

diego-ssh

Provides SSH access to user applications, exposed via a Kubernetes service.

eirini

An alternative to the Diego scheduler.

eirini-persi

Enables persistent storage for applications when using the Eirini scheduler.

eirini-ssh

Provides SSH access to user applications when using the Eirini scheduler.

doppler

Routes log messages from applications and components.

log-api

Part of the logging system; exposes log streams to users using web sockets and proxies user application log messages to syslog drains. Exposed using the router.

mysql

A MariaDB server and component to route requests to replicas. (A separate copy is deployed for uaa .)

nats

A pub-sub messaging queue for the routing system.

nfs-broker (privileged)

A service broker for enabling NFS-based application persistent storage when using the Diego scheduler.

post-deployment-setup

Used as a Kubernetes job, performs cluster setup after installation has completed.


router

Routes application and API traffic. Exposed using a Kubernetes service.

routing-api

API for the routing system.

secret-generation

Used as a Kubernetes job to create secrets (certificates) when the cluster is installed.

syslog-scheduler

Part of the logging system that allows user applications to be bound to a syslog drain.

tcp-router

Routes TCP traffic for your applications.

1.4.3 SUSE Cloud Foundry Service Diagram

This simple service diagram illustrates how SUSE Cloud Foundry components communicate with each other (Figure 1.6, “Simple Services Diagram”). See Figure 1.7, “Detailed Services Diagram” for a more detailed view.

[Figure 1.6 shows three zones, the Internet, the SCF network, and Kubernetes, connecting the Cloud Foundry client, the SUSE Docker Registry, the Helm repository, kubectl, helm, the Kubernetes API, the Tiller API, the privileged and non-privileged CAP components and jobs, UAA, the Diego cell, and the NFS broker. The numbered arrows in the figure correspond to the numbered operations in the table below.]

FIGURE 1.6: SIMPLE SERVICES DIAGRAM

This table describes how these services operate.


Each entry lists: the interface number; the network name and protocol; the requestor and request; the request credentials and request authorization; the listener, response, and response credentials; and a description of the operation.

1. External (HTTPS)
Requestor: Helm client
Request: Deploy Cloud Application Platform
Request credentials: OAuth2 bearer token
Request authorization: Deployment of Cloud Application Platform services on Kubernetes
Listener: Helm/Kubernetes API
Response: Operation acknowledgment and handle
Response credentials: TLS certificate on external endpoint
Description: The operator deploys Cloud Application Platform on Kubernetes.

2. External (HTTPS)
Requestor: Internal Kubernetes components
Request: Download Docker images
Request credentials and authorization: refer to registry.suse.com
Listener: registry.suse.com
Response: Docker images
Response credentials: None
Description: The Docker images that make up Cloud Application Platform are downloaded.

3. Tenant (HTTPS)
Requestor: Cloud Application Platform components
Request: Get tokens
Request credentials: OAuth2 client secret
Request authorization: Varies, based on configured OAuth2 client scopes
Listener: uaa
Response: An OAuth2 refresh token used to interact with other services
Response credentials: TLS certificate
Description: SUSE Cloud Foundry components ask uaa for tokens so they can talk to each other.

4. External (HTTPS)
Requestor: SUSE Cloud Foundry clients
Request: SUSE Cloud Foundry API requests
Request credentials: OAuth2 bearer token
Request authorization: SUSE Cloud Foundry application management
Listener: Cloud Application Platform components
Response: JSON object and HTTP status code
Response credentials: TLS certificate on external endpoint
Description: Cloud Application Platform clients interact with the SUSE Cloud Foundry API (for example, users deploying applications).

5. External (WSS)
Requestor: SUSE Cloud Foundry clients
Request: Log streaming
Request credentials: OAuth2 bearer token
Request authorization: SUSE Cloud Foundry application management
Listener: Cloud Application Platform components
Response: A stream of SUSE Cloud Foundry logs
Response credentials: TLS certificate on external endpoint
Description: SUSE Cloud Foundry clients ask for logs (for example, a user looking at application logs or an administrator viewing system logs).

6. External (SSH)
Requestor: SUSE Cloud Foundry clients, SSH clients
Request: SSH access to an application
Request credentials: OAuth2 bearer token
Request authorization: SUSE Cloud Foundry application management
Listener: Cloud Application Platform components
Response: A duplex connection is created allowing the user to interact with a shell
Response credentials: RSA SSH key on external endpoint
Description: SUSE Cloud Foundry clients open an SSH connection to an application's container (for example, users debugging their applications).

7. External (HTTPS)
Requestor: Helm
Request: Download charts
Request credentials and authorization: refer to kubernetes-charts.suse.com
Listener: kubernetes-charts.suse.com
Response: Helm charts
Response credentials: None
Description: Helm charts for Cloud Application Platform are downloaded.

1.4.4 Detailed Services Diagram

Figure 1.7, “Detailed Services Diagram” presents a more detailed view of SUSE Cloud Foundry services and how they interact with each other. Services labeled in red are unencrypted, while services labeled in green run over HTTPS.


FIGURE 1.7: DETAILED SERVICES DIAGRAM


2 Other Kubernetes Systems

2.1 Kubernetes Requirements

SUSE Cloud Application Platform is designed to run on any Kubernetes system that meets the following requirements:

Kubernetes API version of at least 1.10

Ensure nodes use a minimum kernel version of 3.19.

If your Kubernetes nodes use swap, the kernel parameter swapaccount=1 must be set

docker info must not show aufs as the storage driver

The Kubernetes cluster must have a storage class for SUSE Cloud Application Platform to use. The default storage class is persistent . You may specify a different storage class in your deployment's values.yaml file (which is called scf-config-values.yaml in the examples in this guide), or as a helm command option, for example --set kube.storage_class.persistent=my_storage_class .

kube-dns must be running

ntp , systemd-timesyncd , or chrony must be installed and active

Docker must be configured to allow privileged containers

Privileged containers must be enabled in kube-apiserver . See kube-apiserver (https://kubernetes.io/docs/admin/kube-apiserver) .

For Kubernetes deployments prior to version 1.15, privileged must be enabled in kubelet

The TasksMax property of the containerd service definition must be set to infinity

Helm's Tiller has to be installed and active, with Tiller on the Kubernetes cluster and Helm on your remote administration machine
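Several of the node-level requirements above can be spot-checked with a short script. This is a sketch only; it assumes a POSIX shell on the node, and the storage driver check runs only when Docker is installed there.

```shell
#!/bin/sh
# Sketch: spot-check a node against the Kubernetes requirements above.

# The kernel version must be at least 3.19.
kernel="$(uname -r)"
echo "Kernel version: ${kernel}"

# docker info must not report aufs as the storage driver.
if command -v docker >/dev/null 2>&1; then
  driver="$(docker info --format '{{.Driver}}' 2>/dev/null)"
  echo "Docker storage driver: ${driver:-unknown}"
  if [ "${driver}" = "aufs" ]; then
    echo "WARNING: aufs is not supported by SUSE Cloud Application Platform"
  fi
fi
```

The remaining requirements (privileged containers, TasksMax, Tiller) depend on cluster and service configuration and are covered by the platform-specific deployment chapters.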


II Deploying SUSE Cloud Application Platform

3 Deployment and Administration Notes

4 Using an Ingress Controller with Cloud Application Platform

5 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform

6 Installing the Stratos Web Console

7 SUSE Cloud Application Platform High Availability

8 LDAP Integration

9 Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)

10 Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS)

11 Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)

12 Eirini

13 Setting Up a Registry for an Air Gapped Environment


3 Deployment and Administration Notes

Important things to know before deploying SUSE Cloud Application Platform.

3.1 README First

README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform (https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/)

Read Chapter 3, Deployment and Administration Notes

3.2 Important Changes

Warning: Deprecation of cflinuxfs2 and sle12 Stacks

As of SUSE Cloud Foundry 2.18.0, which is based on cf-deployment version 9.5 , the cflinuxfs2 stack is no longer supported, as was advised in SUSE Cloud Foundry 2.17.1 and Cloud Application Platform 1.4.1. The cflinuxfs2 buildpack is no longer shipped, but if you are upgrading from an earlier version, cflinuxfs2 will not be removed. However, we encourage all administrators to migrate to cflinuxfs3 or sle15 , as newer buildpacks will not work with the deprecated cflinuxfs2 . If you still want to use the older stack, you will need to build an older version of a buildpack for the application to continue working, but this configuration is unsupported. (If you are running on sle12 , that stack will be retired in a future version, so start planning your migration to sle15 . The procedure is described below.)

Migrate applications to the new stack using one of the methods listed below. Note that both methods will cause application downtime. Downtime can be avoided by following a Blue-Green Deployment strategy. See https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html for details. Note that stack association support is available as of cf CLI v6.39.0.

Option 1 - Migrating applications using the Stack Auditor plugin.


Stack Auditor rebuilds the application onto the new stack without a change in the application source code. If you want to move to a new stack with updated code, follow Option 2 below. For additional information about the Stack Auditor plugin, see https://docs.cloudfoundry.org/adminguide/stack-auditor.html .

1. Install the Stack Auditor plugin for the cf CLI. For instructions, see https://docs.cloudfoundry.org/adminguide/stack-auditor.html#install .

2. Identify the stack applications are using. The audit lists all applications in orgs you have access to. To list all applications in your Cloud Application Platform deployment, ensure you are logged in as a user with access to all orgs.

tux > cf audit-stack

For each application requiring migration, perform the steps below.

3. If necessary, switch to the org and space the application is deployed to.

tux > cf target ORG SPACE

4. Change the stack to sle15 .

tux > cf change-stack APP_NAME sle15

5. Identify all buildpacks associated with the sle12 and cflinuxfs2 stacks.

tux > cf buildpacks

6. Remove all buildpacks associated with the sle12 and cflinuxfs2 stacks.

tux > cf delete-buildpack BUILDPACK -s sle12

tux > cf delete-buildpack BUILDPACK -s cflinuxfs2

7. Remove the sle12 and cflinuxfs2 stacks.

tux > cf delete-stack sle12


tux > cf delete-stack cflinuxfs2

Option 2 - Migrating applications using the cf CLI.

Perform the following for all orgs and spaces in your Cloud Application Platform deployment. Ensure you are logged in as a user with access to all orgs.

1. Target an org and space.

tux > cf target ORG SPACE

2. Identify the stack an application in the org and space is using.

tux > cf app APP_NAME

3. Re-push the app with the sle15 stack using one of the following methods.

Push the application with the stack option -s passed.

tux > cf push APP_NAME -s sle15

1. Update the application manifest file to include stack: sle15 . See https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#stack for details.

---
...
stack: sle15

2. Push the application.

tux > cf push APP_NAME

4. Identify all buildpacks associated with the sle12 and cflinuxfs2 stacks.

tux > cf buildpacks

5. Remove all buildpacks associated with the sle12 and cflinuxfs2 stacks.


tux > cf delete-buildpack BUILDPACK -s sle12

tux > cf delete-buildpack BUILDPACK -s cflinuxfs2

6. Remove the sle12 and cflinuxfs2 stacks using the CF API. See https://apidocs.cloudfoundry.org/7.11.0/#stacks for details. List all stacks, then find the GUIDs of the sle12 and cflinuxfs2 stacks.

tux > cf curl /v2/stacks

Delete the sle12 and cflinuxfs2 stacks.

tux > cf curl -X DELETE /v2/stacks/SLE12_STACK_GUID

tux > cf curl -X DELETE /v2/stacks/CFLINUXFS2_STACK_GUID

Warning: Doppler Port Change

As of SUSE Cloud Application Platform 1.4, Doppler has been changed to use port 443 to allow for SSL communication and passthrough to the Ingress controller. Firewall rules should be updated accordingly.

3.3 Usage of Helm Chart Fields in Cloud Application Platform

There are slight differences in the way Cloud Application Platform uses some Helm chart fields compared to what is defined in https://v2.helm.sh/docs/developing_charts . Take note of the following fields:

APP VERSION ( appVersion in Chart.yaml )

In Cloud Application Platform, the APP VERSION field indicates the Cloud Application Platform release that a Helm chart belongs to. This is in contrast to indicating the version of the application as defined in https://v2.helm.sh/docs/developing_charts/#the-appversion-field . For example, in the suse/uaa Helm chart, an APP VERSION of 1.4 refers to Cloud Application Platform release 1.4 and does not indicate that uaa is version 1.4.


CHART VERSION ( version in Chart.yaml )

In Cloud Application Platform, the CHART VERSION field indicates the Helm chart version, the same as defined in https://v2.helm.sh/docs/developing_charts/#charts-and-versioning . For Cloud Application Platform Helm charts, the chart version is also the release number of the corresponding component. For example, in the suse/uaa Helm chart, a CHART VERSION of 2.16.4 also indicates uaa is release 2.16.4.

3.4 Helm Values in scf-config-values.yaml

Take note of the following Helm values when defining your scf-config-values.yaml file.

GARDEN_ROOTFS_DRIVER

For SUSE® CaaS Platform and other Kubernetes deployments where the nodes are based on SUSE Linux Enterprise, the btrfs file system driver must be used; it is selected by default. For Microsoft AKS, Amazon EKS, Google GKE, and other Kubernetes deployments where the nodes are based on other operating systems, the overlay-xfs file system driver must be used.
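As a sketch, a scf-config-values.yaml fragment for a Microsoft AKS, Amazon EKS, or Google GKE deployment would select the overlay-xfs driver like this:

```
env:
  GARDEN_ROOTFS_DRIVER: "overlay-xfs"
```

On SUSE CaaS Platform the value can be omitted, since btrfs is the default.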

3.5 Status of Pods during Deployment

Some Pods Show as Not Running

Some uaa and scf pods perform only deployment tasks, and it is normal for them to show as unready and Completed after they have completed their tasks, as these examples show:

tux > kubectl get pods --namespace uaa
secret-generation-1-z4nlz       0/1    Completed

tux > kubectl get pods --namespace scf
secret-generation-1-m6k2h       0/1    Completed
post-deployment-setup-1-hnpln   0/1    Completed

Some Pods Terminate and Restart during Deployment

When monitoring the status of a deployment, pods can be observed transitioning from a Running state to a Terminating state, then returning to a Running state again.


If a RESTARTS count of 0 is maintained during this process, this is normal behavior and not due to failing pods. It is not necessary to stop the deployment. During deployment, pods modify annotations on themselves via the StatefulSet pod spec. In order to get the correct annotations on the running pod, it is stopped and restarted. Under normal circumstances, this behavior should only result in a pod restarting once.

3.6 Namespaces

Length of release names

Release names (for example, when you run helm install --name ) have a maximum length of 36 characters.

Always install to a fresh namespace

If you are not creating a fresh SUSE Cloud Application Platform installation, but have deleted a previous deployment and are starting over, you must create new namespaces. Do not re-use your old namespaces. The helm delete command does not remove generated secrets from the scf and uaa namespaces as it is not aware of them. These leftover secrets may cause deployment failures. See Section 31.4, “Deleting and Rebuilding a Deployment” for more information.

3.7 DNS Management

The following tables list the minimum DNS requirements to run SUSE Cloud Application Platform, using example.com as the example domain. Your DNS management is platform-dependent; for example, Microsoft AKS assigns IP addresses to your services, which you will map to A records, while Amazon EKS assigns host names, which you will use to create CNAMEs. SUSE CaaS Platform provides the flexibility to manage your name services in nearly any way you wish. The chapters for each platform in this guide provide the relevant DNS instructions.

Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse , is not supported.
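As a sketch, assuming the example.com domain used in the tables below, the corresponding entries in scf-config-values.yaml might look like the following. The UAA_PORT value of 2793 is an assumption based on the UAA port mentioned elsewhere in this guide; when routing uaa through an Ingress controller, port 443 is used instead (see Chapter 4).

```
env:
  DOMAIN: example.com
  # uaa host and port for this deployment; 2793 assumed as the
  # non-Ingress default, 443 when using an Ingress controller
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
```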


Domains Services

uaa.example.com uaa-uaa-public

*.uaa.example.com uaa-uaa-public

example.com router-gorouter-public

*.example.com router-gorouter-public

tcp.example.com tcp-router-tcp-router-public

ssh.example.com diego-ssh-ssh-proxy-public

A SUSE Cloud Application Platform cluster exposes these four services:

Kubernetes service descriptions Kubernetes service names

User Account and Authentication ( uaa ) uaa-uaa-public

Cloud Foundry (CF) TCP routing service tcp-router-tcp-router-public

CF application SSH access diego-ssh-ssh-proxy-public

CF router router-gorouter-public

uaa-uaa-public is in the uaa namespace, and the rest are in the scf namespace.

3.8 Releases and Associated Versions

The supported upgrade method is to install all upgrades, in order. Skipping releases is not supported. This table matches the Helm chart versions to each release, as well as other version-related information.

Each entry lists: the CAP release; the SCF and UAA Helm chart version; the Stratos Helm chart version; the Stratos Metrics Helm chart version; the CF API implemented; and the known compatible cf CLI version with its download URL. "Not listed" indicates no value was given for that release.

1.5.1 (current release): SCF/UAA 2.19.1; Stratos 2.6.1; Stratos Metrics 1.1.1; CF API 2.138.0; cf CLI 6.46.1 (https://github.com/cloudfoundry/cli/releases/tag/v6.46.1)

1.5: SCF/UAA 2.18.0; Stratos 2.5.3; Stratos Metrics 1.1.0; CF API 2.138.0; cf CLI 6.46.1 (https://github.com/cloudfoundry/cli/releases/tag/v6.46.1)

1.4.1: SCF/UAA 2.17.1; Stratos 2.4.0; Stratos Metrics 1.0.0; CF API 2.134.0; cf CLI 6.46.0 (https://github.com/cloudfoundry/cli/releases/tag/v6.46.0)

1.4: SCF/UAA 2.16.4; Stratos 2.4.0; Stratos Metrics 1.0.0; CF API 2.134.0; cf CLI 6.44.1 (https://github.com/cloudfoundry/cli/releases/tag/v6.44.1)

1.3.1: SCF/UAA 2.15.2; Stratos 2.3.0; Stratos Metrics 1.0.0; CF API 2.120.0; cf CLI 6.42.0 (https://github.com/cloudfoundry/cli/releases/tag/v6.42.0)

1.3: SCF/UAA 2.14.5; Stratos 2.2.0; Stratos Metrics 1.0.0; CF API 2.115.0; cf CLI 6.40.1 (https://github.com/cloudfoundry/cli/releases/tag/v6.40.1)

1.2.1: SCF/UAA 2.13.3; Stratos 2.1.0; Stratos Metrics not listed; CF API 2.115.0; cf CLI 6.39.1 (https://github.com/cloudfoundry/cli/releases/tag/v6.39.1)

1.2.0: SCF/UAA 2.11.0; Stratos 2.0.0; Stratos Metrics not listed; CF API 2.112.0; cf CLI 6.38.0 (https://github.com/cloudfoundry/cli/releases/tag/v6.38.0)

1.1.1: SCF/UAA 2.10.1; Stratos 1.1.0; Stratos Metrics not listed; CF API 2.106.0; cf CLI 6.37.0 (https://github.com/cloudfoundry/cli/releases/tag/v6.37.0)

1.1.0: SCF/UAA 2.8.0; Stratos 1.1.0; Stratos Metrics not listed; CF API 2.103.0; cf CLI 6.36.1 (https://github.com/cloudfoundry/cli/releases/tag/v6.36.1)

1.0.1: SCF/UAA 2.7.0; Stratos 1.0.2; Stratos Metrics not listed; CF API 2.101.0; cf CLI 6.34.1 (https://github.com/cloudfoundry/cli/releases/tag/v6.34.1)

1.0: SCF/UAA 2.6.11; Stratos 1.0.0; Stratos Metrics not listed; CF API not listed; cf CLI 6.34.0 (https://github.com/cloudfoundry/cli/releases/tag/v6.34.0)


4 Using an Ingress Controller with Cloud Application Platform

An Ingress controller (see https://kubernetes.io/docs/concepts/services-networking/ingress/ ) is a Kubernetes resource that manages traffic to services in a Kubernetes cluster.

Using an Ingress controller has the following benefits:

Only one load balancer is needed.

SSL can be terminated on the controller.

All traffic can be routed through ports 80 and 443 on the controller. Tracking different ports (for example, port 2793 for UAA, port (4)443 for the Gorouter) is no longer needed. The Ingress routing rules then manage the traffic flow to the appropriate backend services.

Note that only the NGINX Ingress Controller has been verified to be compatible with Cloud Application Platform. Other Ingress controllers may work, but compatibility with Cloud Application Platform is not supported.

4.1 Deploying NGINX Ingress Controller

1. Prepare your Kubernetes cluster according to the documentation for your platform. Proceed to the next step when you reach the uaa deployment phase for your platform. Note that the DNS sections in the platform-specific documentation can be omitted.

For SUSE® CaaS Platform, see Chapter 5, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform.

For Microsoft Azure Kubernetes Service, see Chapter 9, Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS).

For Amazon Elastic Kubernetes Service, see Chapter 10, Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS).

For Google Kubernetes Engine, see Chapter 11, Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE).

2. Install the NGINX Ingress Controller.

29 Deploying NGINX Ingress Controller SUSE Cloud Applic… 1.5.1

Page 45: Guides Administration, and User Deployment, · 11.4 Install Helm Client and Tiller 129 11.5 Default Storage Class 130 11.6 DNS Configuration 130 11.7 Deployment Configuration 131

tux > helm install suse/nginx-ingress \
--name nginx-ingress \
--namespace ingress \
--set rbac.create=true

3. Monitor the progress of the deployment:

tux > watch --color 'kubectl get pods --namespace ingress'

4. After the deployment completes, the Ingress controller service will be deployed with either an external IP or a hostname: for SUSE CaaS Platform, Microsoft AKS, and Google GKE this will be an IP, and for Amazon EKS it will be a hostname. Find the external IP or hostname.

tux > kubectl get services nginx-ingress-controller --namespace ingress

You will get output similar to the following.

NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
nginx-ingress-controller   LoadBalancer   10.63.248.70   35.233.191.177   80:30344/TCP,443:31386/TCP

5. Set up appropriate DNS records (CNAME for Amazon EKS; A records for SUSE CaaS Platform, Microsoft AKS, and Google GKE) corresponding to the controller service IP or hostname with the following entries. Replace example.com with your actual domain.

example.com

*.example.com

uaa.example.com

*.uaa.example.com

6. Obtain a PEM formatted certificate and ensure it includes Subject Alternative Names (SAN) for the uaa and scf domains listed in the previous step.

7. Add the following to your configuration file, scf-config-values.yaml , to trigger the creation of the Ingress objects. Ensure crt and key are encoded in the PEM format. Note the port changes; these ensure all communications to uaa are routed through the Ingress controller.


The nginx.ingress.kubernetes.io/proxy-body-size value indicates the maximum client request body size. Actions such as pushing an application through cf push can result in larger request body sizes depending on the application type you work with. This value will need to be adapted to your workflow.

UAA_PORT: 443
UAA_PUBLIC_PORT: 443
...
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 1024m
  tls:
    crt: |
      -----BEGIN CERTIFICATE-----
      MIIE8jCCAtqgAwIBAgIUT/Yu/Sv8AUl5zHXXEKCy5RKJqmYwDQYJKoZIhvcMOQMM
      [...]
      xC8x/+zB7XlvcRJRio6kk670+25ABP==
      -----END CERTIFICATE-----
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      MIIE8jCCAtqgAwIBAgIUSI02lj2b2ImLy/zMrjNgW5d8EygwQSVJKoZIhvcYEGAW
      [...]
      to2WV7rPMb9W9fd2vVUXKKHTc+PiNg==
      -----END RSA PRIVATE KEY-----

8. Deploy uaa .

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .


Press Ctrl – C to exit the watch command.

9. When all uaa pods are up and ready, verify uaa is working. Pass the CA certificate used to sign the Ingress controller certificate as the value of --cacert .

tux > curl --cacert INGRESS_CONTROLLER_CA_CERT https://uaa.example.com/.well-known/openid-configuration

10. Update the Ingress controller to set up TCP forwarding.

tux > helm upgrade nginx-ingress suse/nginx-ingress \
--reuse-values \
--set "tcp.20000=scf/tcp-router-tcp-router-public:20000" \
--set "tcp.20001=scf/tcp-router-tcp-router-public:20001" \
--set "tcp.20002=scf/tcp-router-tcp-router-public:20002" \
--set "tcp.20003=scf/tcp-router-tcp-router-public:20003" \
--set "tcp.20004=scf/tcp-router-tcp-router-public:20004" \
--set "tcp.20005=scf/tcp-router-tcp-router-public:20005" \
--set "tcp.20006=scf/tcp-router-tcp-router-public:20006" \
--set "tcp.20007=scf/tcp-router-tcp-router-public:20007" \
--set "tcp.20008=scf/tcp-router-tcp-router-public:20008" \
--set "tcp.2222=scf/diego-ssh-ssh-proxy-public:2222"

11. Set up the scf deployment to trust the CA certificate that signed the certificate of the Ingress Controller.

tux > export INGRESS_CA_CERT=$(cat ingress-ca-cert.pem)

12. Deploy scf .

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${INGRESS_CA_CERT}"

Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'


When scf is successfully deployed, the following is observed:

For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

13. When the deployment completes, verify you are able to log in using the cf CLI.

tux > cf login -a https://api.example.com -u username -p password

4.2 Changing the Max Body Size

The nginx.ingress.kubernetes.io/proxy-body-size value indicates the maximum client request body size. Actions such as pushing an application through cf push can result in larger request body sizes depending on the application type you work with. If your current setting is insufficient, you may encounter a 413 Request Entity Too Large error.

The maximum client request body size can be changed to adapt to your workflow using the following steps.

1. Add nginx.ingress.kubernetes.io/proxy-body-size to your scf-config-values.yaml and specify a value.

ingress:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 1024m

2. Set up the scf deployment to trust the CA certificate that signed the certificate of the Ingress Controller.

tux > export INGRESS_CA_CERT=$(cat ingress-ca-cert.pem)

3. Use helm upgrade to apply the change.

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${INGRESS_CA_CERT}" \
--version 2.19.1

4. Monitor the deployment progress using the watch command.



tux > watch --color 'kubectl get pods --namespace scf'



5 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform

Important

README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform (https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/)

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on SUSE CaaS Platform. SUSE CaaS Platform is an enterprise-class container management solution that enables IT and DevOps professionals to more easily deploy, manage, and scale container-based applications and services. It includes Kubernetes to automate lifecycle management of modern applications, and surrounding technologies that enrich Kubernetes and make the platform itself easy to operate. As a result, enterprises that use SUSE CaaS Platform can reduce application delivery cycle times and improve business agility. This chapter describes the steps to prepare a SUSE Cloud Application Platform deployment on SUSE CaaS Platform. See https://documentation.suse.com/suse-caasp/4.1/ for more information on SUSE CaaS Platform.



5.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on SUSE CaaS Platform:

Access to one of the following platforms to deploy SUSE CaaS Platform:

SUSE OpenStack Cloud 8

VMware ESXi 6.7.0.20000

Bare Metal x86_64

A management workstation, which is used to deploy and control a SUSE CaaS Platform cluster, that is capable of running skuba (see https://github.com/SUSE/skuba for installation instructions). The management workstation can be a regular desktop workstation or laptop running SUSE Linux Enterprise 15 SP1 or later.

cf , the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/ .

For SUSE Linux Enterprise and openSUSE systems, install using zypper .

tux > sudo zypper install cf-cli

For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

tux > sudo SUSEConnect --product sle-module-cap-tools/15.1/x86_64

For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html .

kubectl , the Kubernetes command line tool. For more information, refer to https://kubernetes.io/docs/reference/kubectl/overview/ .

For SLE 12 SP3 or 15 systems, install the package kubernetes-client from the Public Cloud module.

For other systems, follow the instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/ .

jq , a command line JSON processor. See https://stedolan.github.io/jq/ for more information and installation instructions.



curl , the Client URL (cURL) command line tool.

sed , the stream editor.

Important

The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating the following features:

Stratos Web Console

For details, see Chapter 6, Installing the Stratos Web Console.

High Availability

For details, see Chapter 7, SUSE Cloud Application Platform High Availability.

LDAP Integration

For details, see Chapter 8, LDAP Integration.

External Log Server Integration

For details, see Chapter 24, Logging.

Managing Certificates

For details, see Chapter 25, Managing Certificates.

Other Features

Refer to the Administration Guide at Part III, “SUSE Cloud Application Platform Administration” for additional features.

5.2 Creating a SUSE CaaS Platform Cluster

When creating a SUSE CaaS Platform cluster, take note of the following general guidelines to ensure there are sufficient resources available to run a SUSE Cloud Application Platform deployment:

Minimum 2.3 GHz processor

2 vCPU per physical core



4 GB RAM per vCPU

Worker nodes need a minimum of 4 vCPU and 16 GB RAM

As a minimum, a SUSE Cloud Application Platform deployment with a basic workload will require:

1 master node

vCPU: 2

RAM: 8 GB

Storage: 60 GB (SSD)

2 worker nodes. Each node configured with:

(v)CPU: 4

RAM: 16 GB

Storage: 100 GB

Persistent storage: 40 GB

For steps to deploy a SUSE CaaS Platform cluster, refer to the SUSE CaaS Platform Deployment Guide at https://documentation.suse.com/suse-caasp/4.1/single-html/caasp-deployment/ .

When proceeding through the instructions, take note of the following to ensure the SUSE CaaS Platform cluster is suitable for a deployment of SUSE Cloud Application Platform:

At the cluster initialization step, do not use the --strict-capability-defaults option when running

tux > skuba cluster init

This ensures the presence of extra CRI-O capabilities compatible with Docker containers. For more details refer to https://documentation.suse.com/suse-caasp/4.1/single-html/caasp-deployment/#_transitioning_from_docker_to_cri_o .

Ensure swap accounting is enabled on all worker nodes. For each node:

1. SSH into the node.



a. If the ssh-agent is running, run:

tux > ssh sles@<NODE_IP_ADDRESS>

b. Without the ssh-agent running, run:

tux > ssh sles@<NODE_IP_ADDRESS> -i <PATH_TO_YOUR_SSH_PRIVATE_KEY>

2. Enable swap accounting on the node.

tux > grep "swapaccount=1" /etc/default/grub || sudo sed -i -r 's|^(GRUB_CMDLINE_LINUX_DEFAULT=)\"(.*.)\"|\1\"\2 swapaccount=1 \"|' /etc/default/grub
tux > sudo grub2-mkconfig -o /boot/grub2/grub.cfg
tux > sudo systemctl reboot
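The sed expression above appends swapaccount=1 inside the quoted GRUB_CMDLINE_LINUX_DEFAULT value. Its effect can be checked offline against a sample line; the sample value here is an assumption, and your /etc/default/grub will differ:

```shell
# A typical /etc/default/grub line (stand-in value for illustration).
line='GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet"'
# Apply the same substitution the guide performs in place with sed -i.
echo "$line" | sed -r 's|^(GRUB_CMDLINE_LINUX_DEFAULT=)\"(.*.)\"|\1\"\2 swapaccount=1 \"|'
# → GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet swapaccount=1 "
```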

5.3 Installing Helm Client and Tiller

Helm is a Kubernetes package manager. It consists of a client and server component, both of which are required in order to install and manage Cloud Application Platform.

The Helm client, helm , is part of the SUSE CaaS Platform package repository and can be installed on your management workstation by running

tux > sudo zypper install helm

Alternatively, refer to the upstream documentation at https://docs.helm.sh/using_helm/#installing-helm . Cloud Application Platform is compatible with both Helm 2 and Helm 3. Examples in this guide are based on Helm 2. To use Helm 3, refer to the Helm documentation at https://helm.sh/docs/ .

Tiller, the Helm server component, needs to be installed on your Kubernetes cluster. To install, follow the instructions at https://documentation.suse.com/suse-caasp/4.1/single-html/caasp-admin/#_installing_tiller . Alternatively, refer to the upstream documentation at https://helm.sh/docs/using_helm/#installing-tiller to install Tiller with a service account, and ensure your installation is appropriately secured according to your requirements as described in https://helm.sh/docs/using_helm/#securing-your-helm-installation .



5.4 Storage Class

SUSE Cloud Application Platform requires a persistent storage class to store persistent data inside a persistent volume (PV). When a PV is provisioned, the storage class' provisioner determines the volume plugin used. Create a storage class using a provisioner (see https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner ) that fits your storage strategy. Examples of provisioners include:

SUSE Enterprise Storage (see https://documentation.suse.com/suse-caasp/4.1/single-html/caasp-admin/#_RBD-dynamic-persistent-volumes )

If you are using SUSE Enterprise Storage you must copy the Ceph admin secret to the uaa and scf namespaces used by SUSE Cloud Application Platform:

tux > kubectl get secret ceph-secret-admin --output json --namespace default | \
sed 's/"namespace": "default"/"namespace": "uaa"/' | kubectl create --filename -
tux > kubectl get secret ceph-secret-admin --output json --namespace default | \
sed 's/"namespace": "default"/"namespace": "scf"/' | kubectl create --filename -
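The sed substitution in these pipelines rewrites the secret's namespace field before kubectl re-creates it. A minimal offline check with a stand-in JSON fragment (an assumption for illustration; the real secret JSON contains many more fields):

```shell
# Stand-in for the JSON kubectl would emit for the Ceph admin secret.
secret='{"metadata": {"name": "ceph-secret-admin", "namespace": "default"}}'
# Rewrite the namespace the same way the pipeline does for uaa.
echo "$secret" | sed 's/"namespace": "default"/"namespace": "uaa"/'
# → {"metadata": {"name": "ceph-secret-admin", "namespace": "uaa"}}
```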

Network File System (see https://kubernetes.io/docs/concepts/storage/volumes/#nfs )

By default SUSE Cloud Application Platform assumes your storage class is named persistent . Examples in this guide follow this assumption. If your storage class uses a different name, adjust the kube.storage_class.persistent value in your configuration file.

5.5 Deployment Configuration

SUSE Cloud Application Platform is configured using Helm values (see https://helm.sh/docs/chart_template_guide/values_files/ ). Helm values can be set as either command line parameters or using a values.yaml file. The following values.yaml file, called scf-config-values.yaml in this guide, provides an example of a SUSE Cloud Application Platform configuration.

Ensure DOMAIN maps to the load balancer configured for your SUSE CaaS Platform cluster (see https://documentation.suse.com/suse-caasp/4.1/single-html/caasp-deployment/#loadbalancer ).



Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse , is not supported.

### example deployment configuration file
### scf-config-values.yaml

env:
  DOMAIN: example.com
  # the UAA prefix is required
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
  GARDEN_ROOTFS_DRIVER: "btrfs"

kube:
  external_ips: ["<CAASP_MASTER_NODE_EXTERNAL_IP>","<CAASP_MASTER_NODE_INTERNAL_IP>"]

  storage_class:
    persistent: "persistent"
    shared: "shared"

  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
    organization: "cap"

secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: password

  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: password
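As the comment in the configuration notes, the UAA prefix is required: UAA_HOST must be the uaa. subdomain of DOMAIN. A quick shell sanity check before deploying, using the example values above (substitute your own):

```shell
# Example values (assumptions); replace with the values from your
# scf-config-values.yaml.
DOMAIN=example.com
UAA_HOST=uaa.example.com
# UAA_HOST must carry the uaa. prefix on the cluster DOMAIN.
if [ "$UAA_HOST" = "uaa.${DOMAIN}" ]; then
  echo "UAA_HOST carries the required uaa. prefix"
else
  echo "UAA_HOST must be uaa.${DOMAIN}" >&2
fi
```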

Take note of the following Helm values when defining your scf-config-values.yaml file.

GARDEN_ROOTFS_DRIVER

For SUSE® CaaS Platform and other Kubernetes deployments where the nodes are based on SUSE Linux Enterprise, the btrfs file system driver must be used, and btrfs is selected by default.



For Microsoft AKS, Amazon EKS, Google GKE, and other Kubernetes deployments where the nodes are based on other operating systems, the overlay-xfs file system driver must be used.

Important: Protect UAA_ADMIN_CLIENT_SECRET

The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.

5.6 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm :

tux > helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879/charts
suse    https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                          CHART VERSION  APP VERSION  DESCRIPTION
suse/cf                       2.19.1         1.5.1        A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql     1.0.1                       A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres  1.0.1                       A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                  2.6.1          1.5.1        A Helm chart for deploying Stratos UI Console
suse/log-agent-rsyslog        1.0.1          8.39.0       Log Agent for forwarding logs of K8s control pl...
suse/metrics                  1.1.1          1.5.1        A Helm chart for Stratos Metrics
suse/minibroker               0.3.1                       A minibroker for your minikube
suse/nginx-ingress            0.28.4         0.15.0       An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                      2.19.1         1.5.1        A Helm chart for SUSE UAA

5.7 Deploying SUSE Cloud Application Platform

5.7.1 Deploy uaa

Note: Embedded uaa in scf

The User Account and Authentication ( uaa ) Server is included as an optional feature of the scf Helm chart. This simplifies the Cloud Application Platform deployment process as a separate installation and/or upgrade of uaa is no longer a prerequisite to installing and/or upgrading scf .

It is important to note that:

This feature should only be used when uaa is not shared with other projects.

You cannot migrate from an existing external uaa to an embedded one. In this situation, enabling this feature during an upgrade will result in a single admin account.

To enable this feature, add the following to your scf-config-values.yaml .

enable:
  uaa: true

When deploying and/or upgrading scf , run helm install and/or helm upgrade and note that:

Installing and/or upgrading uaa using helm install suse/uaa ... and/or helm upgrade is no longer required.

It is no longer necessary to set the UAA_CA_CERT parameter. Previously, this parameter was passed the CA_CERT variable, which was assigned the CA certificate of uaa .

Use Helm to deploy the uaa server:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

Use curl to verify you are able to connect to the uaa OAuth server on the DNS name configured:

tux > curl --insecure https://uaa.example.com:2793/.well-known/openid-configuration

This should return a JSON object, as this abbreviated example shows:

{
  "issuer":"https://uaa.example.com:2793/oauth/token",
  "authorization_endpoint":"https://uaa.example.com:2793/oauth/authorize",
  "token_endpoint":"https://uaa.example.com:2793/oauth/token"
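Since jq is among the prerequisites, it can pull individual fields out of this response for scripting. A sketch using an abbreviated stand-in response (an assumption; pipe the real curl output instead):

```shell
# Abbreviated stand-in for the uaa well-known endpoint response.
response='{"issuer":"https://uaa.example.com:2793/oauth/token","token_endpoint":"https://uaa.example.com:2793/oauth/token"}'
# Extract a single field with jq.
echo "$response" | jq -r '.issuer'
# → https://uaa.example.com:2793/oauth/token
```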

5.7.2 Deploy scf

Before deploying scf , ensure the DNS records for the uaa domains have been set up as specified in the previous section. Next, pass your uaa secret and certificate to scf , then use Helm to deploy scf :

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"
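The CA_CERT step pipes the secret data through base64 --decode because Kubernetes stores secret values base64-encoded. The round trip can be illustrated offline with a stand-in value (the real data comes from the uaa secret):

```shell
# Encode a stand-in string the way Kubernetes stores secret data,
# then decode it the way the CA_CERT command does.
encoded=$(printf '%s' 'example-ca-certificate-data' | base64)
printf '%s\n' "$encoded" | base64 --decode
# → example-ca-certificate-data
```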

Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

When the deployment completes, use the Cloud Foundry command line interface to log in to SUSE Cloud Foundry to deploy and manage your applications. (See Section 30.1, “Using the cf CLI with SUSE Cloud Application Platform”)

5.8 Expanding Capacity of a Cloud Application Platform Deployment on SUSE® CaaS Platform

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 5, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform and have a running Cloud Application Platform deployment on SUSE® CaaS Platform.

1. Add additional nodes to your SUSE® CaaS Platform cluster as described in https://documentation.suse.com/suse-caasp/4.1/html/caasp-admin/_cluster_management.html#adding_nodes .

2. Verify the new nodes are in a Ready state before proceeding.

tux > kubectl get nodes

3. Add or update the following in your scf-config-values.yaml file to increase the number of diego-cell in your Cloud Application Platform deployment. Replace the example value with the number required by your workflow.


sizing:
  diego_cell:
    count: 4

4. Pass your uaa secret and certificate to scf .

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

5. Perform a helm upgrade to apply the change.

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--version 2.19.1

6. Monitor progress of the additional diego-cell pods:

tux > watch --color 'kubectl get pods --namespace scf'


6 Installing the Stratos Web Console

The Stratos user interface (UI) is a modern web-based management application for Cloud Foundry. It provides a graphical management console for both developers and system administrators. Install Stratos with Helm after all of the uaa and scf pods are running.

6.1 Deploy Stratos on SUSE® CaaS Platform

The steps in this section describe how to install Stratos on SUSE® CaaS Platform without an external load balancer, instead mapping a worker node to your SUSE Cloud Application Platform domain as described in Section 5.5, “Deployment Configuration”. These instructions assume you have followed the procedure in Chapter 5, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform, have deployed uaa and scf successfully, and have created a default storage class.

If you are using SUSE Enterprise Storage as your storage back-end, copy the secret into the Stratos namespace:

tux > kubectl get secret ceph-secret-admin --output json --namespace default | \
sed 's/"namespace": "default"/"namespace": "stratos"/' | kubectl create --filename -

You should already have the Stratos charts when you downloaded the SUSE charts repository (see Section 5.6, “Add the Kubernetes Charts Repository”). Search your Helm repository to verify that you have the suse/console chart:

tux > helm search suse
NAME                          CHART VERSION  APP VERSION  DESCRIPTION
suse/cf                       2.19.1         1.5.1        A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql     1.0.1                       A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres  1.0.1                       A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                  2.6.1          1.5.1        A Helm chart for deploying Stratos UI Console
suse/log-agent-rsyslog        1.0.1          8.39.0       Log Agent for forwarding logs of K8s control pl...
suse/metrics                  1.1.1          1.5.1        A Helm chart for Stratos Metrics
suse/minibroker               0.3.1                       A minibroker for your minikube
suse/nginx-ingress            0.28.4         0.15.0       An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                      2.19.1         1.5.1        A Helm chart for SUSE UAA

Use Helm to install Stratos, using the same scf-config-values.yaml configuration file you used to deploy uaa and scf :

Important: Non-default SUSE Cloud Foundry namespace

If you used a non-default namespace for SUSE Cloud Foundry, you need to change the UAA_ZONE environment variable in the scf-config-values.yaml to the correct namespace. You can do so in the env section:

env:
  # Enter the domain you created for your CAP cluster
  DOMAIN: example.com

  # uaa host and port
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
  # scf namespace, default is "scf"
  UAA_ZONE: scf2

Note: Technology Preview Features

Some Stratos releases may include features as part of a technology preview. Technology preview features are for evaluation purposes only and not supported for production use. To see the technology preview features available for a given release, refer to https://github.com/SUSE/stratos/blob/master/CHANGELOG.md .

To enable technology preview features, set the console.techPreview Helm value to true . For example, when running helm install add --set console.techPreview=true .

tux > helm install suse/console \
--name susecf-console \
--namespace stratos \
--values scf-config-values.yaml

You can monitor the status of your stratos deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace stratos'



When stratos is successfully deployed, the following is observed:

For the volume-migration pod, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

When the stratos deployment completes, query with Helm to view your release information:

tux > helm status susecf-console
LAST DEPLOYED: Wed Mar 27 06:51:36 2019
NAMESPACE: stratos
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                           TYPE    DATA  AGE
susecf-console-secret          Opaque  2     3h
susecf-console-mariadb-secret  Opaque  2     3h

==> v1/PersistentVolumeClaim
NAME                                  STATUS  VOLUME                                    CAPACITY  ACCESSMODES  STORAGECLASS  AGE
susecf-console-upgrade-volume         Bound   pvc-711380d4-5097-11e9-89eb-fa163e15acf0  20Mi      RWO          persistent    3h
susecf-console-encryption-key-volume  Bound   pvc-711b5275-5097-11e9-89eb-fa163e15acf0  20Mi      RWO          persistent    3h
console-mariadb                       Bound   pvc-7122200c-5097-11e9-89eb-fa163e15acf0  1Gi       RWO          persistent    3h

==> v1/Service
NAME                    CLUSTER-IP      EXTERNAL-IP                                                   PORT(S)   AGE
susecf-console-mariadb  172.24.137.195  <none>                                                        3306/TCP  3h
susecf-console-ui-ext   172.24.80.22    10.86.101.115,172.28.0.31,172.28.0.36,172.28.0.7,172.28.0.22  8443/TCP  3h

==> v1beta1/Deployment
NAME        DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
stratos-db  1        1        1           1          3h

==> v1beta1/StatefulSet
NAME     DESIRED  CURRENT  AGE
stratos  1        1        3h



Find the external IP address with kubectl get service susecf-console-ui-ext --namespace stratos to access your new Stratos Web console, for example https://10.86.101.115:8443, or use the domain you created for it, and its port, for example https://example.com:8443. Wade through the nag screens about the self-signed certificates and log in as admin with the password you created in scf-config-values.yaml .

FIGURE 6.1: STRATOS UI CLOUD FOUNDRY CONSOLE

6.1.1 Connecting SUSE® CaaS Platform to Stratos

Stratos can show information from your SUSE® CaaS Platform environment.

To enable this, you must register and connect your SUSE® CaaS Platform environment withStratos.


1. In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view.

2. On the Register a new Endpoint view, click the SUSE CaaS Platform button.

3. Enter a memorable name for your SUSE® CaaS Platform environment in the Name field. For example, my-endpoint .



4. Enter the URL of the API server for your Kubernetes environment in the Endpoint Address field. Run kubectl cluster-info and use the value of Kubernetes master as the URL.

tux > kubectl cluster-info
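The API server URL can also be pulled out of the kubectl cluster-info output with a simple pattern match. A sketch against sample output (an assumption; the exact wording and address vary by cluster):

```shell
# Sample first line of kubectl cluster-info output (stand-in values).
info='Kubernetes master is running at https://10.86.101.115:6443'
# Extract the URL to paste into the Endpoint Address field.
echo "$info" | grep -o 'https://[^ ]*'
# → https://10.86.101.115:6443
```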

5. Activate the Skip SSL validation for the endpoint check box if using self-signed certificates.

6. Click Register.

7. Activate the Connect to my-endpoint now (optional) check box.

8. Provide a valid kubeconfig file for your SUSE® CaaS Platform environment.

9. Click Connect.

10. In the Stratos UI, go to Kubernetes in the left-hand side navigation. Information for your SUSE® CaaS Platform environment should now be displayed.

FIGURE 6.2: KUBERNETES ENVIRONMENT INFORMATION ON STRATOS

6.2 Deploy Stratos on Amazon EKS

Before deploying Stratos, ensure uaa and scf have been successfully deployed on Amazon EKS (see Chapter 10, Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS)).



Configure a scoped storage class for your Stratos deployment. Create a configuration file, called scoped-storage-class.yaml in this example, using the following as a template. Specify the region you are using as the zone and be sure to include the letter (for example, the letter a in us-west-2a ) identifier to indicate the Availability Zone used:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2scoped
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: "us-west-2a"
reclaimPolicy: Retain
mountOptions:
  - debug

Create the storage class using the scoped-storage-class.yaml configuration le:

tux > kubectl create --filename scoped-storage-class.yaml

Verify the storage class has been created:

tux > kubectl get storageclass
NAME           PROVISIONER            AGE
gp2 (default)  kubernetes.io/aws-ebs  1d
gp2scoped      kubernetes.io/aws-ebs  1d

Use Helm to install Stratos:

Important: Non-default SUSE Cloud Foundry namespace

If you used a non-default namespace for SUSE Cloud Foundry, you need to change the UAA_ZONE environment variable in the scf-config-values.yaml to the correct namespace. You can do so in the env section:

env:
  # Enter the domain you created for your CAP cluster
  DOMAIN: example.com

  # uaa host and port
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
  # scf namespace, default is "scf"
  UAA_ZONE: scf2

Note: Technology Preview Features

Some Stratos releases may include features as part of a technology preview. Technology preview features are for evaluation purposes only and not supported for production use. To see the technology preview features available for a given release, refer to https://github.com/SUSE/stratos/blob/master/CHANGELOG.md .

To enable technology preview features, set the console.techPreview Helm value to true . For example, when running helm install add --set console.techPreview=true .

tux > helm install suse/console \
--name susecf-console \
--namespace stratos \
--values scf-config-values.yaml \
--set kube.storage_class.persistent=gp2scoped \
--set services.loadbalanced=true

You can monitor the status of your stratos deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace stratos'

When stratos is successfully deployed, the following is observed:

For the volume-migration pod, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

Obtain the host name of the service exposed through the public load balancer:

tux > kubectl get service susecf-console-ui-ext --namespace stratos

Use this host name to create a CNAME record.
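The CNAME record itself is created with your DNS provider's tooling. As an illustrative sketch only (the record name and load balancer host name below are hypothetical placeholders, not values from this guide), the following Python snippet builds a Route 53-style change batch for such a record:

```python
import json

# Hypothetical values: replace with your own domain and the host name
# returned by "kubectl get service susecf-console-ui-ext".
change_batch = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "console.example.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [
                {"Value": "abc123.us-west-2.elb.amazonaws.com"}
            ],
        },
    }]
}

# The resulting JSON could be saved to a file and passed to a tool such as:
#   aws route53 change-resource-record-sets --change-batch file://cname.json
print(json.dumps(change_batch, indent=2))
```

Equivalent record definitions can be created through any DNS provider's console or API; only the CNAME target (the load balancer host name) comes from the kubectl output above.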


FIGURE 6.3: STRATOS UI CLOUD FOUNDRY CONSOLE

6.2.1 Connecting Amazon EKS to Stratos

Stratos can show information from your Amazon EKS environment.

To enable this, you must register and connect your Amazon EKS environment with Stratos.

In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view - you should be shown the "Register new Endpoint" view.

1. In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view.

2. On the Register a new Endpoint view, click the Amazon EKS button.

3. Enter a memorable name for your Amazon EKS environment in the Name field. For example, my-endpoint .

4. Enter the URL of the API server for your Kubernetes environment in the Endpoint Address field. Run kubectl cluster-info and use the value of Kubernetes master as the URL.

tux > kubectl cluster-info

5. Activate the Skip SSL validation for the endpoint check box if using self-signed certificates.

6. Click Register.


7. Activate the Connect to my-endpoint now (optional). check box.

8. Enter the name of your Amazon EKS cluster in the Cluster field.

9. Enter your AWS Access Key ID in the Access Key ID field.

10. Enter your AWS Secret Access Key in the Secret Access Key field.

11. Click Connect.

12. In the Stratos UI, go to Kubernetes in the left-hand side navigation. Information for your Amazon EKS environment should now be displayed.

FIGURE 6.4: KUBERNETES ENVIRONMENT INFORMATION ON STRATOS

6.3 Deploy Stratos on Microsoft AKS

Before deploying Stratos, ensure uaa and scf have been successfully deployed on Microsoft AKS (see Chapter 9, Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)).

Important: Non-default SUSE Cloud Foundry namespace

If you used a non-default namespace for SUSE Cloud Foundry, you need to change the UAA_ZONE environment variable in the scf-config-values.yaml to the correct namespace. You can do so in the env section:

env:
  # Enter the domain you created for your CAP cluster
  DOMAIN: example.com

  # uaa host and port
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793

  # scf namespace, default is "scf"
  UAA_ZONE: scf2

Note: Technology Preview Features

Some Stratos releases may include features as part of a technology preview. Technology preview features are for evaluation purposes only and not supported for production use. To see the technology preview features available for a given release, refer to https://github.com/SUSE/stratos/blob/master/CHANGELOG.md .

To enable technology preview features, set the console.techPreview Helm value to true . For example, when running helm install add --set console.techPreview=true .

Use Helm to install Stratos:

tux > helm install suse/console \
    --name susecf-console \
    --namespace stratos \
    --values scf-config-values.yaml \
    --set services.loadbalanced=true

You can monitor the status of your Stratos deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace stratos'

When Stratos is successfully deployed, the following is observed:

For the volume-migration pod, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

Obtain the IP address of the service exposed through the public load balancer:

tux > kubectl get service susecf-console-ui-ext --namespace stratos


Use this IP address to create an A record.

FIGURE 6.5: STRATOS UI CLOUD FOUNDRY CONSOLE

6.3.1 Connecting Microsoft AKS to Stratos

Stratos can show information from your Microsoft AKS environment.

To enable this, you must register and connect your Microsoft AKS environment with Stratos.

In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view - you should be shown the "Register new Endpoint" view.

1. In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view.

2. On the Register a new Endpoint view, click the Azure AKS button.

3. Enter a memorable name for your Microsoft AKS environment in the Name field. For example, my-endpoint .

4. Enter the URL of the API server for your Kubernetes environment in the Endpoint Address field. Run kubectl cluster-info and use the value of Kubernetes master as the URL.

tux > kubectl cluster-info

5. Activate the Skip SSL validation for the endpoint check box if using self-signed certificates.


6. Click Register.

7. Activate the Connect to my-endpoint now (optional). check box.

8. Provide a valid kubeconfig file for your Microsoft AKS environment.

9. Click Connect.

10. In the Stratos UI, go to Kubernetes in the left-hand side navigation. Information for your Microsoft AKS environment should now be displayed.

FIGURE 6.6: KUBERNETES ENVIRONMENT INFORMATION ON STRATOS

6.4 Deploy Stratos on Google GKE

Before deploying Stratos, ensure uaa and scf have been successfully deployed on Google GKE (see Chapter 11, Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)).

Use Helm to install Stratos:

Important: Non-default SUSE Cloud Foundry namespace

If you used a non-default namespace for SUSE Cloud Foundry, you need to change the UAA_ZONE environment variable in the scf-config-values.yaml to the correct namespace. You can do so in the env section:

env:
  # Enter the domain you created for your CAP cluster
  DOMAIN: example.com

  # uaa host and port
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793

  # scf namespace, default is "scf"
  UAA_ZONE: scf2

Note: Technology Preview Features

Some Stratos releases may include features as part of a technology preview. Technology preview features are for evaluation purposes only and not supported for production use. To see the technology preview features available for a given release, refer to https://github.com/SUSE/stratos/blob/master/CHANGELOG.md .

To enable technology preview features, set the console.techPreview Helm value to true . For example, when running helm install add --set console.techPreview=true .

tux > helm install suse/console \
    --name susecf-console \
    --namespace stratos \
    --values scf-config-values.yaml \
    --set services.loadbalanced=true

You can monitor the status of your Stratos deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace stratos'

When Stratos is successfully deployed, the following is observed:

For the volume-migration pod, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

Obtain the IP address of the service exposed through the public load balancer:

tux > kubectl get service susecf-console-ui-ext --namespace stratos

Use this IP address to create an A record.


FIGURE 6.7: STRATOS UI CLOUD FOUNDRY CONSOLE

6.4.1 Connecting Google GKE to Stratos

Stratos can show information from your Google GKE environment.

To enable this, you must register and connect your Google GKE environment with Stratos.

In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view - you should be shown the "Register new Endpoint" view.

1. In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view.

2. On the Register a new Endpoint view, click the Google Kubernetes Engine button.

3. Enter a memorable name for your Google GKE environment in the Name field. For example, my-endpoint .

4. Enter the URL of the API server for your Kubernetes environment in the Endpoint Address field. Run kubectl cluster-info and use the value of Kubernetes master as the URL.

tux > kubectl cluster-info

5. Activate the Skip SSL validation for the endpoint check box if using self-signed certificates.

6. Click Register.


7. Activate the Connect to my-endpoint now (optional). check box.

8. Provide a valid Application Default Credentials file for your Google GKE environment. Generate the file using the command below. The command saves the credentials to a file named application_default_credentials.json and outputs the path of the file.

tux > gcloud auth application-default login

9. Click Connect.

10. In the Stratos UI, go to Kubernetes in the left-hand side navigation. Information for your Google GKE environment should now be displayed.

FIGURE 6.8: KUBERNETES ENVIRONMENT INFORMATION ON STRATOS

6.5 Upgrading Stratos

For instructions to upgrade Stratos, follow the process described in Chapter 14, Upgrading SUSE Cloud Application Platform. Take note that uaa and scf , the other components that, along with Stratos, make up a Cloud Application Platform release, are upgraded prior to upgrading Stratos.


6.6 Stratos Metrics

Stratos Metrics provides a Helm chart for deploying Prometheus (see https://prometheus.io/ ) and the following metrics exporters to Kubernetes:

Cloud Foundry Firehose Exporter (enabled by default)

Cloud Foundry Exporter (disabled by default)

Kubernetes State Metrics Exporter (disabled by default)

The Stratos Metrics Helm chart deploys a Prometheus server and the configured Exporters and fronts the Prometheus server with an nginx server to provide authenticated access to Prometheus (currently basic authentication over HTTPS).

When required by configuration, it also contains an initialization script that will set up users in the UAA that have the correct scopes/permissions to read data from the Cloud Foundry Firehose and/or API.

Lastly, the Helm chart generates a small metadata file in the root of the nginx server that is used by Stratos to determine which Cloud Foundry and Kubernetes clusters the Prometheus server is providing metrics for.

To learn more about Stratos Metrics and its full list of configuration options, see https://github.com/SUSE/stratos-metrics .
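Since access to Prometheus is fronted by nginx with basic authentication over HTTPS, any client must send a standard Basic Authorization header. The following sketch shows how that header value is constructed; the credentials are placeholders corresponding to the metrics.username and metrics.password Helm values described later in this section:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the HTTP Basic Authorization header value used when
    talking to the nginx server that fronts Prometheus."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Placeholder credentials, matching the example values.yaml below.
header = basic_auth_header("username", "password")
print(header)  # → Basic dXNlcm5hbWU6cGFzc3dvcmQ=
```

A monitoring client (curl, a Prometheus federation scrape, or Stratos itself) sends this header with every HTTPS request to the nginx endpoint.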

6.6.1 Exporter Configuration

6.6.1.1 Firehose Exporter

This exporter can be enabled/disabled via the Helm value firehoseExporter.enabled . By default this exporter is enabled.

You must provide the following Helm chart values for this Exporter to work correctly:

cloudFoundry.apiEndpoint - API Endpoint of the Cloud Foundry API Server

cloudFoundry.uaaAdminClient - Admin client of the UAA used by the Cloud Foundry server

cloudFoundry.uaaAdminClientSecret - Admin client secret of the UAA used by the Cloud Foundry server

cloudFoundry.skipSslVerification - Whether to skip SSL verification when communicating with Cloud Foundry and the UAA APIs

You can scale the firehose nozzle in Stratos Metrics by specifying the following override:

firehoseExporter:
  instances: 1

Please note, the number of firehose nozzles should be proportional to the number of Traffic Controllers in your Cloud Foundry (see the documentation at https://docs.cloudfoundry.org/loggregator/log-ops-guide.html ). Otherwise, Loggregator will not split the firehose between the nozzles.
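To illustrate the proportionality rule above, here is a hypothetical helper (not part of Stratos Metrics) that derives an instance count for the firehoseExporter override from the number of Traffic Controllers; the 1:1 default ratio is an assumption for illustration, not a documented recommendation:

```python
def recommended_nozzle_count(traffic_controllers: int,
                             nozzles_per_controller: int = 1) -> int:
    """Illustrative only: keep the number of firehose nozzles
    proportional to the number of Loggregator Traffic Controllers so
    the firehose is split evenly between the nozzles."""
    if traffic_controllers < 1:
        raise ValueError("a deployment has at least one Traffic Controller")
    return traffic_controllers * nozzles_per_controller

# For example, 2 Traffic Controllers at a 1:1 ratio:
print(recommended_nozzle_count(2))  # → 2
```

The returned value would then be used as firehoseExporter.instances in the override shown above.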

6.6.1.2 Cloud Foundry Exporter

This exporter can be enabled/disabled via the Helm value cfExporter.enabled . By default this exporter is disabled.

You must provide the following Helm chart values for this Exporter to work correctly:

cloudFoundry.apiEndpoint - API Endpoint of the Cloud Foundry API Server

cloudFoundry.uaaAdminClient - Admin client of the UAA used by the Cloud Foundry server

cloudFoundry.uaaAdminClientSecret - Admin client secret of the UAA used by the Cloud Foundry server

cloudFoundry.skipSslVerification - Whether to skip SSL verification when communicating with Cloud Foundry and the UAA APIs

6.6.1.3 Kubernetes Monitoring

This exporter can be enabled/disabled via the Helm value prometheus.kubeStateMetrics.enabled . By default this exporter is disabled.


You must provide the following Helm chart values for this Exporter to work correctly:

kubernetes.apiEndpoint - The API Endpoint of the Kubernetes API Server

6.6.2 Install Stratos Metrics with Helm

In order to display metrics data with Stratos, you need to deploy the stratos-metrics Helm chart. As with deploying Stratos, you should deploy the metrics Helm chart using the same scf-config-values.yaml file that was used for deploying scf and uaa .

Additionally, create a new YAML file, named stratos-metrics-values.yaml in this example, for configuration options specific to Stratos Metrics.

The following is an example stratos-metrics-values.yaml file.

cloudFoundry:
  apiEndpoint: https://api.example.com
  uaaAdminClient: admin
  uaaAdminClientSecret: password
  skipSslVerification: "true"
env:
  DOPPLER_PORT: 443
kubernetes:
  apiEndpoint: kube_server_address.example.com
metrics:
  username: username
  password: password
prometheus:
  kubeStateMetrics:
    enabled: true
  server:
    storageClass: "persistent"
services:
  loadbalanced: true

where:

kubernetes.apiEndpoint is the same URL that you used when registering your Kubernetes environment with Stratos (the Kubernetes API Server URL).

prometheus.server.storageClass is the storage class to be used by Stratos Metrics. If a storage class is not assigned, the default storage class will be used. If a storage class is not specified and there is no default storage class, the prometheus pod will fail to start.


metrics.username is the username used to authenticate with the nginx server that fronts Prometheus. This username is also used during the Section 6.6.3, “Connecting Stratos Metrics” process.

metrics.password is the password used to authenticate with the nginx server that fronts Prometheus. This password is also used during the Section 6.6.3, “Connecting Stratos Metrics” process. Ensure a secure password is chosen.

services.loadbalanced is set to true if your Kubernetes deployment supports automatic configuration of a load balancer (for example, AKS, EKS, and GKE).

If you are using SUSE Enterprise Storage, you must copy the Ceph admin secret to the metrics namespace:

tux > kubectl get secret ceph-secret-admin --output json --namespace default | \
sed 's/"namespace": "default"/"namespace": "metrics"/' | \
kubectl create --filename -

Install Metrics with:

tux > helm install suse/metrics \
    --name susecf-metrics \
    --namespace metrics \
    --values scf-config-values.yaml \
    --values stratos-metrics-values.yaml

Monitor progress:

tux > watch --color 'kubectl get pods --namespace metrics'

When all statuses show Ready , press Ctrl – C to exit and to view your release information.

6.6.3 Connecting Stratos Metrics

When Stratos Metrics is connected to Stratos, additional views are enabled that show metrics metadata that has been ingested into the Stratos Metrics Prometheus server.

To enable this, you must register and connect your Stratos Metrics instance with Stratos.

In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view - you should be shown the "Register new Endpoint" view. Next:

1. Select Metrics from the Endpoint Type dropdown.


2. Enter a memorable name for your environment in the Name field.

3. Enter the Endpoint Address. Use the following to find the endpoint value.

tux > kubectl get service susecf-metrics-metrics-nginx --namespace metrics

For Microsoft AKS, Amazon EKS, and Google GKE deployments which use a load balancer, the output will be similar to the following:

NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)         AGE
susecf-metrics-metrics-nginx   LoadBalancer   10.0.202.180   52.170.253.229   443:30263/TCP   21h

Prepend https:// to the public IP of the load balancer, and enter it into the Endpoint Address field. Using the values from the example above, https://52.170.253.229 is entered as the endpoint address.

For SUSE CaaS Platform deployments which do not use a load balancer, the output will be similar to the following:

NAME                           TYPE       CLUSTER-IP       EXTERNAL-IP                 PORT(S)         AGE
susecf-metrics-metrics-nginx   NodePort   172.28.107.209   10.86.101.115,172.28.0.31   443:30685/TCP   21h

Prepend https:// to the external IP of your node, followed by the nodePort , and enter it into the Endpoint Address field. Using the values from the example above, https://10.86.101.115:30685 is entered as the endpoint address.

4. Check the Skip SSL validation for the endpoint checkbox if using self-signed certificates.

5. Click Finish.
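The endpoint address construction from step 3 can be sketched as follows; the helper function and its name are illustrative, not part of Stratos:

```python
from typing import Optional

def metrics_endpoint_address(external_ip: str,
                             node_port: Optional[int] = None) -> str:
    """Illustrative sketch: build the Endpoint Address from the
    service's external IP, appending the nodePort when the service is
    of type NodePort rather than LoadBalancer."""
    if node_port is None:
        # Load-balanced service (AKS, EKS, GKE): just prepend https://
        return f"https://{external_ip}"
    # NodePort service (e.g. SUSE CaaS Platform): include the port
    return f"https://{external_ip}:{node_port}"

print(metrics_endpoint_address("52.170.253.229"))        # → https://52.170.253.229
print(metrics_endpoint_address("10.86.101.115", 30685))  # → https://10.86.101.115:30685
```

The two calls mirror the LoadBalancer and NodePort examples shown above.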

The view will refresh to show the new endpoint in the disconnected state. Next you will need to connect to this endpoint.

In the table of endpoints, click the overflow menu icon alongside the endpoint that you added above, then:

1. Click on Connect in the dropdown menu.

2. Enter the username for your Stratos Metrics instance. This will be the metrics.username defined in your stratos-metrics-values.yaml file.


3. Enter the password for your Stratos Metrics instance. This will be the metrics.password defined in your stratos-metrics-values.yaml file.

4. Click Connect.

Once connected, you should see that the name of your Metrics endpoint is a hyperlink and clicking on it should show basic metadata about the Stratos Metrics endpoint.

Metrics data and views should now be available in the Stratos UI, for example:

On the Instances tab for an Application, the table should show an additional Cell column to indicate which Diego Cell the instance is running on. This should be clickable to navigate to a Cell view showing Cell information and metrics.

FIGURE 6.9: CELL COLUMN ON APPLICATION INSTANCE TAB AFTER CONNECTING STRATOS METRICS

On the view for an Application there should be a new Metrics tab that shows Application metrics.


FIGURE 6.10: APPLICATION METRICS TAB AFTER CONNECTING STRATOS METRICS

On the Kubernetes views, views such as the Node view should show an additional Metrics tab with metric information.

FIGURE 6.11: NODE METRICS ON THE STRATOS KUBERNETES VIEW


7 SUSE Cloud Application Platform High Availability

7.1 Configuring Cloud Application Platform for High Availability

There are two ways to make your SUSE Cloud Application Platform deployment highly available. The simplest method is to set the config.HA parameter in your deployment configuration file to true . The second method is to create custom configuration files with your own sizing values.

7.1.1 Prerequisites

A running deployment of SUSE Cloud Application Platform where the database roles, mysql , are single availability for both uaa and scf . Starting with single availability mysql roles, then transitioning to high availability mysql roles, allows other resources dependent on the database to come up properly. It is not recommended for initial deployments of SUSE Cloud Application Platform to have the mysql roles in high availability mode.

7.1.2 Finding Default and Allowable Sizing Values

The sizing: section in the Helm values.yaml files for each chart describes which roles can be scaled, and the scaling options for each role. You may use helm inspect to read the sizing: section in the Helm charts:

tux > helm inspect suse/uaa | less +/sizing:
tux > helm inspect suse/cf | less +/sizing:

Another way is to use Perl to extract the information for each role from the sizing: section. The following example is for the uaa namespace.

tux > helm inspect values suse/uaa | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^  [a-z]/ || /high avail|scale|count/ }'
199:  # The mysql instance group can scale between 1 and 7 instances.
200:  # For high availability it needs at least 3 instances.
201:    count: ~
226:  # The mysql_proxy instance group can scale between 1 and 5 instances.
227:  # For high availability it needs at least 2 instances.
228:    count: ~
247:  # The secret_generation instance group cannot be scaled.
248:    count: ~
276:  # for managing user accounts and for registering OAuth2 clients, as well as
282:  # The uaa instance group can scale between 1 and 65535 instances.
283:  # For high availability it needs at least 2 instances.
284:    count: ~

The default values.yaml files are also included in this guide at Section A.2, “Complete suse/uaa values.yaml File” and Section A.3, “Complete suse/scf values.yaml File”.

7.1.3 Simple High Availability Configuration

Important: Always install to a fresh namespace

If you are not creating a fresh SUSE Cloud Application Platform installation, but have deleted a previous deployment and are starting over, you must create new namespaces. Do not re-use your old namespaces. The helm delete command does not remove generated secrets from the scf and uaa namespaces as it is not aware of them. These leftover secrets may cause deployment failures. See Section 31.4, “Deleting and Rebuilding a Deployment” for more information.

The simplest way to make your SUSE Cloud Application Platform deployment highly available is to set config.HA to true . This changes the size of all roles to the minimum required for a highly available deployment. In your deployment configuration file, scf-config-values.yaml , include the following.

config:
  # Flag to activate high-availability mode
  HA: true

Use helm install to deploy or helm upgrade to apply the change to an existing deployment.

tux > helm install suse/uaa \
    --name susecf-uaa \
    --namespace uaa \
    --values scf-config-values.yaml

When config.HA is set to true , instance groups can be made to allow sizing values other than the minimum required for High Availability mode. To do so, set the config.HA_strict flag to false in conjunction with specifying the count for a given instance group in the sizing: section. As an example, the following configuration enables High Availability mode while only using 1 instance of the mysql instance group.

config:
  # Flag to activate high-availability mode
  HA: true
  HA_strict: false
sizing:
  mysql:
    count: 1

7.1.4 Example Custom High Availability Configurations

The following two example High Availability configuration files are for the uaa and scf namespaces. The example values are not meant to be copied, as these depend on your particular deployment and requirements.

Note: Embedded uaa in scf

The User Account and Authentication ( uaa ) Server is included as an optional feature of the scf Helm chart. This simplifies the Cloud Application Platform deployment process as a separate installation and/or upgrade of uaa is no longer a prerequisite to installing and/or upgrading scf .

It is important to note that:

This feature should only be used when uaa is not shared with other projects.

You cannot migrate from an existing external uaa to an embedded one. In this situation, enabling this feature during an upgrade will result in a single admin account.

To enable this feature, add the following to your scf-config-values.yaml .

enable:
  uaa: true

When deploying and/or upgrading scf , run helm install and/or helm upgrade and note that:

Installing and/or upgrading uaa using helm install suse/uaa ... and/or helm upgrade is no longer required.

It is no longer necessary to set the UAA_CA_CERT parameter. Previously, this parameter was passed the CA_CERT variable, which was assigned the CA certificate of uaa .

The first example is for the uaa namespace, uaa-sizing.yaml . The values specified are the minimum required for a High Availability deployment, equivalent to setting config.HA to true.

sizing:
  mysql:
    count: 3
  mysql_proxy:
    count: 2
  uaa:
    count: 2

The second example is for scf , scf-sizing.yaml . The values specified are the minimum required for a High Availability deployment, equivalent to setting config.HA to true.

sizing:
  adapter:
    count: 2
  api_group:
    count: 2
  cc_clock:
    count: 2
  cc_uploader:
    count: 2
  cc_worker:
    count: 2
  cf_usb_group:
    count: 2
  diego_api:
    count: 2
  diego_brain:
    count: 2
  diego_cell:
    count: 3
  diego_ssh:
    count: 2
  doppler:
    count: 2
  locket:
    count: 2
  log_api:
    count: 2
  log_cache_scheduler:
    count: 2
  loggregator_agent:
    count: 2
  mysql:
    count: 3
  mysql_proxy:
    count: 2
  nats:
    count: 2
  nfs_broker:
    count: 2
  router:
    count: 2
  routing_api:
    count: 2
  syslog_scheduler:
    count: 2
  tcp_router:
    count: 2

When using custom sizing configurations, take note that the mysql instance group, for both uaa and scf , must have an odd number of instances.
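As a sketch of that constraint, a hypothetical pre-deployment sanity check on a custom sizing file could look like the following; the helper is illustrative and not part of Cloud Application Platform:

```python
def validate_mysql_count(sizing: dict) -> None:
    """Illustrative check: the mysql instance group in a custom sizing
    configuration must use an odd instance count."""
    count = sizing.get("mysql", {}).get("count", 1)
    if count % 2 == 0:
        raise ValueError(f"mysql count must be odd, got {count}")

# The uaa-sizing.yaml example above uses 3 mysql instances, which passes:
uaa_sizing = {"mysql": {"count": 3}, "mysql_proxy": {"count": 2}}
validate_mysql_count(uaa_sizing)
```

A count of 2 would be rejected, while the single-instance and three-instance configurations shown in this section are accepted.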

Important: Always install to a fresh namespace

If you are not creating a fresh SUSE Cloud Application Platform installation, but have deleted a previous deployment and are starting over, you must create new namespaces. Do not re-use your old namespaces. The helm delete command does not remove generated secrets from the scf and uaa namespaces as it is not aware of them. These leftover secrets may cause deployment failures. See Section 31.4, “Deleting and Rebuilding a Deployment” for more information.

73 Example Custom High Availability Configurations SUSE Cloud Applic… 1.5.1

Page 89: Guides Administration, and User Deployment, · 11.4 Install Helm Client and Tiller 129 11.5 Default Storage Class 130 11.6 DNS Configuration 130 11.7 Deployment Configuration 131

After creating your configuration files, follow the steps in Section 5.5, “Deployment Configuration” until you get to Section 5.7.1, “Deploy uaa”. Then deploy uaa with this command:

tux > helm install suse/uaa \
    --name susecf-uaa \
    --namespace uaa \
    --values scf-config-values.yaml \
    --values uaa-sizing.yaml

Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

When the uaa deployment completes, deploy SCF with these commands:

tux > SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
    --name susecf-scf \
    --namespace scf \
    --values scf-config-values.yaml \
    --values scf-sizing.yaml \
    --set "secrets.UAA_CA_CERT=${CA_CERT}"

7.2 Availability Zones

Availability Zones (AZ) are logical arrangements of compute nodes within a region that provide isolation from each other. A deployment that is distributed across multiple AZs can use this separation to increase resiliency against downtime in the event a given zone experiences issues.


Refer to the following for platform-specific information about availability zones:

https://azure.microsoft.com/en-ca/updates/availability-zones-az-support-for-aks/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

https://cloud.google.com/compute/docs/regions-zones/

https://docs.cloudfoundry.org/concepts/high-availability.html#azs

7.2.1 Availability Zone Information Handling

In Cloud Application Platform, availability zone handling is done using the AZ_LABEL_NAME Helm chart value. By default, AZ_LABEL_NAME is set to failure-domain.beta.kubernetes.io/zone , which is the predefined Kubernetes label for availability zones. On most public cloud providers, nodes will already have this label set and availability zone support will work without further configuration. For on-premise installations, it is recommended that nodes are labeled with the same label.

Run the following to see the labels on your nodes.

tux > kubectl get nodes --show-labels

To label a node, use kubectl label nodes . For example:

tux > kubectl label nodes cap-worker-1 failure-domain.beta.kubernetes.io/zone=zone-1

To see which node and availability zone a given diego-cell pod is assigned to, refer to the following example:

tux > kubectl logs diego-cell-0 --namespace scf | grep ^AZ

For more information on the failure-domain.beta.kubernetes.io/zone label, see https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone .

Note that due to a bug in Cloud Application Platform 1.4 and earlier, this label was not working for AZ_LABEL_NAME .


Important: Performance with Availability Zones
For the best performance, all availability zones should have a similar number of nodes, because app instances are distributed evenly across zones, so that each zone has about the same number of instances.
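One way to check how your nodes are spread across zones is to list the zone label as an extra column and count nodes per zone. This is a generic sketch, not a command from this guide, and it assumes the default AZ_LABEL_NAME label shown above:

```shell
# Show each node with its availability zone label as a column.
kubectl get nodes --label-columns failure-domain.beta.kubernetes.io/zone

# Count nodes per zone; the counts should be roughly equal.
kubectl get nodes \
  --output jsonpath='{range .items[*]}{.metadata.labels.failure-domain\.beta\.kubernetes\.io/zone}{"\n"}{end}' \
  | sort | uniq -c
```

Note that dots inside a label key must be escaped with `\.` in kubectl JSONPath expressions.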


8 LDAP Integration

SUSE Cloud Application Platform can be integrated with identity providers (https://docs.cloudfoundry.org/uaa/identity-providers.html) to help manage authentication of users. The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components and steps in order to configure the integration. See User Account and Authentication LDAP Integration (https://docs.cloudfoundry.org/uaa/uaa-user-management.html) for more information.

8.1 Prerequisites

The following prerequisites are required in order to complete an LDAP integration with SUSE Cloud Application Platform.

cf , the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/ .
For SUSE Linux Enterprise and openSUSE systems, install using zypper .

tux > sudo zypper install cf-cli

For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html .

uaac , the Cloud Foundry uaa command line client (UAAC). See https://docs.cloudfoundry.org/uaa/uaa-user-management.html for more information and installation instructions.
On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem.

tux > sudo zypper install ruby-devel gcc-c++
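With those build dependencies in place, the UAAC itself is installed as a Ruby gem. The following is a sketch of the usual installation; whether sudo is required depends on how Ruby is set up on your system:

```shell
# Install the UAAC Ruby gem (requires ruby-devel and gcc-c++, installed above).
sudo gem install cf-uaac

# Verify the client is available.
uaac help
```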

An LDAP server and the credentials for a user/service account with permissions to search the directory.


8.2 Example LDAP Integration

Run the following commands to complete the integration of your Cloud Application Platform deployment and LDAP server. In this example, scf has been deployed to a namespace named scf .

1. Use UAAC to target your uaa server.

tux > uaac target --skip-ssl-validation https://uaa.example.com:2793

2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your scf-config-values.yaml file.

tux > uaac token client get admin --secret password

3. Create the LDAP identity provider. A 201 response will be returned when the identity provider is successfully created. See the UAA API Reference (http://docs.cloudfoundry.org/api/uaa/version/4.21.0/index.html#ldap) and Cloud Foundry UAA-LDAP Documentation (https://github.com/cloudfoundry/uaa/blob/4.21.0/docs/UAA-LDAP.md) for information regarding the request parameters and additional options available to configure your identity provider.
The following is an example of a uaac curl command and its request parameters used to create an identity provider. Specify the parameters according to your LDAP server's credentials and directory structure. Ensure the user specified in bindUserDn has permissions to search the directory.

tux > uaac curl /identity-providers?rawConfig=true \
  --request POST \
  --insecure \
  --header 'Content-Type: application/json' \
  --header 'X-Identity-Zone-Subdomain: scf' \
  --data '{
    "type" : "ldap",
    "config" : {
      "ldapProfileFile" : "ldap/ldap-search-and-bind.xml",
      "baseUrl" : "ldap://ldap.example.com:389",
      "bindUserDn" : "cn=admin,dc=example,dc=com",
      "bindPassword" : "password",
      "userSearchBase" : "dc=example,dc=com",
      "userSearchFilter" : "uid={0}",
      "ldapGroupFile" : "ldap/ldap-groups-map-to-scopes.xml",
      "groupSearchBase" : "dc=example,dc=com",
      "groupSearchFilter" : "member={0}"
    },
    "originKey" : "ldap",
    "name" : "My LDAP Server",
    "active" : true
  }'

4. Verify the LDAP identity provider has been created in the scf zone. The output should now contain an entry for the ldap type.

tux > uaac curl /identity-providers --insecure --header "X-Identity-Zone-Id: scf"

5. Use the cf CLI to target your SUSE Cloud Application Platform deployment.

tux > cf api --skip-ssl-validation https://api.example.com

6. Log in as an administrator.

tux > cf login
API endpoint: https://api.example.com

Email> admin

Password>
Authenticating...
OK

7. Create users associated with your LDAP identity provider.

tux > cf create-user username --origin ldap
Creating user username...
OK

TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'.

8. Assign the user a role. Roles define the permissions a user has for a given org or space, and a user can be assigned multiple roles. See Orgs, Spaces, Roles, and Permissions (https://docs.cloudfoundry.org/concepts/roles.html) for available roles and their corresponding permissions. The following example assumes that an org named Org and a space named Space have already been created.

tux > cf set-space-role username Org Space SpaceDeveloper
Assigning role RoleSpaceDeveloper to user username in org Org / space Space as admin...
OK

tux > cf set-org-role username Org OrgManager
Assigning role OrgManager to user username in org Org as admin...
OK

9. Verify the user can log into your SUSE Cloud Application Platform deployment using their associated LDAP server credentials.

tux > cf login
API endpoint: https://api.example.com

Email> username

Password>
Authenticating...
OK

API endpoint: https://api.example.com (API version: 2.115.0)
User: [email protected]

If the LDAP identity provider is no longer needed, it can be removed with the following steps.

1. Obtain the ID of your identity provider.

tux > uaac curl /identity-providers \
  --insecure \
  --header "Content-Type:application/json" \
  --header "Accept:application/json" \
  --header "X-Identity-Zone-Id:scf"

2. Delete the identity provider.

tux > uaac curl /identity-providers/IDENTITY_PROVIDER_ID \
  --request DELETE \
  --insecure \
  --header "X-Identity-Zone-Id:scf"


9 Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)

Important

README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:
Read the Release Notes: Release Notes SUSE Cloud Application Platform (https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/)

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on Microsoft Azure Kubernetes Service (AKS), Microsoft's managed Kubernetes service. This chapter describes the steps for preparing Azure for a SUSE Cloud Application Platform deployment, with a basic Azure load balancer. Note that you will not create any DNS records until after uaa is deployed. (See Azure Kubernetes Service (AKS) (https://azure.microsoft.com/en-us/services/container-service/) for more information.)

In Kubernetes terminology a node used to be a minion, which was the name for a worker node. Now the correct term is simply node (see https://kubernetes.io/docs/concepts/architecture/nodes/ ). This can be confusing, as computing nodes have traditionally been defined as any device in a network that has an IP address. In Azure they are called agent nodes. In this chapter we call them agent nodes or Kubernetes nodes.

9.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on AKS:

az , the Azure command line client. See https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest for more information and installation instructions.

A Microsoft Azure account. For details, refer to https://azure.microsoft.com .

The name of the SSH key that is attached to your Azure account.


Your Azure account has sufficient quota. The minimal installation described in this chapter requires 24 vCPUs. If your account has insufficient quota, you can request a quota increase by going to https://docs.microsoft.com/en-us/azure/azure-supportability/resource-manager-core-quotas-request .

Ensure nodes use a minimum kernel version of 3.19.
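Once you have kubectl access to a cluster, one way to confirm the kernel version is to read it from the node status. This is a generic check sketched for convenience, not a command from this guide:

```shell
# Print each node name with the kernel version it reports;
# all values should be 3.19 or newer.
kubectl get nodes \
  --output jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kernelVersion}{"\n"}{end}'
```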

cf , the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/ .
For SUSE Linux Enterprise and openSUSE systems, install using zypper .

tux > sudo zypper install cf-cli

For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html .

kubectl , the Kubernetes command line tool. For more information, refer to https://kubernetes.io/docs/reference/kubectl/overview/ .
For SLE 12 SP3 or 15 systems, install the package kubernetes-client from the Public Cloud module.
For other systems, follow the instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/ .

jq , a command line JSON processor. See https://stedolan.github.io/jq/ for more information and installation instructions.

curl , the Client URL (cURL) command line tool.

sed , the stream editor.

Important
The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating the following features:

Stratos Web Console


For details, see Chapter 6, Installing the Stratos Web Console.

High Availability

For details, see Chapter 7, SUSE Cloud Application Platform High Availability.

LDAP Integration

For details, see Chapter 8, LDAP Integration.

External Log Server Integration

For details, see Chapter 24, Logging.

Managing Certificates

For details, see Chapter 25, Managing Certificates.

Other Features

Refer to the Administration Guide at Part III, “SUSE Cloud Application Platform Administration” for additional features.

Log in to your Azure Account:

tux > az login

Your Azure user needs the User Access Administrator role. Check your assigned roles with the az command:

tux > az role assignment list --assignee login-name
[...]
"roleDefinitionName": "User Access Administrator",

If you do not have this role, then you must request it from your Azure administrator.

You need your Azure subscription ID. Extract it with az :

tux > az account show --query "{ subscription_id: id }"
{
  "subscription_id": "a900cdi2-5983-0376-s7je-d4jdmsif84ca"
}

Replace the example subscription-id in the next command with your subscription-id . Then export it as an environment variable and set it as the current subscription:

tux > export SUBSCRIPTION_ID="a900cdi2-5983-0376-s7je-d4jdmsif84ca"


tux > az account set --subscription $SUBSCRIPTION_ID

Verify that the Microsoft.Network , Microsoft.Storage , Microsoft.Compute , and Microsoft.ContainerService providers are enabled:

tux > az provider list | egrep --word-regexp 'Microsoft.Network|Microsoft.Storage|Microsoft.Compute|Microsoft.ContainerService'

If any of these are missing, enable them with the az provider register --name provider command.
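For example, the four providers can be registered in one pass. This is a sketch; registration is idempotent, so re-registering an already enabled provider is harmless:

```shell
# Register each required provider.
for provider in Microsoft.Network Microsoft.Storage \
                Microsoft.Compute Microsoft.ContainerService; do
  az provider register --name $provider
done

# Registration is asynchronous; poll until the state is "Registered".
az provider show --name Microsoft.ContainerService --query registrationState
```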

9.2 Create Resource Group and AKS Instance

Now you can create a new Azure resource group and AKS instance. Define the required variables as environment variables in a file called env.sh . This helps to speed up the setup, and to reduce errors. Verify your environment variables at any time by using source to load the file, then running echo $VARNAME , for example:

tux > source ./env.sh

tux > echo $RG_NAME
cap-aks

This is especially useful when you run long compound commands to extract and set environment variables.

Tip: Use Different Names
Ensure each of your AKS clusters uses unique resource group and managed cluster names, and do not copy the examples, especially when your Azure subscription supports multiple users. Azure has no tools for sorting resources by user, so creating unique names and putting everything in your deployment in a single resource group helps you keep track, and you can delete the whole deployment by deleting the resource group.
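Keeping everything in a single resource group makes cleanup a one-line operation. The following sketch deletes the example group and everything inside it, so use it with care:

```shell
# Delete the whole deployment by deleting its resource group.
# --yes skips the confirmation prompt; --no-wait returns immediately.
az group delete --name cap-aks --yes --no-wait
```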

In env.sh , define the environment variables below. Replace the example values with your own.

Set a resource group name.


export RG_NAME="cap-aks"

Set an AKS managed cluster name. Azure's default is to use the resource group name, then prepend it with MC and append the location, for example MC_cap-aks_cap-aks_eastus . This example uses the creator's initials for the AKS_NAME environment variable, which will be mapped to the az command's --name option. The --name option is for creating arbitrary names for your AKS resources. This example will create a managed cluster named MC_cap-aks_cjs_eastus :

tux > export AKS_NAME=cjs

Set the Azure location. See Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS) (https://docs.microsoft.com/en-us/azure/aks/container-service-quotas) for supported locations. Run az account list-locations to verify the correct way to spell your location name, for example East US is eastus in your az commands:

tux > export REGION="eastus"

Set the Kubernetes agent node count. (Cloud Application Platform requires a minimum of 3.)

tux > export NODE_COUNT="3"

Set the virtual machine size (see General purpose virtual machine sizes (https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general) ). A virtual machine size of at least Standard_DS4_v2 using premium storage (see Built in storage classes (https://docs.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv) ) is recommended. Note managed-premium has been specified in the example scf-config-values.yaml used (see Section 9.9, “Deploying SUSE Cloud Application Platform”):

tux > export NODE_VM_SIZE="Standard_DS4_v2"

Set the public SSH key name associated with your Azure account:

tux > export SSH_KEY_VALUE="~/.ssh/id_rsa.pub"


Set a new admin username:

tux > export ADMIN_USERNAME="scf-admin"

Create a unique nodepool name. The default is aks-nodepool followed by an auto-generated number, for example aks-nodepool1-39318075-2 . You have the option to change nodepool1 and create your own unique identifier. For example, mypool results in aks-mypool-39318075-2 . Note that uppercase characters are considered invalid in a nodepool name and should not be used.

tux > export NODEPOOL_NAME="mypool"

Below is an example env.sh file after all the environment variables have been defined.

### example environment variable definition file
### env.sh

export RG_NAME="cap-aks"
export AKS_NAME="cjs"
export REGION="eastus"
export NODE_COUNT="3"
export NODE_VM_SIZE="Standard_DS4_v2"
export SSH_KEY_VALUE="~/.ssh/id_rsa.pub"
export ADMIN_USERNAME="scf-admin"
export NODEPOOL_NAME="mypool"

Now that your environment variables are in place, load the file:

tux > source ./env.sh

Create a new resource group:

tux > az group create --name $RG_NAME --location $REGION

List the Kubernetes versions currently supported by AKS (see https://docs.microsoft.com/en-us/azure/aks/supported-kubernetes-versions for more information on the AKS version support policy):

tux > az aks get-versions --location $REGION --output table

Create a new AKS managed cluster, and specify the Kubernetes version for consistent deployments:

tux > az aks create --resource-group $RG_NAME --name $AKS_NAME \
 --node-count $NODE_COUNT --admin-username $ADMIN_USERNAME \
 --ssh-key-value $SSH_KEY_VALUE --node-vm-size $NODE_VM_SIZE \
 --node-osdisk-size=80 --nodepool-name $NODEPOOL_NAME \
 --vm-set-type AvailabilitySet

Note
An OS disk size of at least 80 GB must be specified using the --node-osdisk-size flag.

This takes a few minutes. When it is completed, fetch your kubectl credentials. The default behavior for az aks get-credentials is to merge the new credentials with the existing default configuration, and to set the new credentials as the current Kubernetes context. The context name is your AKS_NAME value. You should first back up your current configuration, or move it to a different location, then fetch the new credentials:

tux > az aks get-credentials --resource-group $RG_NAME --name $AKS_NAME
Merged "cap-aks" as current context in /home/tux/.kube/config
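The backup mentioned above can be as simple as copying the existing configuration aside before fetching the new credentials, for example:

```shell
# Back up the current kubectl configuration with a timestamp suffix
# before az aks get-credentials merges in the new credentials.
cp ~/.kube/config ~/.kube/config.bak.$(date +%Y%m%d%H%M%S)
```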

Verify that you can connect to your cluster:

tux > kubectl get nodes
NAME                    STATUS   ROLES   AGE   VERSION
aks-mypool-47788232-0   Ready    agent   5m    v1.11.9
aks-mypool-47788232-1   Ready    agent   6m    v1.11.9
aks-mypool-47788232-2   Ready    agent   6m    v1.11.9

tux > kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   azureproxy-79c5db744-fwqcx          1/1     Running   2          6m
kube-system   heapster-55f855b47-c4mf9            2/2     Running   0          5m
kube-system   kube-dns-v20-7c556f89c5-spgbf       3/3     Running   0          6m
kube-system   kube-dns-v20-7c556f89c5-z2g7b       3/3     Running   0          6m
kube-system   kube-proxy-g9zpk                    1/1     Running   0          6m
kube-system   kube-proxy-kph4v                    1/1     Running   0          6m
kube-system   kube-proxy-xfngh                    1/1     Running   0          6m
kube-system   kube-svc-redirect-2knsj             1/1     Running   0          6m
kube-system   kube-svc-redirect-5nz2p             1/1     Running   0          6m
kube-system   kube-svc-redirect-hlh22             1/1     Running   0          6m
kube-system   kubernetes-dashboard-546686-mr9hz   1/1     Running   1          6m
kube-system   tunnelfront-595565bc78-j8msn        1/1     Running   0          6m

When all nodes are in a ready state and all pods are running, proceed to the next steps.


9.3 Install Helm Client and Tiller

Helm is a Kubernetes package manager. It consists of a client and server component, both of which are required in order to install and manage Cloud Application Platform.

The Helm client, helm , can be installed on your remote administration computer by referring to the documentation at https://v2.helm.sh/docs/using_helm/#installing-helm . Cloud Application Platform is compatible with both Helm 2 and Helm 3. Examples in this guide are based on Helm 2. To use Helm 3, refer to the Helm documentation at https://helm.sh/docs/ .

Tiller, the Helm server component, needs to be installed on your Kubernetes cluster. Follow the instructions at https://v2.helm.sh/docs/using_helm/#installing-tiller to install Tiller with a service account, and ensure your installation is appropriately secured according to your requirements as described in https://v2.helm.sh/docs/using_helm/#securing-your-helm-installation .

9.4 Pod Security Policies

Role-based access control (RBAC) is enabled by default on AKS. SUSE Cloud Application Platform 1.3.1 and later do not need to be configured manually. Older Cloud Application Platform releases require manual PSP configuration; see Section A.1, “Manual Configuration of Pod Security Policies” for instructions.

9.5 Default Storage Class

This example creates a managed-premium (see https://docs.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv ) storage class for your cluster using the manifest defined in storage-class.yaml below:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
  name: persistent
parameters:
  kind: Managed
  storageaccounttype: Premium_LRS
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
allowVolumeExpansion: true

Then apply the new storage class configuration with this command:

tux > kubectl create --filename storage-class.yaml
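You can then confirm the class exists and is marked as the default; the persistent name comes from the manifest above, and the exact output columns vary with the Kubernetes version:

```shell
# Verify the new storage class was created and is the default.
kubectl get storageclass
```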

Specify the newly created storage class, called persistent , as the value for kube.storage_class.persistent in your deployment configuration file, like this example:

kube:
  storage_class:
    persistent: "persistent"
    shared: "persistent"

See Section 9.7, “Deployment Configuration” for a complete example deployment configuration file, scf-config-values.yaml .

9.6 DNS Configuration

This section provides an overview of the domain and sub-domains that require A records to be set up. The process is described in more detail in the deployment section.

The following table lists the required domain and sub-domains, using example.com as the example domain:

Warning: Supported Domains
When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse , is not supported.

Domains Services

uaa.example.com uaa-uaa-public

*.uaa.example.com uaa-uaa-public

example.com router-gorouter-public


*.example.com router-gorouter-public

tcp.example.com tcp-router-tcp-router-public

ssh.example.com diego-ssh-ssh-proxy-public

A SUSE Cloud Application Platform cluster exposes these four services:

Kubernetes service descriptions Kubernetes service names

User Account and Authentication ( uaa ) uaa-uaa-public

Cloud Foundry (CF) TCP routing service tcp-router-tcp-router-public

CF application SSH access diego-ssh-ssh-proxy-public

CF router router-gorouter-public

uaa-uaa-public is in the uaa namespace, and the rest are in the scf namespace.
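A quick way to see all four -public services at once is to filter the service list across namespaces; all four only appear once both uaa and scf have been deployed:

```shell
# List the exposed public services in every namespace.
kubectl get services --all-namespaces | grep public
```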

9.7 Deployment Configuration

It is not necessary to create any DNS records before deploying uaa . Instead, after uaa is running you will find the load balancer IP address that was automatically created during deployment, and then create the necessary records.

The following file, scf-config-values.yaml , provides a complete example deployment configuration.

Warning: Supported Domains
When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse , is not supported.

### example deployment configuration file
### scf-config-values.yaml

env:
  DOMAIN: example.com
  # the UAA prefix is required
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
  GARDEN_ROOTFS_DRIVER: "overlay-xfs"

kube:
  storage_class:
    persistent: "persistent"
    shared: "persistent"

registry:
  hostname: "registry.suse.com"
  username: ""
  password: ""
  organization: "cap"

secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: password

  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: password

services:
  loadbalanced: true

Take note of the following Helm values when defining your scf-config-values.yaml file.

GARDEN_ROOTFS_DRIVER

For SUSE® CaaS Platform and other Kubernetes deployments where the nodes are based on SUSE Linux Enterprise, the btrfs file system driver must be used; it is selected by default.
For Microsoft AKS, Amazon EKS, Google GKE, and other Kubernetes deployments where the nodes are based on other operating systems, the overlay-xfs file system driver must be used.

Important: Protect UAA_ADMIN_CLIENT_SECRET
The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.


9.8 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm :

tux > helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879/charts
suse    https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                          CHART VERSION  APP VERSION  DESCRIPTION
suse/cf                       2.19.1         1.5.1        A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql     1.0.1                       A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres  1.0.1                       A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                  2.6.1          1.5.1        A Helm chart for deploying Stratos UI Console
suse/log-agent-rsyslog        1.0.1          8.39.0       Log Agent for forwarding logs of K8s control pl...
suse/metrics                  1.1.1          1.5.1        A Helm chart for Stratos Metrics
suse/minibroker               0.3.1                       A minibroker for your minikube
suse/nginx-ingress            0.28.4         0.15.0       An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                      2.19.1         1.5.1        A Helm chart for SUSE UAA

9.9 Deploying SUSE Cloud Application Platform

This section describes how to deploy SUSE Cloud Application Platform with a basic AKS load balancer and how to configure your DNS records.


9.9.1 Deploy uaa

Note: Embedded uaa in scf
The User Account and Authentication ( uaa ) Server is included as an optional feature of the scf Helm chart. This simplifies the Cloud Application Platform deployment process, as a separate installation and/or upgrade of uaa is no longer a prerequisite to installing and/or upgrading scf .

It is important to note that:

This feature should only be used when uaa is not shared with other projects.

You cannot migrate from an existing external uaa to an embedded one. In this situation, enabling this feature during an upgrade will result in a single admin account.

To enable this feature, add the following to your scf-config-values.yaml .

enable:
  uaa: true

When deploying and/or upgrading scf , run helm install and/or helm upgrade and note that:

Installing and/or upgrading uaa using helm install suse/uaa ... and/or helm upgrade is no longer required.

It is no longer necessary to set the UAA_CA_CERT parameter. Previously, this parameter was passed the CA_CERT variable, which was assigned the CA certificate of uaa .

Use Helm to deploy the uaa server:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going to the next steps. You can monitor the deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'


When uaa is successfully deployed, the following is observed:

For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

After the deployment completes, a Kubernetes service for uaa will be exposed on an Azure load balancer that is automatically set up by AKS (named kubernetes in the resource group that hosts the worker node VMs).

List the services that have been exposed on the load balancer public IP. The names of these services end in -public . For example, the uaa service is exposed on 40.85.188.67 and port 2793.

tux > kubectl get services --namespace uaa | grep public
uaa-uaa-public   LoadBalancer   10.0.67.56   40.85.188.67   2793:32034/TCP

Use the DNS service of your choice to set up DNS A records for the service from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

For the uaa service, map the following domains:

uaa.DOMAIN

Using the example values, an A record for uaa.example.com that points to 40.85.188.67 would be created.

*.uaa.DOMAIN

Using the example values, an A record for *.uaa.example.com that points to 40.85.188.67 would be created.

If you wish to use the DNS service provided by Azure, see the Azure DNS Documentation (https://docs.microsoft.com/en-us/azure/dns/) to learn more.
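If you do use Azure DNS, the two A records can be created with the Azure CLI. This is a hypothetical sketch: the DNS resource group name ( my-dns-rg ) and the zone example.com are assumptions, and the IP is the example value from the previous step; substitute your own.

```shell
# Hypothetical sketch: create both uaa A records with the Azure CLI.
# my-dns-rg and example.com are assumptions -- use your own DNS resource
# group and zone, and the load balancer IP shown by kubectl get services.
UAA_IP=40.85.188.67
for name in uaa '*.uaa'; do
  echo "creating A record: ${name}.example.com -> ${UAA_IP}"
  # Guarded so the sketch is harmless where the Azure CLI is not installed.
  command -v az > /dev/null && az network dns record-set a add-record \
    --resource-group my-dns-rg --zone-name example.com \
    --record-set-name "${name}" --ipv4-address "${UAA_IP}" || true
done
```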

Use curl to verify you are able to connect to the uaa OAuth server on the DNS name configured:

tux > curl --insecure https://uaa.example.com:2793/.well-known/openid-configuration

This should return a JSON object, as this abbreviated example shows:

{"issuer":"https://uaa.example.com:2793/oauth/token",


"authorization_endpoint":"https://uaa.example.com:2793/oauth/authorize","token_endpoint":"https://uaa.example.com:2793/oauth/token"

9.9.2 Deploy scf

Before deploying scf , ensure the DNS records for the uaa domains have been set up as specified in the previous section. Next, pass your uaa secret and certificate to scf , then use Helm to deploy scf :

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
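The pipeline above relies on the fact that Kubernetes stores secret values base64-encoded, so the jsonpath output must be decoded before use. A tiny self-contained illustration of that decode step, using a hypothetical sample value in place of the real secret data:

```shell
# Kubernetes secret data is base64-encoded; decode it before use.
# The sample value is hypothetical, standing in for real certificate data.
encoded=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"
# prints: -----BEGIN CERTIFICATE-----
```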

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

After the deployment completes, a number of public services will be set up using a load balancer that has been configured with corresponding load balancing rules and probes, as well as having the correct ports opened in the Network Security Group.

List the services that have been exposed on the load balancer public IP. The names of these services end in -public . For example, the gorouter service is exposed on 23.96.32.205 :

tux > kubectl get services --namespace scf | grep public


diego-ssh-ssh-proxy-public LoadBalancer 10.0.44.118 40.71.187.83 2222:32412/TCP 1d
router-gorouter-public LoadBalancer 10.0.116.78 23.96.32.205 80:32136/TCP,443:32527/TCP,4443:31541/TCP 1d
tcp-router-tcp-router-public LoadBalancer 10.0.132.203 23.96.46.98 20000:30337/TCP,20001:31530/TCP,20002:32118/TCP,20003:30750/TCP,20004:31014/TCP,20005:32678/TCP,20006:31528/TCP,20007:31325/TCP,20008:30060/TCP 1d

Use the DNS service of your choice to set up DNS A records for the services from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

For the gorouter service, map the following domains:

DOMAIN

Using the example values, an A record for example.com that points to 23.96.32.205 would be created.

*.DOMAIN

Using the example values, an A record for *.example.com that points to 23.96.32.205 would be created.

For the diego-ssh service, map the following domain:

ssh.DOMAIN

Using the example values, an A record for ssh.example.com that points to 40.71.187.83 would be created.

For the tcp-router service, map the following domain:

tcp.DOMAIN

Using the example values, an A record for tcp.example.com that points to 23.96.46.98 would be created.

If you wish to use the DNS service provided by Azure, see the Azure DNS Documentation (https://docs.microsoft.com/en-us/azure/dns/) to learn more.

Your load balanced deployment of Cloud Application Platform is now complete. Verify you can access the API endpoint:

tux > cf api --skip-ssl-validation https://api.example.com


9.10 Configuring and Testing the Native Microsoft AKS Service Broker

Microsoft Azure Kubernetes Service provides a service broker called the Open Service Broker for Azure (see https://github.com/Azure/open-service-broker-azure ). This section describes how to use it with your SUSE Cloud Application Platform deployment.

Start by extracting and setting a batch of environment variables:

tux > SBRG_NAME=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 8)-service-broker

tux > REGION=eastus

tux > export SUBSCRIPTION_ID=$(az account show | jq -r '.id')

tux > az group create --name ${SBRG_NAME} --location ${REGION}

tux > SERVICE_PRINCIPAL_INFO=$(az ad sp create-for-rbac --name ${SBRG_NAME})

tux > TENANT_ID=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.tenant')

tux > CLIENT_ID=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.appId')

tux > CLIENT_SECRET=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.password')

tux > echo SBRG_NAME=${SBRG_NAME}

tux > echo REGION=${REGION}

tux > echo SUBSCRIPTION_ID=${SUBSCRIPTION_ID} \; TENANT_ID=${TENANT_ID} \; CLIENT_ID=${CLIENT_ID} \; CLIENT_SECRET=${CLIENT_SECRET}
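The random-name technique used for SBRG_NAME above can be exercised on its own. A self-contained sketch: draw eight alphanumeric characters from /dev/urandom and append the -service-broker suffix, so each run yields a unique, valid resource group name.

```shell
# Generate a random 8-character alphanumeric prefix and append the
# -service-broker suffix, exactly as in the SBRG_NAME assignment above.
SBRG_NAME=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 8)-service-broker
echo "$SBRG_NAME"
```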

Add the necessary Helm repositories and download the charts:

tux > helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com

tux > helm repo update

tux > helm install svc-cat/catalog \
--name catalog \
--namespace catalog \
--set controllerManager.healthcheck.enabled=false \
--set apiserver.healthcheck.enabled=false

tux > kubectl get apiservice

tux > helm repo add azure https://kubernetescharts.blob.core.windows.net/azure


tux > helm repo update

Set up the service broker with your variables:

tux > helm install azure/open-service-broker-azure \
--name osba \
--namespace osba \
--set azure.subscriptionId=${SUBSCRIPTION_ID} \
--set azure.tenantId=${TENANT_ID} \
--set azure.clientId=${CLIENT_ID} \
--set azure.clientSecret=${CLIENT_SECRET} \
--set azure.defaultLocation=${REGION} \
--set redis.persistence.storageClass=default \
--set basicAuth.username=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16) \
--set basicAuth.password=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16) \
--set tls.enabled=false

Monitor the progress:

tux > watch --color 'kubectl get pods --namespace osba'

When all pods are running, create the service broker in SUSE Cloud Foundry using the cf CLI:

tux > cf login

tux > cf create-service-broker azure \
$(kubectl get deployment osba-open-service-broker-azure \
--namespace osba \
--output jsonpath='{.spec.template.spec.containers[0].env[?(@.name == "BASIC_AUTH_USERNAME")].value}') \
$(kubectl get secret --namespace osba osba-open-service-broker-azure \
--output jsonpath='{.data.basic-auth-password}' | base64 --decode) \
http://osba-open-service-broker-azure.osba
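The first command substitution above looks up the broker's basic-auth username from the deployment's environment variables. A self-contained sketch of the same lookup, using jq on a hypothetical fragment of the osba deployment JSON instead of a live kubectl jsonpath query:

```shell
# Extract the BASIC_AUTH_USERNAME env value from a (hypothetical) fragment
# of the osba deployment JSON, mirroring the jsonpath query above.
deploy_json='{"spec":{"template":{"spec":{"containers":[{"env":[{"name":"BASIC_AUTH_USERNAME","value":"abc123"}]}]}}}}'
username=$(echo "$deploy_json" | jq -r \
  '.spec.template.spec.containers[0].env[] | select(.name=="BASIC_AUTH_USERNAME").value')
echo "$username"
# prints: abc123
```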

List the available service plans. For more information about the services supported see https://github.com/Azure/open-service-broker-azure#supported-services :

tux > cf service-access -b azure

Use cf enable-service-access to enable access to a service plan. This example enables all basic plans:

tux > cf service-access -b azure | \
awk '($2 ~ /basic/) { system("cf enable-service-access " $1 " -p " $2)}'
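The awk filter above matches any line whose second column contains "basic" and runs the enable command for that service and plan. A self-contained illustration against a small hypothetical excerpt of cf service-access output, printing the commands instead of executing them:

```shell
# Run the same awk pattern against sample 'cf service-access -b azure'
# output; print (rather than execute) the enable command for each
# plan whose name matches /basic/.
out=$(cat << 'EOF' | awk '($2 ~ /basic/) { print "cf enable-service-access " $1 " -p " $2 }'
azure-mysql-5-7 basic none
azure-mysql-5-7 general-purpose none
azure-rediscache basic none
EOF
)
echo "$out"
```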

Test your new service broker with an example PHP application. First create an organization and space to deploy your test application to:

tux > cf create-org testorg


tux > cf create-space scftest -o testorg

tux > cf target -o "testorg" -s "scftest"

tux > cf create-service azure-mysql-5-7 basic question2answer-db \
-c "{ \"location\": \"${REGION}\", \"resourceGroup\": \"${SBRG_NAME}\", \"firewallRules\": [{\"name\": \"AllowAll\", \"startIPAddress\": \"0.0.0.0\", \"endIPAddress\": \"255.255.255.255\"}]}"

tux > cf service question2answer-db | grep status

Find your new service and optionally disable TLS. You should not disable TLS on a production deployment, but it simplifies testing. The mysql2 gem must be configured to use TLS; see brianmario/mysql2/SSL options (https://github.com/brianmario/mysql2#ssl-options) on GitHub:

tux > az mysql server list --resource-group $SBRG_NAME

tux > az mysql server update --resource-group $SBRG_NAME \
--name scftest --ssl-enforcement Disabled

Look in your Azure portal to find your database --name .

Build and push the example PHP application:

tux > git clone https://github.com/scf-samples/question2answer

tux > cd question2answer

tux > cf push

tux > cf service question2answer-db # => bound apps

When the application has finished deploying, use your browser and navigate to the URL specified in the routes field displayed at the end of the staging logs. For example, the application route could be question2answer.example.com .

Press the button to prepare the database. When the database is ready, further verify by creating an initial user and posting some test questions.

9.11 Resizing Persistent Volumes

Depending on your workloads, the default persistent volume (PV) sizes of your Cloud Application Platform deployment may be insufficient. This section describes the process to resize a persistent volume in your Cloud Application Platform deployment, by modifying the persistent volume claim (PVC) object.


Note that PVs can only be expanded, not shrunk.

9.11.1 Prerequisites

The following are required in order to use the process below to resize a PV.

Kubernetes 1.11 or newer.

The volume being expanded is among the list of supported volume types. Refer to the list at https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims

9.11.2 Example Procedure

The following describes the process required to resize a PV, using the PV and PVC associated with uaa 's mysql as an example.

1. Find the storage class and PVC associated with the PV being expanded. In this example, the storage class is called persistent and the PVC is called mysql-data-mysql-0 .

tux > kubectl get persistentvolume

2. Verify whether the storage class has allowVolumeExpansion set to true .

tux > kubectl get storageclass persistent --output json

If it does not, run the following command to update the storage class.

tux > kubectl patch storageclass persistent \
--patch '{"allowVolumeExpansion": true}'

3. Cordon all nodes in your cluster.

a. tux > export VM_NODES=$(kubectl get nodes -o name)

b. tux > for i in $VM_NODES; do kubectl cordon `echo "${i//node\/}"`; done
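The loop above relies on a bash parameter expansion: kubectl get nodes -o name prints resources as node/NAME , and ${i//node\/} strips the node/ prefix so plain node names can be passed to kubectl cordon . A self-contained illustration with a hypothetical node name:

```shell
# Bash parameter expansion: ${i//node\/} deletes every occurrence of
# 'node/' from $i, leaving the bare node name for kubectl cordon.
i="node/aks-nodepool1-12345678-0"
node_name="${i//node\/}"
echo "$node_name"
# prints: aks-nodepool1-12345678-0
```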


4. Increase the storage size of the PVC object associated with the PV being expanded.

tux > kubectl patch persistentvolumeclaim --namespace uaa mysql-data-mysql-0 \
--patch '{"spec": {"resources": {"requests": {"storage": "25Gi"}}}}'

5. List all pods that use the PVC, in any namespace.

tux > kubectl get pods --all-namespaces --output=json | jq -c \
'.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec | select( has ("volumes") ).volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }'

6. Restart all pods that use the PVC.

tux > kubectl delete pod mysql-0 --namespace uaa

7. Run kubectl describe persistentvolumeclaim and monitor the status.conditions field.

tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'

When the following is observed, press Ctrl – C to exit the watch command and proceedto the next step.

status.conditions.message is

message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.

status.conditions.type is

type: FileSystemResizePending

8. Uncordon all nodes in your cluster.

tux > for i in $VM_NODES; do kubectl uncordon `echo "${i//node\/}"`; done

9. Wait for the resize to finish. Verify the storage size values match for status.capacity.storage and spec.resources.requests.storage .


tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'
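The comparison can also be scripted with jq. A hedged, self-contained sketch of the final check, where the sample JSON stands in for the real kubectl get persistentvolumeclaim ... --output json output:

```shell
# Compare requested vs. actual storage in a PVC. The sample JSON is a
# minimal stand-in for real 'kubectl get pvc ... -o json' output.
pvc_json='{"spec":{"resources":{"requests":{"storage":"25Gi"}}},"status":{"capacity":{"storage":"25Gi"}}}'
requested=$(echo "$pvc_json" | jq -r '.spec.resources.requests.storage')
actual=$(echo "$pvc_json" | jq -r '.status.capacity.storage')
[ "$requested" = "$actual" ] && echo "resize complete: $actual"
# prints: resize complete: 25Gi
```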

10. Also verify the storage size in the pod itself is updated.

tux > kubectl --namespace uaa exec mysql-0 -- df --human-readable

9.12 Expanding Capacity of a Cloud Application Platform Deployment on Microsoft AKS

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 9, Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS) and have a running Cloud Application Platform deployment on Microsoft AKS. The instructions below will use environment variables defined in Section 9.2, “Create Resource Group and AKS Instance”.

1. Get the current number of Kubernetes nodes in the cluster.

tux > export OLD_NODE_COUNT=$(kubectl get nodes --output json | jq '.items | length')
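The expression above counts cluster nodes by applying jq's length filter to the items array. A self-contained illustration using a hypothetical three-node listing in place of live kubectl output:

```shell
# Count nodes with jq's 'length' filter. The sample JSON stands in for
# real 'kubectl get nodes --output json' output.
nodes_json='{"items":[{"metadata":{"name":"node-1"}},{"metadata":{"name":"node-2"}},{"metadata":{"name":"node-3"}}]}'
OLD_NODE_COUNT=$(echo "$nodes_json" | jq '.items | length')
echo "$OLD_NODE_COUNT"
# prints: 3
```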

2. Set the number of Kubernetes nodes the cluster will be expanded to. Replace the example value with the number of nodes required for your workload.

tux > export NEW_NODE_COUNT=4

3. Increase the Kubernetes node count in the cluster.

tux > az aks scale --resource-group $RG_NAME --name $AKS_NAME \
--node-count $NEW_NODE_COUNT \
--nodepool-name $NODEPOOL_NAME

4. Verify the new nodes are in a Ready state before proceeding.

tux > kubectl get nodes


5. Add or update the following in your scf-config-values.yaml file to increase the number of diego-cell in your Cloud Application Platform deployment. Replace the example value with the number required by your workload.

sizing:
  diego_cell:
    count: 4

6. Pass your uaa secret and certificate to scf .

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

7. Perform a helm upgrade to apply the change.

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--version 2.19.1

8. Monitor progress of the additional diego-cell pods:

tux > watch --color 'kubectl get pods --namespace scf'


10 Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS)

Important

README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform (https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/)

Read Chapter 3, Deployment and Administration Notes

This chapter describes how to deploy SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS), using Amazon's Elastic Load Balancer to provide fault-tolerant access to your cluster.

10.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on EKS:

An Amazon AWS account. For details, refer to https://aws.amazon.com/ .

Create an Amazon EKS cluster by following the instructions at https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html .

When you create your cluster, use node sizes that are at least t2.large .

The NodeVolumeSize must be a minimum of 80 GB.

Ensure nodes use a minimum kernel version of 3.19.

Identity and Access Management (IAM) is configured for your users. See Section 10.2, “IAM Requirements for EKS” for guidance.

cf , the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/ .

For SUSE Linux Enterprise and openSUSE systems, install using zypper .


tux > sudo zypper install cf-cli

For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html .

kubectl , the Kubernetes command line tool. For more information, refer to https://kubernetes.io/docs/reference/kubectl/overview/ .

For SLE 12 SP3 or 15 systems, install the package kubernetes-client from the Public Cloud module.

For other systems, follow the instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/ .

curl , the Client URL (cURL) command line tool.

Important
The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating the following features:

Stratos Web Console

For details, see Chapter 6, Installing the Stratos Web Console.

High Availability

For details, see Chapter 7, SUSE Cloud Application Platform High Availability.

LDAP Integration

For details, see Chapter 8, LDAP Integration.

External Log Server Integration

For details, see Chapter 24, Logging.

Managing Certificates

For details, see Chapter 25, Managing Certificates.

Other Features


Refer to the Administration Guide at Part III, “SUSE Cloud Application Platform Administration” for additional features.

10.2 IAM Requirements for EKS

These IAM policies provide sufficient access to use EKS.

10.2.1 Unscoped Operations

Some of these permissions are very broad. They are difficult to scope effectively, in part because many resources are created and named dynamically when deploying an EKS cluster using the CloudFormation (https://console.aws.amazon.com/cloudformation) console. It may be helpful to enforce certain naming conventions, such as prefixing cluster names with ${aws:username} for pattern-matching in Conditions. However, this requires special consideration beyond the EKS deployment guide, and should be evaluated in the broader context of organizational IAM policies.

{ "Version": "2012-10-17", "Statement": [ { "Sid": "UnscopedOperations", "Effect": "Allow", "Action": [ "cloudformation:CreateUploadBucket", "cloudformation:EstimateTemplateCost", "cloudformation:ListExports", "cloudformation:ListStacks", "cloudformation:ListImports", "cloudformation:DescribeAccountLimits", "eks:ListClusters", "cloudformation:ValidateTemplate", "cloudformation:GetTemplateSummary", "eks:CreateCluster" ], "Resource": "*" }, { "Sid": "EffectivelyUnscopedOperations", "Effect": "Allow",


"Action": [ "iam:CreateInstanceProfile", "iam:DeleteInstanceProfile", "iam:GetRole", "iam:DetachRolePolicy", "iam:RemoveRoleFromInstanceProfile", "cloudformation:*", "iam:CreateRole", "iam:DeleteRole", "eks:*" ], "Resource": [ "arn:aws:eks:*:*:cluster/*", "arn:aws:cloudformation:*:*:stack/*/*", "arn:aws:cloudformation:*:*:stackset/*:*", "arn:aws:iam::*:instance-profile/*", "arn:aws:iam::*:role/*" ] } ]}

10.2.2 Scoped Operations

These policies deal with sensitive access controls, such as passing roles and attaching/detaching policies from roles.

This policy, as written, allows unrestricted use of only customer-managed policies, and not Amazon-managed policies. This prevents potential security holes such as attaching the IAMFullAccess policy to a role. If you are using roles in a way that would be undermined by this, you should strongly consider integrating a Permissions Boundary (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) before using this policy.

{ "Version": "2012-10-17", "Statement": [ { "Sid": "UseCustomPoliciesWithCustomRoles", "Effect": "Allow", "Action": [ "iam:DetachRolePolicy", "iam:AttachRolePolicy" ], "Resource": [


"arn:aws:iam::<YOUR_ACCOUNT_ID>:role/*", "arn:aws:iam::<YOUR_ACCOUNT_ID>:policy/*" ], "Condition": { "ForAllValues:ArnNotLike": { "iam:PolicyARN": "arn:aws:iam::aws:policy/*" } } }, { "Sid": "AllowPassingRoles", "Effect": "Allow", "Action": "iam:PassRole", "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/*" }, { "Sid": "AddCustomRolesToInstanceProfiles", "Effect": "Allow", "Action": "iam:AddRoleToInstanceProfile", "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:instance-profile/*" }, { "Sid": "AssumeServiceRoleForEKS", "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/<EKS_SERVICE_ROLE_NAME>" }, { "Sid": "DenyUsingAmazonManagedPoliciesUnlessNeededForEKS", "Effect": "Deny", "Action": "iam:*", "Resource": "arn:aws:iam::aws:policy/*", "Condition": { "ArnNotEquals": { "iam:PolicyARN": [ "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy", "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy", "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly" ] } } }, { "Sid": "AllowAttachingSpecificAmazonManagedPoliciesForEKS", "Effect": "Allow", "Action": [ "iam:DetachRolePolicy",


"iam:AttachRolePolicy" ], "Resource": "*", "Condition": { "ArnEquals": { "iam:PolicyARN": [ "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy", "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy", "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly" ] } } } ]}

10.3 Install Helm Client and Tiller

Helm is a Kubernetes package manager. It consists of a client and server component, both of which are required in order to install and manage Cloud Application Platform.

The Helm client, helm , can be installed on your remote administration computer by referring to the documentation at https://v2.helm.sh/docs/using_helm/#installing-helm . Cloud Application Platform is compatible with both Helm 2 and Helm 3. Examples in this guide are based on Helm 2. To use Helm 3, refer to the Helm documentation at https://helm.sh/docs/ .

Tiller, the Helm server component, needs to be installed on your Kubernetes cluster. Follow the instructions at https://v2.helm.sh/docs/using_helm/#installing-tiller to install Tiller with a service account, and ensure your installation is appropriately secured according to your requirements as described in https://v2.helm.sh/docs/using_helm/#securing-your-helm-installation .

10.4 Default Storage Class

This example creates a simple storage class for your cluster in storage-class.yaml :

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"


  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
allowVolumeExpansion: true

Then apply the new storage class configuration with this command:

tux > kubectl create --filename storage-class.yaml

10.5 DNS Configuration

Creation of the Elastic Load Balancer is triggered by a setting in the Cloud Application Platform deployment configuration file. First deploy uaa , then create CNAMEs for your domain and uaa subdomains (see the table below). Then deploy scf , and create the appropriate scf CNAMEs. This is described in more detail in the deployment sections.

The following table lists the required domain and sub-domains, using example.com as the example domain:

Warning: Supported Domains
When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse , is not supported.

Domains Services

uaa.example.com uaa-uaa-public

*.uaa.example.com uaa-uaa-public

example.com router-gorouter-public

*.example.com router-gorouter-public

tcp.example.com tcp-router-tcp-router-public

ssh.example.com diego-ssh-ssh-proxy-public


A SUSE Cloud Application Platform cluster exposes these four services:

Kubernetes service descriptions Kubernetes service names

User Account and Authentication ( uaa ) uaa-uaa-public

Cloud Foundry (CF) TCP routing service tcp-router-tcp-router-public

CF application SSH access diego-ssh-ssh-proxy-public

CF router router-gorouter-public

uaa-uaa-public is in the uaa namespace, and the rest are in the scf namespace.

10.6 Deployment Configuration

Use this example scf-config-values.yaml as a template for your configuration.

Warning: Supported Domains
When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse , is not supported.

### example deployment configuration file
### scf-config-values.yaml

env:
  DOMAIN: example.com
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793

  GARDEN_ROOTFS_DRIVER: overlay-xfs
  GARDEN_APPARMOR_PROFILE: ""

services:
  loadbalanced: true

kube:
  storage_class:
    # Change the value to the storage class you use
    persistent: "gp2"

111 Deployment Configuration SUSE Cloud Applic… 1.5.1

Page 127: Guides Administration, and User Deployment, · 11.4 Install Helm Client and Tiller 129 11.5 Default Storage Class 130 11.6 DNS Configuration 130 11.7 Deployment Configuration 131

shared: "gp2"

  # The default registry images are fetched from registry.suse.com
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
    organization: "cap"

secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: password

  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: password

Take note of the following Helm values when defining your scf-config-values.yaml file.

GARDEN_ROOTFS_DRIVER

For SUSE® CaaS Platform and other Kubernetes deployments where the nodes are based on SUSE Linux Enterprise, the btrfs file system driver must be used; it is selected by default. For Microsoft AKS, Amazon EKS, Google GKE, and other Kubernetes deployments where the nodes are based on other operating systems, the overlay-xfs file system driver must be used.

Important: Protect UAA_ADMIN_CLIENT_SECRET
The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.

10.7 Deploying Cloud Application Platform

The following list provides an overview of Helm commands to complete the deployment. Included are links to detailed descriptions.

1. Download the SUSE Kubernetes charts repository (Section 10.8, “Add the Kubernetes Charts Repository”)


2. Deploy uaa , then create appropriate CNAMEs (Section 10.9, “Deploy uaa”)

3. Copy the uaa secret and certificate to the scf namespace, deploy scf , then create CNAMEs (Section 10.10, “Deploy scf”)

10.8 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm :

tux > helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879/charts
suse    https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                          CHART VERSION  APP VERSION  DESCRIPTION
suse/cf                       2.19.1         1.5.1        A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql     1.0.1                       A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres  1.0.1                       A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                  2.6.1          1.5.1        A Helm chart for deploying Stratos UI Console
suse/log-agent-rsyslog        1.0.1          8.39.0       Log Agent for forwarding logs of K8s control pl...
suse/metrics                  1.1.1          1.5.1        A Helm chart for Stratos Metrics
suse/minibroker               0.3.1                       A minibroker for your minikube
suse/nginx-ingress            0.28.4         0.15.0       An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                      2.19.1         1.5.1        A Helm chart for SUSE UAA


10.9 Deploy uaa

Note: Embedded uaa in scf

The User Account and Authentication (uaa) Server is included as an optional feature of the scf Helm chart. This simplifies the Cloud Application Platform deployment process, as a separate installation or upgrade of uaa is no longer a prerequisite to installing or upgrading scf.

It is important to note that:

This feature should only be used when uaa is not shared with other projects.

You cannot migrate from an existing external uaa to an embedded one. In this situation, enabling this feature during an upgrade will result in a single admin account.

To enable this feature, add the following to your scf-config-values.yaml .

enable:
  uaa: true

When deploying or upgrading scf, run helm install or helm upgrade and note that:

Installing or upgrading uaa using helm install suse/uaa ... or helm upgrade is no longer required.

It is no longer necessary to set the UAA_CA_CERT parameter. Previously, this parameter was passed the CA_CERT variable, which was assigned the CA certificate of uaa.

Use Helm to deploy the uaa (User Account and Authentication) server. You may create your own release --name:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml


Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1.

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

When uaa is finished deploying, create CNAMEs for the required domains. (See Section 10.5, “DNS Configuration”.) Use kubectl to find the service host names. These host names include the elb sub-domain, so use this to get the correct results:

tux > kubectl get services --namespace uaa | grep elb

Use curl to verify you are able to connect to the uaa OAuth server on the configured DNS name:

tux > curl --insecure https://uaa.example.com:2793/.well-known/openid-configuration

This should return a JSON object, as this abbreviated example shows:

{
  "issuer":"https://uaa.example.com:2793/oauth/token",
  "authorization_endpoint":"https://uaa.example.com:2793/oauth/authorize",
  "token_endpoint":"https://uaa.example.com:2793/oauth/token"
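To check a specific field without reading the raw JSON, the response can be parsed with standard tools. The following is a minimal sketch run against a saved sample of the abbreviated response above; the sed pattern and variable names are illustrative, not part of the product:

```shell
# Sample (abbreviated) response; in practice capture it with:
#   RESPONSE=$(curl --insecure https://uaa.example.com:2793/.well-known/openid-configuration)
RESPONSE='{"issuer":"https://uaa.example.com:2793/oauth/token","token_endpoint":"https://uaa.example.com:2793/oauth/token"}'

# Extract the "issuer" value to confirm uaa answers with the configured DNS name.
ISSUER=$(printf '%s' "$RESPONSE" | sed -n 's/.*"issuer":"\([^"]*\)".*/\1/p')
echo "$ISSUER"
```

If the issuer does not match the UAA_HOST and UAA_PORT from your scf-config-values.yaml, re-check the DNS configuration before continuing.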

10.10 Deploy scf

Before deploying scf, ensure the DNS records for the uaa domains have been set up as specified in the previous section. Next, pass your uaa secret and certificate to scf, then use Helm to deploy scf:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')


tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1.

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

Before traffic is routed to EKS load balancers, health checks are performed on them. These health checks require access to ports on the tcp-router service, which are only opened by request via cf map-route and other associated commands.

Use kubectl patch to patch the tcp-router service to expose a port to the EKS load balancer so the health checks can be completed.

tux > kubectl patch service tcp-router-tcp-router-public --namespace scf \
--type strategic \
--patch '{"spec": {"ports": [{"name": "healthcheck", "port": 8080}]}}'

Remove port 8080 from the load balancer's listeners list. This ensures the port is not exposed to external traffic while still allowing it to serve as the port for health checks internally.

tux > aws elb delete-load-balancer-listeners \
--load-balancer-name a635338ecc2ed11e607790a6b408ef5f \
--load-balancer-ports 8080

The value for --load-balancer-name can be found by examining the EXTERNAL-IP field after running the command below. The load balancer name is the string preceding the first - character.


tux > kubectl get service -n scf tcp-router-tcp-router-public
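The name extraction described above can be sketched in shell. The EXTERNAL-IP value below is a hypothetical example of what kubectl returns for the tcp-router service:

```shell
# Hypothetical EXTERNAL-IP value from the kubectl get service command above.
EXTERNAL_IP="a635338ecc2ed11e607790a6b408ef5f-1234567890.us-west-2.elb.amazonaws.com"

# The load balancer name is the string preceding the first "-" character.
LB_NAME="${EXTERNAL_IP%%-*}"
echo "$LB_NAME"
```

The resulting value is what gets passed to --load-balancer-name in the aws elb command above.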

The health checks should now operate correctly. When the status shows Running for all of the scf pods, create CNAMEs for the required domains. (See Section 10.5, “DNS Configuration”.) Use kubectl to find the service host names. These host names include the elb sub-domain, so use this to get the correct results:

tux > kubectl get services --namespace scf | grep elb

10.11 Deploying and Using the AWS Service Broker

The AWS Service Broker (https://aws.amazon.com/partners/servicebroker/) provides integration of native AWS services with SUSE Cloud Application Platform.

10.11.1 Prerequisites

Deploying and using the AWS Service Broker requires the following:

Cloud Application Platform has been successfully deployed on Amazon EKS as described earlier in this chapter (see Chapter 10, Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS))

The AWS Command Line Interface (CLI) (https://aws.amazon.com/cli/)

The OpenSSL command line tool (https://www.openssl.org/docs/man1.0.2/apps/openssl.html)

Ensure the user or role running the broker has an IAM policy set as specified in the IAM section of the AWS Service Broker (https://github.com/awslabs/aws-servicebroker/blob/master/docs/install_prereqs.md#iam) documentation on GitHub

10.11.2 Setup

1. Create the required DynamoDB table where the AWS service broker will store its data. This example creates a table named awssb:

tux > aws dynamodb create-table \


--attribute-definitions \
  AttributeName=id,AttributeType=S \
  AttributeName=userid,AttributeType=S \
  AttributeName=type,AttributeType=S \
--key-schema \
  AttributeName=id,KeyType=HASH \
  AttributeName=userid,KeyType=RANGE \
--global-secondary-indexes \
  'IndexName=type-userid-index,KeySchema=[{AttributeName=type,KeyType=HASH},{AttributeName=userid,KeyType=RANGE}],Projection={ProjectionType=INCLUDE,NonKeyAttributes=[id,userid,type,locked]},ProvisionedThroughput={ReadCapacityUnits=5,WriteCapacityUnits=5}' \
--provisioned-throughput \
  ReadCapacityUnits=5,WriteCapacityUnits=5 \
--region ${AWS_REGION} --table-name awssb

2. Wait until the table has been created. When it is ready, the TableStatus will change to ACTIVE. Check the status using the describe-table command:

tux > aws dynamodb describe-table --table-name awssb

(For more information about the describe-table command, see https://docs.aws.amazon.com/cli/latest/reference/dynamodb/describe-table.html .)
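A minimal sketch of checking the status field, run here against a sample of the relevant part of the describe-table output rather than a live table; the sed pattern is illustrative:

```shell
# Sample of the relevant portion of `aws dynamodb describe-table` output;
# in practice, substitute the real command output.
DESCRIBE='{"Table":{"TableName":"awssb","TableStatus":"ACTIVE"}}'

# Extract TableStatus; repeat the describe-table call until this reports ACTIVE.
STATUS=$(printf '%s' "$DESCRIBE" | sed -n 's/.*"TableStatus":"\([^"]*\)".*/\1/p')
echo "$STATUS"
```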

3. Set a name for the Kubernetes namespace you will install the service broker to. This name will also be used in the service broker URL:

tux > BROKER_NAMESPACE=aws-sb
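The namespace chosen here determines the cluster-internal URL the broker is registered under in a later step. A sketch of how that URL is composed, assuming the Helm release name aws-servicebroker used in the installation step below (which yields the service name aws-servicebroker-aws-servicebroker):

```shell
BROKER_NAMESPACE=aws-sb

# Compose the cluster-internal URL later passed to cf create-service-broker.
BROKER_URL="https://aws-servicebroker-aws-servicebroker.${BROKER_NAMESPACE}.svc.cluster.local"
echo "$BROKER_URL"
```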

4. Create a server certificate for the service broker:

a. Create and use a separate directory to avoid conflicts with other CA files:

tux > mkdir /tmp/aws-service-broker-certificates && cd $_

b. Get the CA certificate:

tux > kubectl get secret --namespace scf \
--output jsonpath='{.items[*].data.internal-ca-cert}' | base64 -di > ca.pem

c. Get the CA private key:

tux > kubectl get secret --namespace scf \
--output jsonpath='{.items[*].data.internal-ca-cert-key}' | base64 -di > ca.key


d. Create a signing request. Replace BROKER_NAMESPACE with the namespace assigned in Step 3:

tux > openssl req -newkey rsa:4096 -keyout tls.key.encrypted -out tls.req -days 365 \
-passout pass:1234 \
-subj '/CN=aws-servicebroker-aws-servicebroker.'${BROKER_NAMESPACE}'.svc.cluster.local' -batch </dev/null

e. Decrypt the generated broker private key:

tux > openssl rsa -in tls.key.encrypted -passin pass:1234 -out tls.key

f. Sign the request with the CA certificate:

tux > openssl x509 -req -CA ca.pem -CAkey ca.key -CAcreateserial -in tls.req -out tls.pem

5. Install the AWS service broker as documented at https://github.com/awslabs/aws-servicebroker/blob/master/docs/getting-started-k8s.md . Skip the installation of the Kubernetes Service Catalog. While installing the AWS Service Broker, make sure to update the Helm chart version (the version as of this writing is 1.0.1). For the broker install, pass in a value indicating the Cluster Service Broker should not be installed (for example --set deployClusterServiceBroker=false). Ensure an account and role with adequate IAM rights is chosen (see Section 10.11.1, “Prerequisites”):

tux > helm install aws-sb/aws-servicebroker \
--name aws-servicebroker \
--namespace $BROKER_NAMESPACE \
--version 1.0.1 \
--set aws.secretkey=$AWS_ACCESS_KEY \
--set aws.accesskeyid=$AWS_KEY_ID \
--set deployClusterServiceBroker=false \
--set tls.cert="$(base64 -w0 tls.pem)" \
--set tls.key="$(base64 -w0 tls.key)" \
--set-string aws.targetaccountid=$AWS_TARGET_ACCOUNT_ID \
--set aws.targetrolename=$AWS_TARGET_ROLE_NAME \
--set aws.tablename=awssb \
--set aws.vpcid=$VPC_ID \
--set aws.region=$AWS_REGION \
--set authenticate=false


To find the values of aws.targetaccountid, aws.targetrolename, and aws.vpcid, run the following command.

tux > aws eks describe-cluster --name $CLUSTER_NAME

For aws.targetaccountid and aws.targetrolename, examine the cluster.roleArn field. For aws.vpcid, refer to the cluster.resourcesVpcConfig.vpcId field.

6. Log into your Cloud Application Platform deployment. Select an organization and space to work with, creating them if needed:

tux > cf api --skip-ssl-validation https://api.example.com
tux > cf login -u admin -p password
tux > cf create-org org
tux > cf create-space space
tux > cf target -o org -s space

7. Create a service broker in scf. The name of the service broker should be the same as the one specified for the --name flag in the helm install step (for example, aws-servicebroker). Note that the username and password parameters are only used as dummy values to pass to the cf command:

tux > cf create-service-broker aws-servicebroker username password \
https://aws-servicebroker-aws-servicebroker.aws-sb.svc.cluster.local

8. Verify the service broker has been registered:

tux > cf service-brokers

9. List the available service plans:

tux > cf service-access

10. Enable access to a service. This example uses the -p flag to enable access to a specific service plan. See https://github.com/awslabs/aws-servicebroker/blob/master/templates/rdsmysql/template.yaml for information about all available services and their associated plans:

tux > cf enable-service-access rdsmysql -p custom

11. Create a service instance. As an example, a custom MySQL instance can be created as:

tux > cf create-service rdsmysql custom mysql-instance-name -c '{
  "AccessCidr": "192.0.2.24/32",


  "BackupRetentionPeriod": 0,
  "MasterUsername": "master",
  "DBInstanceClass": "db.t2.micro",
  "EngineVersion": "5.7.17",
  "PubliclyAccessible": "true",
  "region": "$AWS_REGION",
  "StorageEncrypted": "false",
  "VpcId": "$VPC_ID",
  "target_account_id": "$AWS_TARGET_ACCOUNT_ID",
  "target_role_name": "$AWS_TARGET_ROLE_NAME"
}'

10.11.3 Cleanup

When the AWS Service Broker and its services are no longer required, perform the following steps:

1. Unbind any applications using any service instances, then delete the service instance:

tux > cf unbind-service my_app mysql-instance-name
tux > cf delete-service mysql-instance-name

2. Delete the service broker in scf :

tux > cf delete-service-broker aws-servicebroker

3. Delete the deployed Helm chart and the namespace:

tux > helm delete --purge aws-servicebroker
tux > kubectl delete namespace ${BROKER_NAMESPACE}

4. The manually created DynamoDB table will need to be deleted as well:

tux > aws dynamodb delete-table --table-name awssb --region ${AWS_REGION}

10.12 Resizing Persistent Volumes

Depending on your workloads, the default persistent volume (PV) sizes of your Cloud Application Platform deployment may be insufficient. This section describes the process to resize a persistent volume in your Cloud Application Platform deployment by modifying the persistent volume claim (PVC) object.


Note that PVs can only be expanded; they cannot be shrunk.

10.12.1 Prerequisites

The following are required in order to use the process below to resize a PV.

Kubernetes 1.11 or newer.

The volume being expanded is among the list of supported volume types. Refer to the list at https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims

10.12.2 Example Procedure

The following describes the process required to resize a PV, using the PV and PVC associated with uaa's mysql as an example.

1. Find the storage class and PVC associated with the PV being expanded. In this example, the storage class is called persistent and the PVC is called mysql-data-mysql-0.

tux > kubectl get persistentvolume

2. Verify whether the storage class has allowVolumeExpansion set to true.

tux > kubectl get storageclass persistent --output json

If it does not, run the following command to update the storage class.

tux > kubectl patch storageclass persistent \
--patch '{"allowVolumeExpansion": true}'

3. Cordon all nodes in your cluster.

a. tux > export VM_NODES=$(kubectl get nodes -o name)

b. tux > for i in $VM_NODES; do kubectl cordon `echo "${i//node\/}"`; done
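The `${i//node\/}` expansion in the loop strips the node/ prefix that kubectl get nodes -o name adds, leaving the bare node name that kubectl cordon expects. A standalone sketch with a hypothetical node name:

```shell
# Hypothetical entry from `kubectl get nodes -o name`.
i="node/worker-node-1"

# The pattern substitution removes the "node/" prefix.
NODE_NAME="${i//node\/}"
echo "$NODE_NAME"
```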


4. Increase the storage size of the PVC object associated with the PV being expanded.

tux > kubectl patch persistentvolumeclaim --namespace uaa mysql-data-mysql-0 \
--patch '{"spec": {"resources": {"requests": {"storage": "25Gi"}}}}'

5. List all pods that use the PVC, in any namespace.

tux > kubectl get pods --all-namespaces --output=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec | select( has ("volumes") ).volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }'

6. Restart all pods that use the PVC.

tux > kubectl delete pod mysql-0 --namespace uaa

7. Run kubectl describe persistentvolumeclaim and monitor the status.conditions field.

tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'

When the following is observed, press Ctrl – C to exit the watch command and proceed to the next step.

status.conditions.message is

message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.

status.conditions.type is

type: FileSystemResizePending

8. Uncordon all nodes in your cluster.

tux > for i in $VM_NODES; do kubectl uncordon `echo "${i//node\/}"`; done

9. Wait for the resize to finish. Verify the storage size values match for status.capacity.storage and spec.resources.requests.storage.


tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'
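A minimal sketch of this verification, run here against a sample of the PVC JSON rather than a live cluster (in practice, substitute the output of the kubectl command above; the sed patterns are illustrative):

```shell
# Sample PVC JSON after a completed resize to 25Gi.
PVC_JSON='{"spec":{"resources":{"requests":{"storage":"25Gi"}}},"status":{"capacity":{"storage":"25Gi"}}}'

# Pull out the requested and actual storage sizes.
REQUESTED=$(printf '%s' "$PVC_JSON" | sed -n 's/.*"requests":{"storage":"\([^"]*\)".*/\1/p')
CAPACITY=$(printf '%s' "$PVC_JSON" | sed -n 's/.*"capacity":{"storage":"\([^"]*\)".*/\1/p')

# The resize is complete when both values match.
[ "$REQUESTED" = "$CAPACITY" ] && echo "resize complete: $CAPACITY"
```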

10. Also verify the storage size in the pod itself is updated.

tux > kubectl --namespace uaa exec mysql-0 -- df --human-readable


11 Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)

Important

README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform (https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/)

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on Google Kubernetes Engine (GKE). This chapter describes the steps to prepare a SUSE Cloud Application Platform deployment on GKE using its integrated network load balancers. See https://cloud.google.com/kubernetes-engine/ for more information on GKE.

11.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on GKE:

A Google Cloud Platform (GCP) user account or a service account with the following IAM roles. If you do not have an account, visit https://console.cloud.google.com/ to create one.


compute.admin. For details regarding this role, refer to https://cloud.google.com/iam/docs/understanding-roles#compute-engine-roles .

container.admin. For details regarding this role, refer to https://cloud.google.com/kubernetes-engine/docs/how-to/iam#predefined .

iam.serviceAccountUser. For details regarding this role, refer to https://cloud.google.com/kubernetes-engine/docs/how-to/iam#primitive .

Access to a GCP project with the Kubernetes Engine API enabled. If a project needs to be created, refer to https://cloud.google.com/apis/docs/getting-started#creating_a_google_project . To enable access to the API, refer to https://cloud.google.com/apis/docs/getting-started#enabling_apis .

gcloud, the primary command line interface to Google Cloud Platform. See https://cloud.google.com/sdk/gcloud/ for more information and installation instructions.

cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/ .

For SUSE Linux Enterprise and openSUSE systems, install using zypper.

tux > sudo zypper install cf-cli

For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html .

kubectl, the Kubernetes command line tool. For more information, refer to https://kubernetes.io/docs/reference/kubectl/overview/ .

For SLE 12 SP3 or 15 systems, install the package kubernetes-client from the Public Cloud module.

For other systems, follow the instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/ .

jq, a command line JSON processor. See https://stedolan.github.io/jq/ for more information and installation instructions.


curl , the Client URL (cURL) command line tool.

sed , the stream editor.

Important

The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating the following features:

Stratos Web Console

For details, see Chapter 6, Installing the Stratos Web Console.

High Availability

For details, see Chapter 7, SUSE Cloud Application Platform High Availability.

LDAP Integration

For details, see Chapter 8, LDAP Integration.

External Log Server Integration

For details, see Chapter 24, Logging.

Managing Certificates

For details, see Chapter 25, Managing Certificates.

Other Features

Refer to the Administration Guide at Part III, “SUSE Cloud Application Platform Administration” for additional features.

11.2 Creating a GKE cluster

In order to deploy SUSE Cloud Application Platform, create a cluster that:

Is a Zonal, Regional, or Private type. Do not use an Alpha cluster.

Uses Ubuntu as the host operating system. If using the gcloud CLI, include --image-type=UBUNTU during the cluster creation.

Allows access to all Cloud APIs (in order for storage to work correctly).


Has at least 3 nodes of machine type n1-standard-4. If using the gcloud CLI, include --machine-type=n1-standard-4 and --num-nodes=3 during the cluster creation. For details, see https://cloud.google.com/compute/docs/machine-types#standard_machine_types .

Ensure nodes use a minimum kernel version of 3.19.

Has at least 80 GB local storage per node.

(Optional) Uses preemptible nodes to keep costs low. For details, see https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms .

1. Set a name for your cluster:

tux > export CLUSTER_NAME="cap"

2. Set the zone for your cluster:

tux > export CLUSTER_ZONE="us-west1-a"

3. Set the number of nodes for your cluster:

tux > export NODE_COUNT=3

4. Create the cluster:

tux > gcloud container clusters create ${CLUSTER_NAME} \
--image-type=UBUNTU \
--machine-type=n1-standard-4 \
--zone ${CLUSTER_ZONE} \
--num-nodes=$NODE_COUNT \
--no-enable-basic-auth \
--no-issue-client-certificate \
--no-enable-autoupgrade \
--metadata disable-legacy-endpoints=true


Specify the --no-enable-basic-auth and --no-issue-client-certificate flags so that kubectl does not use basic or client certificate authentication, but uses OAuth Bearer Tokens instead. Configure the flags to suit your desired authentication mechanism.

Specify --no-enable-autoupgrade to disable automatic upgrades.

Disable legacy metadata server endpoints using --metadata disable-legacy-endpoints=true as a best practice as indicated in https://cloud.google.com/compute/docs/storing-retrieving-metadata#default .

11.3 Get kubeconfig File

Get the kubeconfig file for your cluster.

tux > gcloud container clusters get-credentials --zone ${CLUSTER_ZONE:?required} ${CLUSTER_NAME:?required}

11.4 Install Helm Client and Tiller

Helm is a Kubernetes package manager. It consists of a client and server component, both of which are required in order to install and manage Cloud Application Platform.

The Helm client, helm, can be installed on your remote administration computer by referring to the documentation at https://v2.helm.sh/docs/using_helm/#installing-helm . Cloud Application Platform is compatible with both Helm 2 and Helm 3. Examples in this guide are based on Helm 2. To use Helm 3, refer to the Helm documentation at https://helm.sh/docs/ .

Tiller, the Helm server component, needs to be installed on your Kubernetes cluster. Follow the instructions at https://v2.helm.sh/docs/using_helm/#installing-tiller to install Tiller with a service account, and ensure your installation is appropriately secured according to your requirements as described in https://v2.helm.sh/docs/using_helm/#securing-your-helm-installation .


11.5 Default Storage Class

This example creates a pd-ssd storage class for your cluster. Create a file named storage-class.yaml with the following:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: "true"
  name: persistent
parameters:
  type: pd-ssd
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
allowVolumeExpansion: true

Create the new storage class using the manifest defined:

tux > kubectl create --filename storage-class.yaml

Specify the newly created storage class, called persistent, as the value for kube.storage_class.persistent in your deployment configuration file, like this example:

kube:
  storage_class:
    persistent: "persistent"

See Section 11.7, “Deployment Configuration” for a complete example deployment configuration file, scf-config-values.yaml.

11.6 DNS Configuration

This section provides an overview of the domain and sub-domains that require A records to be set up. The process is described in more detail in the deployment section.


The following table lists the required domain and sub-domains, using example.com as the example domain:

Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

Domains Services

uaa.example.com uaa-uaa-public

*.uaa.example.com uaa-uaa-public

example.com router-gorouter-public

*.example.com router-gorouter-public

tcp.example.com tcp-router-tcp-router-public

ssh.example.com diego-ssh-ssh-proxy-public

A SUSE Cloud Application Platform cluster exposes these four services:

Kubernetes service descriptions Kubernetes service names

User Account and Authentication ( uaa ) uaa-uaa-public

Cloud Foundry (CF) TCP routing service tcp-router-tcp-router-public

CF application SSH access diego-ssh-ssh-proxy-public

CF router router-gorouter-public

uaa-uaa-public is in the uaa namespace, and the rest are in the scf namespace.
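The records in the table above can be enumerated for a given domain. A brief sketch using the example domain (replace example.com with your own; the variable names are illustrative):

```shell
DOMAIN="example.com"

# Hosts from the table above that need records pointing at the load balancers.
UAA_HOST="uaa.${DOMAIN}"
TCP_HOST="tcp.${DOMAIN}"
SSH_HOST="ssh.${DOMAIN}"

# Wildcards are quoted so the shell does not glob them.
printf '%s\n' "${UAA_HOST}" "*.${UAA_HOST}" "${DOMAIN}" "*.${DOMAIN}" "${TCP_HOST}" "${SSH_HOST}"
```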

11.7 Deployment Configuration

It is not necessary to create any DNS records before deploying uaa. Instead, after uaa is running you will find the load balancer IP address that was automatically created during deployment, and then create the necessary records.


The following file, scf-config-values.yaml, provides a complete example deployment configuration.

Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

### example deployment configuration file
### scf-config-values.yaml

env:
  DOMAIN: example.com
  # the UAA prefix is required
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
  GARDEN_ROOTFS_DRIVER: "overlay-xfs"

kube:
  storage_class:
    persistent: "persistent"
  auth: rbac

secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: password

  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: password

services:
  loadbalanced: true

Take note of the following Helm values when defining your scf-config-values.yaml file.

GARDEN_ROOTFS_DRIVER

For SUSE® CaaS Platform and other Kubernetes deployments where the nodes are based on SUSE Linux Enterprise, the btrfs file system driver must be used, and btrfs is selected by default.


For Microsoft AKS, Amazon EKS, Google GKE, and other Kubernetes deployments where the nodes are based on other operating systems, the overlay-xfs file system driver must be used.

Important: Protect UAA_ADMIN_CLIENT_SECRET

The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.

11.8 Add the Kubernetes charts repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm :

tux > helm repo list
NAME      URL
stable    https://kubernetes-charts.storage.googleapis.com
local     http://127.0.0.1:8879/charts
suse      https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                            CHART VERSION   APP VERSION   DESCRIPTION
suse/cf                         2.19.1          1.5.1         A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1                         A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres    1.0.1                         A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                    2.6.1           1.5.1         A Helm chart for deploying Stratos UI Console
suse/log-agent-rsyslog          1.0.1           8.39.0        Log Agent for forwarding logs of K8s control pl...
suse/metrics                    1.1.1           1.5.1         A Helm chart for Stratos Metrics
suse/minibroker                 0.3.1                         A minibroker for your minikube
suse/nginx-ingress              0.28.4          0.15.0        An nginx Ingress controller that uses ConfigMap to store ...


suse/uaa                        2.19.1          1.5.1         A Helm chart for SUSE UAA

11.9 Deploying SUSE Cloud Application Platform

This section describes how to deploy SUSE Cloud Application Platform on Google GKE, and how to configure your DNS records.

11.9.1 Deploy uaa

Note: Embedded uaa in scf

The User Account and Authentication ( uaa ) Server is included as an optional feature of the scf Helm chart. This simplifies the Cloud Application Platform deployment process as a separate installation and/or upgrade of uaa is no longer a prerequisite to installing and/or upgrading scf .

It is important to note that:

This feature should only be used when uaa is not shared with other projects.

You cannot migrate from an existing external uaa to an embedded one. In this situation, enabling this feature during an upgrade will result in a single admin account.

To enable this feature, add the following to your scf-config-values.yaml .

enable:
  uaa: true

When deploying and/or upgrading scf , run helm install and/or helm upgrade and note that:

Installing and/or upgrading uaa using helm install suse/uaa ... and/or helm upgrade is no longer required.

It is no longer necessary to set the UAA_CA_CERT parameter. Previously, this parameter was passed the CA_CERT variable, which was assigned the CA certificate of uaa .


Use Helm to deploy the uaa server:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

Once the uaa deployment completes, a uaa service will be exposed on a load balancer public IP. The name of the service ends with -public . In the following example, the uaa-uaa-public service is exposed on 35.197.11.229 and port 2793 .

tux > kubectl get services --namespace uaa | grep public
uaa-uaa-public   LoadBalancer   10.0.67.56   35.197.11.229   2793:30206/TCP

Use the DNS service of your choice to set up DNS A records for the service from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

For the uaa-uaa-public service, map the following domains:

uaa.DOMAIN

Using the example values, an A record for uaa.example.com that points to 35.197.11.229 would be created.

*.uaa.DOMAIN

Using the example values, an A record for *.uaa.example.com that points to 35.197.11.229 would be created.

Use curl to verify you are able to connect to the uaa OAuth server on the DNS name configured:

tux > curl --insecure https://uaa.example.com:2793/.well-known/openid-configuration


This should return a JSON object, as this abbreviated example shows:

{
  "issuer":"https://uaa.example.com:2793/oauth/token",
  "authorization_endpoint":"https://uaa.example.com:2793/oauth/authorize",
  "token_endpoint":"https://uaa.example.com:2793/oauth/token"

11.9.2 Deploy scf

Before deploying scf , ensure the DNS records for the uaa domains have been set up as specified in the previous section. Next, pass your uaa secret and certificate to scf , then use Helm to deploy scf :

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

Once the deployment completes, a number of public services will be set up using load balancers that have been configured with corresponding load balancing rules and probes, as well as having the correct ports opened in the firewall settings.


List the services that have been exposed on the load balancer public IP. The names of these services end in -public :

tux > kubectl get services --namespace scf | grep public
diego-ssh-ssh-proxy-public     LoadBalancer   10.23.249.196   35.197.32.244   2222:31626/TCP                              1d
router-gorouter-public         LoadBalancer   10.23.248.85    35.197.18.22    80:31213/TCP,443:30823/TCP,4443:32200/TCP   1d
tcp-router-tcp-router-public   LoadBalancer   10.23.241.17    35.197.53.74    20000:30307/TCP,20001:30630/TCP,20002:32524/TCP,20003:32344/TCP,20004:31514/TCP,20005:30917/TCP,20006:31568/TCP,20007:30701/TCP,20008:31810/TCP   1d

Use the DNS service of your choice to set up DNS A records for the services from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

For the router-gorouter-public service, map the following domains:

DOMAIN

Using the example values, an A record for example.com that points to 35.197.18.22 would be created.

*.DOMAIN

Using the example values, an A record for *.example.com that points to 35.197.18.22 would be created.

For the diego-ssh-ssh-proxy-public service, map the following domain:

ssh.DOMAIN

Using the example values, an A record for ssh.example.com that points to 35.197.32.244 would be created.

For the tcp-router-tcp-router-public service, map the following domain:

tcp.DOMAIN

Using the example values, an A record for tcp.example.com that points to 35.197.53.74 would be created.
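Taken together, the mappings above amount to four A records. The following sketch prints them using the example domain and load balancer IPs from this section (substitute your own values); the print_records helper is illustrative, not part of any tool:

```shell
# Print the four DNS A records needed for the scf public services.
# example.com and the IPs are the example values from this section.
DOMAIN=example.com
GOROUTER_IP=35.197.18.22
SSH_IP=35.197.32.244
TCP_IP=35.197.53.74

print_records() {
  printf '%-18s A %s\n' \
    "$DOMAIN"     "$GOROUTER_IP" \
    "*.$DOMAIN"   "$GOROUTER_IP" \
    "ssh.$DOMAIN" "$SSH_IP" \
    "tcp.$DOMAIN" "$TCP_IP"
}

print_records
```

Feed the printed records into whatever interface your DNS service provides.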

Your load balanced deployment of Cloud Application Platform is now complete. Verify you can access the API endpoint:

tux > cf api --skip-ssl-validation https://api.example.com


11.10 Deploying and Using the Google Cloud Platform Service Broker

The Google Cloud Platform (GCP) Service Broker is designed for use with Cloud Foundry and Kubernetes. It is compliant with v2.13 of the Open Service Broker API (see https://www.openservicebrokerapi.org/ ) and provides support for the services listed at https://github.com/GoogleCloudPlatform/gcp-service-broker/tree/master#open-service-broker-for-google-cloud-platform .

This section describes how to deploy and use the GCP Service Broker, as a SUSE Cloud Foundry application, on SUSE Cloud Application Platform.

11.10.1 Enable APIs

1. From the GCP console, click the Navigation menu.

2. Click APIs & Services and then Library.

3. Enable the following:

The Google Cloud Resource Manager API. For more information, see https://console.cloud.google.com/apis/api/cloudresourcemanager.googleapis.com .

The Google Identity and Access Management (IAM) API. For more information, see https://console.cloud.google.com/apis/api/iam.googleapis.com/overview .

4. Additionally, enable the APIs for the services that will be used. Refer to https://github.com/GoogleCloudPlatform/gcp-service-broker/tree/master#open-service-broker-for-google-cloud-platform to see the services available and the corresponding APIs that will need to be enabled. The examples in this section will require enabling the following APIs:

The Google Cloud SQL API. For more information, see https://console.cloud.google.com/apis/library/sql-component.googleapis.com .

The Google Cloud SQL Admin API. For more information, see https://console.cloud.google.com/apis/library/sqladmin.googleapis.com .


11.10.2 Create a Service Account

A service account allows non-human users to authenticate with and be authorized to interact with Google APIs. To learn more about service accounts, see https://cloud.google.com/iam/docs/understanding-service-accounts . The service account created here will be used by the GCP Service Broker application so that it can interact with the APIs to provision resources.

1. From the GCP console, click the Navigation menu.

2. Go to IAM & admin and click Service accounts.

3. Click Create Service Account.

4. In the Service account name field, enter a name.

5. Click Create.

6. In the Service account permissions section, add the following roles:

Project > Editor

Cloud SQL > Cloud SQL Admin

Compute Engine > Compute Admin

Service Accounts > Service Account User

Cloud Services > Service Broker Admin

IAM > Security Admin

7. Click Continue.

8. In the Create key section, click Create Key.

9. In the Key type field, select JSON and click Create. Save the file to a secure location. This will be required when deploying the GCP Service Broker application.

10. Click Done to finish creating the service account.

11.10.3 Create a Database for the GCP Service Broker

The GCP Service Broker requires a database to store information about the resources it provisions. Any database that adheres to the MySQL protocol may be used, but it is recommended to use a GCP Cloud SQL instance, as outlined in the following steps.


1. From the GCP console, click the Navigation menu.

2. Under the Storage section, click SQL.

3. Click Create Instance.

4. Click Choose MySQL to select MySQL as the database engine.

5. In the Instance ID field, enter an identifier for the MySQL instance.

6. In the Root password field, set a password for the root user.

7. Click Show configuration options to see additional configuration options.

8. Under the Set connectivity section, click Add network to add an authorized network.

9. In the Network eld, enter 0.0.0.0/0 and click Done.

10. Optionally, create SSL certificates for the database and store them in a secure location.

11. Click Create and wait for the MySQL instance to finish creating.

12. After the MySQL instance is finished creating, connect to it using either the Cloud Shell or the mysql command line client.

To connect using Cloud Shell:

1. Click on the instance ID of the MySQL instance.

2. In the Connect to this instance section of the Overview tab, click Connect using Cloud Shell.

3. After the shell is opened, the gcloud sql connect command is displayed. Press Enter to connect to the MySQL instance as the root user.

4. When prompted, enter the password for the root user set in an earlier step.

To connect using the mysql command line client:

1. Click on the instance ID of the MySQL instance.

2. In the Connect to this instance section of the Overview tab, take note of the IP address. For example, 11.22.33.44 .

3. Using the mysql command line client, run the following command.

tux > mysql -h 11.22.33.44 -u root -p


4. When prompted, enter the password for the root user set in an earlier step.

13. After connecting to the MySQL instance, run the following commands to create an initial user. The service broker will use this user to connect to the service broker database.

CREATE DATABASE servicebroker;
CREATE USER 'gcpdbuser'@'%' IDENTIFIED BY 'gcpdbpassword';
GRANT ALL PRIVILEGES ON servicebroker.* TO 'gcpdbuser'@'%' WITH GRANT OPTION;

Where:

gcpdbuser

Is the username of the user the service broker will connect to the service broker database with. Replace gcpdbuser with a username of your choosing.

gcpdbpassword

Is the password of the user the service broker will connect to the service broker database with. Replace gcpdbpassword with a secure password of your choosing.
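To confirm the new account works before moving on, you can reconnect as that user and check that the servicebroker database is visible. This is an optional sanity check using the example values from this section; the verify_broker_db wrapper is illustrative, not part of any tool:

```shell
# Connect as the service broker user (the mysql client prompts for
# the password) and list the servicebroker database.
verify_broker_db() {
  mysql -h "$1" -u "$2" -p -e 'SHOW DATABASES LIKE "servicebroker";'
}

# Example invocation, using the example host and username:
# verify_broker_db 11.22.33.44 gcpdbuser
```

If the servicebroker database is not listed, re-check the GRANT statement from the previous step.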

11.10.4 Deploy the Service Broker

The GCP Service Broker can be deployed as a Cloud Foundry application onto your deployment of SUSE Cloud Application Platform.

1. Get the GCP Service Broker application from GitHub and change to the GCP Service Broker application directory.

tux > git clone https://github.com/GoogleCloudPlatform/gcp-service-broker
tux > cd gcp-service-broker

2. Update the manifest.yml file and add the environment variables below and their associated values to the env section:

ROOT_SERVICE_ACCOUNT_JSON

The contents, as a string, of the JSON key file created for the service account created earlier (see Section 11.10.2, “Create a Service Account”).

SECURITY_USER_NAME

The username to authenticate broker requests. This will be the same one used in the cf create-service-broker command. In the examples, this is cfgcpbrokeruser .


SECURITY_USER_PASSWORD

The password to authenticate broker requests. This will be the same one used in the cf create-service-broker command. In the examples, this is cfgcpbrokerpassword .

DB_HOST

The host for the service broker database created earlier (see Section 11.10.3, “Create a Database for the GCP Service Broker”). This can be found in the GCP console by clicking on the name of the database instance and examining the Connect to this instance section of the Overview tab. In the examples, this is 11.22.33.44 .

DB_USERNAME

The username used to connect to the service broker database. This was created by the mysql commands earlier while connected to the service broker database instance (see Section 11.10.3, “Create a Database for the GCP Service Broker”). In the examples, this is gcpdbuser .

DB_PASSWORD

The password of the user used to connect to the service broker database. This was created by the mysql commands earlier while connected to the service broker database instance (see Section 11.10.3, “Create a Database for the GCP Service Broker”). In the examples, this is gcpdbpassword .

The manifest.yml should look similar to the example below.

### example manifest.yml for the GCP Service Broker
---
applications:
- name: gcp-service-broker
  memory: 1G
  buildpacks:
  - go_buildpack
  env:
    GOPACKAGENAME: github.com/GoogleCloudPlatform/gcp-service-broker
    GOVERSION: go1.12
    ROOT_SERVICE_ACCOUNT_JSON: '{ ... }'
    SECURITY_USER_NAME: cfgcpbrokeruser
    SECURITY_USER_PASSWORD: cfgcpbrokerpassword
    DB_HOST: 11.22.33.44
    DB_USERNAME: gcpdbuser
    DB_PASSWORD: gcpdbpassword


3. After updating the manifest.yml file, deploy the service broker as an application to your Cloud Application Platform deployment. Specify a health check type of none .

tux > cf push --health-check-type none

4. After the service broker application is deployed, take note of the URL displayed in the route field. Alternatively, run cf app gcp-service-broker to find the URL in the route field. On a browser, go to the route (for example, https://gcp-service-broker.example.com ). You should see the documentation for the GCP Service Broker.

5. Create the service broker in SUSE Cloud Foundry using the cf CLI.

tux > cf create-service-broker gcp-service-broker cfgcpbrokeruser \
cfgcpbrokerpassword https://gcp-service-broker.example.com

Where https://gcp-service-broker.example.com is replaced by the URL of the GCP Service Broker application deployed to SUSE Cloud Application Platform. Find the URL using cf app gcp-service-broker and examining the routes field.

6. Verify the service broker has been successfully registered.

tux > cf service-brokers

7. List the available services and their associated plans for the GCP Service Broker. For more information about the services, see https://github.com/GoogleCloudPlatform/gcp-service-broker/tree/master#open-service-broker-for-google-cloud-platform .

tux > cf service-access -b gcp-service-broker

8. Enable access to a service. This example enables access to the Google CloudSQL MySQL service (see https://cloud.google.com/sql/ ).

tux > cf enable-service-access google-cloudsql-mysql

9. Create an instance of the Google CloudSQL MySQL service. This example uses the mysql-db-f1-micro plan. Use the -c flag to pass optional parameters when provisioning a service. See https://github.com/GoogleCloudPlatform/gcp-service-broker/blob/master/docs/use.md for the parameters that can be set for each service.

tux > cf create-service google-cloudsql-mysql mysql-db-f1-micro mydb-instance


Wait for the service to finish provisioning. Check the status using the GCP console or with the following command.

tux > cf service mydb-instance | grep status

The service can now be bound to applications and used.
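As a sketch of a typical next step, the instance can be bound to an application with the standard cf CLI commands. The application name myapp is a placeholder; restaging makes the injected credentials appear in the application's VCAP_SERVICES environment:

```shell
# Bind a service instance to an app, then restage the app so the
# injected credentials take effect. "myapp" is a placeholder name.
bind_and_restage() {
  app=$1
  instance=$2
  cf bind-service "$app" "$instance"
  cf restage "$app"
}

# Example invocation:
# bind_and_restage myapp mydb-instance
```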

11.11 Resizing Persistent Volumes

Depending on your workloads, the default persistent volume (PV) sizes of your Cloud Application Platform deployment may be insufficient. This section describes the process to resize a persistent volume in your Cloud Application Platform deployment, by modifying the persistent volume claim (PVC) object.

Note that PVs can only be expanded; they cannot be shrunk.

11.11.1 Prerequisites

The following are required in order to use the process below to resize a PV.

Kubernetes 1.11 or newer.

The volume being expanded is among the list of supported volume types. Refer to the list at https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims .

11.11.2 Example Procedure

The following describes the process required to resize a PV, using the PV and PVC associated with uaa 's mysql as an example.

1. Find the storage class and PVC associated with the PV being expanded. In this example, the storage class is called persistent and the PVC is called mysql-data-mysql-0 .

tux > kubectl get persistentvolume

2. Verify whether the storage class has allowVolumeExpansion set to true .


tux > kubectl get storageclass persistent --output json

If it does not, run the below command to update the storage class.

tux > kubectl patch storageclass persistent \
--patch '{"allowVolumeExpansion": true}'

3. Cordon all nodes in your cluster.

a. tux > export VM_NODES=$(kubectl get nodes -o name)

b. tux > for i in $VM_NODES
   do kubectl cordon `echo "${i//node\/}"`
   done
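The ${i//node\/} expansion in the loop strips the node/ prefix that kubectl get nodes -o name adds to each entry, leaving the bare node name that kubectl cordon expects. A quick illustration with a hypothetical node name:

```shell
# "kubectl get nodes -o name" returns entries such as "node/<name>";
# the substitution removes the "node/" prefix.
i="node/gke-cap-default-pool-1a2b3c4d-abcd"
echo "${i//node\/}"    # prints gke-cap-default-pool-1a2b3c4d-abcd
```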

4. Increase the storage size of the PVC object associated with the PV being expanded.

tux > kubectl patch persistentvolumeclaim --namespace uaa mysql-data-mysql-0 \
--patch '{"spec": {"resources": {"requests": {"storage": "25Gi"}}}}'

5. List all pods that use the PVC, in any namespace.

tux > kubectl get pods --all-namespaces --output=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec | select( has ("volumes") ).volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }'
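The jq filter can be sanity-checked against a minimal, hypothetical pod list before running it on a real cluster; one object is printed per pod and mounted claim:

```shell
# A single hypothetical pod mounting the mysql-data-mysql-0 claim;
# a real cluster returns one object per volume-mounting pod.
SAMPLE='{"items":[{"metadata":{"name":"mysql-0","namespace":"uaa"},"spec":{"volumes":[{"persistentVolumeClaim":{"claimName":"mysql-data-mysql-0"}}]}}]}'

echo "$SAMPLE" | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec | select(has("volumes")).volumes[] | select(has("persistentVolumeClaim")).persistentVolumeClaim.claimName}'
```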

6. Restart all pods that use the PVC.

tux > kubectl delete pod mysql-0 --namespace uaa

7. Run kubectl describe persistentvolumeclaim and monitor the status.conditions field.

tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'

When the following is observed, press Ctrl – C to exit the watch command and proceedto the next step.

status.conditions.message is


message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.

status.conditions.type is

type: FileSystemResizePending

8. Uncordon all nodes in your cluster.

tux > for i in $VM_NODES
do kubectl uncordon `echo "${i//node\/}"`
done

9. Wait for the resize to finish. Verify the storage size values match for status.capacity.storage and spec.resources.requests.storage .

tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'

10. Also verify the storage size in the pod itself is updated.

tux > kubectl --namespace uaa exec mysql-0 -- df --human-readable

11.12 Expanding Capacity of a Cloud Application Platform Deployment on Google GKE

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 11, Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE) and have a running Cloud Application Platform deployment on Google GKE. The instructions below will use environment variables defined in Section 11.2, “Creating a GKE cluster”.

1. Get the most recently created node in the cluster.

tux > RECENT_VM_NODE=$(gcloud compute instances list \
--filter=name~${CLUSTER_NAME:?required} --format json | \
jq --raw-output '[sort_by(.creationTimestamp) | .[].creationTimestamp ] | last | .[0:19] | strptime("%Y-%m-%dT%H:%M:%S") | mktime')


2. Increase the Kubernetes node count in the cluster. Replace the example value with the number of nodes required for your workload.

tux > gcloud container clusters resize $CLUSTER_NAME \
--num-nodes 4

3. Verify the new nodes are in a Ready state before proceeding.

tux > kubectl get nodes

4. Add or update the following in your scf-config-values.yaml file to increase the number of diego-cell in your Cloud Application Platform deployment. Replace the example value with the number required by your workload.

sizing:
  diego_cell:
    count: 4

5. Pass your uaa secret and certificate to scf .

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

6. Perform a helm upgrade to apply the change.

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--version 2.19.1

7. Monitor progress of the additional diego-cell pods:

tux > watch --color 'kubectl get pods --namespace scf'


12 Eirini

Eirini, an alternative to Diego, is a scheduler for the Cloud Foundry Application Runtime (CFAR) that runs Cloud Foundry user applications in Kubernetes. For details about Eirini, see https://www.cloudfoundry.org/project-eirini/ and http://eirini.cf .

Warning: Technology Preview

Eirini is currently included in SUSE Cloud Application Platform as a technology preview to allow users to evaluate it. It is not supported for use in production deployments.

As a technology preview, Eirini contains certain limitations to its functionality. These are outlined below:

Air gapped environments or usage of manual certificates are currently not supported.

12.1 Enabling Eirini

1. To enable Eirini, and disable Diego, add the following to your scf-config-values.yaml file.

enable:
  eirini: true

kube:
  auth: rbac

env:
  DEFAULT_STACK: cflinuxfs3

2. To enable persistence, refer to the instructions at https://github.com/SUSE/scf/wiki/Persistence-with-Eirini-in-SCF .

3. Deploy uaa and scf .


Refer to the following for platform-specific instructions:

For SUSE® CaaS Platform, see Chapter 5, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform.

For Microsoft Azure Kubernetes Service, see Chapter 9, Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS).

For Amazon Elastic Kubernetes Service, see Chapter 10, Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS).

For Google Kubernetes Engine, see Chapter 11, Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE).

4. After initiating the helm install command to deploy scf , the secret generator will create a certificate signing request that must be approved by an administrator before deployment will continue. To do so, run the command below, where scf should be replaced with the namespace of your scf deployment.

tux > kubectl certificate approve scf-bits-service-ssl-cert

Note that manual approval of CSRs is recommended. An alternative to manual approval of the CSR is to pass --set env.KUBE_CSR_AUTO_APPROVAL=true as part of the helm install command. This flag will allow the CSR to be automatically approved. Operators should take caution with this approach as it will provide the secrets generator with a cluster role binding that allows it to approve all CSRs made to the Kubernetes signer.

5. Depending on your cluster configuration, Metrics Server may need to be deployed. Use Helm to install the latest stable Metrics Server. Note that --kubelet-insecure-tls is not recommended for production usage, but can be useful in test clusters with self-signed Kubelet serving certificates. For production, use --tls-private-key-file .

tux > helm install stable/metrics-server --name=metrics-server \
--set args[0]="--kubelet-preferred-address-types=InternalIP" \
--set args[1]="--kubelet-insecure-tls"


13 Setting Up a Registry for an Air Gapped Environment

Important

README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform (https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/)

Read Chapter 3, Deployment and Administration Notes

Cloud Application Platform, which consists of Docker images, is deployed to a Kubernetes cluster through Helm. These images are hosted on a Docker registry at registry.suse.com . In an air gapped environment, registry.suse.com will not be accessible. You will need to create a registry, and populate it with the images used by Cloud Application Platform.

This chapter describes how to load your registry with the necessary images to deploy Cloud Application Platform in an air gapped environment.

13.1 Prerequisites

The following prerequisites are required:

The Docker Command Line. See https://docs.docker.com/engine/reference/commandline/cli/ for more information.

A Docker registry has been created in your air gapped environment. Refer to the Dockerdocumentation at https://docs.docker.com/registry/ for instructions.

13.2 Mirror Images to Registry

All the Cloud Application Platform Helm charts include an imagelist.txt file that lists all images from the registry.suse.com registry under the cap organization. They can be mirrored to a local registry with the following script.


Replace the value of MIRROR with your registry's domain.

#!/bin/bash

MIRROR=registry.home

set -ex

function mirror {
  CHART=$1
  CHARTDIR=$(mktemp -d)
  helm fetch suse/$1 --untar --untardir=${CHARTDIR}
  IMAGES=$(cat ${CHARTDIR}/**/imagelist.txt)
  for IMAGE in ${IMAGES}; do
    echo $IMAGE
    docker pull registry.suse.com/cap/$IMAGE
    docker tag registry.suse.com/cap/$IMAGE $MIRROR/cap/$IMAGE
    docker push $MIRROR/cap/$IMAGE
  done
  docker save -o ${CHART}-images.tar.gz \
    $(perl -E "say qq(registry.suse.com/cap/\$_) for @ARGV" ${IMAGES})
  rm -r ${CHARTDIR}
}

mirror cf
mirror uaa
mirror console
mirror metrics
mirror cf-usb-sidecar-mysql
mirror cf-usb-sidecar-postgres

The script above will both mirror to a local registry and save the images in a local tarball that can be restored with docker load foo-images.tgz . In general only one of these mechanisms will be needed.

Also take note of the following regarding the script provided above.

The minibroker chart is currently not supported as it does not use a tagged image, but minibroker:latest . It will use a tagged image in the next release and be supported at that time.

The nginx-ingress chart is not supported by this mechanism because it is not part of the cap organization (and cannot be configured with the kube.registry.hostname setting at deploy time either).


Instead, manually parse the Helm chart for the image names and do a manual docker pull && docker tag && docker push on them.

Before deploying Cloud Application Platform using helm install , ensure the following in your scf-config-values.yaml has been updated to point to your registry, and not registry.suse.com .

kube:
  registry:
    # example registry domain
    hostname: "registry.home"
    username: ""
    password: ""
    organization: "cap"
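Before running helm install, it can be worth a quick check that the mirror answers the Docker Registry v2 API. A minimal sketch, assuming the example registry.home domain and an unauthenticated registry; check_registry is an illustrative helper, not part of any tool:

```shell
# Query the registry catalog endpoint; a working registry returns a
# JSON repository list such as {"repositories":["cap/..."]}.
check_registry() {
  curl --silent --fail "https://$1/v2/_catalog"
}

# Example invocation:
# check_registry registry.home
```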


III SUSE Cloud Application Platform Administration

14 Upgrading SUSE Cloud Application Platform 154

15 External Database for uaa and scf 170

16 Configuration Changes 173

17 Creating Admin Users 175

18 Managing Passwords 178

19 Accessing the UAA User Interface 180

20 Cloud Controller Database Secret Rotation 182

21 Backup and Restore 186

22 Service Brokers 198

23 App-AutoScaler 213

24 Logging 221

25 Managing Certificates 228

26 Integrating CredHub with SUSE Cloud Application Platform 236

27 Buildpacks 240

28 Custom Application Domains 253

29 Managing Nproc Limits of Pods 256


14 Upgrading SUSE Cloud Application Platform

uaa , scf , and Stratos together make up a SUSE Cloud Application Platform release. Maintenance updates are delivered as container images from the SUSE registry and applied with Helm.

For additional upgrade information, always review the release notes published at https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/ .

14.1 Important Considerations

Before performing an upgrade, be sure to take note of the following:

Perform Upgrades in Sequence

Cloud Application Platform only supports upgrading releases in sequential order. If there are any intermediate releases between your current release and your target release, they must be installed. Skipping releases is not supported. See Section 14.3, “Installing Skipped Releases” for more information.

Preserve Helm Value Changes during Upgrades

During a helm upgrade , always ensure your scf-config-values.yaml file is passed. This will preserve any previously set Helm values while allowing additional Helm value changes to be made.

helm rollback Is Not Supported

helm rollback is not supported in SUSE Cloud Application Platform or in upstream Cloud Foundry, and may break your cluster completely, because database migrations only run forward and cannot be reversed. Database schemas can change over time. During upgrades, pods of both the current and the next release may run concurrently, so the schema must stay compatible with the immediately previous release. But there is no way to guarantee such compatibility for future upgrades. One way to address this is to perform a full raw data backup and restore (see Section 21.2, “Disaster Recovery through Raw Data Backup and Restore”).


14.2 Upgrading SUSE Cloud Application Platform

The supported upgrade method is to install all upgrades, in order. Skipping releases is not supported. This table matches the Helm chart versions to each release:

CAP Release             | SCF and UAA Helm Chart Version | Stratos Helm Chart Version | Stratos Metrics Helm Chart Version | CF API Implemented | Known Compatible CF CLI Version | CF CLI URL
1.5.1 (current release) | 2.19.1                         | 2.6.1                      | 1.1.1                              | 2.138.0            | 6.46.1                          | https://github.com/cloudfoundry/cli/releases/tag/v6.46.1
1.5                     | 2.18.0                         | 2.5.3                      | 1.1.0                              | 2.138.0            | 6.46.1                          | https://github.com/cloudfoundry/cli/releases/tag/v6.46.1
1.4.1                   | 2.17.1                         | 2.4.0                      | 1.0.0                              | 2.134.0            | 6.46.0                          | https://github.com/cloudfoundry/cli/releases/tag/v6.46.0
1.4                     | 2.16.4                         | 2.4.0                      | 1.0.0                              | 2.134.0            | 6.44.1                          | https://github.com/cloudfoundry/cli/releases/tag/v6.44.1
1.3.1                   | 2.15.2                         | 2.3.0                      | 1.0.0                              | 2.120.0            | 6.42.0                          | https://github.com/cloudfoundry/cli/releases/tag/v6.42.0
1.3                     | 2.14.5                         | 2.2.0                      | 1.0.0                              | 2.115.0            | 6.40.1                          | https://github.com/cloudfoundry/cli/releases/tag/v6.40.1
1.2.1                   | 2.13.3                         | 2.1.0                      | -                                  | 2.115.0            | 6.39.1                          | https://github.com/cloudfoundry/cli/releases/tag/v6.39.1
1.2.0                   | 2.11.0                         | 2.0.0                      | -                                  | 2.112.0            | 6.38.0                          | https://github.com/cloudfoundry/cli/releases/tag/v6.38.0
1.1.1                   | 2.10.1                         | 1.1.0                      | -                                  | 2.106.0            | 6.37.0                          | https://github.com/cloudfoundry/cli/releases/tag/v6.37.0
1.1.0                   | 2.8.0                          | 1.1.0                      | -                                  | 2.103.0            | 6.36.1                          | https://github.com/cloudfoundry/cli/releases/tag/v6.36.1
1.0.1                   | 2.7.0                          | 1.0.2                      | -                                  | 2.101.0            | 6.34.1                          | https://github.com/cloudfoundry/cli/releases/tag/v6.34.1
1.0                     | 2.6.11                         | 1.0.0                      | -                                  | -                  | 6.34.0                          | https://github.com/cloudfoundry/cli/releases/tag/v6.34.0

Use helm list to see the version of your installed release. Verify the latest release is the next sequential release from your installed release. If it is, proceed with the commands below to perform the upgrade. If any releases have been missed, see Section 14.3, “Installing Skipped Releases”.

For upgrades from SUSE Cloud Application Platform 1.5 to 1.5.1, if the mysql roles of uaa and scf are in high availability mode, it is recommended to first scale them to single availability. Performing an upgrade with these roles in HA mode runs the risk of encountering potential database migration failures, which will lead to pods not starting.

The following sections outline the upgrade process for SUSE Cloud Application Platform deployments described by the given scenarios:

If the mysql roles are currently running in high availability mode through setting config.HA to true , see Section 14.2.1, “High Availability mysql through config.HA”.

If the mysql roles are currently running in high availability mode through configuring custom sizing values, see Section 14.2.2, “High Availability mysql through Custom Sizing Values”.

If the mysql roles are currently running in single availability mode, see Section 14.2.3, “Single Availability mysql”.

14.2.1 High Availability mysql through config.HA

This section describes the upgrade process for deployments where the mysql roles of uaa and scf are currently in high availability mode through setting config.HA to true (see Section 7.1.3, “Simple High Availability Configuration”).

The following example assumes your current scf-config-values.yaml file contains:

config:
  HA: true

1. For the uaa deployment, scale the instance count of the mysql role to 1 and remove the persistent volume claims associated with the removed mysql instances.

a. Scale the mysql instance count to 1.
Create a custom configuration file, called ha-strict-false-single-mysql.yaml in this example, with the values below. The file uses the config.HA_strict feature to allow setting the mysql role to a single instance while other roles are kept HA. Note that it is assumed config.HA=true is set in your existing scf-config-values.yaml file.

config:
  HA_strict: false
sizing:
  mysql:
    count: 1

Perform the scaling.

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--values ha-strict-false-single-mysql.yaml \
--version 2.18.0

Monitor progress using the watch command.

tux > watch --color 'kubectl get pods --namespace uaa'

b. Delete the persistent volume claims (PVC) associated with the removed mysql instances from the uaa namespace. Do not delete the persistent volumes (PV).
When config.HA is set to true , there are 3 instances of the mysql role. This means that after scaling to single availability, the PVCs associated with mysql-1 and mysql-2 need to be deleted.

tux > kubectl delete persistentvolumeclaim --namespace uaa mysql-data-mysql-1

tux > kubectl delete persistentvolumeclaim --namespace uaa mysql-data-mysql-2

2. For the scf deployment, scale the instance count of the mysql role to 1 and remove the persistent volume claims associated with the removed mysql instances.


a. Scale the mysql instance count to 1.
Reuse the custom configuration file, called ha-strict-false-single-mysql.yaml , created earlier. The file uses the config.HA_strict feature to allow setting the mysql role to a single instance while other roles are kept HA. Note that it is assumed config.HA=true is set in your existing scf-config-values.yaml file.
Perform the scaling.

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--values ha-strict-false-single-mysql.yaml \
--version 2.18.0

Monitor progress using the watch command.

tux > watch --color 'kubectl get pods --namespace scf'

b. Delete the persistent volume claims (PVC) associated with the removed mysql instances from the scf namespace. Do not delete the persistent volumes (PV).
When config.HA is set to true , there are 3 instances of the mysql role. This means that after scaling to single availability, the PVCs associated with mysql-1 and mysql-2 need to be deleted.

tux > kubectl delete persistentvolumeclaim --namespace scf mysql-data-mysql-1

tux > kubectl delete persistentvolumeclaim --namespace scf mysql-data-mysql-2

3. Get the latest updates from your Helm chart repositories. This will allow upgrading to newer releases of the SUSE Cloud Application Platform charts.

tux > helm repo update

4. Upgrade uaa .
Reuse the custom configuration file, called ha-strict-false-single-mysql.yaml , created earlier. The file uses the config.HA_strict feature to allow setting the mysql role to a single instance while other roles are kept HA. Note that it is assumed config.HA=true is set in your existing scf-config-values.yaml file.

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--values ha-strict-false-single-mysql.yaml \
--version 2.19.1

Monitor progress using the watch command.

tux > watch --color 'kubectl get pods --namespace uaa'

5. Extract the uaa secret and certificate for scf to use.

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

6. Upgrade scf .
Reuse the custom configuration file, called ha-strict-false-single-mysql.yaml , created earlier. The file uses the config.HA_strict feature to allow setting the mysql role to a single instance while other roles are kept HA. Note that it is assumed config.HA=true is set in your existing scf-config-values.yaml file.
Perform the upgrade.

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--values ha-strict-false-single-mysql.yaml \
--version 2.19.1 \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Monitor progress using the watch command.

tux > watch --color 'kubectl get pods --namespace scf'

7. Scale the entire uaa deployment, including the mysql role, back to the default high availability mode.
Create a custom configuration file, called ha-strict-true.yaml in this example, with the values below. The file sets config.HA_strict to true , which enforces that all roles, including mysql , are at the minimum required instance count of 3 for a default HA configuration. Note that it is assumed config.HA=true is set in your existing scf-config-values.yaml file.

config:
  HA_strict: true

Perform the scaling.

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--values ha-strict-true.yaml \
--version 2.19.1

Monitor progress using the watch command.

tux > watch --color 'kubectl get pods --namespace uaa'

8. Extract the uaa secret and certificate for scf to use.

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

9. Scale the entire scf deployment, including the mysql role, back to the default high availability mode.
Reuse the custom configuration file, called ha-strict-true.yaml in this example, created earlier. The file sets config.HA_strict to true , which enforces that all roles, including mysql , are at the minimum required instance count of 3 for a default HA configuration. Note that it is assumed config.HA=true is set in your existing scf-config-values.yaml file.

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--values ha-strict-true.yaml \
--version 2.19.1 \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

10. If installed, upgrade Stratos.

tux > helm upgrade --recreate-pods susecf-console suse/console \
--values scf-config-values.yaml \
--version 2.6.0


14.2.2 High Availability mysql through Custom Sizing Values

This section describes the upgrade process for deployments where the mysql roles of uaa and scf are currently in high availability mode by configuring custom sizing values (see Section 7.1.4, “Example Custom High Availability Configurations”).

1. For the uaa deployment, scale the instance count of the mysql role to 1 and remove the persistent volume claims associated with the removed mysql instances.

a. Scale the mysql instance count to 1.
Modify your existing custom sizing configuration file, called uaa-sizing.yaml in this example, so that the mysql role has a count of 1.

sizing:
  mysql:
    count: 1

Perform the scaling.

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--values uaa-sizing.yaml \
--version 2.18.0

Monitor progress using the watch command.

tux > watch --color 'kubectl get pods --namespace uaa'

b. Delete the persistent volume claims (PVC) associated with the removed mysql instances from the uaa namespace. Do not delete the persistent volumes (PV).
This example assumes mysql.count was previously set to 3 instances in your custom sizing configuration file. This means that after scaling to single availability, the PVCs associated with mysql-1 and mysql-2 need to be deleted.
If your mysql.count was previously set to 5 or 7 instances, repeat the procedure. For 5 instances, the PVCs associated with mysql-3 and mysql-4 would need to be deleted as well. For 7 instances, the PVCs associated with mysql-3 , mysql-4 , mysql-5 , and mysql-6 would need to be deleted as well.

tux > kubectl delete persistentvolumeclaim --namespace uaa mysql-data-mysql-1

tux > kubectl delete persistentvolumeclaim --namespace uaa mysql-data-mysql-2
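The set of claims to delete for a given previous mysql.count follows a simple pattern: instances are numbered from 0, the claim of mysql-0 is kept, and every higher-numbered claim is removed. A small sketch that prints the claim names for a given previous count (the count value is an assumption matching the examples above):

```shell
# Print the PVC names to delete when scaling the mysql role from a
# previous instance count down to 1. The claim of mysql-0 is kept.
pvcs_to_delete() {
  COUNT=$1   # previous mysql.count, e.g. 3, 5, or 7
  i=1
  while [ "$i" -lt "$COUNT" ]; do
    echo "mysql-data-mysql-$i"
    i=$((i + 1))
  done
}

pvcs_to_delete 3   # prints mysql-data-mysql-1 and mysql-data-mysql-2
```

Each printed name would then be passed to kubectl delete persistentvolumeclaim --namespace uaa (or --namespace scf for the scf deployment).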


2. For the scf deployment, scale the instance count of the mysql role to 1 and remove the persistent volume claims associated with the removed mysql instances.

a. Scale the mysql instance count to 1.
Modify your existing custom sizing configuration file, called scf-sizing.yaml in this example, so that the mysql role has a count of 1.

sizing:
  mysql:
    count: 1

Perform the scaling.

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--values scf-sizing.yaml \
--version 2.18.0

Monitor progress using the watch command.

tux > watch --color 'kubectl get pods --namespace scf'

b. Delete the persistent volume claims (PVC) associated with the removed mysql instances from the scf namespace. Do not delete the persistent volumes (PV).
This example assumes mysql.count was previously set to 3 instances in your custom sizing configuration file. This means that after scaling to single availability, the PVCs associated with mysql-1 and mysql-2 need to be deleted.
If your mysql.count was previously set to 5 or 7 instances, repeat the procedure. For 5 instances, the PVCs associated with mysql-3 and mysql-4 would need to be deleted as well. For 7 instances, the PVCs associated with mysql-3 , mysql-4 , mysql-5 , and mysql-6 would need to be deleted as well.

tux > kubectl delete persistentvolumeclaim --namespace scf mysql-data-mysql-1

tux > kubectl delete persistentvolumeclaim --namespace scf mysql-data-mysql-2

3. Get the latest updates from your Helm chart repositories. This will allow upgrading to newer releases of the SUSE Cloud Application Platform charts.

tux > helm repo update

4. Upgrade uaa .


Reuse your existing custom sizing configuration file, called uaa-sizing.yaml in this example, where the mysql role has a count of 1.

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--values uaa-sizing.yaml \
--version 2.19.1

Monitor progress using the watch command.

tux > watch --color 'kubectl get pods --namespace uaa'

5. Extract the uaa secret and certificate for scf to use.

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

6. Upgrade scf .
Reuse your existing custom sizing configuration file, called scf-sizing.yaml in this example, where the mysql role has a count of 1.

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--values scf-sizing.yaml \
--version 2.19.1 \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Monitor progress using the watch command.

tux > watch --color 'kubectl get pods --namespace scf'

7. Scale the entire uaa deployment, including the mysql role, back to high availability mode. This example assumes the mysql.count was previously set to 3 instances.
Note that the mysql role requires at least 3 instances for high availability and must have an odd number of instances.
Modify your existing custom sizing configuration file, called uaa-sizing.yaml in this example, so that the mysql role has a count of 3.

sizing:
  mysql:
    count: 3

Perform the scaling.

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--values uaa-sizing.yaml \
--version 2.19.1

Monitor progress using the watch command.

tux > watch --color 'kubectl get pods --namespace uaa'

8. Extract the uaa secret and certificate for scf to use.

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

9. Scale the entire scf deployment, including the mysql role, back to high availability mode. This example assumes the mysql.count was previously set to 3 instances.
Note that the mysql role requires at least 3 instances for high availability and must have an odd number of instances.
Modify your existing custom sizing configuration file, called scf-sizing.yaml in this example, so that the mysql role has a count of 3.

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--values scf-sizing.yaml \
--version 2.19.1 \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

10. If installed, upgrade Stratos.

tux > helm upgrade --recreate-pods susecf-console suse/console \
--values scf-config-values.yaml \
--version 2.6.0


14.2.3 Single Availability mysql

This section describes the upgrade process for deployments where the mysql roles of uaa and scf are currently in single availability mode.

1. Get the latest updates from your Helm chart repositories. This will allow upgrading to newer releases of the SUSE Cloud Application Platform charts.

tux > helm repo update

2. Upgrade uaa .

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--version 2.19.1

Monitor progress using the watch command.

tux > watch --color 'kubectl get pods --namespace uaa'

3. Extract the uaa secret and certificate for scf to use.

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

4. Upgrade scf .

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--version 2.19.1 \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Monitor progress using the watch command.

tux > watch --color 'kubectl get pods --namespace scf'

5. If installed, upgrade Stratos.

tux > helm upgrade --recreate-pods susecf-console suse/console \
--values scf-config-values.yaml \
--version 2.6.0


14.2.4 Change in URL of Internal cf-usb Broker Endpoint

This change is only applicable for upgrades from Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 and upgrades from Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1. The URL of the internal cf-usb broker endpoint has changed. Brokers for PostgreSQL and MySQL that use cf-usb will require the following manual fix after upgrading to reconnect with SCF/CAP:

For Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 upgrades:

1. Get the name of the secret (for example secrets-2.14.5-1 ):

tux > kubectl get secret --namespace scf

2. Get the URL of the cf-usb host (for example https://cf-usb.scf.svc.cluster.local:24054 ):

tux > cf service-brokers

3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

tux > kubectl get secret --namespace scf secrets-2.14.5-1 --output yaml | grep \
scf-usb-password: | cut -d: -f2 | base64 -id
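To make the pipeline concrete: the secret YAML stores the password base64-encoded, grep selects the scf-usb-password line, cut takes the value after the colon, and base64 -id decodes it while ignoring the leading space. A sketch on an illustrative sample line (the encoded value is a made-up example, not a real secret):

```shell
# Decode an scf-usb-password field from a line of secret YAML.
# The sample value decodes to "password123" and is purely illustrative.
SAMPLE='scf-usb-password: cGFzc3dvcmQxMjM='

DECODED=$(echo "$SAMPLE" | grep scf-usb-password: | cut -d: -f2 | base64 -id)
echo "$DECODED"   # prints password123
```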

4. Update the service broker, where password is the password from the previous step, and the URL is the result of the second step with the leading cf-usb part doubled, with a dash separator:

tux > cf update-service-broker usb broker-admin password https://cf-usb-cf-usb.scf.svc.cluster.local:24054
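The URL rewrite in this step is mechanical: the leading cf-usb host label is doubled with a dash separator. A small sketch of the transformation with sed, using the example URLs from the steps above:

```shell
# Rewrite the pre-1.3 broker URL to the 1.3 form by doubling the
# leading "cf-usb" host label with a dash separator.
OLD_URL="https://cf-usb.scf.svc.cluster.local:24054"
NEW_URL=$(echo "$OLD_URL" | sed 's|://cf-usb\.|://cf-usb-cf-usb.|')
echo "$NEW_URL"   # prints https://cf-usb-cf-usb.scf.svc.cluster.local:24054
```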

For Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1 upgrades:

1. Get the name of the secret (for example secrets-2.15.2-1 ):

tux > kubectl get secret --namespace scf

2. Get the URL of the cf-usb host (for example https://cf-usb-cf-usb.scf.svc.cluster.local:24054 ):

tux > cf service-brokers


3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

tux > kubectl get secret --namespace scf secrets-2.15.2-1 --output yaml | grep \
scf-usb-password: | cut -d: -f2 | base64 -id

4. Update the service broker, where password is the password from the previous step, and the URL is the result of the second step with the leading cf-usb- part removed:

tux > cf update-service-broker usb broker-admin password https://cf-usb.scf.svc.cluster.local:24054

14.3 Installing Skipped Releases

By default, Helm always installs the latest release. What if you accidentally skipped a release, and need to apply it before upgrading to the current release? Install the missing release by specifying the Helm chart version number. For example, your current uaa and scf versions are 2.10.1. Consult the table at the beginning of this chapter to see which releases you have missed. In this example, the missing Helm chart version for uaa and scf is 2.11.0. Use the --version option to install a specific version:

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--recreate-pods \
--version 2.11.0

Be sure to install the corresponding versions for scf and Stratos.
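The sequential-upgrade rule can be expressed as an ordered walk through the SCF/UAA chart versions from the table at the beginning of this chapter. A hedged sketch that only prints the helm upgrade commands still needed from a given installed version (it does not run Helm; the release and chart names follow the examples in this guide):

```shell
# SCF/UAA chart versions in release order, per the table earlier in
# this chapter.
VERSIONS="2.6.11 2.7.0 2.8.0 2.10.1 2.11.0 2.13.3 2.14.5 2.15.2 2.16.4 2.17.1 2.18.0 2.19.1"

upgrade_plan() {
  INSTALLED=$1   # currently installed chart version
  SEEN=0
  for V in $VERSIONS; do
    if [ "$SEEN" -eq 1 ]; then
      echo "helm upgrade susecf-uaa suse/uaa --values scf-config-values.yaml --recreate-pods --version $V"
    elif [ "$V" = "$INSTALLED" ]; then
      SEEN=1
    fi
  done
}

upgrade_plan 2.10.1   # prints one command per missed release, 2.11.0 first
```

Each printed command would be run in order, followed by the corresponding scf and Stratos upgrades for that release.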


15 External Database for uaa and scf

By default, internal MariaDB instances serve as the backing databases for internal components of uaa and scf . These components can be configured to use an external database system, such as a data service offered by a cloud service provider or an existing high availability database server.

The current SUSE Cloud Application Platform release supports the following databases:

MariaDB

MySQL

15.1 Configuration

This section describes how to configure SUSE Cloud Application Platform to use an external database with internal components of uaa and scf . The deployment and configuration of the external database itself is the responsibility of the operator and beyond the scope of this documentation. It is assumed the external database has been deployed and is accessible.

In order to configure SUSE Cloud Application Platform to use an external database, add and define the following Helm values in your scf-config-values.yaml .


In the env section:

DB_EXTERNAL_HOST . The hostname for the external database server.

DB_EXTERNAL_PORT . The port for the external database server. By default, this is 3306 .

DB_EXTERNAL_USER . The name of an administrator user for the external database server. This is required to create the necessary databases.

In the secrets section:

DB_EXTERNAL_PASSWORD . The corresponding password for the administrator user defined for DB_EXTERNAL_USER .

In the enable section:

mysql: false . This option disables the start and use of the internal database-related roles (mysql, mysql-proxy). It is currently not possible to do this automatically when an external database is used.

The following snippet contains the relevant Helm values defined in an example scf-config-values.yaml . Replace the example values with the connection details of your database.

env:
  DB_EXTERNAL_HOST: my-external-db.chnutnopvydk.us-west-2.rds.amazonaws.com
  DB_EXTERNAL_PORT: 3306
  DB_EXTERNAL_USER: admin

secrets:
  DB_EXTERNAL_PASSWORD: password

enable:
  mysql: false

The backing databases for uaa and scf can be independently configured by supplying the above Helm values in separate scf-config-values.yaml files (or as command line options). They can be configured to share the same external database, use different external databases, or use the internal database. The table below outlines the compatible combinations.

uaa                 | scf
Internal database   | Internal database
Internal database   | External database
External database   | Internal database
External database A | External database A
External database A | External database B

After your configuration file has been updated, refer to the platform-specific instructions to deploy uaa and/or scf :

For SUSE® CaaS Platform, see Chapter 5, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform.

For Microsoft Azure Kubernetes Service, see Chapter 9, Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS).

For Amazon Elastic Kubernetes Service, see Chapter 10, Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS).

For Google Kubernetes Engine, see Chapter 11, Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE).


16 Configuration Changes

After the initial deployment of Cloud Application Platform, any changes made to your Helm chart values, whether through your scf-config-values.yaml file or directly using Helm's --set flag, are applied using the helm upgrade command.

Warning: Do Not Make Changes to Pod Counts During a Version Upgrade
The helm upgrade command can be used to apply configuration changes as well as perform version upgrades to Cloud Application Platform. A change to the pod count configuration should not be applied simultaneously with a version upgrade. Sizing changes should be made separately, either before or after a version upgrade (see Section 7.1, “Configuring Cloud Application Platform for High Availability”).

16.1 Configuration Change Example

Consider an example where more granular log entries are required than those provided by your default deployment of uaa (default LOG_LEVEL: "info" ).

You would then add an entry for LOG_LEVEL to the env section of your scf-config-values.yaml used to deploy uaa :

env:
  LOG_LEVEL: "debug2"

Then apply the change with the helm upgrade command. This example assumes the suse/uaa Helm chart deployed was named susecf-uaa :

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--version 2.19.1

When all pods are in a READY state, the configuration change will also be reflected. If the chart was deployed to the uaa namespace, progress can be monitored with:

tux > watch --color 'kubectl get pods --namespace uaa'
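Readiness can also be checked mechanically by parsing the READY column of kubectl get pods output. The snippet below is a sketch that runs against an inlined sample of that output (an assumption, so it works without a cluster); in practice you would substitute the real output of kubectl get pods --namespace uaa --no-headers.

```shell
# Sketch: decide whether every pod is fully ready by parsing the READY
# column of `kubectl get pods`. The sample output below is inlined so
# the logic runs without a cluster.
sample_output='NAME      READY   STATUS    RESTARTS   AGE
uaa-0     1/1     Running   0          5m
mysql-0   2/2     Running   0          5m'

# Print the name of any pod whose ready count does not match its
# desired count, or which is not Running.
not_ready=$(printf '%s\n' "$sample_output" | tail -n +2 | \
  awk '{ split($2, c, "/"); if (c[1] != c[2] || $3 != "Running") print $1 }')

if [ -z "$not_ready" ]; then all_ready=yes; else all_ready=no; fi
echo "all_ready=$all_ready"
```

This can be wrapped in a loop as a scripted alternative to watching the pod list by eye.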


16.2 Other Examples

The following are other examples of using helm upgrade to make configuration changes:

Cluster sizing changes (see Chapter 7, SUSE Cloud Application Platform High Availability)

Secrets rotation (see Section 25.4, “Rotating Automatically Generated Secrets”)

Enabling additional services (see Section 23.2, “Enabling and Disabling the App-AutoScaler Service”)


17 Creating Admin Users

This chapter provides an overview on how to create additional administrators for your Cloud Application Platform cluster.

17.1 Prerequisites

The following prerequisites are required in order to create additional Cloud Application Platform cluster administrators:

cf , the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/ .

For SUSE Linux Enterprise and openSUSE systems, install using zypper :

tux > sudo zypper install cf-cli

For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html .

uaac , the Cloud Foundry uaa command line client (UAAC). See https://docs.cloudfoundry.org/uaa/uaa-user-management.html for more information and installation instructions.

On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem:

tux > sudo zypper install ruby-devel gcc-c++


17.2 Creating an Example Cloud Application Platform Cluster Administrator

The following example demonstrates the steps required to create a new administrator user for your Cloud Application Platform cluster. Note that creating administrator accounts must be done using the UAAC and cannot be done using the cf CLI.

1. Use UAAC to target your uaa server.

tux > uaac target --skip-ssl-validation https://uaa.example.com:2793

2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your scf-config-values.yaml file.

tux > uaac token client get admin --secret password

3. Create a new user:

tux > uaac user add new-admin --password password --emails [email protected] --zone scf

4. Add the new user to the following groups to grant administrator privileges to the cluster (see https://docs.cloudfoundry.org/concepts/architecture/uaa.html#uaa-scopes for information on privileges provided by each group):

tux > uaac member add scim.write new-admin --zone scf

tux > uaac member add scim.read new-admin --zone scf

tux > uaac member add cloud_controller.admin new-admin --zone scf

tux > uaac member add clients.read new-admin --zone scf

tux > uaac member add clients.write new-admin --zone scf

tux > uaac member add doppler.firehose new-admin --zone scf

tux > uaac member add routing.router_groups.read new-admin --zone scf

tux > uaac member add routing.router_groups.write new-admin --zone scf
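The eight uaac member add calls above differ only in the group name, so they can be driven by a small loop. This sketch only prints the commands (a dry run, so it is safe to try anywhere); remove the echo to run them against a targeted, authenticated uaac client.

```shell
# Dry run: print one `uaac member add` command per administrator group.
groups="scim.write scim.read cloud_controller.admin clients.read \
clients.write doppler.firehose routing.router_groups.read \
routing.router_groups.write"

for group in $groups; do
  echo uaac member add "$group" new-admin --zone scf
done
```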


5. Log into your Cloud Application Platform deployment as the newly created administrator:

tux > cf api --skip-ssl-validation https://api.example.com

tux > cf login -u new-admin

6. The following commands can be used to verify the new administrator account has sufficient permissions:

tux > cf create-shared-domain test-domain.com

tux > cf set-org-role new-admin org OrgManager

tux > cf create-buildpack test_buildpack /tmp/ruby_buildpack-cached-sle15-v1.7.30.1.zip 1

If the account has sufficient permissions, you should not receive an authorization error message similar to the following:

FAILED
Server error, status code: 403, error code: 10003, message: You are not authorized to perform the requested action

See https://docs.cloudfoundry.org/cf-cli/cf-help.html for other administrator-specific commands that can be run to confirm sufficient permissions are provided.


18 Managing Passwords

The various components of SUSE Cloud Application Platform authenticate to each other using passwords that are automatically managed by the Cloud Application Platform secrets-generator. The only passwords managed by the cluster administrator are passwords for human users. The administrator may create and remove user logins, but cannot change user passwords.

The cluster administrator password is initially defined in the deployment's values.yaml file with CLUSTER_ADMIN_PASSWORD .

The Stratos Web UI provides a form for users, including the administrator, to change their own passwords.

User logins are created (and removed) with the Cloud Foundry Client, cf CLI

18.1 Password Management with the Cloud Foundry Client

The administrator cannot change other users' passwords. Only users may change their own passwords, and password changes require the current password:

tux > cf passwd
Current Password>
New Password>
Verify Password>
Changing password...
OK
Please log in again

The administrator can create a new user:

tux > cf create-user username password

and delete a user:

tux > cf delete-user username

Use the cf CLI to assign space and org roles. Run cf help -a for a complete command listing, or see Creating and Managing Users with the cf CLI (https://docs.cloudfoundry.org/adminguide/cli-user-management.html) .


18.2 Changing User Passwords with Stratos

The Stratos Web UI provides a form for changing passwords on your profile page. Click the overflow menu button on the top right to access your profile, then click the edit button on your profile page. You can manage your password and username on this page.

FIGURE 18.1: STRATOS PROFILE PAGE

FIGURE 18.2: STRATOS EDIT PROFILE PAGE


19 Accessing the UAA User Interface

After UAA is deployed successfully, users will not be able to log in to the UAA user interface (UI) with the admin user and the UAA_ADMIN_CLIENT_SECRET credentials. This user is only an OAuth client that is authorized to call UAA REST APIs, so you will need to create a separate user in the UAA server by using the UAAC utility.

19.1 Prerequisites

The following prerequisites are required in order to access the UAA UI.

cf , the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/ .

For SUSE Linux Enterprise and openSUSE systems, install using zypper :

tux > sudo zypper install cf-cli

For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html .

uaac , the Cloud Foundry uaa command line client (UAAC). See https://docs.cloudfoundry.org/uaa/uaa-user-management.html for more information and installation instructions.

On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem:

tux > sudo zypper install ruby-devel gcc-c++

UAA has been successfully deployed.

19.2 Procedure

1. Use UAAC to target your uaa server.


tux > uaac target --skip-ssl-validation https://uaa.example.com:2793

2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your scf-config-values.yaml file.

tux > uaac token client get admin --secret password

3. Create a new user.

tux > uaac user add NEW-USER -p PASSWORD --emails NEW-USER-EMAIL

4. Go to the UAA UI at https://uaa.example.com:2793/login , replacing example.com with your domain.

5. Log in using the newly created user. Use the username and password as the credentials.


20 Cloud Controller Database Secret Rotation

The Cloud Controller Database (CCDB) encrypts sensitive information like passwords. By default, the encryption key is generated by SCF; for details, see https://github.com/SUSE/scf/blob/2d095a71008c33a23ca39d2ab9664e5602f8707e/container-host-files/etc/scf/config/role-manifest.yml#L1656-L1662 . If it is compromised and needs to be rotated, new keys can be added. Note that existing encrypted information will not be updated. The encrypted information must be set again to have it re-encrypted with the new key. The old key cannot be dropped until all references to it are removed from the database.

Updating these secrets is a manual process. The following procedure outlines how this is done.

1. Add env.CC_DB_CURRENT_KEY_LABEL and secrets.CC_DB_ENCRYPTION_KEYS to your scf-config-values.yaml . Replace the example values with your own.

env:
  CC_DB_CURRENT_KEY_LABEL: new_key

secrets:
  CC_DB_ENCRYPTION_KEYS:
    new_key: "new_key_value"

2. Pass your uaa secret and certificate to scf :

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

3. Use the helm upgrade command to import the above data into the cluster. This restarts relevant pods with the new information from the previous step:

4. tux > helm upgrade susecf-scf suse/cf \
   --values scf-config-values.yaml \
   --set "secrets.UAA_CA_CERT=${CA_CERT}" \
   --version 2.19.1


5. Perform the rotation:

a. Run the rotation for the encryption keys. A series of JSON-formatted log entries describing the key rotation progress for various Cloud Controller models will be displayed:

tux > kubectl exec --namespace scf api-group-0 -- bash -c \
'source /var/vcap/jobs/cloud_controller_ng/bin/ruby_version.sh; \
export CLOUD_CONTROLLER_NG_CONFIG=/var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml; \
cd /var/vcap/packages/cloud_controller_ng/cloud_controller_ng; \
bundle exec rake rotate_cc_database_key:perform'

b. Restart the api-group pod.

tux > kubectl delete pod api-group-0 --namespace scf --force --grace-period=0

Note that keys should be appended to the existing secret to ensure existing environment variables can be decoded. Any operator can check which keys are in use by accessing the CCDB. If the encryption_key_label is empty, the default generated key is still being used:

tux > kubectl exec --stdin --tty mysql-0 --namespace scf -- /bin/bash -c 'mysql -p${MYSQL_ADMIN_PASSWORD} --socket /var/vcap/sys/run/pxc-mysql/mysqld.sock'
MariaDB [(none)]> select name, encrypted_environment_variables, encryption_key_label from ccdb.apps;
+--------+--------------------------------------------------------------------------------------------------------------+----------------------+
| name   | encrypted_environment_variables                                                                              | encryption_key_label |
+--------+--------------------------------------------------------------------------------------------------------------+----------------------+
| go-env | XF08q9HFfDkfxTvzgRoAGp+oci2l4xDeosSlfHJUkZzn5yvr0U/+s5LrbQ2qKtET0ssbMm3L3OuSkBnudZLlaCpFWtEe5MhUe2kUn3A6rUY= | key0                 |
+--------+--------------------------------------------------------------------------------------------------------------+----------------------+
1 row in set (0.00 sec)

For example, if keys were being rotated again, the secret would become:

SECRET_DATA=$(echo "{key0: abc-123, key1: def-456}" | base64)


and the CC_DB_CURRENT_KEY_LABEL would be updated to match the new key.
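Continuing that example, the corresponding Helm values for the second rotation would keep the old key alongside the new one until no rows reference it, with CC_DB_CURRENT_KEY_LABEL pointing at the new label. The key names and values here are the illustrative ones from the example above, not real secrets:

```yaml
env:
  CC_DB_CURRENT_KEY_LABEL: key1

secrets:
  CC_DB_ENCRYPTION_KEYS:
    key0: "abc-123"   # previous key; keep until no database rows reference it
    key1: "def-456"   # new current key
```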

20.1 Tables with Encrypted Information

The CCDB contains several tables with encrypted information, as follows:

apps

Environment variables

buildpack_lifecycle_buildpacks

Buildpack URLs may contain passwords

buildpack_lifecycle_data

Buildpack URLs may contain passwords

droplets

May contain Docker registry passwords

env_groups

Environment variables

packages

May contain Docker registry passwords

service_bindings

Contains service credentials

service_brokers

Contains service credentials

service_instances

Contains service credentials

service_keys

Contains service credentials

tasks

Environment variables


20.1.1 Update Existing Data with New Encryption Key

To ensure the encryption key is updated for existing data, the command that originally set the data (or its update- equivalent) can be run again with the same parameters. Some objects need to be deleted and recreated to update the label.

apps

Run cf set-env again

buildpack_lifecycle_buildpacks, buildpack_lifecycle_data, droplets

cf restage the app

packages

cf delete , then cf push the app (Docker apps with registry password)

env_groups

Run cf set-staging-environment-variable-group or cf set-running-environment-variable-group again

service_bindings

Run cf unbind-service and cf bind-service again

service_brokers

Run cf update-service-broker with the appropriate credentials

service_instances

Run cf update-service with the appropriate credentials

service_keys

Run cf delete-service-key and cf create-service-key again

tasks

While tasks have an encryption key label, they are generally meant to be a one-off event, and left to run to completion. If there is a task still running, it could be stopped with cf terminate-task , then run again with cf run-task .


21 Backup and Restore

21.1 Backup and Restore Using cf-plugin-backup

cf-plugin-backup backs up and restores your Cloud Controller Database (CCDB), using the Cloud Foundry command line interface (cf CLI). (See Section 30.1, “Using the cf CLI with SUSE Cloud Application Platform”.)

cf-plugin-backup is not a general-purpose backup and restore plugin. It is designed to save the state of a SUSE Cloud Foundry instance before making changes to it. If the changes cause problems, use cf-plugin-backup to restore the instance from scratch. Do not use it to restore to a non-pristine SUSE Cloud Foundry instance. Some of the limitations for applying the backup to a non-pristine SUSE Cloud Foundry instance are:

Application configuration is not restored to running applications, as the plugin does not have the ability to determine which applications should be restarted to load the restored configurations.

User information is managed by the User Account and Authentication ( uaa ) Server, not the Cloud Controller (CC). As the plugin talks only to the CC, it cannot save full user information, nor restore users. Saving and restoring users must be performed separately, and user restoration must be performed before the backup plugin is invoked.

The set of available stacks is part of the SUSE Cloud Foundry instance setup, and is not part of the CC configuration. Trying to restore applications using stacks not available on the target SUSE Cloud Foundry instance will fail. Setting up the necessary stacks must be performed separately before the backup plugin is invoked.

Buildpacks are not saved. Applications using custom buildpacks not available on the target SUSE Cloud Foundry instance will not be restored. Custom buildpacks must be managed separately, and relevant buildpacks must be in place before the affected applications are restored.

21.1.1 Installing the cf-plugin-backup

Download the plugin from https://github.com/SUSE/cf-plugin-backup/releases .


Then install it with cf , using the name of the plugin binary that you downloaded:

tux > cf install-plugin cf-plugin-backup-1.0.8.0.g9e8438e.linux-amd64
Attention: Plugins are binaries written by potentially untrusted authors.
Install and use plugins at your own risk.
Do you want to install the plugin backup-plugin/cf-plugin-backup-1.0.8.0.g9e8438e.linux-amd64? [yN]: y
Installing plugin backup...
OK
Plugin backup 1.0.8 successfully installed.

Verify installation by listing installed plugins:

tux > cf plugins
Listing installed plugins...

plugin   version   command name      command help
backup   1.0.8     backup-info       Show information about the current snapshot
backup   1.0.8     backup-restore    Restore the CloudFoundry state from a backup created with the snapshot command
backup   1.0.8     backup-snapshot   Create a new CloudFoundry backup snapshot to a local file

Use 'cf repo-plugins' to list plugins in registered repos available to install.

21.1.2 Using cf-plugin-backup

The plugin has three commands:

backup-info

backup-snapshot

backup-restore

View the online help for any command, like this example:

tux > cf backup-info --help
NAME:
   backup-info - Show information about the current snapshot

USAGE:
   cf backup-info


Create a backup of your SUSE Cloud Application Platform data and applications. The command outputs progress messages until it is completed:

tux > cf backup-snapshot
2018/08/18 12:48:27 Retrieving resource /v2/quota_definitions
2018/08/18 12:48:30 org quota definitions done
2018/08/18 12:48:30 Retrieving resource /v2/space_quota_definitions
2018/08/18 12:48:32 space quota definitions done
2018/08/18 12:48:32 Retrieving resource /v2/organizations
[...]

Your Cloud Application Platform data is saved in the current directory in cf-backup.json , and application data in the app-bits/ directory.

View the current backup:

tux > cf backup-info
- Org  system

Restore from backup:

tux > cf backup-restore

There are two additional restore options: --include-security-groups and --include-quota-definitions .

21.1.3 Scope of Backup

The following table lists the scope of the cf-plugin-backup backup. Organization and space users are backed up at the SUSE Cloud Application Platform level. The user account in uaa /LDAP, the service instances and their application bindings, and buildpacks are not backed up. The sections following the table go into more detail.

Scope Restore

Orgs Yes

Org auditors Yes

Org billing-manager Yes

Quota definitions Optional


Scope Restore

Spaces Yes

Space developers Yes

Space auditors Yes

Space managers Yes

Apps Yes

App binaries Yes

Routes Yes

Route mappings Yes

Domains Yes

Private domains Yes

Stacks not available

Feature flags Yes

Security groups Optional

Custom buildpacks No

cf backup-info reads the cf-backup.json snapshot file found in the current working directory, and reports summary statistics on the content.

cf backup-snapshot extracts and saves the following information from the CC into a cf-backup.json snapshot file. Note that it does not save user information, but only the references needed for the roles. The full user information is handled by the uaa server, and the plugin talks only to the CC. The following list provides a summary of what each plugin command does.

Org Quota Definitions

Space Quota Definitions

Shared Domains

Security Groups

Feature Flags


Application droplets (zip files holding the staged app)

Orgs

Spaces

Applications

Users' references (role in the space)

cf backup-restore reads the cf-backup.json snapshot file found in the current working directory, and then talks to the targeted SUSE Cloud Foundry instance to upload the following information, in the specified order:

Shared domains

Feature flags

Quota Definitions (if --include-quota-definitions)

Orgs

Space Quotas (if --include-quota-definitions)

UserRoles

(private) Domains

Spaces

UserRoles

Applications (+ droplet)

Bound Routes

Security Groups (if --include-security-groups)

The following list provides more details of each action.

Shared Domains

Attempts to create domains from the backup. Existing domains are retained, and not overwritten.

Feature Flags

Attempts to update flags from the backup.


Quota Definitions

Existing quotas are overwritten from the backup (deleted, re-created).

Orgs

Attempts to create orgs from the backup. Attempts to update existing orgs from the backup.

Space Quota Definitions

Existing quotas are overwritten from the backup (deleted, re-created).

User roles

Expects the referenced user to exist. Will fail when the user is already associated with the space, in the given role.

(private) Domains

Attempts to create domains from the backup. Existing domains are retained, and not overwritten.

Spaces

Attempts to create spaces from the backup. Attempts to update existing spaces from the backup.

User roles

Expects the referenced user to exist. Will fail when the user is already associated with the space, in the given role.

Apps

Attempts to create apps from the backup. Attempts to update existing apps from the backup (memory, instances, buildpack, state, ...).

Security groups

Existing groups are overwritten from the backup.

21.2 Disaster Recovery through Raw Data Backup and Restore

An existing SUSE Cloud Application Platform deployment's data can be migrated to a new SUSE Cloud Application Platform deployment through a backup and restore of its raw data. The process involves performing a backup and restore of both the uaa and scf components. This procedure is agnostic of the underlying Kubernetes infrastructure and can be included as part of your disaster recovery solution.


21.2.1 Prerequisites

In order to complete a raw data backup and restore, the following are required:

Access to running deployments of both uaa and scf to create backups with

Access to new deployments of both uaa and scf (deployed with a scf-config-values.yaml configured according to Step 1 of Section 21.2.4, “Performing a Raw Data Restore”) to perform the restore to

21.2.2 Scope of Raw Data Backup and Restore

The following lists the data that is included as part of the backup (and restore) procedure:

The Cloud Controller Database (CCDB). In addition to what is encompassed by the CCDB listed in Section 21.1.3, “Scope of Backup”, this will include service binding data as well.

The Cloud Controller Blobstore, which includes the types of binary large object (blob) files listed below. (See https://docs.cloudfoundry.org/concepts/architecture/cloud-controller.html#blob-store to learn more about each blob type.)

App Packages

Buildpacks

Resource Cache

Buildpack Cache

Droplets

User data

21.2.3 Performing a Raw Data Backup

Note: Restore to the Same Version

This process is intended for backing up and restoring to a target deployment with the same version as the source deployment. For example, data from a backup of uaa version 2.18.0 should be restored to a version 2.18.0 uaa deployment.


Perform the following steps to create a backup of your source uaa deployment.

Export the UAA database into a file:

tux > kubectl exec --tty mysql-0 --namespace uaa -- bash -c \
'/var/vcap/packages/mariadb/bin/mysqldump \
--defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
--socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
uaadb' > /tmp/uaadb-src.sql

Perform the following steps to create a backup of your source scf deployment.

1. Connect to the blobstore pod:

tux > kubectl exec --stdin --tty blobstore-0 --namespace scf -- env /bin/bash

2. Create an archive of the blobstore directory to preserve all needed files (see the Cloud Controller Blobstore content of Section 21.2.2, “Scope of Raw Data Backup and Restore”), then disconnect from the pod:

tux > tar cfvz blobstore-src.tgz /var/vcap/store/shared

tux > exit

3. Copy the archive to a location outside of the pod:

tux > kubectl cp scf/blobstore-0:blobstore-src.tgz /tmp/blobstore-src.tgz
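Before relying on the copied archive, it can be worth confirming it lists cleanly. The sketch below demonstrates the check on a throwaway directory that stands in for /var/vcap/store/shared (an assumption, so it can run anywhere); for the real archive, tar tzf /tmp/blobstore-src.tgz is the operative command.

```shell
# Build a miniature stand-in archive, then verify it with `tar tzf`,
# which exits non-zero on a truncated or corrupt archive.
workdir=$(mktemp -d)
mkdir -p "$workdir/store/shared"
echo "blob" > "$workdir/store/shared/example-blob"

tar czf "$workdir/blobstore-src.tgz" -C "$workdir" store/shared

# Count the entries the archive lists; a corrupt archive fails here.
entries=$(tar tzf "$workdir/blobstore-src.tgz" | wc -l)
echo "entries=$entries"
rm -rf "$workdir"
```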

4. Export the Cloud Controller Database (CCDB) into a file:

tux > kubectl exec mysql-0 --namespace scf -- bash -c \
'/var/vcap/packages/mariadb/bin/mysqldump \
--defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
--socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
ccdb' > /tmp/ccdb-src.sql

5. Next, obtain the CCDB encryption key(s). The method used to capture the key will depend on whether current_key_label has been defined on the source cluster. This value is defined in /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml of the api-group-0 pod and is also found in various tables of the MySQL database.


Begin by examining the configuration file for the current_key_label setting:

tux > kubectl exec --stdin --tty --namespace scf api-group-0 -- bash -c \
"cat /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml | grep -A 3 database_encryption"

If the output contains the current_key_label setting, save the output for the restoration process. Adjust the -A flag as needed to include all keys.

If the output does not contain the current_key_label setting, run the following command and save the output for the restoration process:

tux > kubectl exec api-group-0 --namespace scf -- bash -c 'echo $DB_ENCRYPTION_KEY'

21.2.4 Performing a Raw Data Restore

Important: Ensure Access to the Correct Deployment

Working with multiple Kubernetes clusters simultaneously can be confusing. Ensure you are communicating with the desired cluster by setting $KUBECONFIG correctly (https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable) .

Perform the following steps to restore your backed up data to the target uaa deployment.

1. Recreate the UAA database on the mysql pod:

tux > kubectl exec -t mysql-0 --namespace uaa -- bash -c \
"/var/vcap/packages/mariadb/bin/mysql \
--defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
--socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
-e 'drop database uaadb; create database uaadb;'"

2. Restore the UAA database on the mysql pod:

tux > kubectl exec --stdin mysql-0 --namespace uaa -- bash -c \
'/var/vcap/packages/mariadb/bin/mysql \
--defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
--socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
uaadb' < /tmp/uaadb-src.sql

Perform the following steps to restore your backed up data to the target scf deployment.

1. The target scf cluster needs to be deployed with the correct database encryption key(s) set in your scf-config-values.yaml before data can be restored. How the encryption key(s) will be prepared in your scf-config-values.yaml depends on the result of Step 5 in Section 21.2.3, “Performing a Raw Data Backup”.

If current_key_label was set, use the current_key_label obtained as the value of CC_DB_CURRENT_KEY_LABEL , and define all of the keys under CC_DB_ENCRYPTION_KEYS . See the following example scf-config-values.yaml :

env:
  CC_DB_CURRENT_KEY_LABEL: migrated_key_1

secrets:
  CC_DB_ENCRYPTION_KEYS:
    migrated_key_1: "<key_goes_here>"
    migrated_key_2: "<key_goes_here>"

If current_key_label was not set, create one for the new cluster through scf-config-values.yaml and set it to the $DB_ENCRYPTION_KEY value from the old cluster. In this example, migrated_key is the new current_key_label created:

env:
  CC_DB_CURRENT_KEY_LABEL: migrated_key

secrets:
  CC_DB_ENCRYPTION_KEYS:
    migrated_key: "OLD_CLUSTER_DB_ENCRYPTION_KEY"

2. Deploy a non-high-availability configuration of scf and wait until all pods are ready before proceeding.

3. In the ccdb-src.sql file created earlier, replace the domain name of the source deployment with the domain name of the target deployment.

tux > sed --in-place 's/old-example.com/new-example.com/g' /tmp/ccdb-src.sql
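The same sed invocation can be tried on a throwaway file first to confirm the substitution behaves as expected. A minimal sketch, using hypothetical file contents and the placeholder domains from this example:

```shell
# Demo file standing in for the CCDB dump (contents are hypothetical)
printf 'route: app.old-example.com\napi.old-example.com\n' > /tmp/ccdb-demo.sql

# Same in-place substitution as in the step above, on the demo file
sed --in-place 's/old-example.com/new-example.com/g' /tmp/ccdb-demo.sql

# Count the lines that now reference the new domain
grep -c 'new-example.com' /tmp/ccdb-demo.sql   # prints 2
```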

4. Stop the monit services on the api-group-0, cc-worker-0, and cc-clock-0 pods:

tux > for n in api-group-0 cc-worker-0 cc-clock-0; do
  kubectl exec --stdin --tty --namespace scf $n -- bash -l -c 'monit stop all'
done

5. Copy the blobstore-src.tgz archive to the blobstore pod:

tux > kubectl cp /tmp/blobstore-src.tgz scf/blobstore-0:/.

6. Restore the contents of the archive created during the backup process to the blobstore pod:

tux > kubectl exec --stdin --tty --namespace scf blobstore-0 -- bash -l -c \
 'monit stop all && sleep 10 && rm -rf /var/vcap/store/shared/* &&
  tar xvf blobstore-src.tgz && monit start all && rm blobstore-src.tgz'

7. Recreate the CCDB on the mysql pod:

tux > kubectl exec mysql-0 --namespace scf -- bash -c \
 "/var/vcap/packages/mariadb/bin/mysql \
 --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
 --socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
 -e 'drop database ccdb; create database ccdb;'"

8. Restore the CCDB on the mysql pod:

tux > kubectl exec --stdin mysql-0 --namespace scf -- bash -c \
 '/var/vcap/packages/mariadb/bin/mysql \
 --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
 --socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
 ccdb' < /tmp/ccdb-src.sql

9. Start the monit services on the api-group-0, cc-worker-0, and cc-clock-0 pods:

tux > for n in api-group-0 cc-worker-0 cc-clock-0; do
  kubectl exec --stdin --tty --namespace scf $n -- bash -l -c 'monit start all'
done

10. If your old cluster did not have current_key_label defined, perform a key rotation. Otherwise, a key rotation is not necessary.

a. Run the rotation for the encryption keys:

tux > kubectl exec --namespace scf api-group-0 -- bash -c \
 "source /var/vcap/jobs/cloud_controller_ng/bin/ruby_version.sh; \
 export CLOUD_CONTROLLER_NG_CONFIG=/var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml; \
 cd /var/vcap/packages/cloud_controller_ng/cloud_controller_ng; \
 bundle exec rake rotate_cc_database_key:perform"

b. Restart the api-group pod.

tux > kubectl delete pod api-group-0 --namespace scf --force --grace-period=0

11. Perform a cf restage appname for existing applications to ensure their existing data is updated with the new encryption key.

12. The data restore is now complete. Run some cf commands, such as cf apps, cf marketplace, or cf services, and verify data from the old cluster is returned.


22 Service Brokers

The Open Service Broker API (OSBAPI) provides your SUSE Cloud Application Platform applications with access to external dependencies and platform-level capabilities, such as databases, filesystems, external repositories, and messaging systems. These resources are called services. Services are created, used, and deleted as needed, and provisioned on demand. This chapter introduces several different service brokers that implement the Open Service Broker API.

Use the following guideline to determine which service broker is most suitable for your situation.

When SUSE Cloud Application Platform is deployed on a cloud service provider (CSP), use service brokers provided by the CSP.

For deployments on Microsoft Azure Kubernetes Service, see Section 9.10, “Configuring and Testing the Native Microsoft AKS Service Broker”.

For deployments on Amazon Elastic Kubernetes Service, see Section 10.11, “Deploying and Using the AWS Service Broker”.

For deployments on Google Kubernetes Engine, see Section 11.10, “Deploying and Using the Google Cloud Platform Service Broker”.

When connecting SUSE Cloud Application Platform to existing HA MySQL or PostgreSQL clusters or services, use cf-usb. See Section 22.1, “cf-usb” for more information.

When you want services deployed on demand to Kubernetes, use Minibroker. See Section 22.2, “Provisioning Services with Minibroker” for more information.

When you want a service that is not one of the above, note that third-party OSBAPI brokers will work with SUSE Cloud Application Platform. Refer to the Cloud Foundry documentation at https://docs.cloudfoundry.org/services/managing-service-brokers.html for configuration instructions.

22.1 cf-usb

Universal Service Broker for Cloud Foundry, or cf-usb, is a project that implements and exposes the Open Service Broker API. It uses drivers, referred to as plugins, to connect to different services. More information about cf-usb can be found at https://github.com/SUSE/cf-usb .


Important: Kubernetes Compatibility

cf-usb is currently compatible with Kubernetes version 1.15 and earlier. Usage with Kubernetes version 1.16 and beyond is not supported.

22.1.1 Enabling and Disabling Service Brokers

The service broker feature is enabled as part of a default SUSE Cloud Foundry deployment. To disable it, use the --set "enable.cf_usb=false" flag when running helm install or helm upgrade.

First fetch the uaa secret and certificate:

tux > SECRET=$(kubectl get pods --namespace uaa \
 --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
 --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

If disabling the feature on an initial deployment, run the following command:

tux > helm install suse/cf \
 --name susecf-scf \
 --namespace scf \
 --values scf-config-values.yaml \
 --set "secrets.UAA_CA_CERT=${CA_CERT}" \
 --set "enable.cf_usb=false"

If disabling the feature on an existing deployment, run the following command:

tux > helm upgrade susecf-scf suse/cf \
 --values scf-config-values.yaml \
 --set "secrets.UAA_CA_CERT=${CA_CERT}" \
 --set "enable.cf_usb=false" \
 --version 2.19.1

To enable the feature again, run helm upgrade with the --set "enable.cf_usb=true" flag. Be sure to pass your uaa secret and certificate to scf first:

tux > SECRET=$(kubectl get pods --namespace uaa \
 --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
 --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade susecf-scf suse/cf \
 --values scf-config-values.yaml \
 --set "secrets.UAA_CA_CERT=${CA_CERT}" \
 --set "enable.cf_usb=true" \
 --version 2.19.1

22.1.2 Prerequisites

The following examples demonstrate how to deploy service brokers for MySQL and PostgreSQL with Helm, using charts from the SUSE repository. You must have the following prerequisites:

A working SUSE Cloud Application Platform deployment with Helm and the Cloud Foundry command line interface (cf CLI).

An Application Security Group (ASG) for applications to reach external databases. (See Understanding Application Security Groups (https://docs.cloudfoundry.org/concepts/asg.html).)

An external MySQL or PostgreSQL installation with account credentials that allow creating and deleting databases and users.

For testing purposes you may create an insecure security group:

tux > echo > "internal-services.json" '[{ "destination": "0.0.0.0/0", "protocol": "all" }]'
tux > cf create-security-group internal-services-test internal-services.json
tux > cf bind-running-security-group internal-services-test
tux > cf bind-staging-security-group internal-services-test

You may apply an ASG later, after testing. All running applications must be restarted to use the new security group.
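The rules file can also be validated locally before handing it to cf create-security-group, so that a JSON typo is caught early. A small sketch of that check, assuming python3 is available on the workstation:

```shell
# Write the permissive test rules shown above
echo > "internal-services.json" '[{ "destination": "0.0.0.0/0", "protocol": "all" }]'

# Confirm the file parses as JSON before passing it to cf create-security-group
python3 -m json.tool internal-services.json > /dev/null && echo "valid JSON"
```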

22.1.3 Deploying on SUSE CaaS Platform

If you are deploying SUSE Cloud Application Platform on SUSE CaaS Platform, a Pod Security Policy (PSP) must also be applied to your new service brokers.

Take the example configuration file, cap-psp-rbac.yaml, and add the following. Replace mysql-sidecar with the namespace name of your service broker.

- kind: ServiceAccount
  name: default
  namespace: mysql-sidecar

Then apply the PSP configuration, before you deploy your new service broker, with this command:

tux > kubectl apply -f cap-psp-rbac.yaml

kubectl apply updates an existing deployment. After applying the PSP, proceed to configuring and deploying your service broker.

22.1.4 Configuring the MySQL Deployment

Start by extracting the uaa namespace secrets name, and the uaa and scf namespaces internal certificates with these commands. These will output the complete certificates. Substitute your secrets name if it is different from the example:

tux > kubectl get pods --namespace uaa \
 --output jsonpath='{.items[*].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}'
secrets-2.8.0-1

tux > kubectl get secret --namespace scf secrets-2.8.0-1 \
 --output jsonpath='{.data.internal-ca-cert}' | base64 --decode
-----BEGIN CERTIFICATE-----
MIIE8jCCAtqgAwIBAgIUT/Yu/Sv4UHl5zHZYZKCy5RKJqmYwDQYJKoZIhvcNAQEN
[...]
xC8x/+zT0QkvcRJBio5gg670+25KJQ==
-----END CERTIFICATE-----

tux > kubectl get secret --namespace uaa secrets-2.8.0-1 \
 --output jsonpath='{.data.internal-ca-cert}' | base64 --decode
-----BEGIN CERTIFICATE-----
MIIE8jCCAtqgAwIBAgIUSI02lj0a0InLb/zMrjNgW5d8EygwDQYJKoZIhvcNAQEN
[...]
to2GI8rPMb9W9fd2WwUXGEHTc+PqTg==
-----END CERTIFICATE-----
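The internal-ca-cert value is stored base64-encoded in the secret, which is why each command above pipes the jsonpath output through base64 --decode. The decoding step can be illustrated in isolation with a demo string rather than a real certificate:

```shell
# Encode a sample value the way Kubernetes stores secret data
ENCODED=$(printf 'demo-certificate-data' | base64)

# Decode it back, as done with the internal-ca-cert field above
printf '%s' "$ENCODED" | base64 --decode   # prints demo-certificate-data
```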

You will copy these certificates into your configuration file as shown below.

Create a values.yaml file. The following example is called usb-config-values.yaml. Modify the values to suit your SUSE Cloud Application Platform installation.

env:
  # Database access credentials
  SERVICE_MYSQL_HOST: mysql.example.com
  SERVICE_MYSQL_PORT: 3306
  SERVICE_MYSQL_USER: mysql-admin-user
  SERVICE_MYSQL_PASS: mysql-admin-password

  # CAP access credentials, from your original deployment configuration
  # file, scf-config-values.yaml
  CF_ADMIN_USER: admin
  CF_ADMIN_PASSWORD: password
  CF_DOMAIN: example.com

  # Copy the certificates you extracted above, as shown in these
  # abbreviated examples, prefaced with the pipe character

  # SCF cert
  CF_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIE8jCCAtqgAwIBAgIUT/Yu/Sv4UHl5zHZYZKCy5RKJqmYwDQYJKoZIhvcNAQEN
    [...]
    xC8x/+zT0QkvcRJBio5gg670+25KJQ==
    -----END CERTIFICATE-----

  # UAA cert
  UAA_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIE8jCCAtqgAwIBAgIUSI02lj0a0InLb/zMrjNgW5d8EygwDQYJKoZIhvcNAQEN
    [...]
    to2GI8rPMb9W9fd2WwUXGEHTc+PqTg==
    -----END CERTIFICATE-----

kube:
  organization: "cap"
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""

22.1.5 Deploying the MySQL Chart

SUSE Cloud Application Platform includes charts for MySQL and PostgreSQL (see Section 5.6, “Add the Kubernetes Charts Repository” for information on managing your Helm repository):

tux > helm search suse
NAME                          CHART VERSION  APP VERSION  DESCRIPTION
suse/cf                       2.19.1         1.5.1        A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql     1.0.1                       A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres  1.0.1                       A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                  2.6.1          1.5.1        A Helm chart for deploying Stratos UI Console
suse/log-agent-rsyslog        1.0.1          8.39.0       Log Agent for forwarding logs of K8s control pl...
suse/metrics                  1.1.1          1.5.1        A Helm chart for Stratos Metrics
suse/minibroker               0.3.1                       A minibroker for your minikube
suse/nginx-ingress            0.28.4         0.15.0       An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                      2.19.1         1.5.1        A Helm chart for SUSE UAA

Create a namespace for your MySQL sidecar:

tux > kubectl create namespace mysql-sidecar

Install the MySQL Helm chart:

tux > helm install suse/cf-usb-sidecar-mysql \
 --devel \
 --name mysql-service \
 --namespace mysql-sidecar \
 --set "env.SERVICE_LOCATION=http://cf-usb-sidecar-mysql.mysql-sidecar:8081" \
 --set default-auth=mysql_native_password \
 --values usb-config-values.yaml \
 --wait

Wait for the new pods to become ready:

tux > watch kubectl get pods --namespace=mysql-sidecar

Confirm that the new service has been added to your SUSE Cloud Application Platform installation:

tux > cf marketplace

Warning: MySQL Requires mysql_native_password

The MySQL sidecar works only with deployments that use mysql_native_password as their authentication plugin. This is the default for MySQL versions 8.0.3 and earlier, but later versions must be started with --default-auth=mysql_native_password before any user creation. (See https://github.com/go-sql-driver/mysql/issues/785 .)


22.1.6 Create and Bind a MySQL Service

To create a new service instance, use the Cloud Foundry command line client:

tux > cf create-service mysql default service_instance_name

You may replace service_instance_name with any name you prefer.

Bind the service instance to an application:

tux > cf bind-service my_application service_instance_name

22.1.7 Deploying the PostgreSQL Chart

The PostgreSQL configuration is slightly different from the MySQL configuration. The database-specific keys are named differently, and it requires the SERVICE_POSTGRESQL_SSLMODE key.

env:
  # Database access credentials
  SERVICE_POSTGRESQL_HOST: postgres.example.com
  SERVICE_POSTGRESQL_PORT: 5432
  SERVICE_POSTGRESQL_USER: pgsql-admin-user
  SERVICE_POSTGRESQL_PASS: pgsql-admin-password

  # The SSL connection mode when connecting to the database. For a list of
  # valid values, please see https://godoc.org/github.com/lib/pq
  SERVICE_POSTGRESQL_SSLMODE: disable

  # CAP access credentials, from your original deployment configuration
  # file, scf-config-values.yaml
  CF_ADMIN_USER: admin
  CF_ADMIN_PASSWORD: password
  CF_DOMAIN: example.com

  # Copy the certificates you extracted above, as shown in these
  # abbreviated examples, prefaced with the pipe character

  # SCF certificate
  CF_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIE8jCCAtqgAwIBAgIUT/Yu/Sv4UHl5zHZYZKCy5RKJqmYwDQYJKoZIhvcNAQEN
    [...]
    xC8x/+zT0QkvcRJBio5gg670+25KJQ==
    -----END CERTIFICATE-----

  # UAA certificate
  UAA_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIE8jCCAtqgAwIBAgIUSI02lj0a0InLb/zMrjNgW5d8EygwDQYJKoZIhvcNAQEN
    [...]
    to2GI8rPMb9W9fd2WwUXGEHTc+PqTg==
    -----END CERTIFICATE-----

  SERVICE_TYPE: postgres

kube:
  organization: "cap"
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""

Create a namespace and install the chart:

tux > kubectl create namespace postgres-sidecar

tux > helm install suse/cf-usb-sidecar-postgres \
 --devel \
 --name postgres-service \
 --namespace postgres-sidecar \
 --set "env.SERVICE_LOCATION=http://cf-usb-sidecar-postgres.postgres-sidecar:8081" \
 --values usb-config-values.yaml \
 --wait

Then follow the same steps as for the MySQL chart.

22.1.8 Removing Service Broker Sidecar Deployments

To correctly remove sidecar deployments, perform the following steps in order.

Unbind any applications using instances of the service, and then delete those instances:

tux > cf unbind-service my_app my_service_instance
tux > cf delete-service my_service_instance

Install the CF-USB CLI plugin for the Cloud Foundry CLI from https://github.com/SUSE/cf-usb-plugin/releases/ , for example:

tux > cf install-plugin \
 https://github.com/SUSE/cf-usb-plugin/releases/download/1.0.0/cf-usb-plugin-1.0.0.0.g47b49cd-linux-amd64

Configure the Cloud Foundry USB CLI plugin, using the domain you created for your SUSE Cloud Foundry deployment:

tux > cf usb-target https://usb.example.com

List the current sidecar deployments and take note of the names:

tux > cf usb-driver-endpoints

Remove the service by specifying its name:

tux > cf usb-delete-driver-endpoint mysql-service

Find your release name, then delete the release:

tux > helm list
NAME            REVISION  UPDATED                   STATUS    CHART                       NAMESPACE
susecf-console  1         Wed Aug 14 08:35:58 2018  DEPLOYED  console-2.6.1               stratos
susecf-scf      1         Tue Aug 14 12:24:36 2018  DEPLOYED  cf-2.19.1                   scf
susecf-uaa      1         Tue Aug 14 12:01:17 2018  DEPLOYED  uaa-2.19.1                  uaa
mysql-service   1         Mon May 21 11:40:11 2018  DEPLOYED  cf-usb-sidecar-mysql-1.0.1  mysql-sidecar

tux > helm delete --purge mysql-service


22.1.9 Change in URL of Internal cf-usb Broker Endpoint

This change is only applicable for upgrades from Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 and upgrades from Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1. The URL of the internal cf-usb broker endpoint has changed. Brokers for PostgreSQL and MySQL that use cf-usb will require the following manual fix after upgrading to reconnect with SCF/CAP:

For Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 upgrades:

1. Get the name of the secret (for example secrets-2.14.5-1 ):

tux > kubectl get secret --namespace scf

2. Get the URL of the cf-usb host (for example https://cf-usb.scf.svc.cluster.local:24054 ):

tux > cf service-brokers

3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

tux > kubectl get secret --namespace scf secrets-2.14.5-1 --output yaml | grep \
 cf-usb-password: | cut -d: -f2 | base64 -id

4. Update the service broker, where password is the password from the previous step, and the URL is the result of the second step with the leading cf-usb part doubled with a dash separator:

tux > cf update-service-broker usb broker-admin password https://cf-usb-cf-usb.scf.svc.cluster.local:24054

For Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1 upgrades:

1. Get the name of the secret (for example secrets-2.15.2-1 ):

tux > kubectl get secret --namespace scf

2. Get the URL of the cf-usb host (for example https://cf-usb-cf-usb.scf.svc.cluster.local:24054 ):

tux > cf service-brokers


3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

tux > kubectl get secret --namespace scf secrets-2.15.2-1 --output yaml | grep \
 cf-usb-password: | cut -d: -f2 | base64 -id

4. Update the service broker, where password is the password from the previous step, and the URL is the result of the second step with the leading cf-usb- part removed:

tux > cf update-service-broker usb broker-admin password https://cf-usb.scf.svc.cluster.local:24054
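The grep | cut | base64 pipeline used in step 3 of both procedures can be exercised without a cluster. The line below is a hypothetical stand-in for the matching entry in the kubectl get secret ... --output yaml output:

```shell
# A secret entry as it might appear in the YAML output (hypothetical value)
LINE='cf-usb-password: c2VjcmV0LXBhc3M='

# Take the field after the colon and decode it; base64 -i ignores the
# leading space left over from the cut
printf '%s\n' "$LINE" | cut -d: -f2 | base64 -id   # prints secret-pass
```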

22.2 Provisioning Services with Minibroker

Minibroker (https://github.com/SUSE/minibroker) is an OSBAPI compliant broker (https://www.openservicebrokerapi.org/) created by members of the Microsoft Azure team (https://github.com/osbkit). It provides a simple method to provision service brokers on Kubernetes clusters.

Important: Minibroker Upstream Services

The services deployed by Minibroker are sourced from the stable upstream charts repository, see https://github.com/helm/charts/tree/master/stable , and maintained by contributors to the Helm project. Though SUSE supports Minibroker itself, it does not support the service charts it deploys. Operators should inspect the charts and images exposed by the service plans before deciding to use them in a production environment.

22.2.1 Deploy Minibroker

1. Minibroker is deployed using a Helm chart. Ensure your SUSE Helm chart repository contains the most recent Minibroker chart:

tux > helm repo update

2. Use Helm to deploy Minibroker:

tux > helm install suse/minibroker \
 --namespace minibroker \
 --name minibroker \
 --set "defaultNamespace=minibroker"


The following table lists the services provided by Minibroker, along with the latest chart and application version combination known to work with Minibroker. Note that the table is only applicable to Kubernetes deployments with version 1.15 or earlier.

Service Version appVersion

MariaDB 4.3.0 10.1.34

MongoDB 5.3.3 4.0.6

PostgreSQL 6.2.1 11.5.0

Redis 3.7.2 4.0.10

3. Monitor the deployment progress. Wait until all pods are in a ready state before proceeding:

tux > watch --color 'kubectl get pods --namespace minibroker'

22.2.2 Setting Up the Environment for Minibroker Usage

1. Begin by logging into your Cloud Application Platform deployment. Select an organization and space to work with, creating them if needed:

tux > cf api --skip-ssl-validation https://api.example.com
tux > cf login -u admin -p password
tux > cf create-org org
tux > cf create-space space -o org
tux > cf target -o org -s space

2. Create the service broker. Note that Minibroker does not require authentication and the username and password parameters act as dummy values to pass to the cf command. These parameters do not need to be customized for the Cloud Application Platform installation:

tux > cf create-service-broker minibroker username password http://minibroker-minibroker.minibroker.svc.cluster.local


After the service broker is ready, it can be seen on your deployment:

tux > cf service-brokers
Getting service brokers as admin...

name         url
minibroker   http://minibroker-minibroker.minibroker.svc.cluster.local

3. List the services and their associated plans the Minibroker has access to:

tux > cf service-access -b minibroker

4. Enable access to a service. Refer to the table in Section 22.2.1, “Deploy Minibroker” for service plans known to be working with Minibroker. This example enables access to the Redis service:

tux > cf enable-service-access redis -b minibroker -p 4-0-10

Use cf marketplace to verify the service has been enabled:

tux > cf marketplace
Getting services from marketplace in org org / space space as admin...
OK

service   plans    description
redis     4-0-10   Helm Chart for redis

TIP: Use 'cf marketplace -s SERVICE' to view descriptions of individual plans of a given service.

5. Define your Application Security Group (ASG) (https://docs.cloudfoundry.org/concepts/asg.html) rules in a JSON file. Using the defined rules, create an ASG and bind it to an organization and space:

tux > echo > redis.json '[{ "protocol": "tcp", "destination": "10.0.0.0/8", "ports": "6379", "description": "Allow Redis traffic" }]'
tux > cf create-security-group redis_networking redis.json
tux > cf bind-security-group redis_networking org space


Use the following ports to define your ASG for the given service:

Service Port

MariaDB 3306

MongoDB 27017

PostgreSQL 5432

Redis 6379

6. Create an instance of the Redis service. The cf marketplace or cf marketplace -s redis commands can be used to see the available plans for the service:

tux > cf create-service redis 4-0-10 redis-example-service

Monitor the progress of the pods and wait until all pods are in a ready state. The example below shows the additional redis pods, with randomly generated names, that have been created in the minibroker namespace:

tux > watch --color 'kubectl get pods --namespace minibroker'
NAME                                           READY  STATUS   RESTARTS  AGE
alternating-frog-redis-master-0                1/1    Running  2         1h
alternating-frog-redis-slave-7f7444978d-z86nr  1/1    Running  0         1h
minibroker-minibroker-5865f66bb8-6dxm7         2/2    Running  0         1h

22.2.3 Using Minibroker with Applications

This section demonstrates how to use Minibroker services with your applications. The example below uses the Redis service instance created in the previous section.

1. Obtain the demo application from GitHub and use cf push with the --no-start flag to deploy the application without starting it:

tux > git clone https://github.com/scf-samples/cf-redis-example-app
tux > cd cf-redis-example-app
tux > cf push --no-start


2. Bind the service to your application and start the application:

tux > cf bind-service redis-example-app redis-example-service
tux > cf start redis-example-app

3. When the application is ready, it can be tested by storing a value into the Redis service:

tux > export APP=redis-example-app.example.com
tux > curl --request GET $APP/foo
tux > curl --request PUT $APP/foo --data 'data=bar'
tux > curl --request GET $APP/foo

The first GET will return key not present. After storing a value, it will return bar.

Important: Database Names for PostgreSQL and MariaDB Instances

By default, Minibroker creates PostgreSQL and MariaDB server instances without a named database. A named database is required for normal usage with these and will need to be added during the cf create-service step using the -c flag. For example:

tux > cf create-service postgresql 9-6-2 djangocms-db -c '{"postgresDatabase":"mydjango"}'
tux > cf create-service mariadb 10-1-34 my-db -c '{"mariadbDatabase":"mydb"}'

Other options can be set too, but vary by service type.


23 App-AutoScaler

The App-AutoScaler service is used for automatically managing an application's instance count when deployed on SUSE Cloud Foundry. The scaling behavior is determined by a set of criteria defined in a policy (See Section 23.5, “Policies”).

23.1 Prerequisites

Using the App-AutoScaler service requires:

A running deployment of scf

cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/ .

For SUSE Linux Enterprise and openSUSE systems, install using zypper:

tux > sudo zypper install cf-cli

For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html .

The Cloud Foundry CLI AutoScaler Plug-in, see https://github.com/cloudfoundry/app-autoscaler-cli-plugin .

The plugin can be installed by running the following command:

tux > cf install-plugin -r CF-Community app-autoscaler-plugin

If the plugin repo is not found, add it first:

tux > cf add-plugin-repo "CF-Community" "https://plugins.cloudfoundry.org"


23.2 Enabling and Disabling the App-AutoScaler Service

By default, the App-AutoScaler service is not enabled as part of a SUSE Cloud Foundry deployment. To enable it, run helm upgrade with the --set "enable.autoscaler=true" flag on an existing scf deployment. Be sure to pass your uaa secret and certificate to scf first:

tux > SECRET=$(kubectl get pods --namespace uaa \
 --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
 --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade susecf-scf suse/cf \
 --values scf-config-values.yaml \
 --set "secrets.UAA_CA_CERT=${CA_CERT}" \
 --set "enable.autoscaler=true" \
 --version 2.19.1

To disable the App-AutoScaler service, run helm upgrade with the --set "enable.autoscaler=false" flag. Be sure to pass your uaa secret and certificate to scf first:

tux > SECRET=$(kubectl get pods --namespace uaa \
 --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
 --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade susecf-scf suse/cf \
 --values scf-config-values.yaml \
 --set "secrets.UAA_CA_CERT=${CA_CERT}" \
 --set "enable.autoscaler=false" \
 --version 2.19.1

23.3 Upgrade Considerations

In order to upgrade from a SUSE Cloud Application Platform 1.3.1 deployment with the App-AutoScaler enabled to SUSE Cloud Application Platform 1.4, perform one of the two methods listed below. Both methods require that uaa is first upgraded.


1. Enabling App-AutoScaler no longer requires sizing values to be set to a count of 1, which is the new minimum setting. The following values in your scf-config-values.yaml file can be removed:

sizing:
  autoscaler_api:
    count: 1
  autoscaler_eventgenerator:
    count: 1
  autoscaler_metrics:
    count: 1
  autoscaler_operator:
    count: 1
  autoscaler_postgres:
    count: 1
  autoscaler_scalingengine:
    count: 1
  autoscaler_scheduler:
    count: 1
  autoscaler_servicebroker:
    count: 1

2. Get the most recent Helm charts:

tux > helm repo update

3. Upgrade uaa :

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--version 2.19.1

4. Extract the uaa secret for scf to use:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(@.metadata.name=="uaa-0")].spec.containers[?(@.name=="uaa")].env[?(@.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

You are now ready to upgrade scf using one of the two following methods:

The first method is to disable the App-AutoScaler during the initial SUSE Cloud Application Platform upgrade, then enable the App-AutoScaler again once the upgrade completes.


1. Upgrade scf with the App-AutoScaler disabled:

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.autoscaler=false"

2. Monitor the deployment progress. Wait until all pods are in a ready state before proceeding:

tux > watch --color 'kubectl get pods --namespace scf'

3. Enable the App-AutoScaler again:

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.autoscaler=true"

4. Monitor the deployment progress and wait until all pods are in a ready state:

tux > watch --color 'kubectl get pods --namespace scf'

The second method is to pass the --set sizing.autoscaler_postgres.disk_sizes.postgres_data=100 option as part of the upgrade.

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.autoscaler=true" \
--set sizing.autoscaler_postgres.disk_sizes.postgres_data=100

23.4 Using the App-AutoScaler Service

Push the application without starting it first:

tux > cf push my_application --no-start

Attach an autoscaling policy to the application:

tux > cf attach-autoscaling-policy my_application my-policy.json

The policy must be defined as a JSON file (see Section 23.5, “Policies”) in the proper format (see https://github.com/cloudfoundry/app-autoscaler/blob/develop/docs/policy.md ).


Start the application:

tux > cf start my_application

Autoscaling policies can be managed using the cf CLI with the App-AutoScaler plugin as above (see Section 23.4.1, “The App-AutoScaler cf CLI Plugin”) or using the App-AutoScaler API (see Section 23.4.2, “App-AutoScaler API”).

23.4.1 The App-AutoScaler cf CLI Plugin

The App-AutoScaler plugin is used for managing the service with your applications and provides the following commands (with shortcuts in brackets). Refer to https://github.com/cloudfoundry/app-autoscaler-cli-plugin#command-list for details about each command:

autoscaling-api (asa)

Set or view the AutoScaler service API endpoint. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-autoscaling-api for more information.

autoscaling-policy (asp)

Retrieve the scaling policy of an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-autoscaling-policy for more information.

attach-autoscaling-policy (aasp)

Attach a scaling policy to an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-attach-autoscaling-policy for more information.

detach-autoscaling-policy (dasp)

Detach the scaling policy from an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-detach-autoscaling-policy for more information.

create-autoscaling-credential (casc)

Create a custom metric credential for an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-create-autoscaling-credential for more information.

delete-autoscaling-credential (dasc)

Delete the custom metric credential of an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-delete-autoscaling-credential for more information.

autoscaling-metrics (asm)

Retrieve the metrics of an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-autoscaling-metrics for more information.

autoscaling-history (ash)

Retrieve the scaling history of an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-autoscaling-history for more information.

23.4.2 App-AutoScaler API

The App-AutoScaler service provides a Public API; for detailed usage information, see https://github.com/cloudfoundry/app-autoscaler/blob/develop/docs/Public_API.rst . It includes requests to:

List scaling history of an application. For details, refer to https://github.com/cloudfoundry/app-autoscaler/blob/develop/docs/Public_API.rst#list-scaling-history-of-an-application

List instance metrics of an application. For details, refer to https://github.com/cloudfoundry/app-autoscaler/blob/develop/docs/Public_API.rst#list-instance-metrics-of-an-application

List aggregated metrics of an application. For details, refer to https://github.com/cloudfoundry/app-autoscaler/blob/develop/docs/Public_API.rst#list-aggregated-metrics-of-an-application

Policy API. For details, refer to https://github.com/cloudfoundry/app-autoscaler/blob/develop/docs/Public_API.rst#policy-api

Delete policy. For details, refer to https://github.com/cloudfoundry/app-autoscaler/blob/develop/docs/Public_API.rst#delete-policy

Get policy. For details, refer to https://github.com/cloudfoundry/app-autoscaler/blob/develop/docs/Public_API.rst#get-policy

23.5 Policies

A policy identifies characteristics including minimum instance count, maximum instance count, and the rules used to determine when the number of application instances is scaled up or down. These rules are categorized into two types, scheduled scaling and dynamic scaling (see Section 23.5.1, “Scaling Types”). Multiple scaling rules can be specified in a policy, but App-AutoScaler does not detect or handle conflicts that may occur. Ensure there are no conflicting rules to avoid unintended scaling behavior.

Policies are defined using the JSON format and can be attached to an application either by passing the path to the policy file or directly as a parameter.

The following is an example of a policy file, called my-policy.json .

{
  "instance_min_count": 1,
  "instance_max_count": 4,
  "scaling_rules": [{
    "metric_type": "memoryused",
    "stat_window_secs": 60,
    "breach_duration_secs": 60,
    "threshold": 10,
    "operator": ">=",
    "cool_down_secs": 300,
    "adjustment": "+1"
  }]
}
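Before attaching a policy, it can be worth validating the file locally. The following sketch recreates the example policy above and checks that it parses as JSON and that the instance bounds are consistent. This is an optional local check, not part of the platform workflow, and it assumes python3 is available on the workstation.

```shell
# Recreate the example policy shown above.
cat > my-policy.json <<'EOF'
{
  "instance_min_count": 1,
  "instance_max_count": 4,
  "scaling_rules": [{
    "metric_type": "memoryused",
    "stat_window_secs": 60,
    "breach_duration_secs": 60,
    "threshold": 10,
    "operator": ">=",
    "cool_down_secs": 300,
    "adjustment": "+1"
  }]
}
EOF
# Verify that the file parses and that min does not exceed max.
python3 - <<'EOF'
import json
policy = json.load(open("my-policy.json"))
assert policy["instance_min_count"] <= policy["instance_max_count"]
print("my-policy.json OK")
EOF
```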

For an example that demonstrates defining multiple scaling rules in a single policy, refer to the sample policy file at https://github.com/cloudfoundry/app-autoscaler/blob/develop/src/integration/fakePolicyWithSchedule.json . The complete list of configurable policy values can be found at https://github.com/cloudfoundry/app-autoscaler/blob/master/docs/policy.md .

23.5.1 Scaling Types

Scheduled Scaling

Modifies an application's instance count at a predetermined time. This option is suitable for workloads with predictable resource usage.

Dynamic Scaling

Modifies an application's instance count based on metrics criteria. This option is suitable for workloads with dynamic resource usage. The following metrics are available:

memoryused

memoryutil

cpu


responsetime

throughput

custom metric

See https://github.com/cloudfoundry/app-autoscaler/tree/develop/docs#scaling-type for additional details.


24 Logging

There are two types of logs in a deployment of SUSE Cloud Application Platform: application logs and component logs. The following provides a brief overview of each log type and how to retrieve them for monitoring and debugging use.

Application logs provide information specific to a given application that has been deployed to your Cloud Application Platform cluster and can be accessed through:

The cf CLI using the cf logs command

The application's log stream within the Stratos console

Access to logs for a given component of your Cloud Application Platform deployment can be obtained by:

The kubectl logs command

The following example retrieves the logs of the router container of the router-0 pod in the scf namespace:

tux > kubectl logs --namespace scf router-0 router

Direct access to the log files using the following:

1. Open a shell to the container of the component using the kubectl exec command.

2. Navigate to the log directory at /var/vcap/sys/log , at which point there will be subdirectories containing the log files for access.

tux > kubectl exec --stdin --tty --namespace scf router-0 /bin/bash

router/0:/# cd /var/vcap/sys/log

router/0:/var/vcap/sys/log# ls -R
.:
gorouter  loggregator_agent

./gorouter:
access.log  gorouter.err.log  gorouter.log  post-start.err.log  post-start.log

./loggregator_agent:
agent.log

24.1 Logging to an External Syslog Server

Cloud Application Platform supports sending the cluster's log data to external logging services where additional processing and analysis can be performed.

24.1.1 Configuring Cloud Application Platform

In your scf-config-values.yaml file, add the following configuration values to the env: section. The example values below are configured for an external ELK stack.

env:
  SCF_LOG_HOST: elk.example.com
  SCF_LOG_PORT: 5001
  SCF_LOG_PROTOCOL: "tcp"

24.1.2 Example using the ELK Stack

The ELK stack is an example of an external syslog server that log data can be sent to for log management. The ELK stack consists of:

Elasticsearch

A tool for search and analytics. For more information, refer to https://www.elastic.co/products/elasticsearch .

Logstash

A tool for data processing. For more information, refer to https://www.elastic.co/products/logstash .

Kibana

A tool for data visualization. For more information, refer to https://www.elastic.co/products/kibana .


24.1.2.1 Prerequisites

Java 8 is required by:

Elasticsearch

For more information, refer to https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html#jvm-version .

Logstash

For more information, refer to https://www.elastic.co/guide/en/logstash/6.x/installing-logstash.html#installing-logstash .

24.1.2.2 Installing and Configuring Elasticsearch

For methods of installing Elasticsearch, refer to https://www.elastic.co/guide/en/elasticsearch/reference/7.1/install-elasticsearch.html .

After installation, modify the config file /etc/elasticsearch/elasticsearch.yml to set the following value.

network.host: localhost

24.1.2.3 Installing and Configuring Logstash

For methods of installing Logstash, refer to http://www.elastic.co/guide/en/logstash/7.1/installing-logstash.html .

After installation, create the configuration file /etc/logstash/conf.d/00-scf.conf and add the following content. Take note of the port used in the input section. This value must match the value of the SCF_LOG_PORT property in your scf-config-values.yaml file.

input {
  tcp {
    port => 5001
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "scf-%{+YYYY.MM.dd}"
  }
}
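The elasticsearch output in the Logstash configuration writes events to a daily index named scf- followed by the date. A small sketch for computing today's index name, for example when querying Elasticsearch by hand (the commented curl line is illustrative and assumes Elasticsearch is listening on localhost:9200):

```shell
# Build today's index name to match the "scf-%{+YYYY.MM.dd}" pattern
# used in the Logstash output above.
index="scf-$(date +%Y.%m.%d)"
echo "$index"
# curl "localhost:9200/${index}/_search?size=1"   # requires a running Elasticsearch
```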

Additional input plug-ins can be found at https://www.elastic.co/guide/en/logstash/current/input-plugins.html and output plug-ins can be found at https://www.elastic.co/guide/en/logstash/current/output-plugins.html . For this example, we will demonstrate the flow of data through the stack, but filter plug-ins can also be specified to perform processing of the log data. For more details about filter plug-ins, refer to https://www.elastic.co/guide/en/logstash/current/filter-plugins.html .

24.1.2.4 Installing and Configuring Kibana

For methods of installing Kibana, refer to https://www.elastic.co/guide/en/kibana/7.1/install.html .

No configuration changes are required at this point. Refer to https://www.elastic.co/guide/en/kibana/current/settings.html for additional properties that can be configured through the kibana.yml file.

24.2 Log Levels

The log level is configured through the scf-config-values.yaml file by using the LOG_LEVEL property found in the env: section. The LOG_LEVEL property is mapped to component-specific levels. Components have differing technology compositions (for example, languages and frameworks), so each component determines for itself what content to provide at each level, which may vary between components.
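For example, to raise component verbosity to debug (a sketch; the level shown is illustrative, and the change is applied with helm upgrade as in the other configuration examples in this guide):

```yaml
env:
  LOG_LEVEL: "debug"
```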

The following are the log levels available along with examples of log entries at the given level.

off: disable log messages

fatal: fatal conditions

error: error conditions

<11>1 2018-08-21T17:59:48.321059+00:00 api-group-0 vcap.cloud_controller_ng - - - {"timestamp":1534874388.3206334,"message":"Mysql2::Error: MySQL server has gone away: SELECT count(*) AS `count` FROM `tasks` WHERE (`state` = 'RUNNING') LIMIT 1","log_level":"error","source":"cc.db","data":{},"thread_id":47367387197280,"fiber_id":47367404488760,"process_id":3400,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/sequel-4.49.0/lib/sequel/database/logging.rb","lineno":88,"method":"block in log_each"}

warn: warning conditions

<12>1 2018-08-21T18:49:37.651186+00:00 api-group-0 vcap.cloud_controller_ng - - - {"timestamp":1534877377.6507676,"message":"Invalid bearer token: #<CF::UAA::InvalidSignature: Signature verification failed> [\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/cf-uaa-lib-3.14.3/lib/uaa/token_coder.rb:118:in `decode'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/cf-uaa-lib-3.14.3/lib/uaa/token_coder.rb:212:in `decode_at_reference_time'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:70:in `decode_token_with_key'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:58:in `block in decode_token_with_asymmetric_key'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:56:in `each'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:56:in `decode_token_with_asymmetric_key'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:29:in `decode_token'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/security/security_context_configurer.rb:22:in `decode_token'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/security/security_context_configurer.rb:10:in `configure'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/security_context_setter.rb:12:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/vcap_request_id.rb:15:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:49:in `call_app'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:14:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_metrics.rb:12:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/rack-1.6.9/lib/rack/builder.rb:153:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/thin-1.7.0/lib/thin/connection.rb:86:in `block in pre_process'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/thin-1.7.0/lib/thin/connection.rb:84:in `catch'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/thin-1.7.0/lib/thin/connection.rb:84:in `pre_process'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/thin-1.7.0/lib/thin/connection.rb:50:in `block in process'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/eventmachine-1.0.9.1/lib/eventmachine.rb:1067:in `block in spawn_threadpool'\"]","log_level":"warn","source":"cc.uaa_token_decoder","data":{"request_guid":"f3e25c45-a94a-4748-7ccf-5a72600fbb17::774bdb79-5d6a-4ccb-a9b8-f4022afa3bdd"},"thread_id":47339751566100,"fiber_id":47339769104800,"process_id":3245,"file":"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb","lineno":35,"method":"rescue in decode_token"}

info: informational messages

<14>1 2018-08-21T22:42:54.324023+00:00 api-group-0 vcap.cloud_controller_ng - - - {"timestamp":1534891374.3237739,"message":"Started GET \"/v2/info\" for user: , ip: 127.0.0.1 with vcap-request-id: 45e00b66-e0b7-4b10-b1e0-2657f43284e7 at 2018-08-21 22:42:54 UTC","log_level":"info","source":"cc.api","data":{"request_guid":"45e00b66-e0b7-4b10-b1e0-2657f43284e7"},"thread_id":47420077354840,"fiber_id":47420124921300,"process_id":3200,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_logs.rb","lineno":12,"method":"call"}

debug: debugging messages

<15>1 2018-08-21T22:45:15.146838+00:00 api-group-0 vcap.cloud_controller_ng - - - {"timestamp":1534891515.1463814,"message":"dispatch VCAP::CloudController::InfoController get /v2/info","log_level":"debug","source":"cc.api","data":{"request_guid":"b228ef6d-af5e-4808-af0b-791a37f51154"},"thread_id":47420125585200,"fiber_id":47420098783620,"process_id":3200,"file":"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/rest_controller/routes.rb","lineno":12,"method":"block in define_route"}

debug1: lower-level debugging messages

debug2: lowest-level debugging messages

<15>1 2018-08-21T22:46:02.173445+00:00 api-group-0 vcap.cloud_controller_ng - - - {"timestamp":1534891562.1731355,"message":"(0.006130s) SELECT * FROM `delayed_jobs` WHERE ((((`run_at` <= '2018-08-21 22:46:02') AND (`locked_at` IS NULL)) OR (`locked_at` < '2018-08-21 18:46:02') OR (`locked_by` = 'cc_api_worker.api.0.1')) AND (`failed_at` IS NULL) AND (`queue` IN ('cc-api-0'))) ORDER BY `priority` ASC, `run_at` ASC LIMIT 5","log_level":"debug2","source":"cc.background","data":{},"thread_id":47194852110160,"fiber_id":47194886034680,"process_id":3296,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/sequel-4.49.0/lib/sequel/database/logging.rb","lineno":88,"method":"block in log_each"}
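Each sample entry above begins with an RFC 5424 syslog priority value in angle brackets, such as <11> or <14>. The severity is the priority modulo 8 (3 = error, 4 = warning, 6 = info, 7 = debug), so the level of an entry can be decoded with basic shell arithmetic; a quick sketch:

```shell
# Decode the syslog priority from an entry such as "<11>1 2018-08-21...".
# Facility and severity are packed into one number: pri = facility*8 + severity.
pri=11
echo "facility=$((pri / 8)) severity=$((pri % 8))"
# prints: facility=1 severity=3   (severity 3 = error)
```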


25 Managing Certificates

This chapter describes the process to deploy SUSE Cloud Application Platform installed with certificates signed by an external Certificate Authority.

25.1 Certificate Characteristics

Ensure the certificates you use have the following characteristics:

The certificate is encoded in the PEM format.

To secure uaa-related traffic, the certificate's Subject Alternative Name (SAN) should include the domains uaa.example.com and *.uaa.example.com , where example.com is replaced with the DOMAIN set in your scf-config-values.yaml .

To secure scf-related traffic, the certificate's Subject Alternative Name (SAN) should include the domain *.example.com , where example.com is replaced with the DOMAIN in your scf-config-values.yaml .
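The SAN entries of a certificate can be checked with openssl before deploying. The sketch below generates a throwaway self-signed certificate carrying the SAN entries described above (example.com is a placeholder domain) and then inspects it; to check your own certificate, run only the second command against its PEM file. The -addext and -ext options require OpenSSL 1.1.1 or later.

```shell
# Generate a throwaway self-signed certificate with the required SANs.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:*.example.com,DNS:uaa.example.com,DNS:*.uaa.example.com"
# List the Subject Alternative Names in the PEM-encoded certificate.
openssl x509 -in cert.pem -noout -ext subjectAltName
```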

25.2 Deployment Configuration

Certificates used in SUSE Cloud Application Platform are installed through a configuration file, called scf-config-values.yaml . To specify a certificate, set the value of the certificate and its corresponding private key under the secrets: section.

Note
Note the use of the "|" character, which indicates the use of a literal scalar. See http://yaml.org/spec/1.2/spec.html#id2795688 for more information.
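As a minimal illustration (the key name is hypothetical), a literal scalar preserves line breaks exactly as written, which is what the multi-line PEM blocks below rely on:

```yaml
example_value: |
  first line is kept as its own line
  second line is kept as its own line
```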

Certificates are installed to the uaa component by setting the values UAA_SERVER_CERT and UAA_SERVER_CERT_KEY . For example:

secrets:
  UAA_SERVER_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIFnzCCA4egAwIBAgICEAMwDQYJKoZIhvcNAQENBQAwXDELMAkGA1UEBhMCQ0Ex
    CzAJBgNVBAgMAkJDMRIwEAYDVQQHDAlWYW5jb3V2ZXIxETAPBgNVBAoMCE15Q2Fw
    T3JnMRkwFwYDVQQDDBBNeUNhcE9yZyBSb290IENBMB4XDTE4MDkxNDIyNDMzNVoX
    ...
    IqhPRKYBFHPw6RxVTjG/ClMsFvOIAO3QsK+MwTRIGVu/MNs0wjMu34B/zApLP+hQ
    3ZxAt/z5Dvdd0y78voCWumXYPfDw9T94B4o58FvzcM0eR3V+nVtahLGD2r+DqJB0
    3xoI
    -----END CERTIFICATE-----
  UAA_SERVER_CERT_KEY: |
    -----BEGIN PRIVATE KEY-----
    MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDhRlcoZAVwUkg0
    sdExkBnPenhLG5FzQM3wm9t4erbSQulKjeFlBa9b0+RH6gbYDHh5+NyiL0L89txO
    JHNRGEmt+4zy+9bY7e2syU18z1orOrgdNq+8QhsSoKHJV2w+0QZkSHTLdWmAetrA
    ...
    ZP5BpgjrT2lGC1ElW/8AFM5TxkkOPMzDCe8HRXPUUw+2YDzyKY1YgkwOMpHlk8Cs
    wPQYJsrcObenRwsGy2+A6NiIg2AVJwHASFG65taoV+1A061P3oPDtyIH/UPhRUoC
    OULPS8fbHefNiSvZTNVKwj8=
    -----END PRIVATE KEY-----

Certificates are installed to the scf component by setting the values ROUTER_SSL_CERT and ROUTER_SSL_CERT_KEY . In addition, set UAA_CA_CERT with the root certificate of the Certificate Authority used to sign your certificate. For example:

secrets:
  ROUTER_SSL_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIEEjCCAfoCCQCWC4NErLzy3jANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
    QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
    CU15Q0Euc2l0ZTAeFw0xODA5MDYxNzA1MTRaFw0yMDAxMTkxNzA1MTRaMFAxCzAJ
    ...
    xtNNDwl2rnA+U0Q48uZIPSy6UzSmiNaP3PDR+cOak/mV8s1/7oUXM5ivqkz8pEJo
    M3KrIxZ7+MbdTvDOh8lQplvFTeGgjmUDd587Gs4JsormqOsGwKd1BLzQbGELryV9
    1usMOVbUuL8mSKVvgqhbz7vJlW1+zwmrpMV3qgTMoHoJWGx2n5g=
    -----END CERTIFICATE-----
  ROUTER_SSL_CERT_KEY: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpAIBAAKCAQEAm4JMchGSqbZuqc4LdryJpX2HnarWPOW0hUkm60DL53f6ehPK
    T5Dtb2s+CoDX9A0iTjGZWRD7WwjpiiuXUcyszm8y9bJjP3sIcTnHWSgL/6Bb3KN5
    G5D8GHz7eMYkZBviFvygCqEs1hmfGCVNtgiTbAwgBTNsrmyx2NygnF5uy4KlkgwI
    ...
    GORpbQKBgQDB1/nLPjKxBqJmZ/JymBl6iBnhIgVkuUMuvmqES2nqqMI+r60EAKpX
    M5CD+pq71TuBtbo9hbjy5Buh0+QSIbJaNIOdJxU7idEf200+4anzdaipyCWXdZU+
    MPdJf40awgSWpGdiSv6hoj0AOm+lf4AsH6yAqw/eIHXNzhWLRvnqgA==
    -----END RSA PRIVATE KEY-----
  UAA_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIDaSjCAjKgAwIBAgIQRK+wgNajJ7qJMDmGLvhAazANBgkqhkiG9w0BAQUFADA/
    MTQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT
    DkITVCBSb290IENBIFg7MB4XDTAwMDkzMDIxMTIxOVoXDTIxMDkzMDE0MDExNVow
    ...
    R8srzJmwN0jP41ZL9c8PDHIyh8bwRLtTcm1D9SZImlJnt1ir/md2cXjbDaJWFBM5
    NvaEkqgCWjBH4d1QB7wCCZAA62RjYJsWvIjJEubSfZGL+T0yjWW06XyxV3bqxbYo
    NTQVZRzI9neWagqNdwvYkQsEjgfbKbYK7p2MEIAL
    -----END CERTIFICATE-----

25.2.1 Configuring Multiple Certificates

Cloud Application Platform supports configurations that use multiple certificates. To specify multiple certificates with their associated keys, replace the ROUTER_SSL_CERT and ROUTER_SSL_CERT_KEY properties with the ROUTER_TLS_PEM property in your scf-config-values.yaml file.

secrets:
  ROUTER_TLS_PEM: |
    - cert_chain: |
        -----BEGIN CERTIFICATE-----
        MIIEDzCCAfcCCQCWC4NErLzy9DANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
        QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
        opR9hW2YNrMYQYfhVu4KTkpXIr4iBrt2L+aq2Rk4NBaprH+0X6CPlYg+3edC7Jc+
        ...
        ooXNKOrpbSUncflZYrAfYiBfnZGIC99EaXShRdavStKJukLZqb3iHBZWNLYnugGh
        jyoKpGgceU1lwcUkUeRIOXI8qs6jCqsePM6vak3EO5rSiMpXMvLO8WMaWsXEfcBL
        dglVTMCit9ORAbVZryXk8Xxiham83SjG+fOVO4pd0R8UuCE=
        -----END CERTIFICATE-----
      private_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEA0HZ/aF64ITOrwtzlRlDkxf0b4V6MFaaTx/9UIQKQZLKT0d7u
        3Rz+egrsZ90Jk683Oz9fUZKtgMXt72CMYUn13TTYwnh5fJrDM1JXx6yHJyiIp0rf
        3G6wh4zzgBosIFiadWPQgL4iAJxmP14KMg4z7tNERu6VXa+0OnYT0DBrf5IJhbn6
        ...
        ja0CsQKBgQCNrhKuxLgmQKp409y36Lh4VtIgT400jFOsMWFH1hTtODTgZ/AOnBZd
        bYFffmdjVxBPl4wEdVSXHEBrokIw+Z+ZhI2jf2jJkge9vsSPqX5cTd2X146sMUSy
        o+J1ZbzMp423AvWB7imsPTA+t9vfYPSlf+Is0MhBsnGE7XL4fAcVFQ==
        -----END RSA PRIVATE KEY-----
    - cert_chain: |
        -----BEGIN CERTIFICATE-----
        MIIEPzCCAiegAwIBAgIJAJYLg0SsvPL1MA0GCSqGSIb3DQEBCwUAMEYxCzAJBgNV
        BAYTAkNBMRMwEQYDVQQIDApTb21lLVN0YXRlMQ4wDAYDVQQKDAVNeU9yZzESMBAG
        A1UEAwwJTXlDQS5zaXRlMB4XDTE4MDkxNzE1MjQyMVoXDTIwMDEzMDE1MjQyMVow
        ...
        FXrgM9jVBGXeL7T/DNfJp5QfRnrQq1/NFWafjORXEo9EPbAGVbPh8LiaEqwraR/K
        cDuNI7supZ33I82VOrI4+5mSMxj+jzSGd2fRAvWEo8E+MpHSpHJt6trGa5ON57vV
        duCWD+f1swpuuzW+rNinrNZZxUQ77j9Vk4oUeVUfL91ZK4k=
        -----END CERTIFICATE-----
      private_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEowIBAAKCAQEA5kNN9ZZK/UssdUeYSajG6xFcjyJDhnPvVHYA0VtgVOq8S/rb
        irVvkI1s00rj+WypHqP4+l/0dDHTiclOpUU5c3pn3vbGaaSGyonOyr5Cbx1X+JZ5
        17b+ah+oEnI5pUDn7chGI1rk56UI5oV1Qps0+bYTetEYTE1DVjGOHl5ERMv2QqZM
        ...
        rMMhAoGBAMmge/JWThffCaponeakJu63DHKz87e2qxcqu25fbo9il1ZpllOD61Zi
        xd0GATICOuPeOUoVUjSuiMtS7B5zjWnmk5+siGeXF1SNJCZ9spgp9rWA/dXqXJRi
        55w7eGyYZSmOg6I7eWvpYpkRll4iFVApMt6KPM72XlyhQOigbGdJ
        -----END RSA PRIVATE KEY-----

25.3 Deploying SUSE Cloud Application Platform with Certificates

Once the certificate-related values have been set in your scf-config-values.yaml , deploy SUSE Cloud Application Platform.

Note: Embedded uaa in scf
The User Account and Authentication ( uaa ) Server is included as an optional feature of the scf Helm chart. This simplifies the Cloud Application Platform deployment process, as a separate installation and/or upgrade of uaa is no longer a prerequisite to installing and/or upgrading scf .

It is important to note that:

This feature should only be used when uaa is not shared with other projects.

You cannot migrate from an existing external uaa to an embedded one. In this situation, enabling this feature during an upgrade will result in a single admin account.

To enable this feature, add the following to your scf-config-values.yaml .

enable:
  uaa: true


When deploying and/or upgrading scf , run helm install and/or helm upgrade and note that:

Installing and/or upgrading uaa using helm install suse/uaa ... and/or helm upgrade is no longer required.

It is no longer necessary to set the UAA_CA_CERT parameter. Previously, this parameter was passed the CA_CERT variable, which was assigned the CA certificate of uaa .

If this is an initial deployment, use helm install to deploy uaa and scf :

1. Deploy uaa :

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

2. Pass your uaa secret and certificate to scf :

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(@.metadata.name=="uaa-0")].spec.containers[?(@.name=="uaa")].env[?(@.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"


3. Deploy scf :

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1 .

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

If this is an existing deployment, use helm upgrade to apply the changes to uaa and scf :

1. Upgrade uaa :

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--version 2.19.1

2. Wait until you have a successful uaa deployment before going to the next steps. You can monitor deployment progress with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

3. Pass your uaa secret and certificate to scf :

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"


4. Upgrade scf :

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--version 2.19.1

5. Monitor the deployment progress using the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

Once all pods are up and running, verify you can successfully set your cluster as the target API endpoint by running the cf api command without using the --skip-ssl-validation option.

tux > cf api https://api.example.com

25.4 Rotating Automatically Generated Secrets

Cloud Application Platform uses a number of automatically generated secrets for internal use. These secrets have a default expiration of 10950 days, which is set through the CERT_EXPIRATION property in the env: section of the scf-config-values.yaml file. If rotation of the secrets is required, increment the value of secrets_generation_counter in the kube: section of the scf-config-values.yaml configuration file (for example, the example scf-config-values.yaml used in this guide), then run helm upgrade .

This example demonstrates rotating the secrets of the scf deployment.

First, update the scf-config-values.yaml file.

kube:
  # Increment this counter to rotate all generated secrets
  secrets_generation_counter: 2
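Bumping the counter can also be scripted rather than edited by hand. The following is a hedged sketch, assuming the key appears exactly once in the file; it operates on a throwaway copy of the fragment above rather than a real configuration:

```shell
# Create a sample scf-config-values.yaml fragment to operate on.
cat > /tmp/scf-config-values.yaml <<'EOF'
kube:
  secrets_generation_counter: 1
EOF

# Read the current counter, increment it, and write it back in place.
current=$(awk '/secrets_generation_counter:/ { print $2 }' /tmp/scf-config-values.yaml)
next=$((current + 1))
sed -i "s/secrets_generation_counter: ${current}/secrets_generation_counter: ${next}/" \
  /tmp/scf-config-values.yaml

grep secrets_generation_counter /tmp/scf-config-values.yaml
```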

Next, perform a helm upgrade to apply the change.

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade susecf-scf suse/cf \


--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--version 2.19.1

25.5 Difference between TRUSTED_CERTS and ROOTFS_TRUSTED_CERTS

The TRUSTED_CERTS parameter contains a list of CA certificates which become available in application instance containers as files in the directory /etc/cf-system-certificates and in the general trust store (cflinuxfs3 only). The environment variable CF_SYSTEM_CERT_PATH holds the first path. There is no difference between applications from buildpacks and applications from Docker images.

The ROOTFS_TRUSTED_CERTS parameter contains a list of CA certificates which are available in application instance containers through the general OS trust store. Only applications based on buildpacks (and a rootfs) receive the information.

Summary

Both contain additional CA certificates for applications.

ROOTFS_TRUSTED_CERTS only applies to applications based on buildpacks and a rootfs; these certificates are automatically available through the OS trust store.

TRUSTED_CERTS applies to all applications, based on both buildpacks and Docker images. These certificates are not automatically available; applications have to use the environment variable CF_SYSTEM_CERT_PATH and add the certificates themselves.


26 Integrating CredHub with SUSE Cloud Application Platform

SUSE Cloud Application Platform supports CredHub integration. You should already have a working CredHub instance and a CredHub service on your cluster; then apply the steps in this chapter to connect SUSE Cloud Application Platform.

26.1 Installing the CredHub Client

Start by creating a new directory for the CredHub client on your local workstation, then download and unpack the CredHub client. The following example is for the 2.2.0 Linux release. For other platforms and current releases, see cloudfoundry-incubator/credhub-cli at https://github.com/cloudfoundry-incubator/credhub-cli/releases

tux > mkdir chclient
tux > cd chclient
tux > wget https://github.com/cloudfoundry-incubator/credhub-cli/releases/download/2.2.0/credhub-linux-2.2.0.tgz
tux > tar zxf credhub-linux-2.2.0.tgz

26.2 Enabling and Disabling CredHub

Enable CredHub for your deployment using the --set "enable.credhub=true" flag when running helm install or helm upgrade .

First fetch the uaa credentials:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

If this is an initial deployment, use helm install to deploy scf with CredHub enabled:

tux > helm install suse/cf \
--name susecf-scf \


--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.credhub=true"

If this is an existing deployment, use helm upgrade to enable CredHub:

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.credhub=true" \
--version 2.19.1

Warning
On occasion, the credhub pod may fail to start due to database migration failures; this has been spotted intermittently on Microsoft Azure Kubernetes Service and, to a lesser extent, on other public clouds. In these situations, manual intervention is required to track the last completed transaction in the credhub_user database and update the flyway schema history table with the record of the last completed transaction. Please contact support for further instructions.

To disable CredHub, run helm upgrade with the --set "enable.credhub=false" flag. Be sure to pass your uaa secret and certificate to scf first:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.credhub=false" \
--version 2.19.1


26.3 Upgrade Considerations

The following applies to upgrades from SUSE Cloud Application Platform 1.3.1 to SUSE Cloud Application Platform 1.4:

If CredHub is enabled on your deployment of SUSE Cloud Application Platform 1.3.1, then you must specify --set "enable.credhub=true" during the upgrade to keep the feature installed. Sizing values previously set to a count of 1 to enable the service, which is now the minimum setting, no longer need to be set explicitly. The following values in your scf-config-values.yaml file can be removed:

sizing:
  credhub_user:
    count: 1

26.4 Connecting to the CredHub Service

Set environment variables for the CredHub client, your CredHub service location, and the Cloud Application Platform namespace. In these guides the example namespace is scf :

tux > CH_CLI=~/.chclient/credhub
tux > CH_SERVICE=https://credhub.example.com
tux > NAMESPACE=scf

Set up the CredHub service location:

tux > SECRET="$(kubectl get secrets --namespace "${NAMESPACE}" | awk '/^secrets-/ { print $1 }')"
tux > CH_SECRET="$(kubectl get secrets --namespace "${NAMESPACE}" "${SECRET}" --output jsonpath="{.data['uaa-clients-credhub-user-cli-secret']}" | base64 --decode)"
tux > CH_CLIENT=credhub_user_cli
tux > echo Service ......@ $CH_SERVICE
tux > echo CH cli Secret @ $CH_SECRET
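The awk '/^secrets-/ { print $1 }' filter above picks out the secret whose name starts with secrets- from the kubectl listing. A local sketch against sample output (the secret names here are invented for illustration):

```shell
# Simulated `kubectl get secrets` output; the first column is the secret name.
sample='NAME                TYPE     DATA   AGE
secrets-2.19.1-1    Opaque   47     10d
uaa-other-secret    Opaque   1      10d'

# Same filter as in the documentation: keep lines beginning with "secrets-".
printf '%s\n' "$sample" | awk '/^secrets-/ { print $1 }'
```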

Set the CredHub target through its Kubernetes service, then log into CredHub:

tux > "${CH_CLI}" api --skip-tls-validation --server "${CH_SERVICE}"
tux > "${CH_CLI}" login --client-name="${CH_CLIENT}" --client-secret="${CH_SECRET}"

Test your new connection by inserting and retrieving some fake credentials:

tux > "${CH_CLI}" set --name FOX --type value --value 'fox over lazy dog'


tux > "${CH_CLI}" set --name DOG --type user --username dog --password fox
tux > "${CH_CLI}" get --name FOX
tux > "${CH_CLI}" get --name DOG


27 Buildpacks

Buildpacks (https://docs.cloudfoundry.org/buildpacks) are used to construct the environment needed to run your applications, including any required runtimes or frameworks as well as other dependencies. When you deploy an application, a buildpack can be specified or automatically detected by cycling through all available buildpacks to find one that is applicable. When there is a suitable buildpack for your application, the buildpack will then download any necessary dependencies during the staging process.

27.1 System Buildpacks

SUSE Cloud Application Platform releases include a set of system, or built-in, buildpacks for common languages and frameworks. These system buildpacks are based on the upstream versions of the buildpack, but are made compatible with the SLE-based stack(s) found in SUSE Cloud Application Platform.

The following table lists the default system buildpacks and their associated versions included as part of the SUSE Cloud Application Platform 1.5.1 release.

Buildpack     Version   GitHub Repository
Staticfile    1.4.43    https://github.com/SUSE/cf-staticfile-buildpack
NGINX         1.0.15    https://github.com/SUSE/cf-nginx-buildpack
Java          4.20.0    https://github.com/SUSE/cf-java-buildpack
Ruby          1.7.42    https://github.com/SUSE/cf-ruby-buildpack
Node.js       1.6.53    https://github.com/SUSE/cf-nodejs-buildpack
Go            1.8.42    https://github.com/SUSE/cf-go-buildpack
Python        1.6.36    https://github.com/SUSE/cf-python-buildpack
PHP           4.3.80    https://github.com/SUSE/cf-php-buildpack
Binary        1.0.33    https://github.com/SUSE/cf-binary-builder
.NET Core     2.2.13    https://github.com/SUSE/cf-dotnet-core-buildpack

27.2 Using Buildpacks

When deploying an application, a buildpack can be selected through one of the following methods:

Using the -b option during the cf push command, for example:

tux > cf push 12factor -b ruby_buildpack

Using the buildpacks attribute in your application's manifest.yml . For more information, see https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#buildpack .

---
applications:
- name: 12factor
  buildpacks:
  - ruby_buildpack

Using buildpack detection. Buildpack detection occurs when an application is pushed and a buildpack has not been specified using any of the other methods. The application is checked against the detection criteria of a buildpack to verify whether it is compatible. Each buildpack has its own detection criteria, defined in the /bin/detect file. The Ruby buildpack, for example, considers an application compatible if it contains a Gemfile file and a Gemfile.lock file in its root directory.


The detection process begins with the first buildpack in the detection priority list. If the buildpack is compatible with the application, the staging process continues. If the buildpack is not compatible with the application, the buildpack in the next position is checked. To see the detection priority list, run cf buildpacks and examine the position field. If there are no compatible buildpacks, the cf push command will fail. For more information, see https://docs.cloudfoundry.org/buildpacks/understand-buildpacks.html#buildpack-detection .
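The Ruby example above can be sketched as a tiny detect script. This is an illustrative approximation, not the actual buildpack code; real bin/detect scripts also handle version output and other details.

```shell
# Minimal detect sketch: succeed (and name the buildpack) only if the
# application root contains both a Gemfile and a Gemfile.lock.
detect() {
  build_dir="$1"
  if [ -f "${build_dir}/Gemfile" ] && [ -f "${build_dir}/Gemfile.lock" ]; then
    echo "ruby"
    return 0
  fi
  return 1
}

# Demonstration against a throwaway application directory.
app=$(mktemp -d)
touch "${app}/Gemfile" "${app}/Gemfile.lock"
detect "${app}"    # prints "ruby"
```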

In the above, ruby_buildpack can be replaced with:

The name of a buildpack. To list the currently available buildpacks, including any that were created or updated, examine the buildpack field after running:

tux > cf buildpacks

The Git URL of a buildpack. For example, https://github.com/SUSE/cf-ruby-buildpack .

The Git URL of a buildpack with a specific branch or tag. For example, https://github.com/SUSE/cf-ruby-buildpack#1.7.40 .

For more information about using buildpacks, see https://docs.cloudfoundry.org/buildpacks/#using-buildpacks .

27.3 Adding Buildpacks

Additional buildpacks can be added to your SUSE Cloud Application Platform deployment to complement the ones already installed.

1. List the currently installed buildpacks.

tux > cf buildpacks
Getting buildpacks...

buildpack position enabled locked filename stack
staticfile_buildpack 1 true false staticfile-buildpack-v1.4.43.1-1.1-53227ab3.zip
nginx_buildpack 2 true false nginx-buildpack-v1.0.15.1-1.1-868e3dbf.zip


java_buildpack 3 true false java-buildpack-v4.20.0.1-7b3efeee.zip
ruby_buildpack 4 true false ruby-buildpack-v1.7.42.1-1.1-897dec18.zip
nodejs_buildpack 5 true false nodejs-buildpack-v1.6.53.1-1.1-ca7738ac.zip
go_buildpack 6 true false go-buildpack-v1.8.42.1-1.1-c93d1f83.zip
python_buildpack 7 true false python-buildpack-v1.6.36.1-1.1-4c0057b7.zip
php_buildpack 8 true false php-buildpack-v4.3.80.1-6.1-613615bf.zip
binary_buildpack 9 true false binary-buildpack-v1.0.33.1-1.1-a53fa79d.zip
dotnet-core_buildpack 10 true false dotnet-core-buildpack-v2.2.13.1-1.1-cf41131a.zip

2. Add a new buildpack using the cf create-buildpack command.

tux > cf create-buildpack another_ruby_buildpack https://cf-buildpacks.suse.com/ruby-buildpack-v1.7.41.1-1.1-c4cd5fed.zip 10

Where:

another_ruby_buildpack is the name of the buildpack.

https://cf-buildpacks.suse.com/ruby-buildpack-v1.7.41.1-1.1-c4cd5fed.zip is the path to the buildpack release. It should be a ZIP file, a URL to a ZIP file, or a local directory.

10 is the position of the buildpack and is used to determine priority. A lower value indicates a higher priority.

To see all available options, run:

tux > cf create-buildpack -h

3. Verify the new buildpack has been added.

tux > cf buildpacks
Getting buildpacks...

buildpack position enabled locked filename stack


staticfile_buildpack 1 true false staticfile-buildpack-v1.4.43.1-1.1-53227ab3.zip
nginx_buildpack 2 true false nginx-buildpack-v1.0.15.1-1.1-868e3dbf.zip
java_buildpack 3 true false java-buildpack-v4.20.0.1-7b3efeee.zip
ruby_buildpack 4 true false ruby-buildpack-v1.7.42.1-1.1-897dec18.zip
nodejs_buildpack 5 true false nodejs-buildpack-v1.6.53.1-1.1-ca7738ac.zip
go_buildpack 6 true false go-buildpack-v1.8.42.1-1.1-c93d1f83.zip
python_buildpack 7 true false python-buildpack-v1.6.36.1-1.1-4c0057b7.zip
php_buildpack 8 true false php-buildpack-v4.3.80.1-6.1-613615bf.zip
binary_buildpack 9 true false binary-buildpack-v1.0.33.1-1.1-a53fa79d.zip
another_ruby_buildpack 10 true false ruby-buildpack-v1.7.41.1-1.1-c4cd5fed.zip
dotnet-core_buildpack 11 true false dotnet-core-buildpack-v2.2.13.1-1.1-cf41131a.zip

27.4 Updating Buildpacks

Currently installed buildpacks can be updated using the cf update-buildpack command. To see all values that can be updated, run cf update-buildpack -h .

Important
Before updating a built-in buildpack, take note of the scenarios that could lead to the api-group pod not being in a ready state, as described in Section 31.7, “api-group Pod Not Ready after Buildpack Update”.

1. List the currently installed buildpacks that can be updated.

tux > cf buildpacks
Getting buildpacks...

buildpack position enabled locked filename stack


staticfile_buildpack 1 true false staticfile-buildpack-v1.4.43.1-1.1-53227ab3.zip
nginx_buildpack 2 true false nginx-buildpack-v1.0.15.1-1.1-868e3dbf.zip
java_buildpack 3 true false java-buildpack-v4.20.0.1-7b3efeee.zip
ruby_buildpack 4 true false ruby-buildpack-v1.7.42.1-1.1-897dec18.zip
nodejs_buildpack 5 true false nodejs-buildpack-v1.6.53.1-1.1-ca7738ac.zip
go_buildpack 6 true false go-buildpack-v1.8.42.1-1.1-c93d1f83.zip
python_buildpack 7 true false python-buildpack-v1.6.36.1-1.1-4c0057b7.zip
php_buildpack 8 true false php-buildpack-v4.3.80.1-6.1-613615bf.zip
binary_buildpack 9 true false binary-buildpack-v1.0.33.1-1.1-a53fa79d.zip
another_ruby_buildpack 10 true false ruby-buildpack-v1.7.41.1-1.1-c4cd5fed.zip
dotnet-core_buildpack 11 true false dotnet-core-buildpack-v2.2.13.1-1.1-cf41131a.zip

2. Use the cf update-buildpack command to update a buildpack.

tux > cf update-buildpack another_ruby_buildpack -i 11

To see all available options, run:

tux > cf update-buildpack -h

3. Verify the new buildpack has been updated.

tux > cf buildpacks
Getting buildpacks...

buildpack position enabled locked filename stack
staticfile_buildpack 1 true false staticfile-buildpack-v1.4.43.1-1.1-53227ab3.zip
nginx_buildpack 2 true false nginx-buildpack-v1.0.15.1-1.1-868e3dbf.zip
java_buildpack 3 true false java-buildpack-v4.20.0.1-7b3efeee.zip
ruby_buildpack 4 true false ruby-buildpack-v1.7.42.1-1.1-897dec18.zip


nodejs_buildpack 5 true false nodejs-buildpack-v1.6.53.1-1.1-ca7738ac.zip
go_buildpack 6 true false go-buildpack-v1.8.42.1-1.1-c93d1f83.zip
python_buildpack 7 true false python-buildpack-v1.6.36.1-1.1-4c0057b7.zip
php_buildpack 8 true false php-buildpack-v4.3.80.1-6.1-613615bf.zip
binary_buildpack 9 true false binary-buildpack-v1.0.33.1-1.1-a53fa79d.zip
dotnet-core_buildpack 10 true false dotnet-core-buildpack-v2.2.13.1-1.1-cf41131a.zip
another_ruby_buildpack 11 true false ruby-buildpack-v1.7.41.1-1.1-c4cd5fed.zip

27.5 Offline Buildpacks

An offline, or cached, buildpack packages the runtimes, frameworks, and dependencies needed to run your applications into an archive that is then uploaded to your Cloud Application Platform deployment. When an application is deployed using an offline buildpack, access to the Internet to download dependencies is no longer required. This has the benefit of providing improved staging performance and allows for staging to take place in air-gapped environments.

27.5.1 Creating an Offline Buildpack

Offline buildpacks can be created using the cf-buildpack-packager-docker (https://github.com/SUSE/cf-buildpack-packager-docker) tool, which is available as a Docker (https://www.docker.com/) image. The only requirement to use this tool is a system with Docker support.

Important: Disclaimer
Some Cloud Foundry buildpacks can reference binaries with proprietary or mutually incompatible open source licenses which cannot be distributed together as offline/cached buildpack archives. Operators who wish to package and maintain offline buildpacks will be responsible for any required licensing or export compliance obligations.

For automation purposes, you can use the --accept-external-binaries option to accept this disclaimer without the interactive prompt.


Usage (https://github.com/SUSE/cf-buildpack-packager-docker#usage) of the tool is as follows:

package [--accept-external-binaries] org [all [stack] | language [tag] [stack]]

Where:

org is the GitHub organization hosting the buildpack repositories, such as "cloudfoundry" (https://github.com/cloudfoundry) or "SUSE" (https://github.com/SUSE)

A tag cannot be specified when using all as the language because the tag is different for each language

tag is not optional if a stack is specified. To specify the latest release, use "" as the tag

A maximum of one stack can be specified

The following example demonstrates packaging an offline Ruby buildpack and uploading it to your Cloud Application Platform deployment to use. The packaged buildpack will be a ZIP file placed in the current working directory, $PWD .

1. Build the latest released SUSE Ruby buildpack for the SUSE Linux Enterprise 15 stack:

tux > docker run --interactive --tty --rm -v $PWD:/out splatform/cf-buildpack-packager SUSE ruby "" sle15

2. Verify the archive has been created in your current working directory:

tux > ls
ruby_buildpack-cached-sle15-v1.7.30.1.zip

3. Log into your Cloud Application Platform deployment. Select an organization and space to work with, creating them if needed:

tux > cf api --skip-ssl-validation https://api.example.com
tux > cf login -u admin -p password
tux > cf create-org org
tux > cf create-space space -o org
tux > cf target -o org -s space

4. List the currently available buildpacks:

tux > cf buildpacks


Getting buildpacks...

buildpack position enabled locked filename
staticfile_buildpack 1 true false staticfile_buildpack-v1.4.34.1-1.1-1dd6386a.zip
java_buildpack 2 true false java-buildpack-v4.16.1-e638145.zip
ruby_buildpack 3 true false ruby_buildpack-v1.7.26.1-1.1-c2218d66.zip
nodejs_buildpack 4 true false nodejs_buildpack-v1.6.34.1-3.1-c794e433.zip
go_buildpack 5 true false go_buildpack-v1.8.28.1-1.1-7508400b.zip
python_buildpack 6 true false python_buildpack-v1.6.23.1-1.1-99388428.zip
php_buildpack 7 true false php_buildpack-v4.3.63.1-1.1-2515c4f4.zip
binary_buildpack 8 true false binary_buildpack-v1.0.27.1-3.1-dc23dfe2.zip
dotnet-core_buildpack 9 true false dotnet-core-buildpack-v2.0.3.zip

5. Upload your packaged offline buildpack to your Cloud Application Platform deployment:

tux > cf create-buildpack ruby_buildpack_cached /tmp/ruby_buildpack-cached-sle15-v1.7.30.1.zip 1 --enable
Creating buildpack ruby_buildpack_cached...
OK

Uploading buildpack ruby_buildpack_cached...
Done uploading
OK

6. Verify your buildpack is available:

tux > cf buildpacks
Getting buildpacks...

buildpack position enabled locked filename
ruby_buildpack_cached 1 true false ruby_buildpack-cached-sle15-v1.7.30.1.zip
staticfile_buildpack 2 true false staticfile_buildpack-v1.4.34.1-1.1-1dd6386a.zip
java_buildpack 3 true false java-buildpack-v4.16.1-e638145.zip
ruby_buildpack 4 true false ruby_buildpack-v1.7.26.1-1.1-c2218d66.zip


nodejs_buildpack 5 true false nodejs_buildpack-v1.6.34.1-3.1-c794e433.zip
go_buildpack 6 true false go_buildpack-v1.8.28.1-1.1-7508400b.zip
python_buildpack 7 true false python_buildpack-v1.6.23.1-1.1-99388428.zip
php_buildpack 8 true false php_buildpack-v4.3.63.1-1.1-2515c4f4.zip
binary_buildpack 9 true false binary_buildpack-v1.0.27.1-3.1-dc23dfe2.zip
dotnet-core_buildpack 10 true false dotnet-core-buildpack-v2.0.3.zip

7. Deploy a sample Rails app using the new buildpack:

tux > git clone https://github.com/scf-samples/12factor
tux > cd 12factor
tux > cf push 12factor -b ruby_buildpack_cached

Warning: Deprecation of cflinuxfs2 and sle12 Stacks
As of SUSE Cloud Foundry 2.18.0, since our cf-deployment version is 9.5 , the cflinuxfs2 stack is no longer supported, as was advised in SUSE Cloud Foundry 2.17.1 or Cloud Application Platform 1.4.1. The cflinuxfs2 buildpack is no longer shipped, but if you are upgrading from an earlier version, cflinuxfs2 will not be removed. However, for migration purposes, we encourage all admins to move to cflinuxfs3 or sle15 , as newer buildpacks will not work with the deprecated cflinuxfs2 . If you still want to use the older stack, you will need to build an older version of a buildpack for the application to continue to work, but you will be unsupported. (If you are running on sle12 , we will be retiring that stack in a future version, so start planning your migration to sle15 . The procedure is described below.)

Migrate applications to the new stack using one of the methods listed. Note that both methods will cause application downtime. Downtime can be avoided by following a Blue-Green Deployment strategy. See https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html for details. Note that stack association support is available as of cf CLI v6.39.0.

Option 1 - Migrating applications using the Stack Auditor plugin.


Stack Auditor rebuilds the application onto the new stack without a change in the application source code. If you want to move to a new stack with updated code, please follow Option 2 below. For additional information about the Stack Auditor plugin, see https://docs.cloudfoundry.org/adminguide/stack-auditor.html .

1. Install the Stack Auditor plugin for the cf CLI. For instructions, see https://docs.cloudfoundry.org/adminguide/stack-auditor.html#install .

2. Identify the stack applications are using. The audit lists all applications in orgs you have access to. To list all applications in your Cloud Application Platform deployment, ensure you are logged in as a user with access to all orgs.

tux > cf audit-stack

For each application requiring migration, perform the steps below.

3. If necessary, switch to the org and space the application is deployed to.

tux > cf target ORG SPACE

4. Change the stack to sle15 .

tux > cf change-stack APP_NAME sle15

5. Identify all buildpacks associated with the sle12 and cflinuxfs2 stacks.

tux > cf buildpacks

6. Remove all buildpacks associated with the sle12 and cflinuxfs2 stacks.

tux > cf delete-buildpack BUILDPACK -s sle12

tux > cf delete-buildpack BUILDPACK -s cflinuxfs2

7. Remove the sle12 and cflinuxfs2 stacks.

tux > cf delete-stack sle12


tux > cf delete-stack cflinuxfs2
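Finding which buildpacks belong to the old stacks (the identify steps above) can be done by filtering the cf buildpacks output on its final column. A sketch against simulated output; the buildpack rows here are invented for illustration:

```shell
# Simulated `cf buildpacks` output; the final column is the stack.
listing='buildpack         position  enabled  locked  filename            stack
ruby_buildpack    4         true     false   ruby-buildpack.zip  sle12
go_buildpack      6         true     false   go-buildpack.zip    sle15
old_php_buildpack 8         true     false   php-buildpack.zip   cflinuxfs2'

# Print the names of buildpacks on the stacks being retired.
printf '%s\n' "$listing" | awk '$NF == "sle12" || $NF == "cflinuxfs2" { print $1 }'
```

The printed names can then be fed to cf delete-buildpack one at a time.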

Option 2 - Migrating applications using the cf CLI. Perform the following for all orgs and spaces in your Cloud Application Platform deployment. Ensure you are logged in as a user with access to all orgs.

1. Target an org and space.

tux > cf target ORG SPACE

2. Identify the stack an application in the org and space is using.

tux > cf app APP_NAME

3. Re-push the app with the sle15 stack using one of the following methods.

Push the application with the stack option, -s , passed.

tux > cf push APP_NAME -s sle15

1. Update the application manifest file to include stack: sle15 . See https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#stack for details.

---
...
stack: sle15

2. Push the application.

tux > cf push APP_NAME

4. Identify all buildpacks associated with the sle12 and cflinuxfs2 stacks.

tux > cf buildpacks

5. Remove all buildpacks associated with the sle12 and cflinuxfs2 stacks.


tux > cf delete-buildpack BUILDPACK -s sle12

tux > cf delete-buildpack BUILDPACK -s cflinuxfs2

6. Remove the sle12 and cflinuxfs2 stacks using the CF API. See https://apidocs.cloudfoundry.org/7.11.0/#stacks for details. List all stacks, then find the GUIDs of the sle12 and cflinuxfs2 stacks.

tux > cf curl /v2/stacks

Delete the sle12 and cflinuxfs2 stacks.

tux > cf curl -X DELETE /v2/stacks/SLE12_STACK_GUID

tux > cf curl -X DELETE /v2/stacks/CFLINUXFS2_STACK_GUID


28 Custom Application Domains

In a standard SUSE Cloud Foundry deployment, applications will use the same domain as the one configured in your scf-config-values.yaml for SCF. For example, if DOMAIN is set as example.com in your scf-config-values.yaml and you deploy an application called myapp, then the application's URL will be myapp.example.com .
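The route construction just described is plain string composition of the application name and the configured domain. A minimal sketch, with example.com and myapp as placeholder values:

```shell
# The application route is <app name>.<DOMAIN> from scf-config-values.yaml.
DOMAIN=example.com
APP=myapp
ROUTE="${APP}.${DOMAIN}"
echo "https://${ROUTE}"
```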

This chapter describes the changes required to allow applications to use a separate domain.

28.1 Customizing Application Domains

Begin by adding the following to your scf-config-values.yaml . Replace appdomain.com with the domain to use with your applications:

bosh:
  instance_groups:
  - name: api-group
    jobs:
    - name: cloud_controller_ng
      properties:
        app_domains:
        - appdomain.com

If uaa is deployed, pass your uaa secret and certificate to scf . Otherwise deploy uaa first (see Section 5.7.1, “Deploy uaa”), then proceed with this step:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
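As an aside, the jsonpath output above is base64-encoded, because Kubernetes stores secret data in that form; that is why the command pipes it through base64 --decode. A minimal, self-contained illustration of the round trip, using stand-in text rather than a real certificate:

```shell
# Kubernetes stores secret values base64-encoded; decoding recovers the
# original text. The certificate text below is a stand-in, not a real CA cert.
ENCODED=$(printf '%s' 'stand-in certificate text' | base64)
printf '%s' "$ENCODED" | base64 --decode
# prints: stand-in certificate text
```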

If this is an initial deployment, use helm install to deploy scf :

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

If this is an existing deployment, use helm upgrade to apply the change:

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}" \
--version 2.19.1

Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1.

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.
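The readiness criteria above can be spot-checked mechanically. The sketch below runs against stand-in kubectl output (the pod names are hypothetical) and prints any pod that is neither Running nor Completed:

```shell
# Stand-in for `kubectl get pods --namespace scf` output; in a live cluster,
# pipe the real command into the same awk filter.
printf '%s\n' \
  'api-group-0 1/1 Running' \
  'post-deployment-setup-1-abcde 0/1 Completed' \
  'router-0 0/1 CrashLoopBackOff' |
  awk '$3 != "Running" && $3 != "Completed" {print $1}'
# prints: router-0
```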

When the scf deployment is complete, do the following to confirm custom application domains have been configured correctly.

Run cf curl /v2/info and verify the SCF domain is not appdomain.com :

tux > cf api --skip-ssl-validation https://api.example.com
tux > cf curl /v2/info | grep endpoint

Deploy an application and examine the routes field to verify appdomain.com is being used:

tux > cf login
tux > cf create-org org
tux > cf create-space space -o org
tux > cf target -o org -s space
tux > cf push myapp
Pushing app myapp to org org / space space as admin...
Getting app info...
Creating app with these attributes...
  name: myapp
  path: /path/to/myapp
  routes:
+   myapp.appdomain.com

Creating app myapp...
Mapping routes...
...
Waiting for app to start...


name:              myapp
requested state:   started
instances:         1/1
usage:             1G x 1 instances
routes:            myapp.appdomain.com
last uploaded:     Mon 14 Jan 11:08:02 PST 2019
stack:             sle15
buildpack:         ruby
start command:     bundle exec rackup config.ru -p $PORT

     state     since                  cpu    memory       disk          details
#0   running   2019-01-14T19:09:42Z   0.0%   2.7M of 1G   80.6M of 1G


29 Managing Nproc Limits of Pods

Warning: Do Not Adjust Without Guidance

It is not recommended to change these values without the guidance of SUSE Cloud Application Platform developers. Please contact support for assistance.

Nproc is the maximum number of processes allowed per user. In the case of scf, the nproc value applies to the vcap user. In scf, there are parameters, kube.limits.nproc.soft and kube.limits.nproc.hard, to configure a soft nproc limit and a hard nproc limit for processes spawned by the vcap user in scf pods. By default, the soft limit is 1024 while the hard limit is 2048. The soft and hard limits can be changed to suit your workloads. Note that the limits are applied to all pods.

When configuring the nproc limits, take note that:

If the soft limit is set, the hard limit must be set as well.

If the hard limit is set, the soft limit must be set as well.

The soft limit cannot be greater than the hard limit.
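The three rules above can be expressed as a small validation helper. This is a hypothetical sketch, not part of Cloud Application Platform; it only checks a soft/hard pair before you write it into scf-config-values.yaml:

```shell
# Hypothetical helper: enforce the nproc configuration rules.
validate_nproc() {
  soft="$1"; hard="$2"
  # Rules 1 and 2: soft and hard must be set together.
  if [ -z "$soft" ] || [ -z "$hard" ]; then
    echo "invalid: both soft and hard limits must be set"; return 1
  fi
  # Rule 3: the soft limit cannot exceed the hard limit.
  if [ "$soft" -gt "$hard" ]; then
    echo "invalid: soft limit must not exceed hard limit"; return 1
  fi
  echo "ok: soft=$soft hard=$hard"
}
validate_nproc 2048 3072   # prints: ok: soft=2048 hard=3072
```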

29.1 Configuring and Applying Nproc Limits

To configure the nproc limits, add the following to your scf-config-values.yaml. Replace the example values with limits suitable for your workloads:

kube:
  limits:
    nproc:
      hard: 3072
      soft: 2048


29.1.1 New Deployments

Note: Embedded uaa in scf

The User Account and Authentication (uaa) Server is included as an optional feature of the scf Helm chart. This simplifies the Cloud Application Platform deployment process, as a separate installation and/or upgrade of uaa is no longer a prerequisite to installing and/or upgrading scf.

It is important to note that:

This feature should only be used when uaa is not shared with other projects.

You cannot migrate from an existing external uaa to an embedded one. In this situation, enabling this feature during an upgrade will result in a single admin account.

To enable this feature, add the following to your scf-config-values.yaml .

enable:
  uaa: true

When deploying and/or upgrading scf, run helm install and/or helm upgrade and note that:

Installing and/or upgrading uaa using helm install suse/uaa ... and/or helm upgrade is no longer required.

It is no longer necessary to set the UAA_CA_CERT parameter. Previously, this parameter was passed the CA_CERT variable, which was assigned the CA certificate of uaa.

For new SUSE Cloud Application Platform deployments, follow the steps below to deploy SUSE Cloud Application Platform with nproc limits configured:

1. Deploy uaa :

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml


Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1.

All other pods have a Running STATUS and a READY value of n/n .

Press Ctrl – C to exit the watch command.

2. Pass your uaa secret and certificate to scf :

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

3. Deploy scf :

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

4. Monitor the deployment progress using the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

5. Open a shell into any container. The command below opens a shell to the default container in the blobstore-0 pod:

tux > kubectl exec --stdin --tty blobstore-0 --namespace scf -- env /bin/bash

6. Use the vcap user identity:

tux > su vcap


7. Verify the maximum number of processes for the vcap user matches the limits you set:

tux > ulimit -u

tux > cat /etc/security/limits.conf | grep nproc
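For reference, ulimit -u reports the soft nproc limit of the current shell and ulimit -Hu reports the hard limit, so a quick one-line check (run as the vcap user inside the pod) is:

```shell
# Prints the soft and hard nproc limits of the current shell session.
# Run inside an scf pod as vcap, these should match kube.limits.nproc.
echo "soft=$(ulimit -u) hard=$(ulimit -Hu)"
```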

29.1.2 Existing Deployments

Note: Embedded uaa in scf

The User Account and Authentication (uaa) Server is included as an optional feature of the scf Helm chart. This simplifies the Cloud Application Platform deployment process, as a separate installation and/or upgrade of uaa is no longer a prerequisite to installing and/or upgrading scf.

It is important to note that:

This feature should only be used when uaa is not shared with other projects.

You cannot migrate from an existing external uaa to an embedded one. In this situation, enabling this feature during an upgrade will result in a single admin account.

To enable this feature, add the following to your scf-config-values.yaml .

enable:
  uaa: true

When deploying and/or upgrading scf, run helm install and/or helm upgrade and note that:

Installing and/or upgrading uaa using helm install suse/uaa ... and/or helm upgrade is no longer required.

It is no longer necessary to set the UAA_CA_CERT parameter. Previously, this parameter was passed the CA_CERT variable, which was assigned the CA certificate of uaa.


For existing SUSE Cloud Application Platform deployments, follow the steps below to redeploy SUSE Cloud Application Platform with nproc limits configured:

1. Pass your uaa secret and certificate to scf :

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

2. Use helm upgrade to apply the change:

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}" \
--version 2.19.1

3. Monitor the deployment progress using the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

4. Open a shell into any container. The command below opens a shell to the default container in the blobstore-0 pod:

tux > kubectl exec --stdin --tty blobstore-0 --namespace scf -- env /bin/bash

5. Use the vcap user identity:

tux > su vcap

6. Verify the maximum number of processes for the vcap user matches the limits you set:

tux > ulimit -u

tux > cat /etc/security/limits.conf | grep nproc


IV SUSE Cloud Application Platform UserGuide

30 Deploying and Managing Applications with the Cloud Foundry Client


30 Deploying and Managing Applications with the Cloud Foundry Client

30.1 Using the cf CLI with SUSE Cloud Application Platform

The Cloud Foundry command line interface (cf CLI) is for deploying and managing your applications. You may use it for all the orgs and spaces that you are a member of. Install the client on a workstation for remote administration of your SUSE Cloud Foundry instances.

The complete guide is at Using the Cloud Foundry Command Line Interface (https://docs.cloudfoundry.org/cf-cli/), and source code with a demo video is on GitHub at Cloud Foundry CLI (https://github.com/cloudfoundry/cli/blob/master/README.md).

The following examples demonstrate some of the commonly-used commands. The first task is to log into your new SUSE Cloud Foundry instance. When your installation completes, it prints a welcome screen with the information you need to access it.

NOTES: Welcome to your new deployment of SCF.

The endpoint for use by the `cf` client is https://api.example.com

To target this endpoint run cf api --skip-ssl-validation https://api.example.com

Your administrative credentials are: Username: admin Password: password

Please remember, it may take some time for everything to come online.

You can use kubectl get pods --namespace scf

to spot-check if everything is up and running, or watch --color 'kubectl get pods --namespace scf'

to monitor continuously.


You can display this message anytime with this command:

tux > helm status $(helm list | awk '/cf-([0-9]).([0-9]).*/{print$1}') | \
sed --quiet --expression '/NOTES/,$p'
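The sed expression in the command above prints everything from the first line matching NOTES through the end of the input. A self-contained illustration using stand-in helm status output:

```shell
# Stand-in for `helm status` output; the sed filter keeps only the NOTES
# section and everything after it.
printf '%s\n' \
  'STATUS: DEPLOYED' \
  'RESOURCES:' \
  'NOTES:' \
  'Welcome to your new deployment of SCF.' |
  sed --quiet --expression '/NOTES/,$p'
```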

You need to provide the API endpoint of your SUSE Cloud Application Platform instance to log in. The API endpoint is the DOMAIN value you provided in scf-config-values.yaml, plus the api. prefix, as shown in the above welcome screen. Set your endpoint, and use --skip-ssl-validation when you have self-signed SSL certificates. It asks for an e-mail address, but you must enter admin instead (you cannot change this to a different username, though you may create additional users), and the password is the one you created in scf-config-values.yaml:

tux > cf login --skip-ssl-validation -a https://api.example.com
API endpoint: https://api.example.com

Email> admin

Password>
Authenticating...
OK

Targeted org system

API endpoint: https://api.example.com (API version: 2.134.0)
User: admin
Org: system
Space: No space targeted, use 'cf target -s SPACE'

cf help displays a list of commands and options. cf help [command] provides information on specific commands.

You may pass in your credentials and set the API endpoint in a single command:

tux > cf login -u admin -p password --skip-ssl-validation -a https://api.example.com

Log out with cf logout .

Change the admin password:

tux > cf passwd
Current Password>
New Password>
Verify Password>
Changing password...
OK


Please log in again

View your current API endpoint, user, org, and space:

tux > cf target

Switch to a different org or space:

tux > cf target -o org
tux > cf target -s space

List all apps in the current space:

tux > cf apps

Query the health and status of a particular app:

tux > cf app appname

View app logs. The first example tails the log of a running app. The --recent option dumps recent logs instead of tailing, which is useful for stopped and crashed apps:

tux > cf logs appname
tux > cf logs --recent appname

Restart all instances of an app:

tux > cf restart appname

Restart a single instance of an app, identified by its index number, and restart it with the same index number:

tux > cf restart-app-instance appname index

After you have set up a service broker (see Chapter 22, Service Brokers), create new services:

tux > cf create-service service-name default mydb

Then you may bind a service instance to an app:

tux > cf bind-service appname service-instance

The most-used command is cf push , for pushing new apps and changes to existing apps.

tux > cf push new-app -b buildpack


If you need to debug your application or run one-off tasks, start an SSH session into your application container.

tux > cf ssh appname

When the SSH connection is established, run the following to have the environment match that of the application and its associated buildpack.

tux > /tmp/lifecycle/shell


V Troubleshooting

31 Troubleshooting


31 Troubleshooting

Cloud stacks are complex, and debugging deployment issues often requires digging through multiple layers to find the information you need. Remember that the SUSE Cloud Foundry releases must be deployed in the correct order, and that each release must deploy successfully, with no failed pods, before deploying the next release.

31.1 Logging

There are two types of logs in a deployment of SUSE Cloud Application Platform, application logs and component logs. The following provides a brief overview of each log type and how to retrieve them for monitoring and debugging use.

Application logs provide information specific to a given application that has been deployed to your Cloud Application Platform cluster and can be accessed through:

The cf CLI using the cf logs command

The application's log stream within the Stratos console

Access to logs for a given component of your Cloud Application Platform deployment can be obtained by:

The kubectl logs command. The following example retrieves the logs of the router container of the router-0 pod in the scf namespace:

tux > kubectl logs --namespace scf router-0 router

Direct access to the log files using the following:

1. Open a shell to the container of the component using the kubectl exec command

2. Navigate to the logs directory at /var/vcap/sys/log, at which point there will be subdirectories containing the log files for access.

tux > kubectl exec --stdin --tty --namespace scf router-0 /bin/bash

router/0:/# cd /var/vcap/sys/log


router/0:/var/vcap/sys/log# ls -R
.:
gorouter  loggregator_agent

./gorouter:
access.log  gorouter.err.log  gorouter.log  post-start.err.log  post-start.log

./loggregator_agent:
agent.log

31.2 Using Supportconfig

If you ever need to request support, or just want to generate detailed system information and logs, use the supportconfig utility. Run it with no options to collect basic system information, and also cluster logs including Docker, etcd, flannel, and Velum. supportconfig may give you all the information you need.

supportconfig -h prints the options. Read the "Gathering System Information for Support" chapter in any SUSE Linux Enterprise Administration Guide to learn more.

31.3 Deployment Is Taking Too Long

A deployment step seems to take too long, or you see that some pods are not in a ready state hours after all the others are ready, or a pod shows a lot of restarts. This example shows not-ready pods many hours after the others have become ready:

tux > kubectl get pods --namespace scf
NAME                      READY  STATUS   RESTARTS  AGE
router-3137013061-wlhxb   0/1    Running  0         16h
routing-api-0             0/1    Running  0         16h

The Running status means the pod is bound to a node and all of its containers have been created. However, it is not Ready, which means it is not ready to service requests. Use kubectl to print a detailed description of pod events and status:

tux > kubectl describe pod --namespace scf router-3137013061-wlhxb

This prints a lot of information, including IP addresses, routine events, warnings, and errors. You should find the reason for the failure in this output.


Important

Some Pods Show as Not Running

Some uaa and scf pods perform only deployment tasks, and it is normal for them to show as unready and Completed after they have completed their tasks, as these examples show:

tux > kubectl get pods --namespace uaa
secret-generation-1-z4nlz        0/1    Completed

tux > kubectl get pods --namespace scf
secret-generation-1-m6k2h        0/1    Completed
post-deployment-setup-1-hnpln    0/1    Completed

Some Pods Terminate and Restart during Deployment

When monitoring the status of a deployment, pods can be observed transitioning from a Running state to a Terminating state, then returning to Running again. If a RESTARTS count of 0 is maintained during this process, this is normal behavior and not due to failing pods. It is not necessary to stop the deployment. During deployment, pods modify annotations on themselves via the StatefulSet pod spec. In order to get the correct annotations on the running pod, it is stopped and restarted. Under normal circumstances, this behavior should only result in a pod restarting once.
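Since a RESTARTS count of 0 is the normal case, one quick way to separate benign Terminating/Running cycles from genuinely failing pods is to flag non-zero restart counts. The sketch below uses stand-in kubectl output (hypothetical pod names):

```shell
# Column 4 of `kubectl get pods` is RESTARTS; print any pod that restarted.
printf '%s\n' \
  'router-0 1/1 Running 0 16h' \
  'diego-cell-0 0/1 Running 4 16h' |
  awk '$4 != 0 {print $1}'
# prints: diego-cell-0
```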

31.4 Deleting and Rebuilding a Deployment

There may be times when you want to delete and rebuild a deployment, for example when there are errors in your scf-config-values.yaml file, you wish to test configuration changes, or a deployment fails and you want to try it again. This has five steps: first delete the StatefulSets of the namespace associated with the release or releases you want to re-deploy, then delete the release or releases, delete its namespace, then re-create the namespace and re-deploy the release.

The namespace is also deleted as part of the process because the SCF and UAA namespaces contain generated secrets which Helm is not aware of and will not remove when a release is deleted. When deleting a release, busy systems may encounter timeouts. Deleting the StatefulSets first makes this operation more likely to succeed. Using the delete statefulsets command requires kubectl v1.9.6 or newer.


Use helm to see your releases:

tux > helm list
NAME            REVISION  UPDATED                   STATUS    CHART          NAMESPACE
susecf-console  1         Tue Aug 14 11:53:28 2018  DEPLOYED  console-2.6.1  stratos
susecf-scf      1         Tue Aug 14 10:58:16 2018  DEPLOYED  cf-2.19.1      scf
susecf-uaa      1         Tue Aug 14 10:49:30 2018  DEPLOYED  uaa-2.19.1     uaa

This example deletes the susecf-uaa release and uaa namespace:

tux > kubectl delete statefulsets --all --namespace uaa
statefulset "mysql" deleted
statefulset "uaa" deleted

tux > helm delete --purge susecf-uaa
release "susecf-uaa" deleted

tux > kubectl delete namespace uaa
namespace "uaa" deleted

Repeat the same process for the susecf-scf release and scf namespace:

tux > kubectl delete statefulsets --all --namespace scf
statefulset "adapter" deleted
statefulset "api-group" deleted
...

tux > helm delete --purge susecf-scf
release "susecf-scf" deleted

tux > kubectl delete namespace scf
namespace "scf" deleted

Then you can start over. Be sure to create new release and namespace names.

31.5 Querying with Kubectl

You can safely query with kubectl to get information about resources inside your Kubernetes cluster. kubectl cluster-info dump | tee clusterinfo.txt outputs a large amount of information about the Kubernetes master and cluster services to a text file.

The following commands give more targeted information about your cluster.


List all cluster resources:

tux > kubectl get all --all-namespaces

List all of your running pods:

tux > kubectl get pods --all-namespaces

List all of your running pods, their internal IP addresses, and which Kubernetes nodes they are running on:

tux > kubectl get pods --all-namespaces --output wide

See all pods, including those with Completed or Failed statuses:

tux > kubectl get pods --show-all --all-namespaces

List pods in one namespace:

tux > kubectl get pods --namespace scf

Get detailed information about one pod:

tux > kubectl describe --namespace scf po/diego-cell-0

Read the log file of a pod:

tux > kubectl logs --namespace scf po/diego-cell-0

List all Kubernetes nodes, then print detailed information about a single node:

tux > kubectl get nodes
tux > kubectl describe node 6a2752b6fab54bb889029f60de6fa4d5.infra.caasp.local

List all containers in all namespaces, formatted for readability:

tux > kubectl get pods --all-namespaces --output jsonpath="{..image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
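To see what the tr/sort/uniq stages do, here is the same pipeline fed with a stand-in for the jsonpath output (the image names are hypothetical): the whitespace-separated list becomes one image per line, sorted and counted.

```shell
# Stand-in for the jsonpath image list; in a live cluster the kubectl
# command above produces this kind of whitespace-separated string.
printf '%s' 'registry.example.com/nginx:1.0 registry.example.com/api:2.1 registry.example.com/nginx:1.0' |
  tr -s '[[:space:]]' '\n' |
  sort |
  uniq -c
```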

These two commands check node capacities, to verify that there are enough resources for the pods:

tux > kubectl get nodes --output yaml | grep '\sname\|cpu\|memory'


tux > kubectl get nodes --output json | \
jq '.items[] | {name: .metadata.name, cap: .status.capacity}'

31.6 autoscaler-metrics Pod Liquibase Error

In some situations, the autoscaler-metrics pod may fail to reach a fully ready state due to a Liquibase error, liquibase.exception.LockException: Could not acquire change log lock.

When this occurs, use the following procedure to resolve the issue.

1. Verify whether the error has occurred by examining the logs.

tux > kubectl logs autoscaler-metrics-0 autoscaler-metrics --follow --namespace scf

If the error has occurred, the log will contain entries similar to the following.

# Starting Liquibase at Wed, 21 Aug 2019 22:38:00 UTC (version 3.6.3 built at 2019-01-29 11:34:48)
# Unexpected error running Liquibase: Could not acquire change log lock. Currently locked by autoscaler-metrics-0.autoscaler-metrics-set.cf.svc.cluster.local (172.16.0.173) since 8/21/19, 9:20 PM
# liquibase.exception.LockException: Could not acquire change log lock. Currently locked by autoscaler-metrics-0.autoscaler-metrics-set.cf.svc.cluster.local (172.16.0.173) since 8/21/19, 9:20 PM
# at liquibase.lockservice.StandardLockService.waitForLock(StandardLockService.java:230)
# at liquibase.Liquibase.update(Liquibase.java:184)
# at liquibase.Liquibase.update(Liquibase.java:179)
# at liquibase.integration.commandline.Main.doMigration(Main.java:1220)
# at liquibase.integration.commandline.Main.run(Main.java:199)
# at liquibase.integration.commandline.Main.main(Main.java:137)

2. After confirming the error has occurred, use the provided Ruby script to recover from the error.

a. Save the script below to a file. In this example, it is saved to a file named recovery-autoscaler-metrics-liquidbase-stale-lock.

#!/usr/bin/env ruby

# This script recovers from autoscaler-metrics failing to start because it could
# not continue schema migrations. The error looks something like:

# Starting Liquibase at Wed, 21 Aug 2019 22:38:00 UTC (version 3.6.3 built at 2019-01-29 11:34:48)
# Unexpected error running Liquibase: Could not acquire change log lock. Currently locked by autoscaler-metrics-0.autoscaler-metrics-set.cf.svc.cluster.local (172.16.0.173) since 8/21/19, 9:20 PM
# liquibase.exception.LockException: Could not acquire change log lock. Currently locked by autoscaler-metrics-0.autoscaler-metrics-set.cf.svc.cluster.local (172.16.0.173) since 8/21/19, 9:20 PM
# at liquibase.lockservice.StandardLockService.waitForLock(StandardLockService.java:230)
# at liquibase.Liquibase.update(Liquibase.java:184)
# at liquibase.Liquibase.update(Liquibase.java:179)
# at liquibase.integration.commandline.Main.doMigration(Main.java:1220)
# at liquibase.integration.commandline.Main.run(Main.java:199)
# at liquibase.integration.commandline.Main.main(Main.java:137)

require 'json'
require 'open3'

EXPECTED_ERROR='Could not acquire change log lock. Currently locked by'

def capture(*args)
  stdout, status = Open3.capture2(*args)
  return stdout if status.success?
  cmd = args.dup
  cmd.shift while Hash.try_convert(cmd.first)
  cmd.pop while Hash.try_convert(cmd.last)
  fail %Q<"#{cmd.join(' ')}" failed with exit status #{status.exitstatus}>
end

def namespace
  $namespace ||= JSON.parse(capture('helm list --output json'))['Releases']
    .select { |r| r['Status'].downcase == 'deployed' }
    .select { |r| r['Chart'].start_with? 'cf-' }
    .first['Namespace']
end

def run(*args)
  Process.wait Process.spawn(*args)
  return if $?.success?
  cmd = args.dup
  cmd.shift while Hash.try_convert(cmd.first)
  cmd.pop while Hash.try_convert(cmd.last)
  fail %Q<"#{cmd.join(' ')}" failed with exit status #{$?.exitstatus}>
end

postgres_pod = JSON.load(capture('kubectl', 'get', 'pods',
                                 '--namespace', namespace,
                                 '--selector', 'skiff-role-name==autoscaler-postgres',
                                 '--output', 'json'))['items']
  .find { |pod| pod['status']['conditions']
    .find { |status| status['type'] == 'Ready' }['status'] == 'True' }['metadata']['name']

psql_binary = capture('kubectl', 'exec', postgres_pod,
                      '--namespace', namespace,
                      '--', 'sh', '-c', 'echo /var/vcap/packages/postgres-*/bin/psql').split.first

run 'kubectl', 'exec', postgres_pod,
    '--namespace', namespace,
    '--', psql_binary,
    '--username', 'postgres',
    '--dbname', 'autoscaler',
    '--command', 'SELECT * FROM databasechangeloglock'

locked_bys = (1..Float::INFINITY).each do |count|
  STDOUT.printf "\rWaiting for pod to get stuck (try %s)...", count
  candidate_locked_bys = capture('kubectl', 'get', 'pods',
                                 '--namespace', namespace,
                                 '--selector', 'skiff-role-name==autoscaler-metrics',
                                 '--output', 'jsonpath={.items[*].metadata.name}')
    .split
    .map do |pod_name|
      capture('kubectl', 'logs', pod_name,
              '--namespace', namespace,
              '--container', 'autoscaler-metrics')
    end
    .select { |message| message.include? EXPECTED_ERROR }
    .map { |message| /Currently locked by (.*?) since /.match(message)[1] }
  break candidate_locked_bys unless candidate_locked_bys.empty?
  sleep 1
end
puts '' # new line after the wait
locked_bys.each do |locked_by|
  puts "Releasing lock for #{locked_by}"
  run 'kubectl', 'exec', postgres_pod,
      '--namespace', namespace,
      '--', psql_binary,
      '--username', 'postgres',
      '--dbname', 'autoscaler',
      '--command', %Q<UPDATE databasechangeloglock SET locked = FALSE WHERE lockedby = '#{locked_by}'>
end

b. Run the script.

tux > ruby recovery-autoscaler-metrics-liquidbase-stale-lock

All autoscaler pods should now enter a fully ready state.

31.7 api-group Pod Not Ready after Buildpack Update

When a built-in buildpack is updated as described in one of the scenarios below, the api-group pod will fail to reach a fully ready state upon restart. A built-in buildpack is a buildpack that is included as part of a SUSE Cloud Application Platform release and has no stack association by default.

The following scenarios can result in the api-group pod not being fully ready:

A built-in buildpack is assigned a stack association. For example:

tux > cf update-buildpack ruby_buildpack --assign-stack sle15

A built-in buildpack is updated to a version with an inherent stack association, such as those provided by upstream Cloud Foundry. For example:

tux > cf update-buildpack ruby_buildpack -p https://github.com/cloudfoundry/ruby-buildpack/releases/download/v1.7.43/ruby-buildpack-cflinuxfs3-v1.7.43.zip

31.7.1 Recovery and Prevention

In order to bring the api-group pod back to a fully ready state, use the following procedure.

1. Delete the buildpack that was updated to have a stack association.


tux > cf delete-buildpack ruby_buildpack

2. Restart the api-group pod.

tux > kubectl delete pod api-group-0 --namespace scf

In order to use a buildpack with a stack association and not encounter the issue described in this section, do the following. Instead of updating one of the built-in buildpacks to either assign a stack or update to one with an inherent stack association, create a new buildpack with the desired buildpack files and associated stack.

tux > cf create-buildpack ruby_cflinuxfs3 https://github.com/cloudfoundry/ruby-buildpack/releases/download/v1.7.41/ruby-buildpack-cflinuxfs3-v1.7.41.zip 1 --enable

31.8 mysql Pods Fail to Start

In some cases, mysql pods may fail to start for various reasons. This can affect the mysql pod in both uaa and scf .

31.8.1 Data initialization or incomplete transactions

1. Confirm whether the mysql pod failure is related to data initialization problems or incomplete transactions.

a. Perform an initial inspection of the mysql container's logs. A pre-start related error should be observed. If the mysql pod from scf is experiencing issues, substitute scf as the namespace.

tux > kubectl logs --namespace uaa mysql-0

b. Examine the /var/vcap/sys/log/pxc-mysql/mysql.err.log file from the mysql container filesystem. If the mysql pod from scf is experiencing issues, substitute scf as the namespace.

tux > kubectl exec --stdin --tty --namespace uaa mysql-0 -- bash -c 'cat /var/vcap/sys/log/pxc-mysql/mysql.err.log'


One of the following error messages should be found:

[ERROR] Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist
[ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.user' doesn't exist

[ERROR] Found 1 prepared transactions! It means that mysqld was not shut down properly last time and critical recovery information (last binlog or tc.log file) was manually deleted after a crash. You have to start mysqld with --tc-heuristic-recover switch to commit or rollback pending transactions.
[ERROR] Aborting

This can happen when mysql fails to initialize and read its data directory, possibly due to resource limitations in the cluster.
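For automated checks, the two failure signatures above can be told apart by simple substring matches. A hedged Ruby sketch (the helper name is invented; the error strings are taken from the log excerpts above):

```ruby
# Classify a mysql.err.log excerpt into one of the two failure modes
# described above. Returns :data_initialization, :incomplete_transactions,
# or nil when neither signature is present.
def classify_mysql_failure(log)
  if log.include?("Can't open and lock privilege tables")
    :data_initialization
  elsif log.include?('Found 1 prepared transactions!')
    :incomplete_transactions
  end
end

puts classify_mysql_failure('[ERROR] Found 1 prepared transactions! ...')
# => incomplete_transactions
```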

2. If the mysql pod failure is confirmed to be due to aborted transactions, perform the following procedure to return it to a running state. When using the procedure, it is important to note that:

In a high availability setup, replication will synchronize data to the new, clean persistent volume claim (PVC).

In a single availability setup, back up your data before applying the procedure as it will result in data loss. It is a recommended practice to regularly back up your data.

a. Delete the persistent volume claim (PVC) attached to the failed mysql pod. Identify the name of the PVC by using kubectl describe pod and inspecting the ClaimName field. If the mysql pod from scf is experiencing issues, substitute scf as the namespace.

tux > kubectl describe pod --namespace uaa mysql-0

tux > kubectl delete persistentvolumeclaim --namespace uaa mysql-data-mysql-0

b. Immediately delete the stuck mysql pod. It is important to get the order right as otherwise, PVC deletion will never terminate because of the attached pod. If the mysql pod from scf is experiencing issues, substitute scf as the namespace.

tux > kubectl delete pod --namespace uaa mysql-0


31.8.2 File ownership issues

If the mysql pod is in state READY: 0/1 and STATUS: Running , like this:

tux > kubectl get pods --namespace uaa
NAME                            READY   STATUS      RESTARTS   AGE
configgin-helper-0              1/1     Running     0          16m
mysql-0                         0/1     Running     0          16m
mysql-proxy-0                   1/1     Running     0          16m
post-deployment-setup-1-lg7vl   0/1     Completed   0          16m
secret-generation-1-fczw4       0/1     Completed   0          16m
uaa-0                           0/1     Running     0          16m

Verify if all files in the /var/vcap/store directory of the mysql-0 pod are owned by the vcap user:

tux > kubectl exec --stdin --tty --namespace uaa mysql-0 -- bash
mysql/0:/# ls -Al /var/vcap/store/
total 12
drwx------ 2 vcap nobody 4096 Mar  9 07:41 mysql_audit_logs
drwx------ 6 vcap nobody 4096 Mar  9 07:42 pxc-mysql

If not, please verify the configuration of your NFS share.
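The ownership check can also be scripted. A minimal Ruby sketch, parsing ls -Al output of the shape shown above (the helper name and the root-owned entry are invented for this example):

```ruby
# Parse `ls -Al` output and return the names of entries that are NOT
# owned by the expected user. Sample input mirrors the listing above,
# with one entry deliberately owned by root to show a failing check.
def entries_not_owned_by(listing, user)
  listing.lines
         .reject { |line| line.start_with?('total') }
         .map(&:split)
         .reject { |fields| fields[2] == user }
         .map { |fields| fields.last }
end

listing = <<~LS
  total 12
  drwx------ 2 vcap nobody 4096 Mar  9 07:41 mysql_audit_logs
  drwx------ 6 root nobody 4096 Mar  9 07:42 pxc-mysql
LS

puts entries_not_owned_by(listing, 'vcap').inspect
# => ["pxc-mysql"]
```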


A Appendix

A.1 Manual Configuration of Pod Security Policies

SUSE Cloud Application Platform 1.3.1 introduces built-in support for Pod Security Policies (PSPs), which are provided via Helm charts and are set up automatically, unlike older releases which require manual PSP setup. SUSE CaaS Platform and Microsoft AKS both require PSPs for Cloud Application Platform to operate correctly. This section provides instructions for configuring and applying the appropriate PSPs to older Cloud Application Platform releases.

See the upstream documentation at https://kubernetes.io/docs/concepts/policy/pod-security-policy/ , https://docs.cloudfoundry.org/concepts/roles.html , and https://docs.cloudfoundry.org/uaa/identity-providers.html#id-flow for more information on understanding and using PSPs.

Copy the following example into cap-psp-rbac.yaml :

---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: suse.cap.psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  # Privileged
  #default in suse.caasp.psp.unprivileged
  #privileged: false
  privileged: true
  # Volumes and File Systems
  volumes:
    # Kubernetes Pseudo Volume Types
    - configMap
    - secret
    - emptyDir
    - downwardAPI
    - projected
    - persistentVolumeClaim
    # Networked Storage
    - nfs
    - rbd
    - cephFS
    - glusterfs
    - fc
    - iscsi
    # Cloud Volumes
    - cinder
    - gcePersistentDisk
    - awsElasticBlockStore
    - azureDisk
    - azureFile
    - vsphereVolume
  allowedFlexVolumes: []
  # hostPath volumes are not allowed; pathPrefix must still be specified
  allowedHostPaths:
    - pathPrefix: /opt/kubernetes-hostpath-volumes
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  #default in suse.caasp.psp.unprivileged
  #allowPrivilegeEscalation: false
  allowPrivilegeEscalation: true
  #default in suse.caasp.psp.unprivileged
  #defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities:
    - SYS_RESOURCE
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: false
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: suse:cap:psp
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['suse.cap.psp']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cap:clusterrole
roleRef:
  kind: ClusterRole
  name: suse:cap:psp
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: default
    namespace: uaa
  - kind: ServiceAccount
    name: default
    namespace: scf
  - kind: ServiceAccount
    name: default
    namespace: stratos
  - kind: ServiceAccount
    name: default-privileged
    namespace: scf
  - kind: ServiceAccount
    name: node-reader
    namespace: scf

Apply it to your cluster with kubectl :

tux > kubectl create --filename cap-psp-rbac.yaml
podsecuritypolicy.extensions "suse.cap.psp" created
clusterrole.rbac.authorization.k8s.io "suse:cap:psp" created
clusterrolebinding.rbac.authorization.k8s.io "cap:clusterrole" created

Verify that the new PSPs exist by running the kubectl get psp command to list them. Then continue by deploying UAA and SCF. Ensure that your scf-config-values.yaml file specifies the name of your PSP in the kube: section. These settings will grant only a limited subset of roles to be privileged.

kube:
  psp:
    privileged: "suse.cap.psp"


Tip: Note that the example cap-psp-rbac.yaml file sets the name of the PSPs, which in the previous examples is suse.cap.psp .

A.1.1 Using Custom Pod Security Policies

When using a custom PSP, your scf-config-values.yaml file requires the SYS_RESOURCE capability to be added to the following roles:

sizing:
  cc_uploader:
    capabilities: ["SYS_RESOURCE"]
  diego_api:
    capabilities: ["SYS_RESOURCE"]
  diego_brain:
    capabilities: ["SYS_RESOURCE"]
  diego_ssh:
    capabilities: ["SYS_RESOURCE"]
  nats:
    capabilities: ["SYS_RESOURCE"]
  router:
    capabilities: ["SYS_RESOURCE"]
  routing_api:
    capabilities: ["SYS_RESOURCE"]

A.2 Complete suse/uaa values.yaml File

This is the complete output of helm inspect suse/uaa for the current SUSE Cloud Application Platform 1.5.1 release.

apiVersion: v1
appVersion: 1.5.1
description: A Helm chart for SUSE UAA
name: uaa
version: 2.19.1
scfVersion: 2.19.1

---
kube:
  auth: "rbac"
  external_ips: []

  # Whether HostPath volume mounts are available
  hostpath_available: false

  limits:
    nproc:
      hard: ""
      soft: ""
  organization: "cap"
  psp:
    default: ~
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""

  # Increment this counter to rotate all generated secrets
  secrets_generation_counter: 1

  storage_class:
    persistent: "persistent"
    shared: "shared"
config:
  # Flag to activate high-availability mode
  HA: false

  # Flag to verify instance counts against HA minimums
  HA_strict: true

  # Global memory configuration
  memory:
    # Flag to activate memory requests
    requests: false

    # Flag to activate memory limits
    limits: false

  # Global CPU configuration
  cpu:
    # Flag to activate cpu requests
    requests: false

    # Flag to activate cpu limits
    limits: false


  # Flag to specify whether to add Istio related annotations and labels
  use_istio: false

bosh:
  instance_groups: []
services:
  loadbalanced: false
secrets:
  # Administrator password for an external database server; this is required to
  # create the necessary databases. Only used if DB_EXTERNAL_HOST is set.
  #
  # This value is immutable and must not be changed once set.
  DB_EXTERNAL_PASSWORD: ~

  # A PEM-encoded TLS certificate for the Galera server.
  # This value uses a generated default.
  GALERA_SERVER_CERT: ~

  # A PEM-encoded TLS key for the Galera server.
  GALERA_SERVER_CERT_KEY: ~

  # PEM-encoded CA certificate used to sign the TLS certificate used by all
  # components to secure their communications.
  # This value uses a generated default.
  INTERNAL_CA_CERT: ~

  # PEM-encoded CA key.
  INTERNAL_CA_CERT_KEY: ~

  # PEM-encoded JWT certificate.
  # This value uses a generated default.
  JWT_SIGNING_CERT: ~

  # PEM-encoded JWT signing key.
  JWT_SIGNING_CERT_KEY: ~

  # Password used for the monit API.
  # This value uses a generated default.
  MONIT_PASSWORD: ~

  # The password for the MySQL server admin user.
  # This value uses a generated default.
  MYSQL_ADMIN_PASSWORD: ~

  # The password for the cluster logger health user.
  # This value uses a generated default.
  MYSQL_CLUSTER_HEALTH_PASSWORD: ~


  # Password used to authenticate to the MySQL Galera healthcheck endpoint.
  # This value uses a generated default.
  MYSQL_GALERA_HEALTHCHECK_ENDPOINT_PASSWORD: ~

  # The password for Basic Auth used to secure the MySQL proxy API.
  # This value uses a generated default.
  MYSQL_PROXY_ADMIN_PASSWORD: ~

  # A PEM-encoded TLS certificate for the MySQL server.
  # This value uses a generated default.
  MYSQL_SERVER_CERT: ~

  # A PEM-encoded TLS key for the MySQL server.
  MYSQL_SERVER_CERT_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SAML_SERVICEPROVIDER_CERT: ~

  # PEM-encoded key.
  SAML_SERVICEPROVIDER_CERT_KEY: ~

  # The password for access to the UAA database.
  # This value uses a generated default.
  UAADB_PASSWORD: ~

  # The password of the admin client - a client named admin with uaa.admin as an
  # authority.
  UAA_ADMIN_CLIENT_SECRET: ~

  # The server's ssl certificate. The default is a self-signed certificate and
  # should always be replaced for production deployments.
  # This value uses a generated default.
  UAA_SERVER_CERT: ~

  # The server's ssl private key. Only passphrase-less keys are supported.
  UAA_SERVER_CERT_KEY: ~

env:
  # Expiration for generated certificates (in days)
  CERT_EXPIRATION: "10950"

  # Database driver to use for the external database server used to manage the
  # UAA-internal database. Only used if DB_EXTERNAL_HOST is set. Currently only
  # `mysql` is valid.
  DB_EXTERNAL_DRIVER: "mysql"


  # Hostname for an external database server to use for the UAA-internal
  # database If not set, the internal database is used.
  DB_EXTERNAL_HOST: ~

  # Port for an external database server to use for the UAA-internal database.
  # Only used if DB_EXTERNAL_HOST is set.
  DB_EXTERNAL_PORT: "3306"

  # TLS configuration for the external database server to use for the
  # UAA-internal database. Only used if DB_EXTERNAL_HOST is set. Valid values
  # depend on which database driver is in use.
  DB_EXTERNAL_SSL_MODE: ~

  # Administrator user name for an external database server; this is required to
  # create the necessary databases. Only used if DB_EXTERNAL_HOST is set.
  DB_EXTERNAL_USER: ~

  # Base domain name of the UAA endpoint; `uaa.${DOMAIN}` must be correctly
  # configured to point to this UAA instance.
  DOMAIN: ~

  KUBERNETES_CLUSTER_DOMAIN: ~

  # The cluster's log level: off, fatal, error, warn, info, debug, debug1,
  # debug2.
  LOG_LEVEL: "info"

  # The log destination to talk to. This has to point to a syslog server.
  SCF_LOG_HOST: ~

  # The port used by rsyslog to talk to the log destination. It defaults to 514,
  # the standard port of syslog.
  SCF_LOG_PORT: "514"

  # The protocol used by rsyslog to talk to the log destination. The allowed
  # values are tcp, and udp. The default is tcp.
  SCF_LOG_PROTOCOL: "tcp"

  # If true, authenticate against the SMTP server using AUTH command. See
  # https://javamail.java.net/nonav/docs/api/com/sun/mail/smtp/package-summary.html
  SMTP_AUTH: "false"

  # SMTP from address, for password reset emails etc.
  SMTP_FROM_ADDRESS: ~

  # SMTP server host address, for password reset emails etc.
  SMTP_HOST: ~

  # SMTP server password, for password reset emails etc.
  SMTP_PASSWORD: ~

  # SMTP server port, for password reset emails etc.
  SMTP_PORT: "25"

  # If true, send STARTTLS command before logging in to SMTP server. See
  # https://javamail.java.net/nonav/docs/api/com/sun/mail/smtp/package-summary.html
  SMTP_STARTTLS: "false"

  # SMTP server username, for password reset emails etc.
  SMTP_USER: ~

  # The TCP port to report as the public port for the UAA server (root zone).
  UAA_PUBLIC_PORT: "2793"

# The sizing section contains configuration to change each individual instance
# group. Due to limitations on the allowable names, any dashes ("-") in the
# instance group names are replaced with underscores ("_").
sizing:
  # The mysql instance group contains the following jobs:
  #
  # - global-uaa-properties: Dummy BOSH job used to host global parameters that
  #   are required to configure SCF / fissile
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: pxc-mysql, galera-agent, gra-log-purger, cluster-health-logger,
  # bootstrap, mysql, and bpm
  mysql:
    # Node affinity rules can be specified here
    affinity: {}

    # The mysql instance group is enabled by the mysql feature.
    # It can scale between 1 and 7 instances.
    # The instance count must be an odd number (not divisible by 2).
    # For high availability it needs at least 3 instances.
    count: ~

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      mysql_data: 20

    # Unit [MiB]
    memory:
      request: 2500
      limit: ~

  # The mysql-proxy instance group contains the following jobs:
  #
  # - global-uaa-properties: Dummy BOSH job used to host global parameters that
  #   are required to configure SCF / fissile
  #
  # - switchboard-leader: Job to host the active/passive probe for mysql
  #   switchboard and leader election
  #
  # Also: bpm and proxy
  mysql_proxy:
    # Node affinity rules can be specified here
    affinity: {}

    # The mysql_proxy instance group is enabled by the mysql feature.
    # It can scale between 1 and 5 instances.
    # For high availability it needs at least 2 instances.
    count: ~

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 2500
      limit: ~

  # The post-deployment-setup instance group contains the following jobs:
  #
  # - global-uaa-properties: Dummy BOSH job used to host global parameters that
  #   are required to configure SCF / fissile
  #
  # - database-seeder: When using an external database server, seed it with the
  #   necessary databases.
  post_deployment_setup:
    # Node affinity rules can be specified here
    affinity: {}

    # The post_deployment_setup instance group cannot be scaled.
    count: ~

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The secret-generation instance group contains the following jobs:
  #
  # - generate-secrets: This job will generate the secrets for the cluster
  secret_generation:
    # Node affinity rules can be specified here
    affinity: {}

    # The secret_generation instance group cannot be scaled.
    count: ~

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The uaa instance group contains the following jobs:
  #
  # - global-uaa-properties: Dummy BOSH job used to host global parameters that
  #   are required to configure SCF / fissile
  #
  # - wait-for-database: This is a pre-start job to delay starting the rest of
  #   the role until a database connection is ready. Currently it only checks
  #   that a response can be obtained from the server, and not that it responds
  #   intelligently.
  #
  # - uaa: The UAA is the identity management service for Cloud Foundry. It's
  #   primary role is as an OAuth2 provider, issuing tokens for client
  #   applications to use when they act on behalf of Cloud Foundry users. It can
  #   also authenticate users with their Cloud Foundry credentials, and can act
  #   as an SSO service using those credentials (or others). It has endpoints
  #   for managing user accounts and for registering OAuth2 clients, as well as
  #   various other management functions.
  uaa:
    # Node affinity rules can be specified here
    affinity: {}

    # The uaa instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: ~

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 2100
      limit: ~

enable:
  # The mysql feature enables these instance groups: mysql and mysql_proxy
  mysql: true
ingress:
  # ingress.annotations allows specifying custom ingress annotations that gets
  # merged to the default annotations.
  annotations: {}

  # ingress.enabled enables ingress support - working ingress controller
  # necessary.
  enabled: false

  # ingress.tls.crt and ingress.tls.key, when specified, are used by the TLS
  # secret for the Ingress resource.
  tls: {}

A.3 Complete suse/scf values.yaml File

This is the complete output of helm inspect suse/cf for the current SUSE Cloud Application Platform 1.5.1 release.

apiVersion: v1
appVersion: 1.5.1
description: A Helm chart for SUSE Cloud Foundry
name: cf
version: 2.19.1
scfVersion: 2.19.1

---
kube:
  auth: "rbac"
  external_ips: []

  # Whether HostPath volume mounts are available
  hostpath_available: false

  limits:
    nproc:
      hard: ""
      soft: ""
  organization: "cap"
  psp:
    default: ~
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""

  # Increment this counter to rotate all generated secrets
  secrets_generation_counter: 1

  storage_class:
    persistent: "persistent"
    shared: "shared"
config:
  # Flag to activate high-availability mode
  HA: false

  # Flag to verify instance counts against HA minimums
  HA_strict: true

  # Global memory configuration
  memory:
    # Flag to activate memory requests
    requests: false

    # Flag to activate memory limits
    limits: false

  # Global CPU configuration
  cpu:
    # Flag to activate cpu requests
    requests: false

    # Flag to activate cpu limits
    limits: false

  # Flag to specify whether to add Istio related annotations and labels
  use_istio: false

bosh:
  instance_groups: []
services:
  loadbalanced: false
secrets:
  # PEM encoded RSA private key used to identify host.
  # This value uses a generated default.
  APP_SSH_KEY: ~

  # MD5 fingerprint of the host key of the SSH proxy that brokers connections to
  # application instances.
  APP_SSH_KEY_FINGERPRINT: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  AUCTIONEER_REP_CERT: ~

  # PEM-encoded key
  AUCTIONEER_REP_CERT_KEY: ~

  # PEM-encoded server certificate
  # This value uses a generated default.
  AUCTIONEER_SERVER_CERT: ~

  # PEM-encoded server key
  AUCTIONEER_SERVER_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler API public https server.
  # This includes the Autoscaler ApiServer and the Service Broker.
  # This value uses a generated default.
  AUTOSCALER_ASAPI_PUBLIC_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler API public https server. This
  # includes the Autoscaler ApiServer and the Service Broker.
  AUTOSCALER_ASAPI_PUBLIC_SERVER_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler API https server. This
  # includes the Autoscaler ApiServer and the Service Broker.
  # This value uses a generated default.
  AUTOSCALER_ASAPI_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler API https server. This includes the
  # Autoscaler ApiServer and the Service Broker.
  AUTOSCALER_ASAPI_SERVER_CERT_KEY: ~

  # A PEM-encoded TLS certificate for clients to connect to the Autoscaler
  # Metrics. This includes the Autoscaler Metrics Collector and Event Generator.
  # This value uses a generated default.
  AUTOSCALER_ASMETRICS_CLIENT_CERT: ~

  # A PEM-encoded TLS key for clients to connect to the Autoscaler Metrics. This
  # includes the Autoscaler Metrics Collector and Event Generator.
  AUTOSCALER_ASMETRICS_CLIENT_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler Metrics https server. This
  # includes the Autoscaler Metrics Collector.
  # This value uses a generated default.
  AUTOSCALER_ASMETRICS_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler Metrics https server. This includes
  # the Autoscaler Metrics Collector.
  AUTOSCALER_ASMETRICS_SERVER_CERT_KEY: ~

  # The password for the Autoscaler postgres database.
  # This value uses a generated default.
  AUTOSCALER_DB_PASSWORD: ~

  # A PEM-encoded TLS certificate for clients to connect to the Autoscaler
  # Scaling Engine.
  # This value uses a generated default.
  AUTOSCALER_SCALING_ENGINE_CLIENT_CERT: ~

  # A PEM-encoded TLS key for clients to connect to the Autoscaler Scaling
  # Engine.
  AUTOSCALER_SCALING_ENGINE_CLIENT_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler Scaling Engine https server.
  # This value uses a generated default.
  AUTOSCALER_SCALING_ENGINE_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler Scaling Engine https server.
  AUTOSCALER_SCALING_ENGINE_SERVER_CERT_KEY: ~

  # A PEM-encoded TLS certificate for clients to connect to the Autoscaler
  # Scheduler.
  # This value uses a generated default.
  AUTOSCALER_SCHEDULER_CLIENT_CERT: ~

  # A PEM-encoded TLS key for clients to connect to the Autoscaler Scheduler.
  AUTOSCALER_SCHEDULER_CLIENT_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler Scheduler https server.
  # This value uses a generated default.
  AUTOSCALER_SCHEDULER_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler Scheduler https server.
  AUTOSCALER_SCHEDULER_SERVER_CERT_KEY: ~

  # the uaa client secret used by Autoscaler.
  # This value uses a generated default.
  AUTOSCALER_UAA_CLIENT_SECRET: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  BBS_AUCTIONEER_CERT: ~

  # PEM-encoded key
  BBS_AUCTIONEER_CERT_KEY: ~

  # PEM-encoded client certificate.
  # This value uses a generated default.
  BBS_CLIENT_CRT: ~

  # PEM-encoded client key.
  BBS_CLIENT_CRT_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  BBS_REP_CERT: ~

  # PEM-encoded key
  BBS_REP_CERT_KEY: ~

  # PEM-encoded client certificate.
  # This value uses a generated default.
  BBS_SERVER_CRT: ~

  # PEM-encoded client key.
  BBS_SERVER_CRT_KEY: ~

  # This is the key secret Bits-Service uses and clients should use to generate
  # signed URLs.
  # This value uses a generated default.
  BITS_SERVICE_SECRET: ~

  # PEM-encoded client certificate.
  # This value uses a generated default.
  BITS_SERVICE_SSL_CERT: ~

  # PEM-encoded client key.
  BITS_SERVICE_SSL_CERT_KEY: ~

  # The basic auth password that Cloud Controller uses to connect to the
  # blobstore server. Auto-generated if not provided. Passwords must be
  # alphanumeric (URL-safe).
  # This value uses a generated default.
  BLOBSTORE_PASSWORD: ~

  # The secret used for signing URLs between Cloud Controller and blobstore.
  # This value uses a generated default.
  BLOBSTORE_SECURE_LINK: ~

  # The PEM-encoded certificate (optionally as a certificate chain) for serving
  # blobs over TLS/SSL.
  # This value uses a generated default.
  BLOBSTORE_TLS_CERT: ~

  # The PEM-encoded private key for signing TLS/SSL traffic.
  BLOBSTORE_TLS_CERT_KEY: ~

  # The password for the bulk api.
  # This value uses a generated default.
  BULK_API_PASSWORD: ~

  # A map of labels and encryption keys
  CC_DB_ENCRYPTION_KEYS: "~"

  # The PEM-encoded certificate for secure TLS communication over external
  # endpoints.
  # This value uses a generated default.
  CC_PUBLIC_TLS_CERT: ~

  # The PEM-encoded key for secure TLS communication over external endpoints.
  CC_PUBLIC_TLS_CERT_KEY: ~

  # The PEM-encoded certificate for internal cloud controller traffic.
  # This value uses a generated default.
  CC_SERVER_CRT: ~

  # The PEM-encoded private key for internal cloud controller traffic.
  CC_SERVER_CRT_KEY: ~

  # The PEM-encoded certificate for internal cloud controller uploader traffic.
  # This value uses a generated default.
  CC_UPLOADER_CRT: ~

  # The PEM-encoded private key for internal cloud controller uploader traffic.
  CC_UPLOADER_CRT_KEY: ~

  # PEM-encoded broker server certificate.
  # This value uses a generated default.
  CF_USB_BROKER_SERVER_CERT: ~

  # PEM-encoded broker server key.
  CF_USB_BROKER_SERVER_CERT_KEY: ~

  # The password for access to the Universal Service Broker.
  # This value uses a generated default.
  # Example: "password"
  CF_USB_PASSWORD: ~

  # The password for the cluster administrator.
  CLUSTER_ADMIN_PASSWORD: ~

  # PEM-encoded server certificate
  # This value uses a generated default.
  CREDHUB_SERVER_CERT: ~

  # PEM-encoded server key
  CREDHUB_SERVER_CERT_KEY: ~

  # Administrator password for an external database server; this is required to
  # create the necessary databases. Only used if DB_EXTERNAL_HOST is set.
  #
  # This value is immutable and must not be changed once set.
  DB_EXTERNAL_PASSWORD: ~

  # PEM-encoded client certificate
  # This value uses a generated default.
  DIEGO_CLIENT_CERT: ~

  # PEM-encoded client key
  DIEGO_CLIENT_CERT_KEY: ~

# PEM-encoded certificate.
# This value uses a generated default.
DOPPLER_CERT: ~

# PEM-encoded key.
DOPPLER_CERT_KEY: ~

# TLS certificate for Eirini server
# This value uses a generated default.
EIRINI_CLIENT_CRT: ~

# Private key associated with TLS certificate for Eirini server
EIRINI_CLIENT_CRT_KEY: ~

# Basic auth password to verify on incoming Service Broker requests
# This value uses a generated default.
EIRINI_PERSI_NFS_BROKER_PASSWORD: ~

# Basic auth user password for registry
# This value uses a generated default.
EIRINI_REGISTRY_PASSWORD: ~

# TLS certificate for Eirini server
# This value uses a generated default.
EIRINI_SERVER_CERT: ~

# Private key associated with TLS certificate for Eirini server
EIRINI_SERVER_CERT_KEY: ~

# A PEM-encoded TLS certificate for the Galera server.
# This value uses a generated default.
GALERA_SERVER_CERT: ~

# A PEM-encoded TLS key for the Galera server.
GALERA_SERVER_CERT_KEY: ~

# Basic auth password for access to the Cloud Controller's internal API.
# This value uses a generated default.
INTERNAL_API_PASSWORD: ~

# PEM-encoded CA certificate used to sign the TLS certificate used by all
# components to secure their communications.
# This value uses a generated default.
INTERNAL_CA_CERT: ~

# PEM-encoded CA key.
INTERNAL_CA_CERT_KEY: ~

# PEM-encoded JWT certificate.
# This value uses a generated default.
JWT_SIGNING_CERT: ~

# PEM-encoded JWT signing key.
JWT_SIGNING_CERT_KEY: ~

# PEM-encoded certificate.
# This value uses a generated default.
LOGGREGATOR_AGENT_CERT: ~

# PEM-encoded key.
LOGGREGATOR_AGENT_CERT_KEY: ~

# PEM-encoded client certificate for loggregator mutual authentication
# This value uses a generated default.
LOGGREGATOR_CLIENT_CERT: ~

# PEM-encoded client key for loggregator mutual authentication
LOGGREGATOR_CLIENT_CERT_KEY: ~

# PEM-encoded client certificate for loggregator forwarder authentication
# This value uses a generated default.
LOGGREGATOR_FORWARD_CERT: ~

# PEM-encoded client key for loggregator forwarder authentication
LOGGREGATOR_FORWARD_CERT_KEY: ~

# TLS cert for outgoing dropsonde connection
# This value uses a generated default.
LOGGREGATOR_OUTGOING_CERT: ~

# TLS key for outgoing dropsonde connection
LOGGREGATOR_OUTGOING_CERT_KEY: ~

# PEM-encoded certificate.
# This value uses a generated default.
LOG_CACHE_CERT: ~

# PEM-encoded key.
LOG_CACHE_CERT_KEY: ~

# The TLS cert for the auth proxy.
# This value uses a generated default.
LOG_CACHE_CF_AUTH_PROXY_EXTERNAL_CERT: ~

# The TLS key for the auth proxy.
LOG_CACHE_CF_AUTH_PROXY_EXTERNAL_CERT_KEY: ~

# PEM-encoded certificate.
# This value uses a generated default.
LOG_CACHE_TO_LOGGREGATOR_AGENT_CERT: ~

# PEM-encoded key.
LOG_CACHE_TO_LOGGREGATOR_AGENT_CERT_KEY: ~

# Password used for the monit API.
# This value uses a generated default.
MONIT_PASSWORD: ~

# The password for the MySQL server admin user.
# This value uses a generated default.
MYSQL_ADMIN_PASSWORD: ~

# The password for access to the Cloud Controller database.
# This value uses a generated default.
MYSQL_CCDB_ROLE_PASSWORD: ~

# The password for access to the usb config database.
# This value uses a generated default.
# Example: "password"
MYSQL_CF_USB_PASSWORD: ~

# The password for the cluster logger health user.
# This value uses a generated default.
MYSQL_CLUSTER_HEALTH_PASSWORD: ~

# The password for access to the credhub-user database.
# This value uses a generated default.
MYSQL_CREDHUB_USER_PASSWORD: ~

# Database password for the diego locket service.
# This value uses a generated default.
MYSQL_DIEGO_LOCKET_PASSWORD: ~

# The password for access to MySQL by diego.
# This value uses a generated default.
MYSQL_DIEGO_PASSWORD: ~

# Password used to authenticate to the MySQL Galera healthcheck endpoint.
# This value uses a generated default.
MYSQL_GALERA_HEALTHCHECK_ENDPOINT_PASSWORD: ~

# Database password for storing broker state for the Persi NFS Broker
# This value uses a generated default.
MYSQL_PERSI_NFS_PASSWORD: ~

# The password for Basic Auth used to secure the MySQL proxy API.
# This value uses a generated default.
MYSQL_PROXY_ADMIN_PASSWORD: ~

# The password for access to MySQL by the routing-api
# This value uses a generated default.
MYSQL_ROUTING_API_PASSWORD: ~

# A PEM-encoded TLS certificate for the MySQL server.
# This value uses a generated default.
MYSQL_SERVER_CERT: ~

# A PEM-encoded TLS key for the MySQL server.
MYSQL_SERVER_CERT_KEY: ~

# The password for access to NATS.
# This value uses a generated default.
NATS_PASSWORD: ~

# Basic auth password to verify on incoming Service Broker requests
# This value uses a generated default.
PERSI_NFS_BROKER_PASSWORD: ~

# LDAP service account password (required for LDAP integration only)
PERSI_NFS_DRIVER_LDAP_PASSWORD: "-"

# PEM-encoded server certificate
# This value uses a generated default.
REP_SERVER_CERT: ~

# PEM-encoded server key
REP_SERVER_CERT_KEY: ~

# PEM-encoded certificate
# This value uses a generated default.
RLP_GATEWAY_CERT: ~

# PEM-encoded key.
RLP_GATEWAY_CERT_KEY: ~

# Support for route services is disabled when no value is configured. A robust
# passphrase is recommended.
# This value uses a generated default.
ROUTER_SERVICES_SECRET: ~

# The public ssl cert for ssl termination. Will be ignored if ROUTER_TLS_PEM
# is set.
# This value uses a generated default.
ROUTER_SSL_CERT: ~

# The private ssl key for ssl termination. Will be ignored if ROUTER_TLS_PEM
# is set.
ROUTER_SSL_CERT_KEY: ~

# Password for HTTP basic auth to the varz/status endpoint.
# This value uses a generated default.
ROUTER_STATUS_PASSWORD: ~

# Array of private keys and certificates used for TLS handshakes with
# downstream clients. Each element in the array is an object containing fields
# 'private_key' and 'cert_chain', each of which supports a PEM block. This
# setting overrides ROUTER_SSL_CERT and ROUTER_SSL_KEY.
# Example:
# - cert_chain: |
#     -----BEGIN CERTIFICATE-----
#     -----END CERTIFICATE-----
#     -----BEGIN CERTIFICATE-----
#     -----END CERTIFICATE-----
#   private_key: |
#     -----BEGIN RSA PRIVATE KEY-----
#     -----END RSA PRIVATE KEY-----
ROUTER_TLS_PEM: ~

# PEM-encoded certificate
# This value uses a generated default.
SAML_SERVICEPROVIDER_CERT: ~

# PEM-encoded key.
SAML_SERVICEPROVIDER_CERT_KEY: ~

# The password for access to the uploader of staged droplets.
# This value uses a generated default.
STAGING_UPLOAD_PASSWORD: ~

# PEM-encoded certificate
# This value uses a generated default.
SYSLOG_ADAPT_CERT: ~

# PEM-encoded key.
SYSLOG_ADAPT_CERT_KEY: ~

# PEM-encoded certificate
# This value uses a generated default.
SYSLOG_RLP_CERT: ~

# PEM-encoded key.
SYSLOG_RLP_CERT_KEY: ~

# PEM-encoded certificate
# This value uses a generated default.
SYSLOG_SCHED_CERT: ~

# PEM-encoded key.
SYSLOG_SCHED_CERT_KEY: ~

# PEM-encoded client certificate for internal communication between the cloud
# controller and TPS.
# This value uses a generated default.
TPS_CC_CLIENT_CRT: ~

# PEM-encoded client key for internal communication between the cloud
# controller and TPS.
TPS_CC_CLIENT_CRT_KEY: ~

# PEM-encoded certificate for communication with the traffic controller of the
# log infrastructure.
# This value uses a generated default.
TRAFFICCONTROLLER_CERT: ~

# PEM-encoded key for communication with the traffic controller of the log
# infrastructure.
TRAFFICCONTROLLER_CERT_KEY: ~

# The password for access to the UAA database.
# This value uses a generated default.
UAADB_PASSWORD: ~

# The password of the admin client - a client named admin with uaa.admin as an
# authority.
UAA_ADMIN_CLIENT_SECRET: ~

# The CA certificate for UAA
UAA_CA_CERT: ~

# The password for UAA access by the Routing API.
# This value uses a generated default.
UAA_CLIENTS_CC_ROUTING_SECRET: ~

# Used for third party service dashboard SSO.
# This value uses a generated default.
UAA_CLIENTS_CC_SERVICE_DASHBOARDS_CLIENT_SECRET: ~

# Used for fetching service key values from CredHub.
# This value uses a generated default.
UAA_CLIENTS_CC_SERVICE_KEY_CLIENT_SECRET: ~

# Client secret for the CF smoke tests job
# This value uses a generated default.
UAA_CLIENTS_CF_SMOKE_TESTS_CLIENT_SECRET: ~

# The password for UAA access by the Universal Service Broker.
# This value uses a generated default.
UAA_CLIENTS_CF_USB_SECRET: ~

# The password for UAA access by the Cloud Controller for fetching usernames.
# This value uses a generated default.
UAA_CLIENTS_CLOUD_CONTROLLER_USERNAME_LOOKUP_SECRET: ~

# The password for UAA access by the client for the user-accessible credhub
# This value uses a generated default.
UAA_CLIENTS_CREDHUB_USER_CLI_SECRET: ~

# The password for UAA access by the SSH proxy.
# This value uses a generated default.
UAA_CLIENTS_DIEGO_SSH_PROXY_SECRET: ~

# The password for UAA access by doppler.
# This value uses a generated default.
UAA_CLIENTS_DOPPLER_SECRET: ~

# The password for UAA access by the gorouter.
# This value uses a generated default.
UAA_CLIENTS_GOROUTER_SECRET: ~

# The OAuth client secret used by the routing-api.
# This value uses a generated default.
UAA_CLIENTS_ROUTING_API_CLIENT_SECRET: ~

# The password for UAA access by the task creating the cluster administrator
# user
# This value uses a generated default.
UAA_CLIENTS_SCF_AUTO_CONFIG_SECRET: ~

# The password for UAA access by the TCP emitter.
# This value uses a generated default.
UAA_CLIENTS_TCP_EMITTER_SECRET: ~

# The password for UAA access by the TCP router.
# This value uses a generated default.
UAA_CLIENTS_TCP_ROUTER_SECRET: ~

# The server's ssl certificate. The default is a self-signed certificate and
# should always be replaced for production deployments.
# This value uses a generated default.
UAA_SERVER_CERT: ~

# The server's ssl private key. Only passphrase-less keys are supported.
UAA_SERVER_CERT_KEY: ~
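
To replace the generated self-signed pair for production, the certificate and key can be supplied as multi-line values in an override file. A minimal sketch with the PEM bodies elided (placeholders, not real material):

```yaml
# Override file fragment; supply your own PEM contents.
secrets:
  UAA_SERVER_CERT: |
    -----BEGIN CERTIFICATE-----
    ...certificate body elided...
    -----END CERTIFICATE-----
  UAA_SERVER_CERT_KEY: |
    -----BEGIN RSA PRIVATE KEY-----
    ...passphrase-less key body elided...
    -----END RSA PRIVATE KEY-----
```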

env:
  # The number of times Ginkgo will run a CATS test before treating it as a
  # failure. Individual failed runs will still be reported in the test output.
  ACCEPTANCE_TEST_FLAKE_ATTEMPTS: "3"

  # The number of parallel test executors to spawn for Cloud Foundry acceptance
  # tests. The larger the number the higher the stress on the system.
  ACCEPTANCE_TEST_NODES: "4"

  # List of domains (including scheme) from which Cross-Origin requests will be
  # accepted, a * can be used as a wildcard for any part of a domain.
  ALLOWED_CORS_DOMAINS: "[]"

  # Allow users to change the value of the app-level allow_ssh attribute.
  ALLOW_APP_SSH_ACCESS: "true"

  # Extra token expiry time while uploading big apps, in seconds.
  APP_TOKEN_UPLOAD_GRACE_PERIOD: "1200"

  # The db address for the Autoscaler postgres database.
  AUTOSCALER_DB_ADDRESS: "autoscaler-postgres-postgres.((KUBERNETES_NAMESPACE)).svc.((KUBERNETES_CLUSTER_DOMAIN))"

  # The tcp port the postgres database serves on
  AUTOSCALER_DB_PORT: "5432"

  # The role name of autoscaler postgres database
  AUTOSCALER_DB_ROLE_NAME: "postgres"

  # The name of the metadata label to query on worker nodes to get AZ
  # information.
  AZ_LABEL_NAME: "failure-domain.beta.kubernetes.io/zone"

  # List of allow / deny rules for the blobstore internal server. Will be
  # followed by 'deny all'. Each entry must be followed by a semicolon.
  BLOBSTORE_ACCESS_RULES: "allow 10.0.0.0/8; allow 172.16.0.0/12; allow 192.168.0.0/16;"

  # Maximal allowed file size for upload to blobstore, in megabytes.
  BLOBSTORE_MAX_UPLOAD_SIZE: "5000"

  # For requests to service brokers, this is the HTTP (open and read) timeout
  # setting, in seconds.
  BROKER_CLIENT_TIMEOUT_SECONDS: "70"

  # The set of CAT test suites to run. If not specified it falls back to a
  # hardwired set of suites.
  CATS_SUITES: ~

  # The key used to encrypt entries in the CC database
  CC_DB_CURRENT_KEY_LABEL: ""

  # URI for a CDN to use for buildpack downloads.
  CDN_URI: ""

  # Expiration for generated certificates (in days)
  CERT_EXPIRATION: "10950"

  # The OAuth2 authorities available to the cluster administrator.
  CLUSTER_ADMIN_AUTHORITIES: "scim.write,scim.read,openid,cloud_controller.admin,clients.read,clients.write,doppler.firehose,routing.router_groups.read,routing.router_groups.write"

  # 'build' attribute in the /v2/info endpoint
  CLUSTER_BUILD: "2.19.0"

  # 'description' attribute in the /v2/info endpoint
  CLUSTER_DESCRIPTION: "SUSE Cloud Foundry"

  # 'name' attribute in the /v2/info endpoint
  CLUSTER_NAME: "SCF"

  # 'version' attribute in the /v2/info endpoint
  CLUSTER_VERSION: "2"

  # Database driver to use for the external database server used to manage the
  # CF-internal databases. Only used if DB_EXTERNAL_HOST is set. Currently only
  # `mysql` is valid.
  DB_EXTERNAL_DRIVER: "mysql"

  # Hostname for an external database server to use for the CF-internal
  # databases (such as for the cloud controller database). If not set, the
  # internal database is used.
  DB_EXTERNAL_HOST: ~

  # Port for an external database server to use for the CF-internal databases.
  # Only used if DB_EXTERNAL_HOST is set.
  DB_EXTERNAL_PORT: "3306"

  # TLS configuration for the external database server to use for the
  # CF-internal databases. Only used if DB_EXTERNAL_HOST is set. Valid values
  # depend on which database driver is in use.
  DB_EXTERNAL_SSL_MODE: ~

  # Administrator user name for an external database server; this is required to
  # create the necessary databases. Only used if DB_EXTERNAL_HOST is set.
  DB_EXTERNAL_USER: ~
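
Taken together, the DB_EXTERNAL_* settings might appear in an override file like the following sketch (the hostname, user name, and password are placeholders; the password belongs under the secrets section described earlier in this file):

```yaml
# Override file fragment for an external MySQL server (placeholder values).
env:
  DB_EXTERNAL_HOST: mysql.example.com
  DB_EXTERNAL_PORT: "3306"
  DB_EXTERNAL_USER: scf-admin
secrets:
  DB_EXTERNAL_PASSWORD: changeme
```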

  # The standard amount of disk (in MB) given to an application when not
  # overridden by the user via manifest, command line, etc.
  DEFAULT_APP_DISK_IN_MB: "1024"

  # The standard amount of memory (in MB) given to an application when not
  # overridden by the user via manifest, command line, etc.
  DEFAULT_APP_MEMORY: "1024"

  # If set, apps pushed to spaces that allow SSH access will have SSH enabled by
  # default.
  DEFAULT_APP_SSH_ACCESS: "true"

  # The default stack to use if no custom stack is specified by an app.
  DEFAULT_STACK: "sle15"

  # The container disk capacity the cell should manage. If this capacity is
  # larger than the actual disk quota of the cell component, over-provisioning
  # will occur.
  DIEGO_CELL_DISK_CAPACITY_MB: "auto"

  # The memory capacity the cell should manage. If this capacity is larger than
  # the actual memory of the cell component, over-provisioning will occur.
  DIEGO_CELL_MEMORY_CAPACITY_MB: "auto"

  # Maximum network transmission unit length in bytes for application
  # containers.
  DIEGO_CELL_NETWORK_MTU: "1400"

  # A CIDR subnet mask specifying the range of subnets available to be assigned
  # to containers.
  DIEGO_CELL_SUBNET: "10.38.0.0/16"

  # Disable external buildpacks. Only admin buildpacks and system buildpacks
  # will be available to users.
  DISABLE_CUSTOM_BUILDPACKS: "false"

  # Base domain of the SCF cluster.
  # Example: "my-scf-cluster.com"
  DOMAIN: ~
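
DOMAIN is one of the few values with no usable default; a minimal override file typically pairs it with the UAA host settings, as in this sketch (the domain is a placeholder):

```yaml
# Override file fragment; example.com is a placeholder domain.
env:
  DOMAIN: example.com
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
```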

  # The number of versions of an application to keep. You will be able to
  # rollback to this amount of versions.
  DROPLET_MAX_STAGED_STORED: "5"

  # Downloads app-bits and buildpacks from the bits-service
  EIRINI_DOWNLOADER_IMAGE: "registry.suse.com/cap/recipe-downloader:0.18.0"

  # Executes the buildpackapplifecycle to build a Droplet
  EIRINI_EXECUTOR_IMAGE: "registry.suse.com/cap/recipe-executor:0.18.0"

  # Address of Kubernetes' Heapster installation, used for reading Cloud Foundry
  # app metrics.
  EIRINI_KUBE_HEAPSTER_ADDRESS: "http://heapster.kube-system/apis/metrics/v1alpha1"

  # The namespace used by Eirini for deploying applications.
  EIRINI_KUBE_NAMESPACE: "eirini"

  # Array of plans that the Eirini persi broker will expose to Cloud Foundry.
  # Example:
  # - id: "default-storageclass"
  #   name: "default"
  #   description: "Eirini persistence broker"
  #   free: true
  #   kube_storage_class: "persistent"
  #   default_size: "1Gi"
  EIRINI_PERSI_PLANS: "[]"

  # The port of the ssh proxy for Eirini
  EIRINI_SSH_PORT: "2222"

  # Uploads the Droplet to the bits-service
  EIRINI_UPLOADER_IMAGE: "registry.suse.com/cap/recipe-uploader:0.17.0"

  # By default, Cloud Foundry does not enable Cloud Controller request logging.
  # To enable this feature, you must set this property to "true". You can learn
  # more about the format of the logs here
  # https://docs.cloudfoundry.org/loggregator/cc-uaa-logging.html#cc
  ENABLE_SECURITY_EVENT_LOGGING: "false"

  # Enables setting the X-Forwarded-Proto header if SSL termination happened
  # upstream and the header value was set incorrectly. When this property is set
  # to true, the gorouter sets the header X-Forwarded-Proto to https. When this
  # value is set to false, the gorouter sets the header X-Forwarded-Proto to the
  # protocol of the incoming request.
  FORCE_FORWARDED_PROTO_AS_HTTPS: "false"

  # AppArmor profile name for garden-runc; set this to empty string to disable
  # AppArmor support
  GARDEN_APPARMOR_PROFILE: "garden-default"

  # URL pointing to the Docker registry used for fetching Docker images. If not
  # set, the Docker service default is used.
  GARDEN_DOCKER_REGISTRY: "registry-1.docker.io"

  # Override DNS servers to be used in containers; defaults to the same as the
  # host.
  GARDEN_LINUX_DNS_SERVER: ""

  # The filesystem driver to use (btrfs or overlay-xfs).
  GARDEN_ROOTFS_DRIVER: "btrfs"

  # Location of the proxy to use for secure web access.
  HTTPS_PROXY: ~

  # Location of the proxy to use for regular web access.
  HTTP_PROXY: ~

  # A comma-separated whitelist of insecure Docker registries in the form of
  # '<HOSTNAME|IP>:PORT'. Each registry must be quoted separately.
  #
  # Example: "\"docker-registry.example.com:80\", \"hello.example.org:443\""
  INSECURE_DOCKER_REGISTRIES: ""

  KUBERNETES_CLUSTER_DOMAIN: ~

  # Allow the secrets-generator to auto-approve the kube cert signing requests
  # it makes.
  KUBE_CSR_AUTO_APPROVAL: "false"

  # The cluster's log level: off, fatal, error, warn, info, debug, debug1,
  # debug2.
  LOG_LEVEL: "info"

  # The maximum amount of disk a user can request for an application via
  # manifest, command line, etc., in MB. See also DEFAULT_APP_DISK_IN_MB for the
  # standard amount.
  MAX_APP_DISK_IN_MB: "2048"

  # Maximum health check timeout that can be set for an app, in seconds.
  MAX_HEALTH_CHECK_TIMEOUT: "180"

  # Sets the maximum allowed size of the client request body, specified in the
  # “Content-Length” request header field, in megabytes. If the size in a
  # request exceeds the configured value, the 413 (Request Entity Too Large)
  # error is returned to the client. Please be aware that browsers cannot
  # correctly display this error. Setting size to 0 disables checking of client
  # request body size. This limits application uploads, buildpack uploads, etc.
  NGINX_MAX_REQUEST_BODY_SIZE: "2048"

  # Comma separated list of IP addresses and domains which should not be
  # directed through a proxy, if any.
  NO_PROXY: ~

  # Comma separated list of white-listed options that may be set during create
  # or bind operations.
  # Example:
  # "uid,gid,allow_root,allow_other,nfs_uid,nfs_gid,auto_cache,fsname,username,password"
  PERSI_NFS_ALLOWED_OPTIONS: "uid,gid,auto_cache,username,password"

  # Comma separated list of default values for nfs mount options. If a default
  # is specified with an option not included in PERSI_NFS_ALLOWED_OPTIONS, then
  # this default value will be set and it won't be overridable.
  PERSI_NFS_DEFAULT_OPTIONS: ~

  # Comma separated list of white-listed options that may be accepted in the
  # mount_config options. Note a specific 'sloppy_mount:true' volume option
  # tells the driver to ignore non-white-listed options, while a
  # 'sloppy_mount:false' tells the driver to fail fast instead when receiving a
  # non-white-listed option.
  #
  # Example:
  # "allow_root,allow_other,nfs_uid,nfs_gid,auto_cache,sloppy_mount,fsname"
  PERSI_NFS_DRIVER_ALLOWED_IN_MOUNT: "auto_cache"

  # Comma separated list of white-listed options that may be configured in the
  # mount_config.source URL query params.
  # Example: "uid,gid,auto-traverse-mounts,dircache"
  PERSI_NFS_DRIVER_ALLOWED_IN_SOURCE: "uid,gid"

  # Comma separated list of default values for options that may be configured in
  # the mount_config options, formatted as 'option:default'. If an option is not
  # specified in the volume mount, or the option is not white-listed, then the
  # specified default value will be used instead.
  #
  # Example:
  # "allow_root:false,nfs_uid:2000,nfs_gid:2000,auto_cache:true,sloppy_mount:true"
  PERSI_NFS_DRIVER_DEFAULT_IN_MOUNT: "auto_cache:true"

  # Comma separated list of default values for options in the source URL query
  # params, formatted as 'option:default'. If an option is not specified in the
  # volume mount, or the option is not white-listed, then the specified default
  # value will be applied.
  PERSI_NFS_DRIVER_DEFAULT_IN_SOURCE: ~

  # Disable Persi NFS driver
  PERSI_NFS_DRIVER_DISABLE: "false"

  # LDAP server host name or ip address (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_HOST: ""

  # LDAP server port (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_PORT: "389"

  # LDAP server protocol (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_PROTOCOL: "tcp"

  # LDAP service account user name (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_USER: ""

  # LDAP fqdn for user records we will search against when looking up user uids
  # (required for LDAP integration only)
  # Example: "cn=Users,dc=corp,dc=test,dc=com"
  PERSI_NFS_DRIVER_LDAP_USER_FQDN: ""
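
As an illustration, the LDAP-related settings above could be grouped in an override file like this sketch (host, account name, FQDN, and password are all placeholders):

```yaml
# Override file fragment for Persi NFS LDAP integration (placeholder values).
env:
  PERSI_NFS_DRIVER_LDAP_HOST: "ldap.example.com"
  PERSI_NFS_DRIVER_LDAP_PORT: "389"
  PERSI_NFS_DRIVER_LDAP_PROTOCOL: "tcp"
  PERSI_NFS_DRIVER_LDAP_USER: "svc-nfs"
  PERSI_NFS_DRIVER_LDAP_USER_FQDN: "cn=Users,dc=corp,dc=test,dc=com"
secrets:
  PERSI_NFS_DRIVER_LDAP_PASSWORD: "changeme"
```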

  # The name of the metadata label to query on worker nodes to get placement tag
  # information, also known as isolation segments. When set, the cells will
  # query their worker node for placement information and inject the result into
  # cloudfoundry via the KUBE_PZ parameter. When left at the default, no custom
  # placement processing is done.
  PZ_LABEL_NAME: ""

  # Certificates to add to the rootfs trust store. Multiple certs are possible
  # by concatenating their definitions into one big block of text.
  ROOTFS_TRUSTED_CERTS: ""

  # The algorithm used by the router to distribute requests for a route across
  # backends. Supported values are round-robin and least-connection.
  ROUTER_BALANCING_ALGORITHM: "round-robin"

  # How to handle client certificates. Supported values are none, request, or
  # require. See
  # https://docs.cloudfoundry.org/adminguide/securing-traffic.html#gorouter_mutual_auth
  # for more information.
  ROUTER_CLIENT_CERT_VALIDATION: "request"

  # How to handle the x-forwarded-client-cert (XFCC) HTTP header. Supported
  # values are always_forward, forward, and sanitize_set. See
  # https://docs.cloudfoundry.org/concepts/http-routing.html for more
  # information.
  ROUTER_FORWARDED_CLIENT_CERT: "always_forward"

  # The log destination to talk to. This has to point to a syslog server.
  SCF_LOG_HOST: ~

  # The port used by rsyslog to talk to the log destination. It defaults to 514,
  # the standard port of syslog.
  SCF_LOG_PORT: "514"

  # The protocol used by rsyslog to talk to the log destination. The allowed
  # values are tcp, and udp. The default is tcp.
  SCF_LOG_PROTOCOL: "tcp"
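
For example, forwarding cluster logs to an external syslog server could be expressed in an override file like this sketch (the host name is a placeholder):

```yaml
# Override file fragment for log forwarding (placeholder host).
env:
  SCF_LOG_HOST: syslog.example.com
  SCF_LOG_PORT: "514"
  SCF_LOG_PROTOCOL: "tcp"
```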

  # If true, authenticate against the SMTP server using AUTH command. See
  # https://javamail.java.net/nonav/docs/api/com/sun/mail/smtp/package-summary.html
  SMTP_AUTH: "false"

  # SMTP from address, for password reset emails etc.
  SMTP_FROM_ADDRESS: ~

  # SMTP server host address, for password reset emails etc.
  SMTP_HOST: ~

  # SMTP server password, for password reset emails etc.
  SMTP_PASSWORD: ~

  # SMTP server port, for password reset emails etc.
  SMTP_PORT: "25"

  # If true, send STARTTLS command before logging in to SMTP server. See
  # https://javamail.java.net/nonav/docs/api/com/sun/mail/smtp/package-summary.html
  SMTP_STARTTLS: "false"

  # SMTP server username, for password reset emails etc.
  SMTP_USER: ~
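
The SMTP settings above might be combined as follows for an authenticated STARTTLS relay; this is only a sketch, and the host, addresses, user, and password are placeholders:

```yaml
# Override file fragment for password-reset email delivery (placeholder values).
env:
  SMTP_HOST: smtp.example.com
  SMTP_PORT: "587"
  SMTP_STARTTLS: "true"
  SMTP_AUTH: "true"
  SMTP_USER: notifications
  SMTP_PASSWORD: changeme
  SMTP_FROM_ADDRESS: cf-admin@example.com
```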

  # Timeout for staging an app, in seconds.
  STAGING_TIMEOUT: "900"

  # Support contact information for the cluster
  SUPPORT_ADDRESS: "https://scc.suse.com"

  # The number of times Ginkgo will run a SITS test before treating it as a
  # failure. Individual failed runs will still be reported in the test output.
  SYNC_INTEGRATION_TESTS_FLAKE_ATTEMPTS: "3"

  # Regex for which SITS tests the test runner should focus on executing.
  SYNC_INTEGRATION_TESTS_FOCUS: ~

  # The number of parallel test executors to spawn for Cloud Foundry sync
  # integration tests.
  SYNC_INTEGRATION_TESTS_NODES: "4"

  # Regex for which SITS tests the test runner should skip.
  SYNC_INTEGRATION_TESTS_SKIP: ~

  # Whether the output of the sync integration tests should be verbose or not.
  SYNC_INTEGRATION_TESTS_VERBOSE: "false"

  # TCP routing domain of the SCF cluster; only used for testing.
  # Example: "tcp.my-scf-cluster.com"
  TCP_DOMAIN: ~

  # Concatenation of trusted CA certificates to be made available on the cell.
  TRUSTED_CERTS: ~

  # The host name of the UAA server (root zone)
  UAA_HOST: ~

  # The tcp port the UAA server (root zone) listens on for requests.
  UAA_PORT: "2793"

  # The TCP port to report as the public port for the UAA server (root zone).
  UAA_PUBLIC_PORT: "2793"

  # Whether or not to use privileged containers for buildpack based
  # applications. Containers with a docker-image-based rootfs will continue to
  # always be unprivileged.
  USE_DIEGO_PRIVILEGED_CONTAINERS: "false"

  # Whether or not to use privileged containers for staging tasks.
  USE_STAGER_PRIVILEGED_CONTAINERS: "false"

# The sizing section contains configuration to change each individual instance
# group. Due to limitations on the allowable names, any dashes ("-") in the
# instance group names are replaced with underscores ("_").
sizing:
  # The adapter instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: adapter and bpm
  adapter:
    # Node affinity rules can be specified here
    affinity: {}

    # The adapter instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: ~

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~
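
As an illustration, scaling the adapter group for high availability and capping its memory could be expressed in an override file like this sketch (the count and limit shown are example choices, not recommendations):

```yaml
# Override file fragment for instance group sizing (example values).
sizing:
  adapter:
    count: 2          # at least 2 for high availability
    memory:
      request: 128    # MiB
      limit: 256      # MiB
```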

  # The api-group instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # - cloud_controller_ng: The Cloud Controller provides the primary Cloud
  #   Foundry API that is used by the CF CLI. The Cloud Controller uses a
  #   database to keep tables for organizations, spaces, apps, services,
  #   service instances, user roles, and more. Typically multiple instances of
  #   Cloud Controller are load balanced.
  #
  # - route_registrar: Used for registering routes
  #
  # Also: bpm, statsd_injector, go-buildpack, go-buildpack, binary-buildpack,
  # binary-buildpack, nodejs-buildpack, nodejs-buildpack, ruby-buildpack,
  # ruby-buildpack, php-buildpack, php-buildpack, python-buildpack,
  # python-buildpack, staticfile-buildpack, staticfile-buildpack,
  # nginx-buildpack, nginx-buildpack, java-buildpack, java-buildpack,
  # dotnet-core-buildpack, and dotnet-core-buildpack
  api_group:
    # Node affinity rules can be specified here
    affinity: {}

    # The api_group instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: ~

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 3800
      limit: ~

  # The autoscaler-actors instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: scheduler, scalingengine, operator, and bpm
  autoscaler_actors:
    # Node affinity rules can be specified here
    affinity: {}

    # The autoscaler_actors instance group can be enabled by the autoscaler
    # feature.
    # It can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: ~

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

# Unit [MiB]

314 Complete suse/scf values.yaml File SUSE Cloud Applic… 1.5.1

Page 330: Guides Administration, and User Deployment, · 11.4 Install Helm Client and Tiller 129 11.5 Default Storage Class 130 11.6 DNS Configuration 130 11.7 Deployment Configuration 131

memory: request: 2350 limit: ~

# The autoscaler-api instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - route_registrar: Used for registering routes
#
# Also: apiserver and bpm
autoscaler_api:
  # Node affinity rules can be specified here
  affinity: {}

  # The autoscaler_api instance group can be enabled by the autoscaler
  # feature.
  # It can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 256
    limit: ~

# The autoscaler-metrics instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# Also: metricscollector, eventgenerator, and bpm
autoscaler_metrics:
  # Node affinity rules can be specified here
  affinity: {}

  # The autoscaler_metrics instance group can be enabled by the autoscaler
  # feature.
  # It can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 4000
    limit: ~

  # Unit [MiB]
  memory:
    request: 1024
    limit: ~

# The autoscaler-postgres instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - postgres: The Postgres server provides a single instance Postgres database
#   that can be used with the Cloud Controller or the UAA. It does not provide
#   highly-available configuration.
autoscaler_postgres:
  # Node affinity rules can be specified here
  affinity: {}

  # The autoscaler_postgres instance group can be enabled by the autoscaler
  # feature.
  # It cannot be scaled.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  disk_sizes:
    postgres_data: 5

  # Unit [MiB]
  memory:
    request: 1024
    limit: ~

# The bits instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - eirinifs-sle15: This job copies the eirinifs-sle15 to a desired location
#
# Also: statsd_injector, bpm, and bits-service
bits:
  # Node affinity rules can be specified here
  affinity: {}

  # The bits instance group can be enabled by the eirini feature.
  # It cannot be scaled.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 256
    limit: ~

# The blobstore instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - route_registrar: Used for registering routes
#
# Also: blobstore and bpm
blobstore:
  # Node affinity rules can be specified here
  affinity: {}

  # The blobstore instance group cannot be scaled.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  disk_sizes:
    blobstore_data: 50

  # Unit [MiB]
  memory:
    request: 500
    limit: ~

# The cc-clock instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - wait-for-api: Wait for API to be ready before starting any jobs
#
# - cloud_controller_clock: The Cloud Controller Clock runs the Diego Sync job
#   to keep the actual state of running processes in Diego in sync with Cloud
#   Controller's desired state. Additionally, the Clock schedules periodic
#   clean up jobs to prune app usage events, audit events, failed jobs, and
#   more.
#
# Also: statsd_injector and bpm
cc_clock:
  # Node affinity rules can be specified here
  affinity: {}

  # The cc_clock instance group can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 750
    limit: ~

# The cc-uploader instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# Also: tps, cc_uploader, and bpm
cc_uploader:
  # Node affinity rules can be specified here
  affinity: {}

  # The cc_uploader instance group can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 4000
    limit: ~

  # Unit [MiB]
  memory:
    request: 128
    limit: ~

# The cc-worker instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - cloud_controller_worker: Cloud Controller worker processes background
#   tasks submitted via the.
#
# Also: bpm
cc_worker:
  # Node affinity rules can be specified here
  affinity: {}

  # The cc_worker instance group can scale between 1 and 65535 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 750
    limit: ~

# The cf-usb-group instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - route_registrar: Used for registering routes
#
# Also: cf-usb and bpm
cf_usb_group:
  # Node affinity rules can be specified here
  affinity: {}

  # The cf_usb_group instance group is enabled by the cf_usb feature.
  # It can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 128
    limit: ~

# The configure-eirini instance group contains the following jobs:
#
# - configure-eirini-scf: Creates and configures components needed for Eirini
configure_eirini:
  # Node affinity rules can be specified here
  affinity: {}

  # The configure_eirini instance group can be enabled by the eirini feature.
  # It cannot be scaled.
  count: ~

  # Unit [millicore]
  cpu:
    request: 1000
    limit: ~

  # Unit [MiB]
  memory:
    request: 256
    limit: ~

# The credhub-user instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - route_registrar: Used for registering routes
#
# Also: credhub and bpm
credhub_user:
  # Node affinity rules can be specified here
  affinity: {}

  # The credhub_user instance group can be enabled by the credhub feature.
  # It cannot be scaled.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 2000
    limit: ~

# The diego-api instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - patch-properties: Dummy BOSH job used to host parameters that are used in
#   SCF patches for upstream bugs
#
# Also: bbs and cfdot
diego_api:
  # Node affinity rules can be specified here
  affinity: {}

  # The diego_api instance group can be disabled by the eirini feature.
  # It can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 128
    limit: ~

# The diego-brain instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - patch-properties: Dummy BOSH job used to host parameters that are used in
#   SCF patches for upstream bugs
#
# Also: auctioneer and cfdot
diego_brain:
  # Node affinity rules can be specified here
  affinity: {}

  # The diego_brain instance group can be disabled by the eirini feature.
  # It can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 4000
    limit: ~

  # Unit [MiB]
  memory:
    request: 128
    limit: ~

# The diego-cell instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - get-kubectl: This job exists only to ensure the presence of the kubectl
#   binary in the role referencing it.
#
# - wait-for-uaa: Wait for UAA to be ready before starting any jobs
#
# - patch-properties: Dummy BOSH job used to host parameters that are used in
#   SCF patches for upstream bugs
#
# Also: rep, cfdot, route_emitter, garden, groot-btrfs,
# cflinuxfs3-rootfs-setup, cf-sle12-setup, sle15-rootfs-setup, nfsv3driver,
# and mapfs
diego_cell:
  # Node affinity rules can be specified here
  affinity: {}

  # The diego_cell instance group can be disabled by the eirini feature.
  # It can scale between 1 and 254 instances.
  # For high availability it needs at least 3 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 4000
    limit: ~

  disk_sizes:
    grootfs_data: 50

  # Unit [MiB]
  memory:
    request: 2800
    limit: ~
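The Diego cells run the application workloads, so they are the group most often resized. A hypothetical override, again assuming the top-level `sizing:` key from the complete values.yaml, might scale the cells to their high-availability minimum and enlarge the layered-filesystem store:

```yaml
# Hypothetical sizing override for the Diego cells (assumes a top-level
# sizing: key as in the complete values.yaml).
sizing:
  diego_cell:
    # The HA minimum for diego_cell is 3 (valid range: 1-254).
    count: 3
    disk_sizes:
      # Larger grootfs volume for clusters hosting many app droplets.
      grootfs_data: 100
```

Note that these groups only run when the eirini feature is disabled, since eirini disables diego_cell.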

# The diego-ssh instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# Also: ssh_proxy and file_server
diego_ssh:
  # Node affinity rules can be specified here
  affinity: {}

  # The diego_ssh instance group can be disabled by the eirini feature.
  # It can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 128
    limit: ~

# The doppler instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - route_registrar: Used for registering routes
#
# Also: log-cache-gateway, log-cache-nozzle, log-cache-cf-auth-proxy,
# log-cache-expvar-forwarder, log-cache, doppler, and bpm
doppler:
  # Node affinity rules can be specified here
  affinity: {}

  # The doppler instance group can scale between 1 and 65535 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 410
    limit: ~

# The eirini instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - patch-properties: Dummy BOSH job used to host parameters that are used in
#   SCF patches for upstream bugs
#
# Also: bpm, eirini-loggregator-bridge, and opi
eirini:
  # Node affinity rules can be specified here
  affinity: {}

  # The eirini instance group can be enabled by the eirini feature.
  # It cannot be scaled.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 256
    limit: ~

# The eirini-persi instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - patch-properties: Dummy BOSH job used to host parameters that are used in
#   SCF patches for upstream bugs
#
# Also: bpm, eirini-persi-broker, and eirini-persi
eirini_persi:
  # Node affinity rules can be specified here
  affinity: {}

  # The eirini_persi instance group can be enabled by the eirini feature.
  # It cannot be scaled.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 256
    limit: ~

# The eirini-ssh instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - patch-properties: Dummy BOSH job used to host parameters that are used in
#   SCF patches for upstream bugs
#
# Also: bpm, eirini-ssh-proxy, and eirini-ssh-extension
eirini_ssh:
  # Node affinity rules can be specified here
  affinity: {}

  # The eirini_ssh instance group can be enabled by the eirini feature.
  # It cannot be scaled.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 256
    limit: ~

# The locket instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - patch-properties: Dummy BOSH job used to host parameters that are used in
#   SCF patches for upstream bugs
#
# Also: locket
locket:
  # Node affinity rules can be specified here
  affinity: {}

  # The locket instance group can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 128
    limit: ~

# The log-api instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - route_registrar: Used for registering routes
#
# Also: loggregator_trafficcontroller, reverse_log_proxy,
# reverse_log_proxy_gateway, and bpm
log_api:
  # Node affinity rules can be specified here
  affinity: {}

  # The log_api instance group can scale between 1 and 65535 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 128
    limit: ~

# The log-cache-scheduler instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - log-cache-scheduler-properties: Dummy BOSH job used to host parameters
#   that are used in SCF patches
#
# Also: log-cache-scheduler, log-cache-expvar-forwarder, and bpm
log_cache_scheduler:
  # Node affinity rules can be specified here
  affinity: {}

  # The log_cache_scheduler instance group can scale between 1 and 65535
  # instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 410
    limit: ~

# The loggregator-agent instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# Also: loggr-expvar-forwarder, loggregator_agent, and bpm
loggregator_agent:
  # Node affinity rules can be specified here
  affinity: {}

  # The loggregator_agent instance group can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: ~
    limit: ~

  # Unit [MiB]
  memory:
    request: ~
    limit: ~

# The mysql instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - patch-properties: Dummy BOSH job used to host parameters that are used in
#   SCF patches for upstream bugs
#
# Also: pxc-mysql, galera-agent, gra-log-purger, cluster-health-logger,
# bootstrap, mysql, and bpm
mysql:
  # Node affinity rules can be specified here
  affinity: {}

  # The mysql instance group is enabled by the mysql feature.
  # It can scale between 1 and 7 instances.
  # The instance count must be an odd number (not divisible by 2).
  # For high availability it needs at least 3 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  disk_sizes:
    mysql_data: 20

  # Unit [MiB]
  memory:
    request: 2500
    limit: ~
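The odd-count constraint above exists because the Galera cluster needs a quorum majority to avoid split-brain. A hypothetical override, assuming the top-level `sizing:` key from the complete values.yaml, would therefore pick 3, 5, or 7 instances:

```yaml
# Hypothetical sizing override for the mysql group (assumes a top-level
# sizing: key). The count must be odd and between 1 and 7; 3 is the
# high-availability minimum.
sizing:
  mysql:
    count: 3
    disk_sizes:
      # Per-instance persistent volume size, in GB as in the default above.
      mysql_data: 20
```

An even count such as 2 or 4 would be rejected by the chart's constraint and, even if forced, could not form a clean Galera majority.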

# The mysql-proxy instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - switchboard-leader: Job to host the active/passive probe for mysql
#   switchboard and leader election
#
# Also: bpm and proxy
mysql_proxy:
  # Node affinity rules can be specified here
  affinity: {}

  # The mysql_proxy instance group is enabled by the mysql feature.
  # It can scale between 1 and 5 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 2500
    limit: ~

# The nats instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - nats: The NATS server provides a publish-subscribe messaging system for
#   the Cloud Controller, the DEA, HM9000, and other Cloud Foundry components.
#
# Also: bpm
nats:
  # Node affinity rules can be specified here
  affinity: {}

  # The nats instance group can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 128
    limit: ~

# The nfs-broker instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# Also: nfsbroker
nfs_broker:
  # Node affinity rules can be specified here
  affinity: {}

  # The nfs_broker instance group can be disabled by the eirini feature.
  # It can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 128
    limit: ~

# The post-deployment-setup instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - database-seeder: When using an external database server, seed it with the
#   necessary databases.
#
# - uaa-create-user: Create the initial user in UAA
#
# - configure-scf: Uses the cf CLI to configure SCF once it's online (things
#   like proxy settings, service brokers, etc.)
post_deployment_setup:
  # Node affinity rules can be specified here
  affinity: {}

  # The post_deployment_setup instance group cannot be scaled.
  count: ~

  # Unit [millicore]
  cpu:
    request: 1000
    limit: ~

  # Unit [MiB]
  memory:
    request: 256
    limit: ~

# The router instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - gorouter: Gorouter maintains a dynamic routing table based on updates
#   received from NATS and (when enabled) the Routing API. This routing table
#   maps URLs to backends. The router finds the URL in the routing table that
#   most closely matches the host header of the request and load balances
#   across the associated backends.
#
# Also: bpm
router:
  # Node affinity rules can be specified here
  affinity: {}

  # The router instance group can scale between 1 and 65535 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 4000
    limit: ~

  # Unit [MiB]
  memory:
    request: 128
    limit: ~

# The routing-api instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# Also: bpm and routing-api
routing_api:
  # Node affinity rules can be specified here
  affinity: {}

  # The routing_api instance group can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 4000
    limit: ~

  # Unit [MiB]
  memory:
    request: 128
    limit: ~

# The secret-generation instance group contains the following jobs:
#
# - generate-secrets: This job will generate the secrets for the cluster
secret_generation:
  # Node affinity rules can be specified here
  affinity: {}

  # The secret_generation instance group cannot be scaled.
  count: ~

  # Unit [millicore]
  cpu:
    request: 1000
    limit: ~

  # Unit [MiB]
  memory:
    request: 256
    limit: ~

# The syslog-scheduler instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# Also: scheduler and bpm
syslog_scheduler:
  # Node affinity rules can be specified here
  affinity: {}

  # The syslog_scheduler instance group can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 128
    limit: ~

# The tcp-router instance group contains the following jobs:
#
# - global-properties: Dummy BOSH job used to host global parameters that are
#   required to configure SCF
#
# - authorize-internal-ca: Install both internal and UAA CA certificates
#
# - wait-for-uaa: Wait for UAA to be ready before starting any jobs
#
# Also: tcp_router and bpm
tcp_router:
  # Node affinity rules can be specified here
  affinity: {}

  # The tcp_router instance group can scale between 1 and 3 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 128
    limit: ~

  ports:
    tcp_route:
      count: 9
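The `ports.tcp_route.count` setting controls how many ports the TCP router exposes for TCP routes; each TCP route consumes one port. A hypothetical override, assuming the top-level `sizing:` key from the complete values.yaml, could reserve more ports and run two routers for availability:

```yaml
# Hypothetical sizing override for the TCP router (assumes a top-level
# sizing: key as in the complete values.yaml).
sizing:
  tcp_router:
    count: 2        # HA minimum (valid range: 1-3)
    ports:
      tcp_route:
        count: 20   # number of ports reserved for TCP routes
```

The load balancer or firewall in front of the cluster must forward the same port range for these routes to be reachable.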

# The uaa instance group contains the following jobs:
#
# - global-uaa-properties: Dummy BOSH job used to host global parameters that
#   are required to configure SCF / fissile
#
# - wait-for-database: This is a pre-start job to delay starting the rest of
#   the role until a database connection is ready. Currently it only checks
#   that a response can be obtained from the server, and not that it responds
#   intelligently.
#
# - uaa: The UAA is the identity management service for Cloud Foundry. Its
#   primary role is as an OAuth2 provider, issuing tokens for client
#   applications to use when they act on behalf of Cloud Foundry users. It can
#   also authenticate users with their Cloud Foundry credentials, and can act
#   as an SSO service using those credentials (or others). It has endpoints
#   for managing user accounts and for registering OAuth2 clients, as well as
#   various other management functions.
uaa:
  # Node affinity rules can be specified here
  affinity: {}

  # The uaa instance group can be enabled by the uaa feature.
  # It can scale between 1 and 65535 instances.
  # For high availability it needs at least 2 instances.
  count: ~

  # Unit [millicore]
  cpu:
    request: 2000
    limit: ~

  # Unit [MiB]
  memory:
    request: 2100
    limit: ~

enable:
  # The autoscaler feature enables these instance groups: autoscaler_postgres,
  # autoscaler_api, autoscaler_metrics, and autoscaler_actors
  autoscaler: false

  # The cf_usb feature enables these instance groups: cf_usb_group
  cf_usb: true

  # The credhub feature enables these instance groups: credhub_user
  credhub: false

  # The eirini feature enables these instance groups: eirini, eirini_persi,
  # eirini_ssh, bits, and configure_eirini
  # It disables these instance groups: diego_api, diego_brain, diego_ssh,
  # nfs_broker, and diego_cell
  eirini: false

  # The mysql feature enables these instance groups: mysql and mysql_proxy
  mysql: true

  # The uaa feature enables these instance groups: uaa
  uaa: false
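Each flag in the `enable:` block toggles whole sets of instance groups at once, so a single feature switch can replace one scheduler stack with another. As a hedged illustration (not a recommendation), an override file that switches from Diego to Eirini and turns off the Universal Service Broker might look like this:

```yaml
# Hypothetical feature override. Enabling eirini brings up eirini,
# eirini_persi, eirini_ssh, bits, and configure_eirini, and takes down
# diego_api, diego_brain, diego_ssh, nfs_broker, and diego_cell.
enable:
  eirini: true
  cf_usb: false
```

Because the per-group `count` values above only apply to groups their feature enables, sizing overrides for the diego_* groups would have no effect in this configuration.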

ingress:
  # ingress.annotations allows specifying custom ingress annotations that get
  # merged into the default annotations.
  annotations: {}

  # ingress.enabled enables ingress support - a working ingress controller is
  # necessary.
  enabled: false

  # ingress.tls.crt and ingress.tls.key, when specified, are used by the TLS
  # secret for the Ingress resource.
  tls: {}
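Putting the ingress settings together, a hypothetical override that serves the platform through an existing ingress controller might look like the sketch below. The annotation key and the certificate material are placeholders (the annotation shown assumes an NGINX-class controller; substitute whatever your controller expects, and supply your own PEM-encoded certificate and key):

```yaml
# Hypothetical ingress configuration; annotation and TLS values are
# placeholders, not defaults from the chart.
ingress:
  enabled: true   # requires a working ingress controller in the cluster
  annotations:
    # Example controller-specific annotation; adjust for your controller.
    kubernetes.io/ingress.class: "nginx"
  tls:
    crt: |
      -----BEGIN CERTIFICATE-----
      ...placeholder certificate...
      -----END CERTIFICATE-----
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      ...placeholder key...
      -----END RSA PRIVATE KEY-----
```

The crt/key pair is written into the TLS secret referenced by the generated Ingress resource, as described in the comment above.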


B GNU Licenses

This appendix contains the GNU Free Documentation License version 1.2.

GNU Free Documentation License

Copyright (C) 2000, 2001, 2002 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or non-commercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.

1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".

Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.

A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.

The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.

2. VERBATIM COPYING

You may copy and distribute the Document in any medium, either commercially or non-commercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.

You may also lend copies, under the same conditions stated above, and you may publicly display copies.

3. COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.


4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.

B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.

C. State on the Title page the name of the publisher of the Modified Version, as the publisher.

D. Preserve all the copyright notices of the Document.

E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.

F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.

G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.

H. Include an unaltered copy of this License.

I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.

J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.

K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.

L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.

M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.

N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.

O. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.

You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.

5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".

6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.

You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.

7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.

8. TRANSLATION

Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.

If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.

9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.


10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/ .

Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.

ADDENDUM: How to use this License for your documents

Copyright (c) YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled “GNU
Free Documentation License”.

If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:

with the Invariant Sections being LIST THEIR TITLES, with the
Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.

If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.

If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
