
# CloudStack Installation

* * *

This book is aimed at CloudStack users who need to install CloudStack from the community-provided packages. These instructions are valid on an Ubuntu 12.04 system; please adapt them if you are on a different operating system. In this book we will set up a management server and one KVM hypervisor, and configure a `basic` networking zone.

1. Installation of the prerequisites

2. Setting up the management server

3. Setting up a KVM hypervisor

4. Configuring a Basic Zone

# Prerequisites

In this section we'll look at installing the dependencies you'll need for

Apache CloudStack development.

First update and upgrade your system:

apt-get update

apt-get upgrade

Install NTP to synchronize the clocks:

apt-get install openntpd

Install `openjdk`. As we're using Linux, OpenJDK is our first choice.

apt-get install openjdk-6-jdk

Install `tomcat6`. Note that the version of tomcat packaged for [Ubuntu](http://packages.ubuntu.com/precise/all/tomcat6) 12.04 is 6.0.35.

apt-get install tomcat6

Next, we'll install MySQL if it's not already present on the system.

apt-get install mysql-server

Remember to set the correct `mysql` password in the CloudStack properties file. MySQL should be running, but you can check its status with:

service mysql status
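If MySQL does not yet have a root password, now is a good time to set one; a minimal sketch (the password value is a placeholder, pick your own):

    # set a root password for MySQL (placeholder value)
    mysqladmin -u root password 'rootpassword'
    # verify that you can log in with it
    mysql -u root -p -e 'status'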

## Optional

Developers wanting to build CloudStack from source will want to install the following additional packages. If you don't want to build from source, jump to the next section.


Install `git` to later clone the CloudStack source code:

apt-get install git

Install `Maven` to later build CloudStack:

apt-get install maven

This should have installed Maven 3.0; check the version number with `mvn --version`.

A little bit of Python is used in places (e.g. the simulator), so install the Python package management tools:

apt-get install python-pip python-setuptools

Finally install `mkisofs` with:

apt-get install genisoimage
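As a quick sanity check, you can verify that the main tools are in place (the exact version strings will vary with your packages):

    java -version
    mvn --version
    git --version
    genisoimage --version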

# Setting up the management server

## Add the community hosted packages repo

Packages are hosted in a community repo. To get the packages, add the CloudStack repo to your list:

Edit `/etc/apt/sources.list.d/cloudstack.list` and add:

deb http://cloudstack.apt-get.eu/ubuntu precise 4.1

Replace 4.1 with 4.2 once 4.2 is out

Add the public keys to the trusted keys:

wget -O - http://cloudstack.apt-get.eu/release.asc|apt-key add -

Update your local apt cache

apt-get update

## Install the management server package

Grab the management server package

apt-get install cloudstack-management

Set up the database

cloudstack-setup-databases cloud:<dbpassword>@localhost \

--deploy-as=root:<password> \

-e <encryption_type> \

-m <management_server_key> \

-k <database_key> \


-i <management_server_ip>
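For example, a minimal invocation for our setup, skipping the optional encryption and key flags (the passwords are placeholders):

    cloudstack-setup-databases cloud:dbpassword@localhost \
        --deploy-as=root:password \
        -i 192.168.38.100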

Start the management server

cloudstack-setup-management

You can check the status or restart the management server with:

service cloudstack-management <status|restart>

You should now be able to log in to the management server UI at `http://localhost:8080/client`. Replace `localhost` with the appropriate IP address if needed. At this stage you have the CloudStack management server running, but no hypervisors and no storage configured.

## Prepare the Secondary storage and seed the SystemVM template

CloudStack has two types of storage: `Primary` and `Secondary`. The `Primary` storage is defined at the cluster level and available on the hypervisors that make up a cluster. In this installation we will use local storage for `Primary` storage. The `Secondary` storage is shared zone wide and hosts the image templates and snapshots. In this installation we will use an NFS server running on the same node that we used to run the management server. In terms of networking we will set up a `Basic` zone with no VLANs; `Advanced` zones, which use VLANs or `SDN` solutions for isolation of guest networks, will be covered in another book. In our setup the management server has the address `192.168.38.100` and the hypervisor has the address `192.168.38.101`.

Install NFS packages

apt-get install nfs-kernel-server portmap

mkdir -p /export/secondary

chown nobody:nogroup /export/secondary

The hypervisors in your infrastructure as well as the secondary storage

VM will mount this secondary storage. Edit `/etc/exports` in such a way

that these nodes can mount the share. For instance:

/export/secondary 192.168.38.*(rw,async,no_root_squash,no_subtree_check)

Then start the export

exportfs -a
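You can verify that the share is now exported (a quick check, assuming the NFS services are running):

    showmount -e localhost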

We now need to seed this secondary storage with `SystemVM` templates. SystemVMs are small appliances that run on the hypervisors of your infrastructure and help orchestrate the cloud. We have the `Secondary storage VM` which manages image placement and snapshots, the `Proxy VM` which handles VNC connections to the instances, and the `Virtual Router` which provides network services. To seed the secondary storage with the system VM template on Ubuntu for a KVM hypervisor:


/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
    -m /export/secondary \
    -u http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2 \
    -h kvm

Note that you will need at least 5GB of disk space on the secondary

storage.

# Preparing a KVM Hypervisor

In this section we will set up an Ubuntu 12.04 KVM hypervisor. The `Secondary` storage set up in the previous section needs to be mounted on this node. Let's start by making this mount.

## Install the packages and mount the secondary storage

First install openntpd on this server, as well as the NFS client packages:

apt-get install openntpd

apt-get install nfs-common portmap

Then make the mount

mkdir -p /mnt/export/secondary

mount 192.168.38.100:/export/secondary /mnt/export/secondary

Check that the mount is successful with the `df -h` or the `mount` command. Then try to create a file in the mounted directory and verify that you can also edit the file from the management server.
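A quick round-trip check (test.txt is an arbitrary file name used only for this check):

    df -h | grep secondary
    touch /mnt/export/secondary/test.txt
    # on the management server, the same file should be visible:
    ls -l /export/secondary/test.txt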

Add the CloudStack repository as was done in the `Add the community hosted packages repo` section above.

Install the CloudStack agent

apt-get install cloudstack-agent

## Configuring libvirt

You will see that `libvirt` is a dependency of the CloudStack agent

package. Once the agent is installed, configure libvirt.

Edit `/etc/libvirt/libvirtd.conf` to include:

listen_tls = 0

listen_tcp = 1

tcp_port = "16509"

auth_tcp = "none"

mdns_adv = 0

Edit `/etc/libvirt/qemu.conf` and uncomment:

vnc_listen = "0.0.0.0"


In addition edit `/etc/init/libvirt-bin.conf` to modify the libvirt

options like so:

env libvirtd_opts="-d -l"

Then restart libvirt

service libvirt-bin restart

Security policies need to be configured properly. Check whether `apparmor` is installed with `dpkg --list 'apparmor'`; if it is not, you have nothing to do; if it is, enter the following commands:

ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/

ln -s /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper /etc/apparmor.d/disable/

apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd

apparmor_parser -R /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper

## Network bridge setup

We are now going to set up the network bridges; these are used to give network connectivity to the instances that will run on this hypervisor. This configuration can change depending on the number of network interfaces you have, whether or not you use VLANs, etc. In our simple case, we only have one network interface on the hypervisor and no VLANs. In this setup, the bridges will be automatically configured when adding the hypervisor in the infrastructure description on the management server. You should not have anything to do; for reference, a manual configuration is sketched below.
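A minimal `/etc/network/interfaces` sketch, assuming a single NIC `eth0`, the addressing used in this book, the CloudStack default bridge name `cloudbr0`, and the `bridge-utils` package installed:

    auto lo
    iface lo inet loopback

    # the physical NIC is enslaved to the bridge and gets no address itself
    auto eth0
    iface eth0 inet manual

    # the bridge carries the hypervisor's IP
    auto cloudbr0
    iface cloudbr0 inet static
        address 192.168.38.101
        netmask 255.255.255.0
        gateway 192.168.38.1
        bridge_ports eth0
        bridge_fd 5
        bridge_stp off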

## Firewall settings

If you are working on an isolated/safe network and doing a basic proof of concept, you might want to disable the firewall and skip this section. Check the status of the firewall with `ufw status`, and if it is running simply disable it with `ufw disable`. Otherwise, set up the firewall properly. For `libvirt` to work you need to open port 16509, and ports 49152-49216 to enable migration. You can skip the migration ports if you are not going to do any migration. Open ports 5900:6100 for VNC sessions to your instances, open port 1798 for the management server communication, and port 22 so you can ssh to your hypervisor.

The default firewall under Ubuntu is UFW (Uncomplicated FireWall). To

open the required ports, execute the following commands:

ufw allow proto tcp from any to any port 22

ufw allow proto tcp from any to any port 1798

ufw allow proto tcp from any to any port 16509

ufw allow proto tcp from any to any port 5900:6100

ufw allow proto tcp from any to any port 49152:49216


By default the firewall on Ubuntu 12.04 is disabled. You will need to

activate it with `ufw enable`.

Now that the management server, secondary storage and hypervisor are all set up, we can configure our infrastructure through the CloudStack dashboard running on the management server.

# Configuring a Basic Zone

With the management server running and a hypervisor set up, you are now ready to configure your first basic zone in CloudStack. Log in to the management server UI at `http://192.168.38.100:8080/client`, replacing the IP with the IP of your management server. Log in with the username `admin` and the password `password`. You can be adventurous and click where you want, or keep on following this guide. Click on the button that says `I have used CloudStack before, skip this guide`; we are going to bypass the wizard. You will then see the admin view of the dashboard. Click on the `Infrastructure` tab on the left side, click on the `View zones` icon, and find and follow the `Add Zone` icon on the top right. You will then follow a series of windows where you have to enter information describing the zone.

Our zone is a basic zone with 8.8.8.8 as primary DNS and 8.8.4.4 as

internal DNS.

The reserved IPs are IPs used by the system VMs; allocate a slice of your private network to them, for example 192.168.38.10 to 192.168.38.20, and specify the gateway and netmask: 192.168.38.1 and 255.255.255.0.

The guest network will be another slice of this private network, for example 192.168.38.150 to 192.168.38.200, with gateway 192.168.38.1 and netmask 255.255.255.0.

The host is the KVM hypervisor that we set up; enter its IP, 192.168.38.101, and its root password. Make sure that you can ssh as root to the host with that password.

Finally add the secondary storage; in our case it is the NFS server we set up on the management server, 192.168.38.100, with a path of `/export/secondary`.

Once you are done entering the information, launch the zone and CloudStack will configure everything. If all goes well, all the steps should have gone `green`. When the host was being added, the bridge was set up properly on your hypervisor. You can check that this is indeed the case by looking at your network interfaces on the hypervisor with `ifconfig`; you should see a `cloudbr0` bridge. Since we are using local storage on the hypervisor, we will need to go to the `Global settings` tab and set that up. We saw a warning during the configuration phase to that effect. In the search icon (top right), enter `system`; you should see the setting `system.vm.use.local.storage`. Set it to true and restart the management server with `service cloudstack-management restart`. At this stage CloudStack will start by trying to run the system VMs, and you may hit your first troubleshooting issue, especially if your hypervisor does not have much RAM; see the troubleshooting section. If all goes well the systemVMs will start and you should be able to start adding templates and launching instances. On KVM your templates will need to be `qcow2` images with a `qcow2` file extension; you will also need to have the image on a web server that is reachable by your management server.

Once your systemVMs are up and you have managed to add a template or ISO, go ahead and launch an instance.

# Troubleshooting

## Secondary Storage SystemVM (SSVM)

You can ssh into the system VMs to check their network configuration and connectivity. To do this, find the link local address of the secondary storage systemVM in the management server UI: go to the Infrastructure tab, select `systemVM`, then select `secondary storage VM`. You will find the link local address there.

ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@<link-local-address>

Then run the SSVM health check script `/usr/local/cloud/systemvm/ssvm-check.sh`. In our experience, issues arise with the NFS export not being set up properly and ending up not mounted on the SSVM, or having bad privileges. A common issue is also network connectivity: the SSVM needs access to the public internet. To diagnose further you might want to have a look at the [wiki](https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSVM,+templates,+Secondary+storage+troubleshooting).

Also check the logs in `/var/log/cloud/systemvm.log`; they can help you diagnose other issues such as the NFS secondary storage not mounting, which will prevent you from downloading templates. The management server IP needs to be in the same network as the management network of the systemVM. You might want to check the UI: go to Global Settings and search for `host`; you will find a variable `host` which should be an IP reachable from the systemVM. If not, edit it and restart the management server. In our case, since the management server was at `192.168.38.100`, we changed the `host` global setting to that IP. This situation might arise if your management server has multiple interfaces on different networks.

If you are on a private network without public internet connectivity, you will need to serve your templates/ISOs from this private network (a.k.a the management network). This can be done by putting the template/ISO on the management server and using a simple `python -m SimpleHTTPServer 80`, then using the IP of the management server in the URL for downloading the templates/ISOs. You will also need to change the setting `secstorage.use.internal.sites` and set it to a CIDR that contains the node from which you will serve the template, a node that should be reachable by the SSVM, as sketched below.
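For our setup that could look like the following (the template directory path is a placeholder):

    # on the management server, serve the directory holding the template/ISO
    cd /path/to/templates
    python -m SimpleHTTPServer 80

Then set `secstorage.use.internal.sites` to, for example, `192.168.38.0/24` in the Global Settings and restart the management server.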

## Insufficient server capacity error


By default the systemVMs will start with a set memory allocation. The

console proxy is set to use 1GB of RAM. In some testing scenarios this

could be quite large. You can change this by modifying the database:

mysql -u root

mysql> use cloud;

mysql> select * from service_offering;

mysql> update service_offering set ram_size=256 where id=10;

Then restart the management server with `service cloudstack-management

restart`

If instances don't start due to this issue, it may be that your hosts don't have enough RAM to start the instances, or that the service offering you are using is too `big`. Try to create a service offering that requires less RAM and storage. Alternatively, increase the RAM of your hypervisors.

## Other useful settings

If you need to purge instances quickly, edit the global settings `expunge.delay` and `expunge.interval`, then restart the management server with `service cloudstack-management restart`.

# Upgrading from 4.1.1 to 4.2

While writing this tutorial CloudStack 4.2 came out, so it seems appropriate to also go through the upgrade procedure. The official procedure is documented in the [release notes](http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-4.0-to-4.1), but here we focus on our setup: Ubuntu 12.04, KVM, and upgrading from 4.1.1 to 4.2. Other upgrade paths are possible, but the community recommends staying close to the latest release. In the future, expect upgrade paths only from the latest bug fix release to the next major release and between bug fix releases.

A summary of the overall procedure is as follows:

1. Stop the management server and the agent

2. Edit your repository to point to the 4.2 release and update the

packages

3. Backup your management server database for safety

4. Restart the management server and the agent
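On our Ubuntu setup, a sketch of those four steps could look like the following (the `sed` edit assumes the repository line added earlier in this book; adapt the database credentials to your own):

    # 1. stop the services
    service cloudstack-management stop   # on the management server
    service cloudstack-agent stop        # on the KVM host

    # 2. point the repository at 4.2 and update the packages
    sed -i 's/precise 4.1/precise 4.2/' /etc/apt/sources.list.d/cloudstack.list
    apt-get update
    apt-get install cloudstack-management   # cloudstack-agent on the KVM host

    # 3. back up the database for safety
    mysqldump -u root -p cloud > cloud-backup.sql

    # 4. restart the services
    service cloudstack-management start
    service cloudstack-agent start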

# Conclusions

CloudStack is a mostly Java application running with Tomcat and MySQL. It consists of a management server and, depending on the hypervisors being used, an agent installed on the hypervisor farm. To complete a cloud infrastructure, however, you will also need some zone-wide storage, a.k.a Secondary Storage, and some cluster-wide storage, a.k.a Primary Storage. The choice of hypervisor, storage solution and type of zone (i.e Basic vs. Advanced) will dictate how complex your installation can be. As a quick start, KVM+NFS on Ubuntu 12.04 and a Basic Zone was illustrated in this book.

If you've run into any problems with this, please ask on the cloudstack-

dev [mailing list](/mailing-lists.html).


CloudStack Installation

=======================

This book is aimed at CloudStack users and developers who need to build the code. These instructions are valid on a CentOS 6.4 system; please adapt them if you are on a different operating system. We go through several scenarios:

1. Installation of the prerequisites

2. Compiling and installation from source

3. Using the CloudStack simulator

4. Installation with DevCloud the CloudStack sandbox

5. Building packages and/or using the community packaged repo.

Prerequisites

=============

In this section we'll look at installing the dependencies you'll need for

Apache CloudStack development.

First update and upgrade your system:

yum -y update

yum -y upgrade

If not already installed, install NTP for clock synchronization:

Install `openjdk`. As we're using Linux, OpenJDK is our first choice.

yum -y install java-1.6.0-openjdk

Install `tomcat6`. Note that the version of tomcat6 in the default CentOS 6.4 repo is 6.0.24, so we will grab the 6.0.35 version. The 6.0.24 version will be installed anyway as a dependency of cloudstack.

wget https://archive.apache.org/dist/tomcat/tomcat-6/v6.0.35/bin/apache-tomcat-6.0.35.tar.gz

tar xzvf apache-tomcat-6.0.35.tar.gz -C /usr/local

Set up tomcat6 system-wide by creating a file `/etc/profile.d/tomcat.sh` with the following content:

export CATALINA_BASE=/usr/local/apache-tomcat-6.0.35

export CATALINA_HOME=/usr/local/apache-tomcat-6.0.35

Next, we'll install MySQL if it's not already present on the system.

yum -y install mysql mysql-server

Remember to set the correct `mysql` password in the CloudStack properties file. MySQL should be running, but you can check its status with:


service mysqld status

At this stage you can jump to the section on installing from packages.

Developers who want to build from source will need to add the following

packages:

Install `git` to later clone the CloudStack source code:

yum -y install git

Install `Maven` to later build CloudStack. Grab the 3.0.5 release from

the Maven [website](http://maven.apache.org/download.cgi)

wget http://mirror.cc.columbia.edu/pub/software/apache/maven/maven-3/3.0.5/binaries/apache-maven-3.0.5-bin.tar.gz

tar xzf apache-maven-3.0.5-bin.tar.gz -C /usr/local

cd /usr/local

ln -s apache-maven-3.0.5 maven

Set up Maven system-wide by creating a `/etc/profile.d/maven.sh` file with the following content:

export M2_HOME=/usr/local/maven

export PATH=${M2_HOME}/bin:${PATH}

Log out and log in again and you will have Maven in your PATH. This should have installed Maven 3.0.5; check the version number with:

mvn --version

A little bit of Python is used in places (e.g. the simulator), so install the Python package management tools:

yum -y install python-setuptools

To install python-pip you might want to set up the Extra Packages for Enterprise Linux (EPEL) repo:

cd /tmp

wget http://mirror-fpt-telecom.fpt.net/fedora/epel/6/i386/epel-release-6-8.noarch.rpm

rpm -ivh epel-release-6-8.noarch.rpm

Then update your repository cache with `yum update` and install pip with `yum -y install python-pip`.

Finally install `mkisofs` with:

yum -y install genisoimage

Installing from Source


======================

CloudStack uses git for source version control; if you know little about git, the [git book](http://book.git-scm.com/) is a good start. Once you have git set up on your machine, pull the source with:

git clone https://git-wip-us.apache.org/repos/asf/cloudstack.git

To build the latest stable release:

git checkout 4.2

To compile Apache CloudStack, go to the cloudstack source folder and run:

mvn -Pdeveloper,systemvm clean install

If you want to skip the tests, add `-DskipTests` to the command above. Make sure you have set the proper db password in `utils/conf/db.properties`.
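For reference, a rough sketch of the relevant entries (the key names below are from memory and may differ between versions; check the file itself):

    # utils/conf/db.properties (excerpt)
    db.cloud.username=cloud
    db.cloud.password=cloud
    db.root.password=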

Deploy the database next:

mvn -P developer -pl developer -Ddeploydb

Run Apache CloudStack with jetty for testing. Note that `tomcat` may be running on port 8080; stop it before you use `jetty`:

mvn -pl :cloud-client-ui jetty:run

Log Into Apache CloudStack:

Open your Web browser and use this URL to connect to CloudStack:

http://localhost:8080/client/

Replace `localhost` with the IP of your management server if need be.

**Note**: If you have iptables enabled, you may have to open the ports

used by CloudStack. Specifically, ports 8080, 8250, and 9090.
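For example, with iptables (these rules are not persistent; use your distribution's mechanism to save them):

    iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
    iptables -I INPUT -p tcp --dport 8250 -j ACCEPT
    iptables -I INPUT -p tcp --dport 9090 -j ACCEPT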

You can now start configuring a zone and playing with the API. Of course, we did not set up any infrastructure; there is no storage, no hypervisors, etc.

Using the Simulator

===================

CloudStack comes with a simulator based on Python bindings called

*Marvin*. Marvin is available in the CloudStack source code or on Pypi.

With Marvin you can simulate your data center infrastructure by providing

CloudStack with a configuration file that defines the number of

zones/pods/clusters/hosts, types of storage etc. You can then develop and


test the CloudStack management server *as if* it was managing your

production infrastructure.

Do a clean build:

mvn -Pdeveloper -Dsimulator -DskipTests clean install

Deploy the database:

mvn -Pdeveloper -pl developer -Ddeploydb

mvn -Pdeveloper -pl developer -Ddeploydb-simulator

Install marvin. Note that you will need to have installed `pip` properly

in the prerequisites step.

pip install tools/marvin/dist/Marvin-0.1.0.tar.gz

Stop jetty (from any previous runs)

mvn -pl :cloud-client-ui jetty:stop

Start jetty

mvn -pl client jetty:run

Set up a basic zone with Marvin. In a separate shell:

mvn -Pdeveloper,marvin.setup -Dmarvin.config=setup/dev/basic.cfg -pl :cloud-marvin integration-test

At this stage, log in to the CloudStack management server at http://localhost:8080/client with the credentials admin/password; you should see a fully configured basic zone infrastructure. To simulate an advanced zone, replace `basic.cfg` with `advanced.cfg`.

You can now run integration tests, use the API etc...

Using DevCloud

==============

The Installing from Source section will only get you to the point of running the management server; it does not get you any hypervisors. The simulator section gets you a simulated data center for testing. With DevCloud you can run at least one hypervisor and add it to your management server the way you would a real physical machine.

[DevCloud](https://cwiki.apache.org/confluence/display/CLOUDSTACK/DevCloud) is the CloudStack sandbox. The standard version is a VirtualBox-based image; there is also a KVM-based image for it. Here we only show steps with the VirtualBox image. For KVM see the [wiki](https://cwiki.apache.org/confluence/display/CLOUDSTACK/devcloud-kvm).

DevCloud Pre-requisites


-----------------------

1. Install [VirtualBox](http://www.virtualbox.org) on your machine

2. Run VirtualBox and under >Preferences create a *host-only interface*

on which you disable the DHCP server

3. Download the DevCloud [image](http://people.apache.org/~bhaisaab/cloudstack/devcloud/devcloud2.ova)

4. In VirtualBox, under File > Import Appliance import the DevCloud

image.

5. Verify the settings under > Settings and check the `enable PAE` option

in the processor menu

6. Once the VM has booted try to `ssh` to it with credentials:

root/password

ssh root@192.168.56.10

Adding DevCloud as a Hypervisor

--------------------------------

Picking up from a clean build:

mvn -Pdeveloper,systemvm clean install

mvn -P developer -pl developer -Ddeploydb

At this stage, install Marvin similarly to what was done with the simulator:

pip install tools/marvin/dist/Marvin-0.1.0.tar.gz

Then you are going to configure CloudStack to use the running DevCloud

instance:

cd tools/devcloud

python ../marvin/marvin/deployDataCenter.py -i devcloud.cfg

If you are curious, check the `devcloud.cfg` file and see how the data

center is defined: 1 Zone, 1 Pod, 1 Cluster, 1 Host, 1 primary Storage, 1

Secondary Storage, all provided by Devcloud.

You can now log in to the management server at `http://localhost:8080/client` and start experimenting with the UI or the API.

Do note that the management server is running on your local machine and that DevCloud is used only as a hypervisor. You could potentially run the management server within DevCloud as well or, memory permitting, run multiple DevClouds.

Using Packages


==============

If you want, you can build your own packages, or you can use existing ones hosted in a community repo.

To prepare your own .rpm packages

---------------------------------

To use hosted packages

----------------------

Create and edit `/etc/yum.repos.d/cloudstack.repo` and add:

[cloudstack]

name=cloudstack

baseurl=http://cloudstack.apt-get.eu/rhel/4.1

enabled=1

gpgcheck=0

Replace 4.1 with 4.2 once 4.2 is out

Update your local yum cache

yum update

Install the management server package

yum install cloudstack-management

Set SELINUX to permissive (you will need to edit /etc/selinux/config to

make it persist on reboot):

setenforce permissive
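To make the change persist, set the following in `/etc/selinux/config`:

    SELINUX=permissive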

Set up the database

cloudstack-setup-databases cloud:<dbpassword>@localhost \

--deploy-as=root:<password> \

-e <encryption_type> \

-m <management_server_key> \

-k <database_key> \

-i <management_server_ip>

Start the management server

cloudstack-setup-management

You can check the status or restart the management server with:

service cloudstack-management <status|restart>

You should now be able to log in to the management server UI at `http://localhost:8080/client`. Replace `localhost` with the appropriate IP address if needed.


Conclusions

===========

CloudStack is a mostly Java application running with Tomcat and MySQL. It consists of a management server and, depending on the hypervisors being used, an agent installed on the hypervisor farm. To complete a cloud infrastructure, however, you will also need some zone-wide storage, a.k.a Secondary Storage, and some cluster-wide storage, a.k.a Primary Storage. The choice of hypervisor, storage solution and type of zone (i.e Basic vs. Advanced) will dictate how complex your installation can be. As a quick start, you might want to consider KVM+NFS and a Basic Zone.

If you've run into any problems with this, please ask on the cloudstack-

dev [mailing list](/mailing-lists.html).


About This Book

===============

License

-------

The Little CloudStack Book is licensed under the Attribution-NonCommercial 3.0 Unported license. **You should not have paid for this book.**

You are basically free to copy, distribute, modify or display the book.

However, I ask that you always attribute the book to me, Sebastien

Goasguen and do not use it for commercial purposes.

You can see the full text of the license at:

<http://creativecommons.org/licenses/by-nc/3.0/legalcode>

"Apache", "CloudStack", "Apache CloudStack", the Apache CloudStack logo,

the Apache CloudStack CloudMonkey logo and the Apache feather logos are

registered trademarks or trademarks of The Apache Software Foundation.

About The Author

----------------

Sebastien Goasguen is an Apache CloudStack committer and member of the

CloudStack Project Management Committee (PMC). His day job is to be a

Senior Open Source Solutions Architect for the Open Source Business

Office at Citrix. He will never call himself an expert or a developer but

is a decent Python programmer. He is currently active in Apache Libcloud

and SaltStack salt-cloud projects to bring better support for CloudStack.

He blogs regularly about cloud technologies and spends lots of time testing and writing about his experiences. Prior to working actively on CloudStack he had a life as an academic; he authored over seventy international publications on grid computing, high performance computing, electromagnetics, nanoelectronics and, of course, cloud computing. He also taught courses on distributed computing, network programming, ethical hacking and cloud.

His blog can be found at http://sebgoa.blogspot.com and he tweets via

@sebgoa. You can find him on github at https://github.com/runseb

Introduction

------------

Clients and high level Wrappers are critical to the ease of use of any

API, even more so Cloud APIs. In this book we present the basics of the

CloudStack API and introduce some low level clients before diving into

more advanced wrappers.

The first chapter is dedicated to clients and the second chapter to wrappers, or what I consider to be high-level tools built on top of a CloudStack client.

In the first chapter, we start by illustrating how to sign requests with the native API, for the sake of completeness, and because it is a very nice exercise for beginners. We then introduce CloudMonkey, the CloudStack CLI and shell, which boasts 100% coverage of the API. Then jclouds is discussed. While jclouds is a Java library, it can also be used as a CLI or interactive shell; we present jclouds-cli to contrast it with CloudMonkey and introduce jclouds. Apache Libcloud is a Python module that provides a common API on top of many cloud providers' APIs; once installed, a developer can use Libcloud to talk to multiple cloud providers and cloud APIs. It serves a similar role as jclouds, but in Python. Finally, we present Boto, the well-known Python Amazon Web Services interface, and show how it can be used with a CloudStack cloud running the AWS interface.

In the second chapter we introduce several high level wrappers for

configuration management and automated provisioning.

The presentation of these wrappers aims to answer the question "I have a cloud, now what?". Starting and stopping virtual machines is the core functionality of a cloud, but it empowers users to do much more. Automation is the key to today's IT infrastructure. The wrappers presented here show you how you can automate configuration management and automate provisioning of infrastructures that lie within your cloud. We introduce salt-cloud for SaltStack, a Python alternative to the well-known Chef and Puppet systems. We then introduce the knife CloudStack plugin for Chef and show you how easy it is to deploy machines in a cloud and configure them. We finish with another Apache project based on jclouds: Whirr. Apache Whirr simplifies the on-demand provisioning of clusters of virtual machine instances; hence it allows you to easily provision big data infrastructure on demand, whether you need a *HADOOP* cluster, an *Elasticsearch* cluster or even a *Cassandra* cluster.

The CloudStack API

==================

All functionalities of the CloudStack data center orchestrator are

exposed

via an API server. Github currently has over twenty clients for this

API, in various languages. In this section we introduce this API and the

signing mechanism. The follow-on sections will introduce clients that already contain a signing method. The signing process is only highlighted for completeness.

Basics of the API

-----------------

The CloudStack API is a query based API using http which returns results

in XML or JSON. It is used to implement the default web UI. This API is

not a standard like [OGF

OCCI](http://www.ogf.org/gf/group_info/view.php?group=occi-wg) or [DMTF

CIMI](http://dmtf.org/standards/cloud) but is easy to learn. A mapping

exists between the AWS API and the CloudStack API as will be seen in the

next section. Recently a Google Compute Engine interface was also

developed that maps the GCE REST API to the CloudStack API described

here. The API [docs](http://cloudstack.apache.org/docs/api/) are a good

start to learn the extent of the API. Multiple clients exist on

[github](https://github.com/search?q=cloudstack+client&ref=cmdform) to


use this API, you should be able to find one in your favourite language.

The reference documentation for the API, and the changes that might occur from version to version, is available [on-line](http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.1/html/Developers_Guide/index.html). This short section is aimed at providing a quick summary to give you a base understanding of how to use this API. As a quick start, a good way to explore the API is to navigate the dashboard with a firebug console (or similar developer console) to study the queries.

In a succinct statement, the CloudStack query API can be used via http

GET requests made against your cloud endpoint (e.g

http://localhost:8080/client/api). The API name is passed using the

`command` key and the various parameters for this API call are passed as

key value pairs. The request is signed using the secret key of the user

making the call. Some calls are synchronous while some are asynchronous,

this is documented in the API

[docs](http://cloudstack.apache.org/docs/api/). Asynchronous calls return

a `jobid`, the status and result of a job can be queried with the

`queryAsyncJobResult` call. Let's get started and give an example of

calling the `listUsers` API in Python.

First you will need to generate keys to make requests. Going through the

dashboard, go under `Accounts` select the appropriate account then click

on `Show Users` select the intended user and generate keys using the

`Generate Keys` icon. You will see an `API Key` and `Secret Key` field

being generated. The keys will be of the form:

API Key : XzAz0uC0t888gOzPs3HchY72qwDc7pUPIO8LxC-VkIHo4C3fvbEBY_Ccj8fo3mBapN5qRDg_0_EbGdbxi8oy1A

Secret Key: zmBOXAXPlfb-LIygOxUVblAbz7E47eukDS_0JYUxP3JAmknOYo56T0R-AcM7rK7SMyo11Y6XW22gyuXzOdiybQ

Open a Python shell and import the basic modules necessary to make the request. Do note that this request could be made in many different ways; this is just a low-level example. The `urllib*` modules are used to make the http request and do URL encoding. The `hashlib` module gives us the sha1 hash function. It is used to generate the `hmac` (Keyed Hashing for Message Authentication) using the secret key. The result is encoded using the `base64` module.

$python

Python 2.7.3 (default, Nov 17 2012, 19:54:34)

[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))]

on darwin

Type "help", "copyright", "credits" or "license" for more

information.

>>> import urllib2

>>> import urllib

>>> import hashlib

>>> import hmac

>>> import base64


Define the endpoint of the Cloud, the command that you want to execute,

the type of the response (i.e XML or JSON) and the keys of the user. Note

that we do not put the secretkey in our request dictionary because it is

only used to compute the hmac.

>>> baseurl='http://localhost:8080/client/api?'

>>> request={}

>>> request['command']='listUsers'

>>> request['response']='json'

>>> request['apikey']='plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg'

>>> secretkey='VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZsYBkoXkY9b7EhwJaw7FF3akA3KBQ'

Build the base request string: the combination of all the key/value pairs of the request, URL encoded and joined with ampersands.

>>> request_str='&'.join(['='.join([k,urllib.quote_plus(request[k])]) for k in request.keys()])

>>> request_str

'apikey=plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg&command=listUsers&response=json'

Compute the signature with hmac, then base64 encode and URL encode it. The string used for the signature is similar to the base request string shown above, but the keys/values are lowercased and joined in sorted order.

>>> sig_str='&'.join(['='.join([k.lower(),urllib.quote_plus(request[k].lower().replace('+','%20'))]) for k in sorted(request.iterkeys())])

>>> sig_str

'apikey=plgwjfzk4gys3momtvmjuvg-x-jlwlnfauj9gabbbf9edm-kaymmailqzzq1elzlyq_u38zcm0bewzgudp66mg&command=listusers&response=json'

>>> sig=hmac.new(secretkey,sig_str,hashlib.sha1).digest()

>>> sig

'M:]\x0e\xaf\xfb\x8f\xf2y\xf1p\x91\x1e\x89\x8a\xa1\x05\xc4A\xdb'

>>> sig=base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest())

>>> sig

'TTpdDq/7j/J58XCRHomKoQXEQds=\n'

>>> sig=base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest()).strip()

>>> sig

'TTpdDq/7j/J58XCRHomKoQXEQds='

>>> sig=urllib.quote_plus(base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest()).strip())

Finally, build the entire string by joining the baseurl, the request str

and the signature. Then do an http GET:


>>> req=baseurl+request_str+'&signature='+sig

>>> req

'http://localhost:8080/client/api?apikey=plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg&command=listUsers&response=json&signature=TTpdDq%2F7j%2FJ58XCRHomKoQXEQds%3D'

>>> res=urllib2.urlopen(req)

>>> res.read()

'{ "listusersresponse" : { "count":1 ,"user" : [ {"id":"7ed6d5da-93b2-4545-a502-23d20b48ef2a","username":"admin","firstname":"admin","lastname":"cloud","created":"2012-07-05T12:18:27-0700","state":"enabled","account":"admin","accounttype":1,"domainid":"8a111e58-e155-4482-93ce-84efff3c7c77","domain":"ROOT","apikey":"plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg","secretkey":"VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZshwJaw7FF3akA3KBQ","accountid":"7548ac03-af1d-4c1c-9064-2f3e2c0eda0d"}]}}'

All the clients that you will find on github implement this signature technique; you should not have to do it by hand. Now that you have explored the API through the UI and understand how to make low-level calls, pick your favourite client or use [CloudMonkey](https://pypi.python.org/pypi/cloudmonkey/). CloudMonkey is a sub-project of Apache CloudStack and gives operators/developers the ability to use any of the API methods. It has nice auto-completion, history and help features, as well as an API discovery mechanism since 4.2.

CloudMonkey

===========

CloudMonkey is the CloudStack Command Line Interface (CLI). It is written

in Python. CloudMonkey can be used both as an interactive shell and as a

command line tool which simplifies CloudStack configuration and

management.

It can be used with CloudStack 4.0-incubating and above.

Installing CloudMonkey

----------------------

CloudMonkey depends on *readline, pygments and prettytable*; when installing from source you will need to resolve those dependencies. When installing from the cheese shop, the dependencies will be installed automatically.

There are two ways to get CloudMonkey: via the official CloudStack source releases, or via a community-maintained distribution at [the cheese shop](http://pypi.python.org/pypi/cloudmonkey/). CloudMonkey now lives within its own repository, but it used to be part of the CloudStack release; developers could get it directly from the CloudStack git repository in *tools/cli/*. Now, it is better to use the CloudMonkey-specific repository.

- Via the official Apache CloudStack-CloudMonkey git repository.

$ git clone https://git-wip-us.apache.org/repos/asf/cloudstack-cloudmonkey.git

$ sudo python setup.py install

- Via a community maintained package on [Cheese

Shop](https://pypi.python.org/pypi/cloudmonkey/)

pip install cloudmonkey

Configuration

-------------

To configure CloudMonkey you can edit the `~/.cloudmonkey/config` file in

the user's home directory as shown below. The values can also be set

interactively at the cloudmonkey prompt. Logs are kept in

`~/.cloudmonkey/log`, and history is stored in `~/.cloudmonkey/history`.

Discovered apis are listed in `~/.cloudmonkey/cache`. Only the log and

history files can be custom paths and can be configured by setting

appropriate file paths in `~/.cloudmonkey/config`

$ cat ~/.cloudmonkey/config

[core]

log_file = /Users/sebastiengoasguen/.cloudmonkey/log

asyncblock = true

paramcompletion = false

history_file = /Users/sebastiengoasguen/.cloudmonkey/history

[ui]

color = true

prompt = >

display = table

[user]

secretkey = VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZsYBkoXkY9b7eq1EhwJaw7FF3akA3KBQ

apikey = plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdMkAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg

[server]

path = /client/api

host = localhost

protocol = http

port = 8080

timeout = 3600

The values can also be set at the CloudMonkey prompt. The API and secret

keys are obtained via the CloudStack UI or via a raw api call.


$ cloudmonkey

☁ Apache CloudStack cloudmonkey 4.1.0-snapshot. Type help or ? to list commands.

> set prompt myprompt>

myprompt> set host localhost

myprompt> set port 8080

myprompt> set apikey <your api key>

myprompt> set secretkey <your secret key>

You can use CloudMonkey to interact with a local cloud, and even with a

remote public cloud. You just need to set the host value properly and

obtain the keys from the cloud administrator.

API Discovery

-------------

> **Note**

>

> In CloudStack 4.0.\* releases, the list of available api calls is
> pre-cached, while starting with CloudStack 4.1 releases and above an API
> discovery service is enabled: CloudMonkey will automatically discover
> the api calls available on the management server. The sync command in
> CloudMonkey pulls the list of apis which are accessible to your user
> role. This allows CloudMonkey to adapt to changes in the management
> server, so in case the sysadmin enables a plugin such as Nicira NVP for
> that user role, the users can get those changes.

To discover the APIs available do:

> sync

324 APIs discovered and cached

Tabular Output

--------------

The number of key/value pairs returned by the api calls can be large, resulting in very long output. To enable easier viewing of the output, tabular formatting can be set up. You may enable tabular listing and even choose the set of column fields; this lets you define your own view using the filter param, which takes a comma-separated list of fields. If an argument has a space, put it under double quotes. The created table will have the same sequence of fields as the filters provided.

To enable it, use the *set* function and create filters like so:

> set display table

> list users filter=id,domain,account

count = 1

user:

+--------------------------------------+--------+---------+

| id | domain | account |

+--------------------------------------+--------+---------+


| 7ed6d5da-93b2-4545-a502-23d20b48ef2a | ROOT | admin |

+--------------------------------------+--------+---------+

Interactive Shell Usage

-----------------------

To start learning CloudMonkey, the best way is to use the interactive shell. Simply type `cloudmonkey` at the prompt and you should get the interactive shell.

At the CloudMonkey prompt, press the tab key twice; you will see all potential verbs available. Pick one, enter a space, and then press tab twice. You will see all actions available for that verb:

cloudmonkey>

EOF assign cancel create detach extract

ldap prepare reconnect restart shell update

...

cloudmonkey>create

account diskoffering loadbalancerrule

portforwardingrule snapshot tags vpc

...

Picking one action and entering a space plus the tab key, you will

obtain the list of parameters for that specific api call.

cloudmonkey>create network

account= domainid= isAsync=

networkdomain= projectid= vlan=

acltype= endip= name=

networkofferingid= startip= vpcid=

displaytext= gateway= netmask=

physicalnetworkid= subdomainaccess= zoneid=

To get additional help on that specific api call you can use the

following:

cloudmonkey>create network -h

Creates a network

Required args: displaytext name networkofferingid zoneid

Args: account acltype displaytext domainid endip gateway isAsync name

netmask networkdomain networkofferingid physicalnetworkid projectid

startip subdomainaccess vlan vpcid zoneid

cloudmonkey>create network -help

Creates a network

Required args: displaytext name networkofferingid zoneid

Args: account acltype displaytext domainid endip gateway isAsync name

netmask networkdomain networkofferingid physicalnetworkid projectid

startip subdomainaccess vlan vpcid zoneid

cloudmonkey>create network --help

Creates a network

Required args: displaytext name networkofferingid zoneid


Args: account acltype displaytext domainid endip gateway isAsync name

netmask networkdomain networkofferingid physicalnetworkid projectid

startip subdomainaccess vlan vpcid zoneid

cloudmonkey>

Note the required arguments necessary for the calls.

> **Note**

>

> To find out the required parameter values, using a debugger console on
> the CloudStack UI might be very useful. For instance, using Firebug on
> Firefox, you can navigate the UI and check the parameter values for
> each call you are making as you navigate the UI.

Starting a Virtual Machine instance with CloudMonkey

----------------------------------------------------

To start a virtual machine instance we will use the *deploy

virtualmachine* call.

cloudmonkey>deploy virtualmachine -h

Creates and automatically starts a virtual machine based on a service

offering, disk offering, and template.

Required args: serviceofferingid templateid zoneid

Args: account diskofferingid displayname domainid group hostid

hypervisor ipaddress iptonetworklist isAsync keyboard keypair name

networkids projectid securitygroupids securitygroupnames

serviceofferingid size startvm templateid userdata zoneid

The required arguments are *serviceofferingid, templateid and zoneid*

In order to specify the template that we want to use, we can list all

available templates with the following call:

cloudmonkey>list templates templatefilter=all

count = 2

template:

========

domain = ROOT

domainid = 8a111e58-e155-4482-93ce-84efff3c7c77

zoneid = e1bfdfaf-3d9b-43d4-9aea-2c9f173a1ae7

displaytext = SystemVM Template (XenServer)

ostypeid = 849d7d0a-9fbe-452a-85aa-70e0a0cbc688

passwordenabled = False

id = 6d360f79-4de9-468c-82f8-a348135d298e

size = 2101252608

isready = True

templatetype = SYSTEM

zonename = devcloud

...<snipped>

In this snippet, I used DevCloud and only showed the beginning of the output for the first template, the SystemVM template.

Similarly to get the *serviceofferingid* you would do:


cloudmonkey>list serviceofferings | grep id

id = ef2537ad-c70f-11e1-821b-0800277e749c

id = c66c2557-12a7-4b32-94f4-48837da3fa84

id = 3d8b82e5-d8e7-48d5-a554-cf853111bc50

Note that we can use the linux pipe as well as standard linux commands

within the interactive shell. Finally we would start an instance with

the following call:

cloudmonkey>deploy virtualmachine templateid=13ccff62-132b-4caf-b456-e8ef20cbff0e zoneid=e1bfdfaf-3d9b-43d4-9aea-2c9f173a1ae7 serviceofferingid=ef2537ad-c70f-11e1-821b-0800277e749c

jobprocstatus = 0

created = 2013-03-05T13:04:51-0800

cmd = com.cloud.api.commands.DeployVMCmd

userid = 7ed6d5da-93b2-4545-a502-23d20b48ef2a

jobstatus = 1

jobid = c441d894-e116-402d-aa36-fdb45adb16b7

jobresultcode = 0

jobresulttype = object

jobresult:

=========

virtualmachine:

==============

domain = ROOT

domainid = 8a111e58-e155-4482-93ce-84efff3c7c77

haenable = False

templatename = tiny Linux

...<snipped>

The instance would be stopped with:

cloudmonkey>stop virtualmachine id=7efe0377-4102-4193-bff8-c706909cc2d2

> **Note**

>

> The *ids* that you will use will differ from this example. Make sure
> you use the ones that correspond to your CloudStack cloud.

Scripting with CloudMonkey

--------------------------

All previous examples use CloudMonkey via the interactive shell; however, it can also be used as a straightforward CLI, passing the commands to the *cloudmonkey* command as shown below.

$cloudmonkey list users

As such, it can be used in shell scripts; it can receive commands via stdin, and its output can be parsed like that of any other unix command, as mentioned before.
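For instance, a minimal sketch of a script (it assumes your keys are already set in `~/.cloudmonkey/config` and uses the filter syntax shown earlier):

    #!/bin/bash
    # print the id and name of every running instance
    cloudmonkey list virtualmachines state=Running filter=id,name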

jClouds CLI


===========

jclouds is a Java wrapper for many cloud providers' APIs; it is used in a large number of cloud applications to access providers that do not offer a standard API. jclouds-cli is the command line interface to jclouds and, in CloudStack terminology, could be seen as an equivalent to CloudMonkey. However, CloudMonkey covers the entire CloudStack API and jclouds-cli does not. Management of virtual machines, blobstore (i.e S3-like) and configuration management via chef are the main features.

> **Warning**

>

> jclouds is undergoing incubation at the Apache Software Foundation;
> jclouds-cli is available on github. Changes may occur in the software
> from the time of this writing to the time of your reading it.

Installation and Configuration

------------------------------

First install jclouds-cli via github and build it with maven:

$git clone https://github.com/jclouds/jclouds-cli.git

$cd jclouds-cli

$mvn install

Locate the tarball generated by the build in *assembly/target*, extract

the tarball in the directory of your choice and add the bin directory to

your path. For instance:

export PATH=$PATH:/Users/sebastiengoasguen/Documents/jclouds-cli-1.7.0/bin

Define a few environment variables to set your endpoint and your credentials; the ones listed below are just examples. Adapt them to your own endpoint and keys.

export JCLOUDS_COMPUTE_API=cloudstack

export JCLOUDS_COMPUTE_ENDPOINT=http://localhost:8080/client/api

export JCLOUDS_COMPUTE_CREDENTIAL=_UKIzPgw7BneOyJO621Tdlslicg

export JCLOUDS_COMPUTE_IDENTITY=mnH5EbKcKeJdJrvguEIwQG_Fn-N0l

You should now be able to use jclouds-cli. Check that it is in your path and runs; you should see the following output:

sebmini:jclouds-cli-1.7.0-SNAPSHOT sebastiengoasguen$ jclouds-cli

_ _ _

(_) | | | |

_ ____| | ___ _ _ _ | | ___

| |/ ___) |/ _ \| | | |/ || |/___)

| ( (___| | |_| | |_| ( (_| |___ |

_| |\____)_|\___/ \____|\____(___/

(__/

jclouds cli (1.7.0-SNAPSHOT)

http://jclouds.org


Hit '<tab>' for a list of available commands

and '[cmd] --help' for help on a specific command.

Hit '<ctrl-d>' to shutdown jclouds cli.

jclouds> features:list

State Version Name

Repository Description

[installed ] [1.7.0-SNAPSHOT] jclouds-guice

jclouds-1.7.0-SNAPSHOT Jclouds - Google Guice

[installed ] [1.7.0-SNAPSHOT] jclouds

jclouds-1.7.0-SNAPSHOT JClouds

[installed ] [1.7.0-SNAPSHOT] jclouds-blobstore

jclouds-1.7.0-SNAPSHOT JClouds Blobstore

[installed ] [1.7.0-SNAPSHOT] jclouds-compute

jclouds-1.7.0-SNAPSHOT JClouds Compute

[installed ] [1.7.0-SNAPSHOT] jclouds-management

jclouds-1.7.0-SNAPSHOT JClouds Management

[uninstalled] [1.7.0-SNAPSHOT] jclouds-api-filesystem

jclouds-1.7.0-SNAPSHOT JClouds - API - FileSystem

[installed ] [1.7.0-SNAPSHOT] jclouds-aws-ec2

jclouds-1.7.0-SNAPSHOT Amazon Web Service - EC2

[uninstalled] [1.7.0-SNAPSHOT] jclouds-aws-route53

jclouds-1.7.0-SNAPSHOT Amazon Web Service - Route 53

[installed ] [1.7.0-SNAPSHOT] jclouds-aws-s3

jclouds-1.7.0-SNAPSHOT Amazon Web Service - S3

[uninstalled] [1.7.0-SNAPSHOT] jclouds-aws-sqs

jclouds-1.7.0-SNAPSHOT Amazon Web Service - SQS

[uninstalled] [1.7.0-SNAPSHOT] jclouds-aws-sts

jclouds-1.7.0-SNAPSHOT Amazon Web Service - STS

...<snip>

> **Note**

>

> I edited the output of jclouds-cli to save some space; there are a lot
> more providers available.

Using jclouds CLI

-----------------

The CloudStack API driver is not installed by default. Install it with:

jclouds> features:install jclouds-api-cloudstack

For now we will only test the virtual machine management functionality.

Pretty basic but that's what we want to do to get a feel for

jclouds-cli. If you have set your endpoint and keys properly, you should

be able to list the location of your cloud like so:

$ jclouds location list

[id] [scope] [description]

[parent]

cloudstack PROVIDER

https://api.exoscale.ch/compute


1128bd56-b4d9-4ac6-a7b9-c715b187ce11 ZONE CH-GV2

cloudstack

Again, this is an example; you will see something different depending on your endpoint.

You can list the service offerings with:

$ jclouds hardware list

[id] [ram] [cpu] [cores]

71004023-bb72-4a97-b1e9-bc66dfce9470 512 2198.0 1.0

b6cd1ff5-3a2f-4e9d-a4d1-8988c1191fe8 1024 2198.0 1.0

21624abb-764e-4def-81d7-9fc54b5957fb 2048 4396.0 2.0

b6e9d1e8-89fc-4db3-aaa4-9b4c5b1d0844 4096 4396.0 2.0

c6f99499-7f59-4138-9427-a09db13af2bc 8182 8792.0 4.0

350dc5ea-fe6d-42ba-b6c0-efb8b75617ad 16384 8792.0 4.0

a216b0d1-370f-4e21-a0eb-3dfc6302b564 32184 17584.0 8.0

List the images available with:

$ jclouds image list

[id] [location] [os family] [os

version] [status]

0f9f4f49-afc2-4139-b26b-b05a9f51ea74 windows null

AVAILABLE

1d16c78d-268f-47d0-be0c-b80d31e765d2 unrecognized null

AVAILABLE

3cfd96dc-acce-4423-a095-e558f740db5c unrecognized null

AVAILABLE

...<snip>

We see that the OS family is not listed properly; this is probably due to some regex used by jclouds to guess the OS type. Unfortunately, the name key is not given.

To start an instance we can check the syntax of *jclouds node create*

$ jclouds node create --help

DESCRIPTION

jclouds:node-create

Creates a node.

SYNTAX

jclouds:node-create [options] group [number]

ARGUMENTS

group

Node group.

number

Number of nodes to create.

(defaults to 1)


We need to define the name of a group and give the number of instances that we want to start, plus the hardware and image ids. In terms of hardware, we are going to use the smallest possible offering, and for the image we give a uuid from the previous list.

$ jclouds node list
[id]                                   [location]                             [hardware]                             [group]   [status]
4e733609-4c4a-4de1-9063-6fe5800ccb10   1128bd56-b4d9-4ac6-a7b9-c715b187ce11   71004023-bb72-4a97-b1e9-bc66dfce9470   foobar    RUNNING

$ jclouds node info 4e733609-4c4a-4de1-9063-6fe5800ccb10
[id]                                   [location]                             [hardware]                             [group]   [status]
4e733609-4c4a-4de1-9063-6fe5800ccb10   1128bd56-b4d9-4ac6-a7b9-c715b187ce11   71004023-bb72-4a97-b1e9-bc66dfce9470   foobar    RUNNING

Operating System: unrecognized null null
Configured User: root
Public Address: 9.9.9.9
Private Address:
Image Id: 1d16c78d-268f-47d0-be0c-b80d31e765d2

With this short intro, you are well on your way to using jclouds-cli. Check out the interactive shell, the blobstore and the chef facility to automate VM configuration. Remember that jclouds is also, and actually foremost, a Java library that you can use to write other applications.

Apache Libcloud

===============

There are many tools available to interface with the CloudStack API, we just saw jclouds. Apache Libcloud is another one, but this time Python based. In this section we provide a basic example of how to use Libcloud with CloudStack. It assumes that you have access to a CloudStack endpoint and that you have the API access key and secret key of a user.

Installation

------------

To install Libcloud refer to the libcloud

[website](http://libcloud.apache.org). If you are familiar with Pypi

simply do:

pip install apache-libcloud

You should see the following output:

pip install apache-libcloud

Downloading/unpacking apache-libcloud

Downloading apache-libcloud-0.12.4.tar.bz2 (376kB): 376kB downloaded

Running setup.py egg_info for package apache-libcloud


Installing collected packages: apache-libcloud

Running setup.py install for apache-libcloud

Successfully installed apache-libcloud

Cleaning up...

Developers will want to clone the repository, for example from the

github mirror:

git clone https://github.com/apache/libcloud.git

To install libcloud from the cloned repo, simply do the following from within the cloned repository directory:

sudo python ./setup.py install

> **Note**

>

> The CloudStack driver is located in

> */path/to/libcloud/source/libcloud/compute/drivers/cloudstack.py*.
> File bugs on the libcloud JIRA and submit your patches as an attached
> file to the JIRA entry.

Using Libcloud

--------------

With libcloud installed either via PyPi or via the source, you can now

open a Python interactive shell, create an instance of a CloudStack

driver

and call the available methods via the libcloud API.

First you need to import the libcloud modules and create a CloudStack

driver.

>>> from libcloud.compute.types import Provider

>>> from libcloud.compute.providers import get_driver

>>> Driver = get_driver(Provider.CLOUDSTACK)

Then, using your keys and endpoint, create a connection object. Note

that this is a local test and thus not secured. If you use a CloudStack

public cloud, make sure to use SSL properly (i.e `secure=True`).

>>> apikey='plgWJfZK4gyS3mlZLYq_u38zCm0bewzGUdP66mg'
>>> secretkey='VDaACYb0LV9eNjeq1EhwJaw7FF3akA3KBQ'
>>> host='http://localhost:8080'
>>> path='/client/api'
>>> conn=Driver(key=apikey,secret=secretkey,secure=False,host='localhost',port='8080',path=path)

With the connection object in hand, you can now use the libcloud base api to list such things as the templates (i.e images), the service offerings (i.e sizes) and the zones (i.e locations):


>>> conn.list_images()
[<NodeImage: id=13ccff62-132b-4caf-b456-e8ef20cbff0e, name=tiny Linux, driver=CloudStack ...>]
>>> conn.list_sizes()
[<NodeSize: id=ef2537ad-c70f-11e1-821b-0800277e749c, name=tinyOffering, ram=100 disk=0 bandwidth=0 price=0 driver=CloudStack ...>,
 <NodeSize: id=c66c2557-12a7-4b32-94f4-48837da3fa84, name=Small Instance, ram=512 disk=0 bandwidth=0 price=0 driver=CloudStack ...>,
 <NodeSize: id=3d8b82e5-d8e7-48d5-a554-cf853111bc50, name=Medium Instance, ram=1024 disk=0 bandwidth=0 price=0 driver=CloudStack ...>]

>>> images=conn.list_images()

>>> offerings=conn.list_sizes()

The `create_node` method will take an instance name, a template and an

instance type as arguments. It will return an instance of a

*CloudStackNode* that has additional extension methods, such as
`ex_stop` and `ex_start`.

>>> node=conn.create_node(name='toto',image=images[0],size=offerings[0])
>>> help(node)
>>> node.get_uuid()
'b1aa381ba1de7f2d5048e248848993d5a900984f'
>>> node.name
u'toto'
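For instance, a quick sketch of the node lifecycle using those extension methods plus the standard libcloud `destroy_node` call (outputs omitted):

    >>> node.ex_stop()           # stop the instance
    >>> node.ex_start()          # boot it again
    >>> conn.destroy_node(node)  # destroy the instance for good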

Keypairs and Security Groups

----------------------------

I recently added support for keypair management in libcloud. For

instance, given a conn object obtained from the previous interactive

session:

conn.ex_list_keypairs()

conn.ex_create_keypair(name='foobar')

conn.ex_delete_keypair(name='foobar')

Management of security groups was also added. Below we show how to list, create and delete security groups, as well as add an ingress rule to open port 22 to the world. Both keypair and security groups are key for access to a CloudStack Basic zone like

[Exoscale](http://www.exoscale.ch).

conn.ex_list_security_groups()
conn.ex_create_security_group(name='libcloud')
conn.ex_authorize_security_group_ingress(securitygroupname='libcloud',protocol='TCP',startport=22,cidrlist='0.0.0.0/0')
conn.ex_delete_security_group('libcloud')

Development of the CloudStack driver in Libcloud is very active; there is also support for advanced zones via calls to do SourceNAT and StaticNAT.

Multiple Clouds
---------------

One of the interesting use cases of Libcloud is that you can use

multiple Cloud Providers, such as AWS, Rackspace, OpenNebula, vCloud and

so on. You can then create Driver instances to each of these clouds and

create your own multi cloud application. In the example below we instantiate two libcloud CloudStack drivers, one on [Exoscale](http://exoscale.ch) and the other on [Ikoula](http://ikoula.com).

import os
import urlparse

import libcloud.security as sec
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

Driver = get_driver(Provider.CLOUDSTACK)

apikey=os.getenv('EXOSCALE_API_KEY')
secretkey=os.getenv('EXOSCALE_SECRET_KEY')
endpoint=os.getenv('EXOSCALE_ENDPOINT')
host=urlparse.urlparse(endpoint).netloc
path=urlparse.urlparse(endpoint).path
exoconn=Driver(key=apikey,secret=secretkey,secure=True,host=host,path=path)

apikey=os.getenv('IKOULA_API_KEY')
secretkey=os.getenv('IKOULA_SECRET_KEY')
endpoint=os.getenv('IKOULA_ENDPOINT')
host=urlparse.urlparse(endpoint).netloc
path=urlparse.urlparse(endpoint).path

# The iKoula SSL certificate is not verifiable by the CERTS libcloud
# checks, hence verification is disabled (see the note below).
sec.VERIFY_SSL_CERT = False
ikoulaconn=Driver(key=apikey,secret=secretkey,secure=True,host=host,path=path)

drivers = [exoconn, ikoulaconn]
for driver in drivers:
    print driver.list_locations()

> **Note**

>

> In the example above, I set my access and secret keys as well as the
> endpoints as environment variables. Also note the libcloud security

> module and the VERIFY\_SSL\_CERT. In the case of iKoula the SSL

> certificate used was not verifiable by the CERTS that libcloud checks.

> Especially if you use a self-signed SSL certificate for testing, you

> might have to disable this check as well.

From this basic setup you can imagine how you would write an application that manages instances in different Cloud Providers, providing more resiliency to your overall infrastructure.

Python Boto

==========

There are many tools available to interface with an AWS compatible API.

In this section we provide a short example that users of CloudStack can

build upon using the AWS interface to CloudStack.

Boto Examples

-------------

Boto is one of them. It is a Python package available at

https://github.com/boto/boto. In this section we provide two examples of

Python scripts that use Boto and have been tested with the CloudStack AWS

API Interface.

First is an EC2 example. Replace the Access and Secret Keys with your

own and update the endpoint.

#!/usr/bin/env python

import boto
import boto.ec2

region = boto.ec2.regioninfo.RegionInfo(name="ROOT", endpoint="localhost")
apikey = 'GwNnpUPrO6KgIdZu01z_ZhhZnKjtSdRwuYd4DvpzvFpyxGMvrzno2q05MB0ViBoFYtdqKd'
secretkey = 't4eXLEYWw7chBhDlaKf38adCMSHx_wlds6JfSx3z9fSpSOm0AbP9Moj0oGIzy2LSC8iw'

def main():
    '''Establish connection to EC2 cloud'''
    conn = boto.connect_ec2(aws_access_key_id=apikey,
                            aws_secret_access_key=secretkey,
                            is_secure=False,
                            region=region,
                            port=7080,
                            path="/awsapi",
                            api_version="2012-08-15")

    '''Get list of images that I own'''
    images = conn.get_all_images()
    print images
    myimage = images[0]

    '''Pick an instance type'''
    vm_type = 'm1.small'
    reservation = myimage.run(instance_type=vm_type, security_groups=['default'])

if __name__ == '__main__':
    main()

With boto you can also interact with other AWS services like S3. CloudStack has an S3 tech preview, but it is backed by a standard NFS server and therefore is not a truly scalable distributed object store. To provide an S3 service in your Cloud I recommend using other software like RiakCS, Ceph radosgw or the GlusterFS S3 interface; these systems handle large scale, chunking and replication. Should you want to test boto against such an S3 endpoint, a sketch follows.
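The host, port and path below are assumptions to adapt to your own S3-compatible endpoint; only the boto calls themselves (`S3Connection`, `OrdinaryCallingFormat`, `get_all_buckets`) are standard:

    #!/usr/bin/env python

    from boto.s3.connection import S3Connection, OrdinaryCallingFormat

    apikey = '<your access key>'
    secretkey = '<your secret key>'

    # host/port/path are hypothetical, point them at your S3 endpoint
    conn = S3Connection(aws_access_key_id=apikey,
                        aws_secret_access_key=secretkey,
                        is_secure=False,
                        host='localhost',
                        port=7080,
                        path='/awsapi/rest/AmazonS3',
                        calling_format=OrdinaryCallingFormat())

    print conn.get_all_buckets()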

Wrappers

========

In this chapter we introduce several CloudStack *wrappers*. These tools use the client libraries presented in the previous chapter (or their own built-in request mechanisms) and add additional functionality that involves some high-level orchestration. For

instance *knife-cloudstack* uses the power of

[Chef](http://opscode.com), the configuration management system, to

seamlessly bootstrap instances running in a CloudStack cloud. Apache

[Whirr](http://whirr.apache.org) uses

[jclouds](http://jclouds.incubator.apache.org) to bootstrap

[Hadoop](http://hadoop.apache.org) clusters in the cloud and

[SaltStack](http://saltstack.com) does configuration management in the

Cloud using Apache libcloud.

Knife CloudStack

=============

Knife is a command line utility for Chef, the configuration management

system from OpsCode.

Install, Configure and Feel

---------------------------

The Knife family of tools are drivers that automate the provisioning and

configuration of machines in the Cloud. Knife-cloudstack is a CloudStack

plugin for knife. Written in ruby it is used by the Chef community. To

install Knife-CloudStack you can simply install the gem or get it from

github:

gem install knife-cloudstack

If successful the *knife* command should now be in your path. Issue

*knife* at the prompt and see the various options and sub-commands

available.

If you want to use the version on github simply clone it:

git clone https://github.com/CloudStack-extras/knife-cloudstack.git

If you clone the git repo and do changes to the code, you will want to

build and install a new gem. As an example, in the directory where you

cloned the knife-cloudstack repo do:

$ gem build knife-cloudstack.gemspec
Successfully built RubyGem
Name: knife-cloudstack
Version: 0.0.14
File: knife-cloudstack-0.0.14.gem
$ gem install knife-cloudstack-0.0.14.gem
Successfully installed knife-cloudstack-0.0.14
1 gem installed
Installing ri documentation for knife-cloudstack-0.0.14...
Installing RDoc documentation for knife-cloudstack-0.0.14...

You will then need to define your CloudStack endpoint and your

credentials

in a *knife.rb* file like so:

knife[:cloudstack_url] = "http://yourcloudstackserver.com:8080/client/api"
knife[:cloudstack_api_key] = "Your CloudStack API Key"
knife[:cloudstack_secret_key] = "Your CloudStack Secret Key"

With the endpoint and credentials configured as well as knife-cloudstack

installed, you should be able to issue your first command. Remember that

this is simply sending a CloudStack API call to your CloudStack based

Cloud

provider. Later in the section we will see how to do more advanced

things with knife-cloudstack. For example, to list the service offerings

(i.e instance types) available on the iKoula Cloud, do:

$ knife cs service list

Name Memory CPUs CPU Speed Created

m1.extralarge 15GB 8 2000 Mhz 2013-05-27T16:00:11+0200

m1.large 8GB 4 2000 Mhz 2013-05-27T15:59:30+0200

m1.medium 4GB 2 2000 Mhz 2013-05-27T15:57:46+0200

m1.small 2GB 1 2000 Mhz 2013-05-27T15:56:49+0200

To list all the *knife-cloudstack* commands available just enter *knife

cs* at the prompt. You will see:

$ knife cs

Available cs subcommands: (for details, knife SUB-COMMAND --help)

** CS COMMANDS **

knife cs account list (options)

knife cs cluster list (options)

knife cs config list (options)

knife cs disk list (options)

knife cs domain list (options)

knife cs firewallrule list (options)

knife cs host list (options)

knife cs hosts

knife cs iso list (options)

knife cs template create NAME (options)

...

> **Note**

>


> If you only have user privileges on the Cloud you are using, as

> opposed to admin privileges, do note that some commands won't be

> available to you. For instance on the Cloud I am using where I am a

> standard user I cannot access any of the infrastructure type commands

> like:

>

> $ knife cs pod list

> Error 432: Your account does not have the right to execute this

> command or the command does not exist.

>

Similarly to CloudMonkey, you can pass a list of fields to output. To

find the potential fields enter the *--fieldlist* option at the end of

the command. You can then pick the fields that you want to output by

passing a comma separated list to the *--fields* option like so:

$ knife cs service list --fieldlist

Name Memory CPUs CPU Speed Created

m1.extralarge 15GB 8 2000 Mhz 2013-05-27T16:00:11+0200

m1.large 8GB 4 2000 Mhz 2013-05-27T15:59:30+0200

m1.medium 4GB 2 2000 Mhz 2013-05-27T15:57:46+0200

m1.small 2GB 1 2000 Mhz 2013-05-27T15:56:49+0200

Key Type Value

cpunumber Fixnum 8

cpuspeed Fixnum 2000

created String 2013-05-27T16:00:11+0200

defaultuse FalseClass false

displaytext String 8 Cores CPU with 15.3GB RAM

domain String ROOT

domainid String 1

hosttags String ex10

id String 1412009f-0e89-4cfc-a681-1cda0631094b

issystem FalseClass false

limitcpuuse TrueClass true

memory Fixnum 15360

name String m1.extralarge

networkrate Fixnum 100

offerha FalseClass false

storagetype String local

tags String ex10

$ knife cs service list --fields id,name,memory,cpunumber
id                                    name           memory  cpunumber
1412009f-0e89-4cfc-a681-1cda0631094b  m1.extralarge  15360   8
d2b2e7b9-4ffa-419e-9ef1-6d413f08deab  m1.large       7680    4
8dae8be9-5dae-4f81-89d1-b171f25ef3fd  m1.medium      3840    2
c6b89fea-1242-4f54-b15e-9d8ec8a0b7e8  m1.small       1740    1

Starting an Instance

--------------------

In order to manage instances *knife* has several commands:


- *knife cs server list* to list all instances

- *knife cs server start* to restart a paused instance

- *knife cs server stop* to suspend a running instance

- *knife cs server delete* to destroy an instance

- *knife cs server reboot* to reboot a running instance

And of course, to create an instance, there is *knife cs server create*.

Knife will automatically allocate a Public IP address and associate it

with your running instance. If you additionally pass some port forwarding

rules and firewall rules it will set those up. You need to specify an

instance type, from the list returned by *knife cs service list* as well

as a template, from the list returned by *knife cs template list*. The

*--no-bootstrap* option will tell knife to not install chef on the

deployed instance. Syntax for the port forwarding and firewall rules is explained on the [knife

cloudstack](https://github.com/CloudStack-extras/knife-cloudstack)

website. Here is an example on the [iKoula cloud](http://www.ikoula.com)

in France:

$ knife cs server create --no-bootstrap --service m1.small --template "CentOS 6.4 - Minimal - 64bits" foobar
Waiting for Server to be created.......
Allocate ip address, create forwarding rules
params: {"command"=>"associateIpAddress", "zoneId"=>"a41b82a0-78d8-4a8f-bb79-303a791bb8a7", "networkId"=>"df2288bb-26d7-4b2f-bf41-e0fae1c6d198"}.
Allocated IP Address: 178.170.XX.XX
...
Name: foobar
Public IP: 178.170.XX.XX

$ knife cs server list
Name    Public IP      Service   Template                       State    Instance  Hypervisor
foobar  178.170.XX.XX  m1.small  CentOS 6.4 - Minimal - 64bits  Running  N/A       N/A

Bootstrapping Instances with Hosted-Chef

----------------------------------------

Knife reaches its full potential when used to bootstrap Chef and use it for configuration management of the instances. To get started with Chef, the easiest way is to use [Hosted Chef](http://www.opscode.com/hosted-chef/). There is some great documentation on [how](https://learnchef.opscode.com/quickstart/chef-repo/) to do it. The basic concept is that you will download or create cookbooks locally and publish them to your own hosted Chef server, for example with the commands sketched below.
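A sketch of fetching a community cookbook and publishing it to your hosted Chef server; the *apache2* cookbook is just an example, and the exact sub-commands may differ with your knife version:

    $ knife cookbook site download apache2
    $ tar -xzf apache2-*.tar.gz -C cookbooks
    $ knife cookbook upload apache2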

Using Knife with Hosted-Chef

----------------------------

With your *hosted Chef* account created and your local *chef-repo*

setup, you can start instances on your Cloud and specify the *cookbooks*

to use to configure those instances. The bootstrapping process will fetch those cookbooks and configure the node. Below is an example that does so; it uses the [exoscale](http://www.exoscale.ch) cloud which runs on CloudStack. This cloud is set up as a Basic zone and uses ssh keypairs and security groups for access.

$ knife cs server create --service Tiny --template "Linux CentOS 6.4 64-bit" --ssh-user root --identity ~/.ssh/id_rsa --run-list "recipe[apache2]" --ssh-keypair foobar --security-group www --no-public-ip foobar

Waiting for Server to be created....

Name: foobar

Public IP: 185.19.XX.XX

Waiting for sshd.....

Name: foobar13

Public IP: 185.19.XX.XX

Environment: _default

Run List: recipe[apache2]

Bootstrapping Chef on 185.19.XX.XX

185.19.XX.XX --2013-06-10 11:47:54-- http://opscode.com/chef/install.sh
185.19.XX.XX Resolving opscode.com...
185.19.XX.XX 184.ZZ.YY.YY
185.19.XX.XX Connecting to opscode.com|184.ZZ.XX.XX|:80... connected.
185.19.XX.XX HTTP request sent, awaiting response... 301 Moved Permanently
185.19.XX.XX Location: http://www.opscode.com/chef/install.sh [following]
185.19.XX.XX --2013-06-10 11:47:55-- http://www.opscode.com/chef/install.sh
185.19.XX.XX Resolving www.opscode.com...
185.19.XX.XX 184.ZZ.YY.YY
185.19.XX.XX Reusing existing connection to opscode.com:80.
185.19.XX.XX HTTP request sent, awaiting response... 200 OK
185.19.XX.XX Length: 6509 (6.4K) [application/x-sh]
185.19.XX.XX Saving to: “STDOUT”
185.19.XX.XX 100%[======================================>] 6,509 --.-K/s in 0.1s
185.19.XX.XX 2013-06-10 11:47:55 (60.8 KB/s) - written to stdout [6509/6509]
185.19.XX.XX Downloading Chef 11.4.4 for el...
185.19.XX.XX Installing Chef 11.4.4

Chef will then configure the machine based on the cookbook passed in the *--run-list* option; here I set up a simple web server. Note the keypair

that I used and the security group. I also specify *--no-public-ip*

which disables the IP address allocation and association. This is

specific to the setup of *exoscale* which automatically uses a public IP

address for the instances.

> **Note**

>

> The latest version of knife-cloudstack allows you to manage keypairs

> and securitygroups. For instance listing, creation and deletion of

> keypairs is possible, as well as listing of securitygroups:

>

> $ knife cs securitygroup list

> Name Description Account

> default Default Security Group [email protected]

> www apache server [email protected]

> $ knife cs keypair list

> Name Fingerprint

> exoscale xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx

>

When using a CloudStack based cloud in an Advanced zone setting, *knife*

can automatically allocate and associate an IP address. To illustrate this, in the slightly different example below I use [iKoula](http://www.ikoula.com), a french Cloud Provider which uses CloudStack. I edit my *knife.rb* file to

setup a different endpoint and the different API and secret keys. I

remove the keypair, security group and public ip option and I do not

specify an identity file as I will retrieve the ssh password with the

*--cloudstack-password* option. The example is as follows:

$ knife cs server create --service m1.small --template "CentOS 6.4 - Minimal - 64bits" --ssh-user root --cloudstack-password --run-list "recipe[apache2]" foobar
Waiting for Server to be created........
Allocate ip address, create forwarding rules
params: {"command"=>"associateIpAddress", "zoneId"=>"a41b82a0-78d8-4a8f-bb79-303a791bb8a7", "networkId"=>"df2288bb-26d7-4b2f-bf41-e0fae1c6d198"}.
Allocated IP Address: 178.170.71.148
...
Name: foobar
Password: $%@#$%#$%#$
Public IP: 178.xx.yy.zz


Waiting for sshd......

Name: foobar

Public IP: 178.xx.yy.zz

Environment: _default

Run List: recipe[apache2]

Bootstrapping Chef on 178.xx.yy.zz

178.xx.yy.zz --2013-06-10 13:24:29-- http://opscode.com/chef/install.sh
178.xx.yy.zz Resolving opscode.com...

> **Warning**

>

> You will want to review the security implications of doing the

> bootstrap as root and using the default password to do so.

>

> In Advanced Zone, your cloud provider may also have decided to block

> all egress traffic to the public internet, which means that contacting

> the hosted Chef server would fail. To configure the egress rules

> properly, CloudMonkey can be used. List the networks to find the id of

> your guest network, then create an egress firewall rule. Review the

> CloudMonkey section to find the proper API calls and their arguments.

>

> > list networks filter=id,name,netmask

> count = 1

> network:

> +--------------------------------------+------+---------------+

> | id | name | netmask |

> +--------------------------------------+------+---------------+

> | df2288bb-26d7-4b2f-bf41-e0fae1c6d198 | test | 255.255.255.0 |

> +--------------------------------------+------+---------------+

>

> > create egressfirewallrule networkid=df2288bb-26d7-4b2f-bf41-e0fae1c6d198 startport=80 endport=80 protocol=TCP cidrlist=10.1.1.0/24
> id = b775f1cb-a0b3-4977-90b0-643b01198367
> jobid = 8a5b735c-6aab-45f8-b687-0a1150a66e0f
>
> > list egressfirewallrules
> count = 1
> firewallrule:
> +-----------+-----------+---------+------+-------------+--------+----------+--------------------------------------+
> | networkid | startport | endport | tags |   cidrlist  | state  | protocol |                  id                  |
> +-----------+-----------+---------+------+-------------+--------+----------+--------------------------------------+
> |    326    |     80    |    80   |  []  | 10.1.1.0/24 | Active |   tcp    | baf8d072-7814-4b75-bc8e-a47bfc306eb1 |
> +-----------+-----------+---------+------+-------------+--------+----------+--------------------------------------+

>

>


Salt

====

[Salt](http://saltstack.com) is a configuration management system

written in Python. It can be seen as an alternative to Chef and Puppet.

Its concept is similar with a master node holding states called *salt

states (SLS)* and minions that get their configuration from the master.

A nice difference with Chef and Puppet is that Salt is also a remote execution engine that can run commands on the minions by specifying a set of targets, for example:
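A minimal sketch, run on the master once minions are accepted (`cmd.run` is a standard Salt execution module):

    # salt '*' cmd.run 'uptime'

In this chapter we dive straight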

into [SaltCloud](http://saltcloud.org), an open source software to

provision *Salt* masters and minions in the Cloud. *SaltCloud* can be

looked at as an alternative to *knife-cs* but certainly with less

functionality. In this short walkthrough we intend to bootstrap a Salt

master (equivalent to a Chef server) in the cloud and then add minions

that will get their configuration from the master.

SaltCloud installation and usage
--------------------------------

To install SaltCloud one simply clones the git repository; to develop SaltCloud, just fork it on github and clone your fork, then commit patches and submit a pull request.
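A minimal sketch of a source install (the repository URL below is an assumption, adapt it to your clone):

    $ git clone https://github.com/saltstack/salt-cloud.git
    $ cd salt-cloud
    $ sudo python setup.py install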

SaltCloud depends on libcloud, therefore you will need libcloud installed as well; see the previous chapter to set up libcloud. With SaltCloud installed and in your path, you need to define a Cloud provider in *\~/.saltcloud/cloud*. For example:

providers:
  exoscale:
    apikey: <your api key>
    secretkey: <your secret key>
    host: api.exoscale.ch
    path: /compute
    securitygroup: default
    user: root
    private_key: ~/.ssh/id_rsa
    provider: cloudstack

The apikey, secretkey, host, path and provider keys are mandatory. The

securitygroup key will specify which security group to use when starting

the instances in that cloud. The user will be the username used to

connect to the instances via ssh and the private\_key is the ssh key to

use. Note that the optional parameters are specific to the Cloud this was tested on; clouds set up as advanced zones especially will need a different configuration.

> **Warning**

>

> SaltCloud uses libcloud. Support for advanced zones in libcloud is

> still experimental, therefore using SaltCloud in advanced zone will

> likely need some development of libcloud.

Once a provider is defined, we can start using saltcloud to list the

zones, the service offerings and the templates available on that cloud provider. So far nothing more than what libcloud provides. For example:

# salt-cloud --list-locations exoscale
[INFO    ] salt-cloud starting
exoscale:
    ----------
    cloudstack:
        ----------
        CH-GV2:
            ----------
            country:
                AU
            driver:
            id:
                1128bd56-b4d9-4ac6-a7b9-c715b187ce11
            name:
                CH-GV2

# salt-cloud --list-images exoscale
# salt-cloud --list-sizes exoscale

To start creating instances and configuring them with Salt, we need to

define node profiles in *\~/.saltcloud/config*. To illustrate two

different profiles we show a Salt Master and a Minion. The Master would

need a specific template (image:uuid), a service offering or instance

type (size:uuid). In a basic zone with keypair access and security

groups, one would also need to specify which keypair to use, where to

listen for ssh connections and of course you would need to define the provider (e.g exoscale in our case, defined above). Below is the node profile for a Salt Master deployed in the Cloud:

ubuntu-exoscale-master:
  provider: exoscale
  image: 1d16c78d-268f-47d0-be0c-b80d31e765d2
  size: b6cd1ff5-3a2f-4e9d-a4d1-8988c1191fe8
  ssh_interface: public
  ssh_username: root
  keypair: exoscale
  make_master: True
  master:
    user: root
    interface: 0.0.0.0

The master key shows which user to use and what interface; the make\_master key, if set to true, will bootstrap this node as a Salt Master. To create it on our cloud provider simply enter:

$ salt-cloud -p ubuntu-exoscale-master mymaster

Where *mymaster* is going to be the instance name. To create a minion,

add a minion node profile in the config file:

ubuntu-exoscale-minion:
  provider: exoscale
  image: 1d16c78d-268f-47d0-be0c-b80d31e765d2
  size: b6cd1ff5-3a2f-4e9d-a4d1-8988c1191fe8
  ssh_interface: public
  ssh_username: root
  keypair: exoscale
  minion:
    master: W.X.Y.Z

You would then start it with:

$ salt-cloud -p ubuntu-exoscale-minion myminion

The W.X.Y.Z IP address above should be the IP address of the master that was deployed previously. On the master you will need to have ports 4505 and 4506 open; in a basic zone this is best done with security groups, for example as sketched below.
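A minimal sketch with CloudMonkey, assuming the minions run in the *default* security group (adjust the group name and cidr to your setup):

    > authorize securitygroupingress securitygroupname=default protocol=TCP startport=4505 endport=4506 cidrlist=0.0.0.0/0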

Once this security group is properly set up, the minions will be able to contact the master. You will then accept the keys from the minions and be able to talk to them from your Salt master.

root@mymaster11:~# salt-key -L

Accepted Keys:

minion001

minion002

Unaccepted Keys:

minion003

Rejected Keys:

root@mymaster11:~# salt-key -A

The following keys are going to be accepted:

Unaccepted Keys:

minion003

Proceed? [n/Y] Y

Key for minion minion003 accepted.

root@mymaster11:~# salt '*' test.ping

minion002:

True

minion001:

True

root@mymaster11:~# salt '*' test.ping

minion003:

True

minion002:

True

minion001:

True

Apache Whirr

============

[Apache Whirr](http://whirr.apache.org) is a set of libraries to run

cloud services, internally it uses

[jclouds](http://jclouds.incubator.apache.org) that we introduced

earlier via the jclouds-cli interface to CloudStack, it is java based and

of interest to provision clusters of virtual machines on cloud

providers. Historically it started as a set of scripts to deploy

[Hadoop](http://hadoop.apache.org) clusters on Amazon EC2. We introduce

Whirr as a potential CloudStack tool to provision Hadoop clusters on CloudStack based clouds.

Installing Apache Whirr

-----------------------

To install Whirr you can follow the [Quick Start

Guide](http://whirr.apache.org/docs/0.8.1/quick-start-guide.html),

download a tarball or clone the git repository. In the spirit of this

document we clone the repo:

git clone git://git.apache.org/whirr.git

And build the source with maven that we now know and love...:

mvn install

The whirr binary will be available in the *bin* directory that we can

add to our path

export PATH=$PATH:/Users/sebgoa/Documents/whirr/bin

If all went well you should now be able to get the usage of *whirr*:

$ whirr --help

Unrecognized command '--help'

Usage: whirr COMMAND [ARGS]

where COMMAND may be one of:

launch-cluster Launch a new cluster running a service.

start-services Start the cluster services.

stop-services Stop the cluster services.

restart-services Restart the cluster services.

destroy-cluster Terminate and cleanup resources for a running

cluster.

destroy-instance Terminate and cleanup resources for a single

instance.

list-cluster List the nodes in a cluster.

list-providers Show a list of the supported providers

run-script Run a script on a specific instance or a group of

instances matching a role name

version Print the version number and exit.

help Show help about an action

Available roles for instances:

cassandra

elasticsearch

ganglia-metad

ganglia-monitor

hadoop-datanode

...

From the look of the usage you clearly see that *whirr* is about more

than just *hadoop* and that it can be used to configure *elasticsearch*

clusters, *cassandra* databases as well as the entire *hadoop* ecosystem with *mahout*, *pig*, *hbase*, *hama*, *mapreduce* and *yarn*.

Using Apache Whirr

------------------

To get started with Whirr you need to setup the credentials and endpoint

of your CloudStack based cloud that you will be using. Edit the

*\~/.whirr/credentials* file to include a PROVIDER, IDENTITY, CREDENTIAL

and ENDPOINT. The PROVIDER needs to be set to *cloudstack*, the IDENTITY

is your API key, the CREDENTIAL is your secret key and the ENDPPOINT is

the endpoint url. For instance:

PROVIDER=cloudstack
IDENTITY=mnH5EbKc4534592347523486724389673248AZW4kYV5gdsfgdfsgdsfg87sdfohrjktn5Q
CREDENTIAL=Hv97W58iby5PWL1ylC4oJls46456435634564537sdfgdfhrteydfg87sdf89gysdfjhlicg
ENDPOINT=https://api.exoscale.ch/compute

With the credentials and endpoint defined you can create a *properties*

file that describes the cluster you want to launch on your cloud. The

file contains information such as the cluster name, the number of

instances and their type, the distribution of hadoop you want to use,

the service offering id and the template id of the instances. It also

defines the ssh keys to be used for accessing the virtual machines. In

the case of a cloud that uses security groups, you may also need to

specify it. A tricky point is the handling of DNS name resolution. You

might have to use the *whirr.store-cluster-in-etc-hosts* key to bypass

any DNS issues. For a full description of the whirr property keys, see the [documentation](http://whirr.apache.org/docs/0.8.1/configuration-guide.html).

$ more whirr.properties

#

# Setup an Apache Hadoop Cluster

#

# Change the cluster name here

whirr.cluster-name=hadoop

whirr.store-cluster-in-etc-hosts=true

whirr.use-cloudstack-security-group=true

# Change the name of cluster admin user

whirr.cluster-user=${sys:user.name}

# Change the number of machines in the cluster here

whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,3 hadoop-datanode+hadoop-tasktracker

# Uncomment out the following two lines to run CDH

whirr.env.repo=cdh4

whirr.hadoop.install-function=install_cdh_hadoop

whirr.hadoop.configure-function=configure_cdh_hadoop

whirr.hardware-id=b6cd1ff5-3a2f-4e9d-a4d1-8988c1191fe8

whirr.private-key-file=/path/to/ssh/key/

whirr.public-key-file=/path/to/ssh/public/key/

whirr.provider=cloudstack

whirr.endpoint=https://the/endpoint/url

whirr.image-id=1d16c78d-268f-47d0-be0c-b80d31e765d2

> **Warning**

>

> The example shown above is specific to a CloudStack
> [Cloud](http://exoscale.ch) set up as a basic zone. This cloud uses

> security groups for isolation between instances. The proper rules had

> to be setup by hand. Also note the use of

> *whirr.store-cluster-in-etc-hosts*. If set to true whirr will edit the

> */etc/hosts* file of the nodes and enter the IP addresses. This is

> handy in the case where DNS resolution is problematic.

> **Note**

>

> To use the Cloudera Hadoop distribution (CDH) like in the example

> above, you will need to copy the

> *services/cdh/src/main/resources/functions* directory to the root of

> your Whirr source. In this directory you will find the bash scripts

> used to bootstrap the instances. It may be handy to edit those

> scripts.

You are now ready to launch a hadoop cluster:

$ whirr launch-cluster --config hadoop.properties
Running on provider cloudstack using identity mnH5EbKcKeJd456456345634563456345654634563456345
Bootstrapping cluster
Configuring template for bootstrap-hadoop-datanode_hadoop-tasktracker
Configuring template for bootstrap-hadoop-namenode_hadoop-jobtracker
Starting 3 node(s) with roles [hadoop-datanode, hadoop-tasktracker]
Starting 1 node(s) with roles [hadoop-namenode, hadoop-jobtracker]
>> running InitScript{INSTANCE_NAME=bootstrap-hadoop-datanode_hadoop-tasktracker} on node(b9457a87-5890-4b6f-9cf3-1ebd1581f725)
>> running InitScript{INSTANCE_NAME=bootstrap-hadoop-datanode_hadoop-tasktracker} on node(9d5c46f8-003d-4368-aabf-9402af7f8321)
>> running InitScript{INSTANCE_NAME=bootstrap-hadoop-datanode_hadoop-tasktracker} on node(6727950e-ea43-488d-8d5a-6f3ef3018b0f)
>> running InitScript{INSTANCE_NAME=bootstrap-hadoop-namenode_hadoop-jobtracker} on node(6a643851-2034-4e82-b735-2de3f125c437)
<< success executing InitScript{INSTANCE_NAME=bootstrap-hadoop-datanode_hadoop-tasktracker} on node(b9457a87-5890-4b6f-9cf3-1ebd1581f725): {output=This function does nothing. It just needs to exist so Statements.call("retry_helpers") doesn't call something which doesn't exist
Get:1 http://security.ubuntu.com precise-security Release.gpg [198 B]
Get:2 http://security.ubuntu.com precise-security Release [49.6 kB]
Hit http://ch.archive.ubuntu.com precise Release.gpg
Get:3 http://ch.archive.ubuntu.com precise-updates Release.gpg [198 B]
Get:4 http://ch.archive.ubuntu.com precise-backports Release.gpg [198 B]
Hit http://ch.archive.ubuntu.com precise Release
..../snip/.....
You can log into instances using the following ssh commands:
[hadoop-datanode+hadoop-tasktracker]: ssh -i /Users/sebastiengoasguen/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no [email protected]
[hadoop-datanode+hadoop-tasktracker]: ssh -i /Users/sebastiengoasguen/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no [email protected]
[hadoop-datanode+hadoop-tasktracker]: ssh -i /Users/sebastiengoasguen/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no [email protected]
[hadoop-namenode+hadoop-jobtracker]: ssh -i /Users/sebastiengoasguen/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no [email protected]
To destroy cluster, run 'whirr destroy-cluster' with the same options used to launch it.

After the bootstrapping process finishes, you should be able to login to your instances and use *hadoop*, or if you are running a proxy on your machine, you will be able to access your hadoop cluster locally. Testing of Whirr for CloudStack is still under [investigation](https://issues.apache.org/jira/browse/WHIRR-725) and the subject of a Google Summer of Code 2013 project. We currently identified issues with the use of security groups. Moreover this was tested on a basic zone; complete testing on an advanced zone is future work.

Running Map-Reduce jobs on Hadoop

---------------------------------

Whirr gives you the ssh command to connect to the instances of your

hadoop cluster, login to the namenode and browse the hadoop file system

that was created:

$ hadoop fs -ls /

Found 5 items

drwxrwxrwx - hdfs supergroup 0 2013-06-21 20:11 /hadoop

drwxrwxrwx - hdfs supergroup 0 2013-06-21 20:10 /hbase

drwxrwxrwx - hdfs supergroup 0 2013-06-21 20:10 /mnt

drwxrwxrwx - hdfs supergroup 0 2013-06-21 20:11 /tmp

drwxrwxrwx - hdfs supergroup 0 2013-06-21 20:11 /user

Create a directory to put your input data:

$ hadoop fs -mkdir input

$ hadoop fs -ls /user/sebastiengoasguen
Found 1 items
drwxr-xr-x   - sebastiengoasguen supergroup          0 2013-06-21 20:15 /user/sebastiengoasguen/input

Create a test input file and put in the hadoop file system:

$ cat foobar

this is a test to count the words

$ hadoop fs -put ./foobar input

$ hadoop fs -ls /user/sebastiengoasguen/input

Found 1 items
-rw-r--r--   3 sebastiengoasguen supergroup         34 2013-06-21 20:17 /user/sebastiengoasguen/input/foobar

Define the map-reduce environment. Note that the default Cloudera HADOOP

distribution installation uses MRv1. To use Yarn one would have to edit

the hadoop.properties file.

$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce

Start the map-reduce job:

$ hadoop jar $HADOOP_MAPRED_HOME/hadoop-examples.jar wordcount input output
13/06/21 20:19:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/06/21 20:20:00 INFO input.FileInputFormat: Total input paths to process : 1
13/06/21 20:20:00 INFO mapred.JobClient: Running job: job_201306212011_0001
13/06/21 20:20:01 INFO mapred.JobClient:  map 0% reduce 0%
13/06/21 20:20:11 INFO mapred.JobClient:  map 100% reduce 0%
13/06/21 20:20:17 INFO mapred.JobClient:  map 100% reduce 33%
13/06/21 20:20:18 INFO mapred.JobClient:  map 100% reduce 100%
13/06/21 20:20:21 INFO mapred.JobClient: Job complete: job_201306212011_0001
13/06/21 20:20:22 INFO mapred.JobClient: Counters: 32
13/06/21 20:20:22 INFO mapred.JobClient:   File System Counters
13/06/21 20:20:22 INFO mapred.JobClient:     FILE: Number of bytes read=133
13/06/21 20:20:22 INFO mapred.JobClient:     FILE: Number of bytes written=766347
...

And you can finally check the output:


$ hadoop fs -cat output/part-* | head

this 1

to 1

the 1

a 1

count 1

is 1

test 1

words 1

Conclusions

===========

The CloudStack API is very rich and easy to use. You can write your own

client by following the section on how to sign requests, or you can use

an existing client in the language of your choice. Well known libraries

developed by the community work well with CloudStack, such as Apache

libcloud and Apache jclouds. Configuration management systems also have

plugins to work transparently with CloudStack, in this little book we

presented SaltStack and Knife-cs. Finally, going a bit beyond simple

clients we presented Apache Whirr that allows you to create Hadoop

clusters on-demand (e.g elasticsearch, cassandra also work). Take your

pick and write your applications on top of CloudStack using one of those

tools. Based on these tools you will be able to deploy infrastructure

easily, quickly and in a reproducible manner. Lately CloudStack has seen

the number of tools grow, just today, I learned about a Fluentd plugin

and last week a Cloudfoundry BOSH interface was released. I also

committed a straightforward dynamic inventory script for Ansible and a

tweet just flew by about a vagrant-cloudstack plugin. The list goes on,

pick what suits you and answers your need, then have fun.


About This Book

===============

License

-------

The Little CloudStack Book is licensed under the Attribution

NonCommercial 3.0 Unported license. **You should not have paid for this book.**

You are basically free to copy, distribute, modify or display the book.

However, I ask that you always attribute the book to me, Sebastien

Goasguen and do not use it for commercial purposes.

You can see the full text of the license at:

<http://creativecommons.org/licenses/by-nc/3.0/legalcode>

"Apache", "CloudStack", "Apache CloudStack", the Apache CloudStack logo,

the Apache CloudStack CloudMonkey logo and the Apache feather logos are

registered trademarks or trademarks of The Apache Software Foundation.

About The Author

----------------

Sebastien Goasguen is an Apache CloudStack committer and member of the

CloudStack Project Management Committee (PMC). His day job is to be a

Senior Open Source Solutions Architect for the Open Source Business

Office at Citrix. He will never call himself an expert or a developer but

is a decent Python programmer. He is currently active in Apache Libcloud

and SaltStack salt-cloud projects to bring better support for CloudStack.

He blogs regularly about cloud technologies and spends lots of time

testing and writing about his experiences. Prior to working actively on

CloudStack he had a life as an academic, he authored over seventy

international publications on grid computing, high performance computing,

electromagnetics, nanoelectronics and of course cloud computing. He also

taught courses on distributed computing, network programming, ethical

hacking and cloud.

His blog can be found at http://sebgoa.blogspot.com and he tweets via

@sebgoa. You can find him on github at https://github.com/runseb

With Thanks To

--------------

A special thanks to [Geoff

Higginbottom](https://github.com/geoffhigginbottom) for proof-reading the

book, I am proud to have accepted his first Github pull request.

Latest Version

--------------

The latest source of this book is available at:

https://github.com/runseb/cloudstack-books

Introduction

------------


Clients and high level Wrappers are critical to the ease of use of any

API, even more so Cloud APIs. In this book we present the basics of the

CloudStack API and introduce some low level clients before diving into

more advanced wrappers. The first chapter is dedicated to clients and the

second chapter to wrappers or what I considered to be high level tools

built on top of a CloudStack client.

In the first chapter, we start by illustrating how to sign requests with

the native API -for the sake of completeness- and

because it is a very nice exercise for beginners. We then introduce

CloudMonkey the CloudStack CLI and shell which boasts a 100% coverage of

the API. Then jclouds is discussed. While jclouds is a java library, it

can also be used as a cli or interactive shell, we present jclouds-cli to

contrast it to

CloudMonkey and introduce jclouds. Apache libcloud is a Python module

that provides a common API on top of many Cloud providers API, once

installed, a developer can use libcloud to talk to multiple cloud

providers and cloud APIs, it serves a similar role as jclouds but in

Python. Finally, we present Boto, the well-known Python Amazon Web

Service interface, and show how it can be used with a CloudStack cloud

running the AWS interface.

In the second chapter we introduce several high level wrappers for

configuration management and automated provisioning.

The presentation of these wrappers aims to answer the question "I have a cloud, now what?". Starting and stopping virtual machines is the core

functionality of a cloud,

but it empowers users to do much more. Automation is the key of today's

IT infrastructure. The wrappers presented here show you how you can

automate configuration management and automate provisioning of

infrastructures that lie within your cloud. We introduce Salt-cloud for

Saltstack, a Python alternative to the well known Chef and Puppet

systems. We then introduce the knife CloudStack plugin for Chef and show

you how easy it is to deploy machines in a cloud and configure them. We

finish with another Apache project based on jclouds: Whirr. Apache Whirr

simplifies the on-demand provisioning of clusters of virtual machine

instances, hence it allows you to easily provision big data

infrastructure on-demand, whether you need a *HADOOP*, *Elasticsearch* or

even a *Cassandra* cluster.

Getting Started - The CloudStack API

====================================

All functionalities of the CloudStack data center orchestrator are

exposed

via an API server. Github currently has over twenty clients for this

API, in various languages. In this section we introduce this API and the

signing mechanism. The follow-on sections will introduce clients that

already contain a signing method. The signing process is only

highlighted for completeness.

Basics of the API

-----------------

The CloudStack API is a query based API using http which returns results

in XML or JSON. It is used to implement the default web UI. This API is


not a standard like [OGF

OCCI](http://www.ogf.org/gf/group_info/view.php?group=occi-wg) or [DMTF

CIMI](http://dmtf.org/standards/cloud) but is easy to learn. A mapping

exists between the AWS API and the CloudStack API as will be seen in the

next section. Recently a Google Compute Engine interface was also

developed that maps the GCE REST API to the CloudStack API described

here. The API [docs](http://cloudstack.apache.org/docs/api/) are a good

start to learn the extent of the API. Multiple clients exist on

[github](https://github.com/search?q=cloudstack+client&ref=cmdform) to

use this API, you should be able to find one in your favourite language.

The reference documentation for the API and changes that might occur from version to version is available [on-line](http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.1/html/Developers_Guide/index.html). This short

section is aimed at providing a quick summary to give you a base

understanding of how to use this API. As a quick start, a good way to

explore the API is to navigate the dashboard with a firebug console (or

similar developer console) to study the queries.

In a succinct statement, the CloudStack query API can be used via http

GET requests made against your cloud endpoint like:

http://localhost:8080/client/api.

The API name is passed using the `command` key and the various parameters

for this API call are passed as key value pairs. The request is signed

using the secret key of the user making the call. Some calls are

synchronous while some are asynchronous; this is documented in the API [docs](http://cloudstack.apache.org/docs/api/). Asynchronous calls return a `jobid`; the status and result of the job can then be queried with the `queryAsyncJobResult` call.
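As a sketch, such a polling request has the following shape (unsigned here; a real call must also carry the `apikey` and `signature` parameters, built exactly as shown in the example below):

    http://localhost:8080/client/api?command=queryAsyncJobResult&jobid=<the jobid>&response=json

Let's get started and give an example of calling the `listUsers` API in Python.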

First you will need to generate keys to make requests. In the dashboard, go under `Accounts`, select the appropriate account, then click on `Show Users`, select the intended user and generate keys using the `Generate Keys` icon. You will see an `API Key` and `Secret Key` field being generated. The keys will be of the form:

API Key   : XzAz0uC0t888gOzPs3HchY72qwDc7pUPIO8LxC-VkIHo4C3fvbEBY__EbGdbxi8oy1A
Secret Key: zmBOXAXPlfb-LIygOxUVblAbz7E47eukDS_-AcM7rK7SMyo11Y6XW22gyuXzOdiybQ

Open a Python shell and import the basic modules necessary to make the

request. Do note that this request could be made many different ways,

this is just a low level example. The `urllib*` modules are used to make

the http request and do url encoding. The `hashlib` module gives us the

sha1 hash function. It is used to generate the `hmac` (Keyed Hashing for

Message Authentication) using the secretkey. The result is encoded using

the `base64` module.

$python

Python 2.7.3 (default, Nov 17 2012, 19:54:34)


[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))]

on darwin

Type "help", "copyright", "credits" or "license" for more

information.

>>> import urllib2

>>> import urllib

>>> import hashlib

>>> import hmac

>>> import base64

Define the endpoint of the Cloud, the command that you want to execute,

the type of the response (i.e XML or JSON) and the keys of the user. Note

that we do not put the secretkey in our request dictionary because it is

only used to compute the hmac.

>>> baseurl='http://localhost:8080/client/api?'

>>> request={}

>>> request['command']='listUsers'

>>> request['response']='json'

>>> request['apikey']='plgWJfZK4gyS3mOMTVmjUVgu38zCm0bewzGUdP66mg'

>>> secretkey='VDaACYb0LV9eNjTe7EhwJaw7FF3akA3KBQ'

Build the base request string, the combination of all the key/pairs of

the request, url encoded and joined with ampersand.

>>> request_str='&'.join(['='.join([k,urllib.quote_plus(request[k])]) for k in request.keys()])
>>> request_str
'apikey=plgWJfZK4gyS3mOMTVmjUVg38zCm0bewzGUdP66mg&command=listUsers&response=json'

Compute the signature with hmac, then base64 encode it and url encode the result. The string used for the signature is similar to the base request string shown above, but the keys/values are lowercased and joined in sorted order:

>>> sig_str='&'.join(['='.join([k.lower(),urllib.quote_plus(request[k].lower().replace('+','%20'))]) for k in sorted(request.iterkeys())])
>>> sig_str
'apikey=plgwjfzk4gys3momtvmjuvg38zcm0bewzgudp66mg&command=listusers&response=json'
>>> sig=hmac.new(secretkey,sig_str,hashlib.sha1).digest()
>>> sig
'M:]\x0e\xaf\xfb\x8f\xf2y\xf1p\x91\x1e\x89\x8a\xa1\x05\xc4A\xdb'
>>> sig=base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest())
>>> sig
'TTpdDq/7j/J58XCRHomKoQXEQds=\n'
>>> sig=base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest()).strip()
>>> sig
'TTpdDq/7j/J58XCRHomKoQXEQds='
>>> sig=urllib.quote_plus(base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest()).strip())

Finally, build the entire string by joining the baseurl, the request str

and the signature. Then do an http GET:

>>> req=baseurl+request_str+'&signature='+sig
>>> req
'http://localhost:8080/client/api?apikey=plgWJfZK4g38zCm0bewzGUdP66mg&command=listUsers&response=json&signature=TTpdDq%2F7j%2FJ58XCRHomKoQXEQds%3D'

>>> res=urllib2.urlopen(req)
>>> res.read()
'{ "listusersresponse" : { "count":1 ,"user" : [ {"id":"7ed6d5da-93b2-4545-a502-23d20b48ef2a","username":"admin","firstname":"admin","lastname":"cloud","created":"2012-07-05T12:18:27-0700","state":"enabled","account":"admin","accounttype":1,"domainid":"8a111e58-e155-4482-93ce-84efff3c7c77","domain":"ROOT","apikey":"plgWJfZK4gyS3mOMTVmjUVg38zCm0bewzGUdP66mg","secretkey":"VDaACYb0LV9ehwJaw7FF3akA3KBQ","accountid":"7548ac03-af1d-4c1c-9064-2f3e2c0eda0d"}]}}'

All the clients that you will find on github will implement this

signature technique, you should not have to do it by hand. Now that you

have explored the API through the UI and that you understand how to make

low level calls, pick your favourite client or use

[CloudMonkey](https://pypi.python.org/pypi/cloudmonkey/). CloudMonkey is

a sub-project of Apache CloudStack and gives operators/developers the

ability to use any of the API methods.

Chapter 1 - Clients

===================

CloudMonkey

-----------

CloudMonkey is the CloudStack Command Line Interface (CLI). It is written

in Python. CloudMonkey can be used both as an interactive shell and as a

command line tool which simplifies CloudStack configuration and

management.

It can be used with CloudStack 4.0-incubating and above.

Installing CloudMonkey

----------------------

CloudMonkey depends on *readline, pygments and prettytable*; when installing from source you will need to resolve those dependencies yourself. When installing from the cheese shop, the dependencies will be installed automatically.

There are two ways to get CloudMonkey. Via the official CloudStack source

releases or via a community maintained distribution at [the cheese

shop](http://pypi.python.org/pypi/cloudmonkey/). CloudMonkey now lives

within its own repository but it used to be part of the CloudStack

release. Developers could get

it directly from the CloudStack git repository in *tools/cli/*. Now, it

is better to use the CloudMonkey specific repository.

- Via the official Apache CloudStack-CloudMonkey git repository.

$ git clone https://git-wip-us.apache.org/repos/asf/cloudstack-cloudmonkey.git
$ sudo python setup.py install

- Via a community maintained package on [Cheese

Shop](https://pypi.python.org/pypi/cloudmonkey/)

pip install cloudmonkey

Configuration

-------------

To configure CloudMonkey you can edit the `~/.cloudmonkey/config` file in

the user's home directory as shown below. The values can also be set

interactively at the cloudmonkey prompt. Logs are kept in

`~/.cloudmonkey/log`, and history is stored in `~/.cloudmonkey/history`.

Discovered apis are listed in `~/.cloudmonkey/cache`. Only the log and

history files can be custom paths and can be configured by setting

appropriate file paths in `~/.cloudmonkey/config`

$ cat ~/.cloudmonkey/config

[core]

log_file = /Users/sebastiengoasguen/.cloudmonkey/log

asyncblock = true

paramcompletion = false

history_file = /Users/sebastiengoasguen/.cloudmonkey/history

[ui]

color = true

prompt = >

display = table

[user]

secretkey =VDaACYb0LV9EhwJaw7FF3akA3KBQ

apikey = plgWJfZK438zCm0bewzGUdP66mg

[server]

path = /client/api

host = localhost

protocol = http

port = 8080


timeout = 3600

The values can also be set at the CloudMonkey prompt. The API and secret

keys are obtained via the CloudStack UI or via a raw api call.

$ cloudmonkey

☁ Apache CloudStack cloudmonkey 4.1.0-snapshot. Type help or ? to list commands.

> set prompt myprompt>

myprompt> set host localhost

myprompt> set port 8080

myprompt> set apikey <your api key>

myprompt> set secretkey <your secret key>

You can use CloudMonkey to interact with a local cloud, and even with a

remote public cloud. You just need to set the host value properly and

obtain the keys from the cloud administrator.

API Discovery

-------------

> **Note**

>

> In CloudStack 4.0.\* releases, the list of api calls available will be

> pre-cached, while starting with CloudStack 4.1 releases and above an

API

> discovery service is enabled. CloudMonkey will discover automatically

> the api calls available on the management server. The sync command in

> CloudMonkey pulls a list of apis which are accessible to your user

> role.

To discover the APIs available do:

> sync

324 APIs discovered and cached

Tabular Output

--------------

The number of key/value pairs returned by the api calls can be large, resulting in a very long output. To enable easier viewing of the output, a tabular formatting can be set up. You may enable tabular listing and even choose a set of column fields; this allows you to create your own view using the filter param, which takes a comma separated list of fields. If an argument has a space, put it under double quotes. The resulting table will have the same sequence of fields as the filters provided. To enable it, use the *set* function and create filters like so:

> set display table

> list users filter=id,domain,account

count = 1

user:

+---------+--------+---------+
|   id    | domain | account |
+---------+--------+---------+
| 7ed6d5  | ROOT   | admin   |
+---------+--------+---------+

Interactive Shell Usage

-----------------------

To start learning CloudMonkey, the best is to use the interactive shell.

Simply type *cloudmonkey* at the prompt and you should get the interactive shell.

At the CloudMonkey prompt press the tab key twice, you will see all

potential verbs available. Pick one, enter a space and then press tab

twice. You will see all actions available for that verb

cloudmonkey>

EOF assign cancel create detach extract

ldap prepare reconnect restart shell update

...

cloudmonkey>create

account diskoffering loadbalancerrule

portforwardingrule snapshot tags vpc

...

Picking one action and entering a space plus the tab key, you will

obtain the list of parameters for that specific api call.

cloudmonkey>create network

account= domainid= isAsync=

networkdomain= projectid= vlan=

acltype= endip= name=

networkofferingid= startip= vpcid=

displaytext= gateway= netmask=

physicalnetworkid= subdomainaccess= zoneid=

To get additional help on that specific api call you can use the

following (or `-help` and `--help`):

cloudmonkey>create network -h

Creates a network

Required args: displaytext name networkofferingid zoneid

Args: account acltype displaytext domainid endip gateway isAsync name

netmask networkdomain networkofferingid physicalnetworkid projectid

startip subdomainaccess vlan vpcid zoneid

Note the required arguments necessary for the calls.

> **Note**

>

> To find out the required parameter values, using a debugging console
> on the CloudStack UI can be very useful. For instance, using Firebug
> on Firefox, you can navigate the UI and check the parameter values for
> each call you make as you navigate the UI.

Starting a Virtual Machine instance with CloudMonkey


----------------------------------------------------

To start a virtual machine instance we will use the *deploy

virtualmachine* call.

cloudmonkey>deploy virtualmachine -h

Creates and automatically starts a virtual machine based on a service

offering, disk offering, and template.

Required args: serviceofferingid templateid zoneid

Args: account diskofferingid displayname domainid group hostid

hypervisor ipaddress iptonetworklist isAsync keyboard keypair name

networkids projectid securitygroupids securitygroupnames

serviceofferingid size startvm templateid userdata zoneid

The required arguments are *serviceofferingid, templateid and zoneid*.

In order to specify the template that we want to use, we can list all

available templates with the following call:

cloudmonkey>list templates templatefilter=all

count = 2

template:

========

domain = ROOT

domainid = 8a111e58-e155-4482-93ce-84efff3c7c77

zoneid = e1bfdfaf-3d9b-43d4-9aea-2c9f173a1ae7

displaytext = SystemVM Template (XenServer)

ostypeid = 849d7d0a-9fbe-452a-85aa-70e0a0cbc688

passwordenabled = False

id = 6d360f79-4de9-468c-82f8-a348135d298e

size = 2101252608

isready = True

templatetype = SYSTEM

zonename = devcloud

...<snipped>

In this snippet, I used DevCloud and only showed the beginning output of

the first template, the SystemVM template.

Similarly to get the *serviceofferingid* you would do:

cloudmonkey>list serviceofferings | grep id

id = ef2537ad-c70f-11e1-821b-0800277e749c

id = c66c2557-12a7-4b32-94f4-48837da3fa84

id = 3d8b82e5-d8e7-48d5-a554-cf853111bc50

Finally we would start an instance with the following call:

cloudmonkey>deploy virtualmachine templateid=13ccff62-132b-4caf-b456-e8ef20cbff0e zoneid=e1bfdfaf-3d9b-43d4-9aea-2c9f173a1ae7 serviceofferingid=ef2537ad-c70f-11e1-821b-0800277e749c

jobprocstatus = 0

created = 2013-03-05T13:04:51-0800

cmd = com.cloud.api.commands.DeployVMCmd

userid = 7ed6d5da-93b2-4545-a502-23d20b48ef2a


jobstatus = 1

jobid = c441d894-e116-402d-aa36-fdb45adb16b7

jobresultcode = 0

jobresulttype = object

jobresult:

=========

virtualmachine:

==============

domain = ROOT

domainid = 8a111e58-e155-4482-93ce-84efff3c7c77

haenable = False

templatename = tiny Linux

...<snipped>

The instance would be stopped with:

cloudmonkey>stop virtualmachine id=7efe0377-4102-4193-bff8-c706909cc2d2

> **Note**

>

> The *ids* that you will use will differ from this example. Make sure

> you use the ones that correspond to your CloudStack cloud.

Scripting with CloudMonkey

--------------------------

All previous examples use CloudMonkey via the interactive shell;
however, it can be used as a straightforward CLI, passing the commands
to the *cloudmonkey* command like shown below:

$cloudmonkey list users

As such it can be used in shell scripts: it can receive commands via
stdin, and its output can be parsed like that of any other unix command.
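For instance, here is a minimal sketch of a script that destroys every
virtual machine whose name matches a keyword (assuming the key/value
display used in the `grep id` example shown later, rather than the table
display):

#!/usr/bin/env bash
# Sketch: destroy every instance whose name matches the keyword "test".
# Parses cloudmonkey's key/value output (lines like "id = <uuid>").
for id in $(cloudmonkey list virtualmachines keyword=test | grep '^id =' | awk '{print $3}'); do
    cloudmonkey destroy virtualmachine id=$id
done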

jClouds CLI

===========

jClouds is a Java wrapper for many cloud provider APIs; it is used in a
large number of cloud applications to access providers that do not offer
a standard API. jclouds has recently graduated to a top-level project at
the Apache Software Foundation (ASF). `jclouds-cli` is the command line
interface to jclouds and, in CloudStack terminology, could be seen as an
equivalent to CloudMonkey. However, CloudMonkey covers the entire
CloudStack API while jclouds-cli does not. Management of virtual
machines, blobstore (i.e S3-like storage) and configuration management
via Chef are its main features.

Installation and Configuration

------------------------------

First install jclouds-cli via github and build it with maven:


$git clone https://github.com/jclouds/jclouds-cli.git

$cd jclouds-cli

$mvn install

Locate the tarball generated by the build in *assembly/target*, extract
the tarball in the directory of your choice and add the bin directory to
your path. For instance:

export PATH=$PATH:/Users/sebastiengoasguen/Documents/jclouds-cli-1.7.0/bin

Define a few environment variables to set your endpoint and your
credentials; the ones listed below are just examples. Adapt them to your
own endpoint and keys.

export JCLOUDS_COMPUTE_API=cloudstack

export JCLOUDS_COMPUTE_ENDPOINT=http://localhost:8080/client/api

export JCLOUDS_COMPUTE_CREDENTIAL=_UKIzPgw7BneOyJO621Tdlslicg

export JCLOUDS_COMPUTE_IDENTITY=mnH5EbKcKeJdJrvguEIwQG_Fn-N0l

You should now be able to use jclouds-cli. Check that it is in your path
and runs; you should see the following output:

sebmini:jclouds-cli-1.7.0-SNAPSHOT sebastiengoasguen$ jclouds-cli

_ _ _

(_) | | | |

_ ____| | ___ _ _ _ | | ___

| |/ ___) |/ _ \| | | |/ || |/___)

| ( (___| | |_| | |_| ( (_| |___ |

_| |\____)_|\___/ \____|\____(___/

(__/

jclouds cli (1.7.0-SNAPSHOT)

http://jclouds.org

Hit '<tab>' for a list of available commands

and '[cmd] --help' for help on a specific command.

Hit '<ctrl-d>' to shutdown jclouds cli.

jclouds> features:list

State Version Name

Repository Description

[installed ] [1.7.0-SNAPSHOT] jclouds-guice

jclouds-1.7.0-SNAPSHOT Jclouds - Google Guice

[installed ] [1.7.0-SNAPSHOT] jclouds

jclouds-1.7.0-SNAPSHOT JClouds

[installed ] [1.7.0-SNAPSHOT] jclouds-blobstore

jclouds-1.7.0-SNAPSHOT JClouds Blobstore

[installed ] [1.7.0-SNAPSHOT] jclouds-compute

jclouds-1.7.0-SNAPSHOT JClouds Compute

[installed ] [1.7.0-SNAPSHOT] jclouds-aws-ec2

jclouds-1.7.0-SNAPSHOT Amazon Web Service - EC2

[uninstalled] [1.7.0-SNAPSHOT] jclouds-aws-route53

jclouds-1.7.0-SNAPSHOT Amazon Web Service - Route 53


[installed ] [1.7.0-SNAPSHOT] jclouds-aws-s3

jclouds-1.7.0-SNAPSHOT Amazon Web Service - S3

...<snip>

Using jclouds CLI

-----------------

The CloudStack API driver is not installed by default. Install it with:

jclouds> features:install jclouds-api-cloudstack

For now we will only test the virtual machine management functionality.
Pretty basic, but that's what we want to do to get a feel for
jclouds-cli. If you have set your endpoint and keys properly, you should
be able to list the locations of your cloud like so:

$ jclouds location list

[id] [scope] [description]

[parent]

cloudstack PROVIDER

https://api.exoscale.ch/compute

1128bd56-b4d9-4ac6-a7b9-c715b187ce11 ZONE CH-GV2

cloudstack

Again, this is an example; you will see something different depending on
your endpoint.

You can list the service offerings with:

$ jclouds hardware list

[id] [ram] [cpu] [cores]

71004023-bb72-4a97-b1e9-bc66dfce9470 512 2198.0 1.0

b6cd1ff5-3a2f-4e9d-a4d1-8988c1191fe8 1024 2198.0 1.0

21624abb-764e-4def-81d7-9fc54b5957fb 2048 4396.0 2.0

b6e9d1e8-89fc-4db3-aaa4-9b4c5b1d0844 4096 4396.0 2.0

c6f99499-7f59-4138-9427-a09db13af2bc 8182 8792.0 4.0

350dc5ea-fe6d-42ba-b6c0-efb8b75617ad 16384 8792.0 4.0

a216b0d1-370f-4e21-a0eb-3dfc6302b564 32184 17584.0 8.0

List the images available with:

$ jclouds image list

[id] [location] [os family] [os

version] [status]

0f9f4f49-afc2-4139-b26b-b05a9f51ea74 windows null

AVAILABLE

1d16c78d-268f-47d0-be0c-b80d31e765d2 unrecognized null

AVAILABLE

3cfd96dc-acce-4423-a095-e558f740db5c unrecognized null

AVAILABLE

...<snip>

We see that the os family is not listed properly; this is probably due
to some regex used by jclouds to guess the OS type. Unfortunately the
name key is not given.


To start an instance we can check the syntax of *jclouds node create*

$ jclouds node create --help

DESCRIPTION

jclouds:node-create

Creates a node.

SYNTAX

jclouds:node-create [options] group [number]

ARGUMENTS

group

Node group.

number

Number of nodes to create.

(defaults to 1)

We need to define the name of a group and give the number of instances
that we want to start, plus the hardware and image ids. In terms of
hardware, we are going to use the smallest possible offering, and for
the image we give a uuid from the previous list.

$jclouds node create --smallest --ImageId 1d16c78d-268f-47d0-be0c-b80d31e765d2 foobar 1

$ jclouds node list

[id] [location] [hardware] [group] [status]

4e733609-4c4a-4de1-9063-6fe5800ccb10 1128bd56-b4d9-4ac6-a7b9-c715b187ce11 71004023-bb72-4a97-b1e9-bc66dfce9470 foobar RUNNING

$ jclouds node info 4e733609-4c4a-4de1-9063-6fe5800ccb10

[id] [location] [hardware] [group] [status]

4e733609-4c4a-4de1-9063-6fe5800ccb10 1128bd56-b4d9-4ac6-a7b9-c715b187ce11 71004023-bb72-4a97-b1e9-bc66dfce9470 foobar RUNNING

Operating System: unrecognized null null

Configured User: root

Public Address: 9.9.9.9

Private Address:

Image Id: 1d16c78d-268f-47d0-be0c-b80d31e765d2

With this short intro, you are well on your way to using jclouds-cli.
Check out the interactive shell, the blobstore and the Chef facility to
automate VM configuration. Remember that jclouds is also, and actually
foremost, a Java library that you can use to write other applications.

Apache Libcloud

===============

There are many tools available to interface with the CloudStack API; we
just saw jClouds. Apache Libcloud is another one, but this time Python
based. In this section we provide a basic example of how to use Libcloud
with CloudStack. It assumes that you have access to a CloudStack
endpoint and that you have the API access key and secret key of a user.

Installation

------------

To install Libcloud refer to the libcloud

[website](http://libcloud.apache.org). If you are familiar with Pypi

simply do:

pip install apache-libcloud

You should see the following output:

pip install apache-libcloud

Downloading/unpacking apache-libcloud

Downloading apache-libcloud-0.12.4.tar.bz2 (376kB): 376kB downloaded

Running setup.py egg_info for package apache-libcloud

Installing collected packages: apache-libcloud

Running setup.py install for apache-libcloud

Successfully installed apache-libcloud

Cleaning up...

Developers will want to clone the repository, for example from the

github mirror:

git clone https://github.com/apache/libcloud.git

To install libcloud from the cloned repo, simply do the following from
within the cloned repository directory:

sudo python ./setup.py install

> **Note**

>

> The CloudStack driver is located in
> */path/to/libcloud/source/libcloud/compute/drivers/cloudstack.py*.
> File bugs on the libcloud JIRA and submit your patches as an attached
> file to the JIRA entry.

Using Libcloud

--------------

With libcloud installed either via PyPi or via the source, you can now

open a Python interactive shell, create an instance of a CloudStack

driver

and call the available methods via the libcloud API.

First you need to import the libcloud modules and create a CloudStack

driver.


>>> from libcloud.compute.types import Provider

>>> from libcloud.compute.providers import get_driver

>>> Driver = get_driver(Provider.CLOUDSTACK)

Then, using your keys and endpoint, create a connection object. Note

that this is a local test and thus not secured. If you use a CloudStack

public cloud, make sure to use SSL properly (i.e `secure=True`).

>>> apikey='plgWJfZK4gyS3mlZLYq_u38zCm0bewzGUdP66mg'

>>> secretkey='VDaACYb0LV9eNjeq1EhwJaw7FF3akA3KBQ'

>>> host='http://localhost:8080'

>>> path='/client/api'

>>> conn=Driver(key=apikey,secret=secretkey,secure=False,host='localhost',port='8080',path=path)

With the connection object in hand, you can now use the libcloud base
api to list such things as the templates (i.e images), the service
offerings (i.e sizes) and the zones (i.e locations):

>>> conn.list_images()

[<NodeImage: id=13ccff62-132b-4caf-b456-e8ef20cbff0e, name=tiny

Linux, driver=CloudStack ...>]

>>> conn.list_sizes()

[<NodeSize: id=ef2537ad-c70f-11e1-821b-0800277e749c,

name=tinyOffering, ram=100 disk=0 bandwidth=0 price=0 driver=CloudStack

...>,

<NodeSize: id=c66c2557-12a7-4b32-94f4-48837da3fa84, name=Small

Instance, ram=512 disk=0 bandwidth=0 price=0 driver=CloudStack ...>,

<NodeSize: id=3d8b82e5-d8e7-48d5-a554-cf853111bc50, name=Medium

Instance, ram=1024 disk=0 bandwidth=0 price=0 driver=CloudStack ...>]

>>> images=conn.list_images()

>>> offerings=conn.list_sizes()

The `create_node` method will take an instance name, a template and an
instance type as arguments. It will return an instance of a
*CloudStackNode* that has additional extension methods, such as
`ex_stop` and `ex_start`.

>>> node=conn.create_node(name='toto',image=images[0],size=offerings[0])

>>> help(node)

>>> node.get_uuid()

'b1aa381ba1de7f2d5048e248848993d5a900984f'

>>> node.name

u'toto'
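Given the extension methods mentioned above, stopping and restarting
that node would look like this (a sketch based on the extension method
names given above; check `help(node)` for the exact signatures):

>>> node.ex_stop()
>>> node.ex_start()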

Keypairs and Security Groups

----------------------------

I recently added support for keypair management in libcloud. For

instance, given a conn object obtained from the previous interactive

session:


conn.ex_list_keypairs()

conn.ex_create_keypair(name='foobar')

conn.ex_delete_keypair(name='foobar')

Management of security groups was also added. Below we show how to list,
create and delete security groups, as well as how to add an ingress rule
to open port 22 to the world. Both keypairs and security groups are key
for access to a CloudStack Basic zone like
[Exoscale](http://www.exoscale.ch).

conn.ex_list_security_groups()

conn.ex_create_security_group(name='libcloud')

conn.ex_authorize_security_group_ingress(securitygroupname='libcloud',protocol='TCP',startport=22,cidrlist='0.0.0.0/0')

conn.ex_delete_security_group('libcloud')

Development of the CloudStack driver in Libcloud is very active; there
is also support for advanced zones via calls to do SourceNAT and
StaticNAT.

Multiple Clouds

---------------

One of the interesting use cases of Libcloud is that you can use
multiple cloud providers, such as AWS, Rackspace, OpenNebula, vCloud and
so on. You can then create Driver instances to each of these clouds and
create your own multi-cloud application. In the example below we
instantiate two libcloud CloudStack drivers, one for
[Exoscale](http://exoscale.ch) and the other for
[Ikoula](http://ikoula.com).

import os
import urlparse

import libcloud.security as sec
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

Driver = get_driver(Provider.CLOUDSTACK)

apikey=os.getenv('EXOSCALE_API_KEY')
secretkey=os.getenv('EXOSCALE_SECRET_KEY')
endpoint=os.getenv('EXOSCALE_ENDPOINT')
host=urlparse.urlparse(endpoint).netloc
path=urlparse.urlparse(endpoint).path

exoconn=Driver(key=apikey,secret=secretkey,secure=True,host=host,path=path)

apikey=os.getenv('IKOULA_API_KEY')
secretkey=os.getenv('IKOULA_SECRET_KEY')
endpoint=os.getenv('IKOULA_ENDPOINT')
host=urlparse.urlparse(endpoint).netloc
path=urlparse.urlparse(endpoint).path

# iKoula's SSL certificate is not in the CA bundle that libcloud checks
sec.VERIFY_SSL_CERT = False

ikoulaconn=Driver(key=apikey,secret=secretkey,secure=True,host=host,path=path)

drivers = [exoconn, ikoulaconn]

for driver in drivers:
    print driver.list_locations()

> **Note**

>

> In the example above, I set my access and secret keys as well as the
> endpoints as environment variables. Also note the libcloud security
> module and the VERIFY\_SSL\_CERT flag. In the case of iKoula the SSL
> certificate used was not verifiable by the CERTS that libcloud checks.
> Especially if you use a self-signed SSL certificate for testing, you
> might have to disable this check as well.

From this basic setup you can imagine how you would write an application
that manages instances in different cloud providers, providing more
resiliency to your overall infrastructure.

Python Boto

==========

There are many tools available to interface with an AWS compatible API.
In this section we provide a short example that users of CloudStack can
build upon, using the AWS interface to CloudStack.

Boto Examples

-------------

Boto is one of them. It is a Python package available at
https://github.com/boto/boto. In this section we provide one example of
a Python script that uses Boto and has been tested with the CloudStack
AWS API interface. The AWS interface can be started with *`service
cloudstack-awsapi start`* and at least one service offering needs to
match the EC2 instance types (e.g m1.small). Here is the EC2 example;
replace the access and secret keys with your own and update the
endpoint.

#!/usr/bin/env python

import sys
import os
import boto
import boto.ec2

region = boto.ec2.regioninfo.RegionInfo(name="ROOT", endpoint="localhost")

apikey='GwNnpUPrO6KgIq05MB0ViBoFYtdqKd'
secretkey='t4eXLEYWwzy2LSC8iw'

def main():
    '''Establish connection to EC2 cloud'''
    conn = boto.connect_ec2(aws_access_key_id=apikey,
                            aws_secret_access_key=secretkey,
                            is_secure=False,
                            region=region,
                            port=7080,
                            path="/awsapi",
                            api_version="2012-08-15")

    '''Get list of images that I own'''
    images = conn.get_all_images()
    print images
    myimage = images[0]

    '''Pick an instance type'''
    vm_type='m1.small'
    reservation = myimage.run(instance_type=vm_type,security_groups=['default'])

if __name__ == '__main__':
    main()

With boto you can also interact with other AWS services like S3.
CloudStack has an S3 tech preview, but it is backed by a standard NFS
server and therefore is not a truly scalable distributed object store.
To provide an S3 service in your cloud I recommend using other software
like RiakCS, Ceph radosgw or the GlusterFS S3 interface. These systems
handle large scale, chunking and replication.
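As a quick illustration, boto's S3 client can be pointed at any such
S3-compatible endpoint; below is a minimal sketch (the host and port are
hypothetical placeholders, not a real service):

#!/usr/bin/env python
import boto
from boto.s3.connection import OrdinaryCallingFormat

# Connect to an S3-compatible endpoint (host and port are placeholders)
s3 = boto.connect_s3(aws_access_key_id='<your access key>',
                     aws_secret_access_key='<your secret key>',
                     host='s3.example.com', port=8080,
                     is_secure=False,
                     calling_format=OrdinaryCallingFormat())
print s3.get_all_buckets()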

Chapter 2 - Wrappers

====================

In this chapter we introduce several CloudStack *wrappers*. These tools
use the client libraries presented in the previous chapter (or their own
built-in request mechanisms) and add functionality that involves some
high-level orchestration. For instance *knife-cloudstack* uses the power
of [Chef](http://opscode.com), the configuration management system, to
seamlessly bootstrap instances running in a CloudStack cloud. Apache
[Whirr](http://whirr.apache.org) uses
[jclouds](http://jclouds.incubator.apache.org) to bootstrap
[Hadoop](http://hadoop.apache.org) clusters in the cloud, and
[SaltStack](http://saltstack.com) does configuration management in the
cloud using Apache libcloud.

Knife CloudStack

================

Knife is a command line utility for Chef, the configuration management

system from OpsCode.

Install, Configure and Feel

---------------------------

The Knife family of tools are drivers that automate the provisioning
and configuration of machines in the cloud. Knife-cloudstack is a
CloudStack plugin for knife. Written in Ruby, it is used by the Chef
community. To install knife-cloudstack you can simply install the gem or
get it from github:

gem install knife-cloudstack

If successful the *knife* command should now be in your path. Issue

*knife* at the prompt and see the various options and sub-commands

available.

If you want to use the version on github simply clone it:

git clone https://github.com/CloudStack-extras/knife-cloudstack.git

If you clone the git repo and make changes to the code, you will want to
build and install a new gem. As an example, in the directory where you
cloned the knife-cloudstack repo do:

$ gem build knife-cloudstack.gemspec

Successfully built RubyGem

Name: knife-cloudstack

Version: 0.0.14

File: knife-cloudstack-0.0.14.gem

$ gem install knife-cloudstack-0.0.14.gem

Successfully installed knife-cloudstack-0.0.14

1 gem installed

Installing ri documentation for knife-cloudstack-0.0.14...

Installing RDoc documentation for knife-cloudstack-0.0.14...

You will then need to define your CloudStack endpoint and your

credentials

in a *knife.rb* file like so:

knife[:cloudstack_url] = "http://yourcloudstackserver.com:8080/client/api"

knife[:cloudstack_api_key] = "Your CloudStack API Key"

knife[:cloudstack_secret_key] = "Your CloudStack Secret Key"

With the endpoint and credentials configured and knife-cloudstack
installed, you should be able to issue your first command. Remember that
this is simply sending a CloudStack API call to your CloudStack based
cloud provider. Later in the section we will see how to do more advanced
things with knife-cloudstack. For example, to list the service offerings
(i.e instance types) available on the iKoula cloud, do:

$ knife cs service list

Name Memory CPUs CPU Speed Created

m1.extralarge 15GB 8 2000 Mhz 2013-05-27T16:00:11+0200

m1.large 8GB 4 2000 Mhz 2013-05-27T15:59:30+0200

m1.medium 4GB 2 2000 Mhz 2013-05-27T15:57:46+0200

m1.small 2GB 1 2000 Mhz 2013-05-27T15:56:49+0200


To list all the *knife-cloudstack* commands available just enter *knife

cs* at the prompt. You will see:

$ knife cs

Available cs subcommands: (for details, knife SUB-COMMAND --help)

** CS COMMANDS **

knife cs account list (options)

knife cs cluster list (options)

knife cs config list (options)

knife cs disk list (options)

knife cs domain list (options)

knife cs firewallrule list (options)

knife cs host list (options)

knife cs hosts

knife cs iso list (options)

knife cs template create NAME (options)

...

> **Note**

>

> If you only have user privileges on the Cloud you are using, as

> opposed to admin privileges, do note that some commands won't be

> available to you. For instance, on the cloud I am using, where I am a
> standard user, I cannot access any of the infrastructure type commands
> like:

>

> $ knife cs pod list

> Error 432: Your account does not have the right to execute this

> command or the command does not exist.

>

Similarly to CloudMonkey, you can pass a list of fields to output. To
find the potential fields, enter the *--fieldlist* option at the end of
the command. You can then pick the fields that you want to output by
passing a comma separated list to the *--fields* option, like so:

$ knife cs service list --fieldlist

Name Memory CPUs CPU Speed Created

m1.extralarge 15GB 8 2000 Mhz 2013-05-27T16:00:11+0200

m1.large 8GB 4 2000 Mhz 2013-05-27T15:59:30+0200

m1.medium 4GB 2 2000 Mhz 2013-05-27T15:57:46+0200

m1.small 2GB 1 2000 Mhz 2013-05-27T15:56:49+0200

Key Type Value

cpunumber Fixnum 8

cpuspeed Fixnum 2000

created String 2013-05-27T16:00:11+0200

defaultuse FalseClass false

displaytext String 8 Cores CPU with 15.3GB RAM

domain String ROOT

domainid String 1

hosttags String ex10

id String 1412009f-0e89-4cfc-a681-1cda0631094b


issystem FalseClass false

limitcpuuse TrueClass true

memory Fixnum 15360

name String m1.extralarge

networkrate Fixnum 100

offerha FalseClass false

storagetype String local

tags String ex10

$ knife cs service list --fields id,name,memory,cpunumber

id name memory cpunumber

1412009f-0e89-4cfc-a681-1cda0631094b m1.extralarge 15360 8

d2b2e7b9-4ffa-419e-9ef1-6d413f08deab m1.large 7680 4

8dae8be9-5dae-4f81-89d1-b171f25ef3fd m1.medium 3840 2

c6b89fea-1242-4f54-b15e-9d8ec8a0b7e8 m1.small 1740 1

Starting an Instance

--------------------

In order to manage instances *knife* has several commands:

- *knife cs server list* to list all instances

- *knife cs server start* to restart a paused instance

- *knife cs server stop* to suspend a running instance

- *knife cs server delete* to destroy an instance

- *knife cs server reboot* to reboot a running instance

And of course, to create an instance, there is *knife cs server
create*; a quick sketch follows, and a full example comes later in this
section.
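For instance, stopping and then restarting an existing instance by name
might look like this (a sketch; *foobar* is just an example name, and
you should check *knife cs SUB-COMMAND --help* for the exact options):

$ knife cs server stop foobar

$ knife cs server start foobar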

Knife will automatically allocate a public IP address and associate it
with your running instance. If you additionally pass some port
forwarding rules and firewall rules, it will set those up. You need to
specify an instance type, from the list returned by *knife cs service
list*, as well as a template, from the list returned by *knife cs
template list*. The *--no-bootstrap* option will tell knife not to
install Chef on the deployed instance. Syntax for the port forwarding
and firewall rules is explained on the [knife
cloudstack](https://github.com/CloudStack-extras/knife-cloudstack)
website. Here is an example on the [iKoula cloud](http://www.ikoula.com)
in France:

$ knife cs server create --no-bootstrap --service m1.small --template "CentOS 6.4 - Minimal - 64bits" foobar

Waiting for Server to be created.......

Allocate ip address, create forwarding rules

params: {"command"=>"associateIpAddress", "zoneId"=>"a41b82a0-78d8-4a8f-bb79-303a791bb8a7", "networkId"=>"df2288bb-26d7-4b2f-bf41-e0fae1c6d198"}.


Allocated IP Address: 178.170.XX.XX

...

Name: foobar

Public IP: 178.170.XX.XX

$ knife cs server list

Name Public IP Service Template State Instance Hypervisor

foobar 178.170.XX.XX m1.small CentOS 6.4 - Minimal - 64bits Running N/A N/A

Bootstrapping Instances with Hosted-Chef

----------------------------------------

Knife reaches its full potential when used to bootstrap Chef and use it
for configuration management of the instances. To get started with Chef,
the easiest way is to use [Hosted
Chef](http://www.opscode.com/hosted-chef/). There is some great
documentation on
[how](https://learnchef.opscode.com/quickstart/chef-repo/) to do it. The
basic concept is that you will download or create cookbooks locally and
publish them to your own hosted Chef server.

Using Knife with Hosted-Chef

----------------------------

With your *hosted Chef* account created and your local *chef-repo* set
up, you can start instances on your cloud and specify the *cookbooks* to
use to configure those instances. The bootstrapping process will fetch
those cookbooks and configure the node. Below is an example that does
so; it uses the [exoscale](http://www.exoscale.ch) cloud which runs on
CloudStack. This cloud is set up as a Basic zone and uses ssh keypairs
and security groups for access.

$ knife cs server create --service Tiny --template "Linux CentOS 6.4 64-bit" --ssh-user root --identity ~/.ssh/id_rsa --run-list "recipe[apache2]" --ssh-keypair foobar --security-group www --no-public-ip foobar

Waiting for Server to be created....

Name: foobar

Public IP: 185.19.XX.XX

Waiting for sshd.....

Name: foobar13

Public IP: 185.19.XX.XX

Environment: _default

Run List: recipe[apache2]

Bootstrapping Chef on 185.19.XX.XX

185.19.XX.XX --2013-06-10 11:47:54--

http://opscode.com/chef/install.sh

185.19.XX.XX Resolving opscode.com...


185.19.XX.XX 184.ZZ.YY.YY

185.19.XX.XX Connecting to opscode.com|184.ZZ.XX.XX|:80...

185.19.XX.XX connected.

185.19.XX.XX HTTP request sent, awaiting response...

185.19.XX.XX 301 Moved Permanently

185.19.XX.XX Location: http://www.opscode.com/chef/install.sh

[following]

185.19.XX.XX --2013-06-10 11:47:55--

http://www.opscode.com/chef/install.sh

185.19.XX.XX Resolving www.opscode.com...

185.19.XX.XX 184.ZZ.YY.YY

185.19.XX.XX Reusing existing connection to opscode.com:80.

185.19.XX.XX HTTP request sent, awaiting response...

185.19.XX.XX 200 OK

185.19.XX.XX Length: 6509 (6.4K) [application/x-sh]

185.19.XX.XX Saving to: “STDOUT”

185.19.XX.XX

0% [ ] 0 --.-K/s

100%[======================================>] 6,509 --.-K/s

in 0.1s

185.19.XX.XX

185.19.XX.XX 2013-06-10 11:47:55 (60.8 KB/s) - written to stdout

[6509/6509]

185.19.XX.XX

185.19.XX.XX Downloading Chef 11.4.4 for el...

185.19.XX.XX Installing Chef 11.4.4

Chef will then configure the machine based on the cookbook passed in the
*--run-list* option; here I set up a simple web server. Note the keypair
that I used and the security group. I also specify *--no-public-ip*,
which disables the IP address allocation and association. This is
specific to the setup of *exoscale*, which automatically uses a public
IP address for the instances.

> **Note**

>

> The latest version of knife-cloudstack allows you to manage keypairs
> and security groups. For instance, listing, creation and deletion of
> keypairs is possible, as well as listing of security groups:

>

> $ knife cs securitygroup list

> Name Description Account

> default Default Security Group [email protected]

> www apache server [email protected]

> $ knife cs keypair list

> Name Fingerprint

> exoscale xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx

>

When using a CloudStack based cloud in an Advanced zone setting, *knife*
can automatically allocate and associate an IP address. To illustrate
this slightly different example I use [iKoula](http://www.ikoula.com), a
French cloud provider which uses CloudStack. I edit my *knife.rb* file
to set up a different endpoint and the different API and secret keys. I
remove the keypair, security group and public ip options, and I do not
specify an identity file as I will retrieve the ssh password with the
*--cloudstack-password* option. The example is as follows:

$ knife cs server create --service m1.small --template "CentOS 6.4 - Minimal - 64bits" --ssh-user root --cloudstack-password --run-list "recipe[apache2]" foobar

Waiting for Server to be created........

Allocate ip address, create forwarding rules

params: {"command"=>"associateIpAddress", "zoneId"=>"a41b82a0-78d8-4a8f-bb79-303a791bb8a7", "networkId"=>"df2288bb-26d7-4b2f-bf41-e0fae1c6d198"}.

Allocated IP Address: 178.170.71.148

...

Name: foobar

Password: $%@#$%#$%#$

Public IP: 178.xx.yy.zz

Waiting for sshd......

Name: foobar

Public IP: 178.xx.yy.zz

Environment: _default

Run List: recipe[apache2]

Bootstrapping Chef on 178.xx.yy.zz

178.xx.yy.zz --2013-06-10 13:24:29--

http://opscode.com/chef/install.sh

178.xx.yy.zz Resolving opscode.com...

> **Warning**

>

> You will want to review the security implications of doing the

> bootstrap as root and using the default password to do so.

Salt

====

[Salt](http://saltstack.com) is a configuration management system

written in Python. It can be seen as an alternative to Chef and Puppet.

Its concept is similar with a master node holding states called *salt

states (SLS)* and minions that get their configuration from the master.

A nice difference with Chef and Puppet is that Salt is also a remote

execution engine and can be used to execute commands on the minions by

specifying a set of targets. In this chapter we dive straight

into [SaltCloud](http://saltcloud.org), an open source software to

provision *Salt* masters and minions in the Cloud. *SaltCloud* can be

looked at as an alternative to *knife-cs* but certainly with less

functionality. In this short walkthrough we intend to boostrap a Salt

master (equivalent to a Chef server) in the cloud and then add minions

that will get their configuration from the master.


SaltCloud installation and usage

---------------------------------

To install SaltCloud one simply clones the git repository. To develop
SaltCloud, just fork it on github, clone your fork, then commit patches
and submit a pull request. SaltCloud depends on libcloud, therefore you
will need libcloud installed as well; see the previous chapter to set up
libcloud. With SaltCloud installed and in your path, you need to define
a cloud provider in *\~/.saltcloud/cloud*. For example:

providers:

exoscale:

apikey: <your api key>

secretkey: <your secret key>

host: api.exoscale.ch

path: /compute

securitygroup: default

user: root

private_key: ~/.ssh/id_rsa

provider: cloudstack

The apikey, secretkey, host, path and provider keys are mandatory. The
securitygroup key will specify which security group to use when starting
the instances in that cloud. The user will be the username used to
connect to the instances via ssh, and the private\_key is the ssh key to
use. Note that the optional parameters are specific to the cloud that
this was tested on; clouds in advanced zones especially will need a
different setup.

> **Warning**

>

> SaltCloud uses libcloud. Support for advanced zones in libcloud is
> still experimental; therefore using SaltCloud in an advanced zone will
> likely need some development of libcloud.

Once a provider is defined, we can start using saltcloud to list the

zones, the service offerings and the templates available on that cloud

provider. So far nothing more than what libcloud provides. For example:

#salt-cloud --list-locations exoscale

[INFO ] salt-cloud starting

exoscale:

----------

cloudstack:

----------

CH-GV2:

----------

country:

AU

driver:

id:

1128bd56-b4d9-4ac6-a7b9-c715b187ce11

name:

CH-GV2


#salt-cloud --list-images exoscale

#salt-cloud --list-sizes exoscale

To start creating instances and configuring them with Salt, we need to
define node profiles in *\~/.saltcloud/config*. To illustrate two
different profiles we show a Salt master and a minion. The master needs
a specific template (image:uuid) and a service offering or instance type
(size:uuid). In a basic zone with keypair access and security groups,
one would also need to specify which keypair to use and where to listen
for ssh connections, and of course you would need to define the provider
(e.g exoscale in our case, defined above). Below is the node profile for
a Salt master deployed in the cloud:

ubuntu-exoscale-master:

provider: exoscale

image: 1d16c78d-268f-47d0-be0c-b80d31e765d2

size: b6cd1ff5-3a2f-4e9d-a4d1-8988c1191fe8

ssh_interface: public

ssh_username: root

keypair: exoscale

make_master: True

master:

user: root

interface: 0.0.0.0

The master key shows which user to use and what interface; the
make\_master key, if set to true, will bootstrap this node as a Salt
master. To create it on our cloud provider simply enter:

$salt-cloud -p ubuntu-exoscale-master mymaster

Where *mymaster* is going to be the instance name. To create a minion,

add a minion node profile in the config file:

ubuntu-exoscale-minion:

provider: exoscale

image: 1d16c78d-268f-47d0-be0c-b80d31e765d2

size: b6cd1ff5-3a2f-4e9d-a4d1-8988c1191fe8

ssh_interface: public

ssh_username: root

keypair: exoscale

minion:

master: W.X.Y.Z

You would then start it with:

$salt-cloud -p ubuntu-exoscale-minion myminion

The W.X.Y.Z IP address above should be the IP address of the master that
was deployed previously. On the master you will need to have ports 4505
and 4506 opened; this is best done in a basic zone using security
groups. Once this security group is properly set up, the minions will be
able to contact the master. You will then accept the keys from the
minions and be able to talk to them from your Salt master.
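Such a security group could be created with the libcloud calls shown in
the previous chapter; a sketch, reusing a *conn* driver object created
as before (the group name is just an example; check the driver for the
exact argument names):

conn.ex_create_security_group(name='salt')

conn.ex_authorize_security_group_ingress(securitygroupname='salt',protocol='TCP',startport=4505,endport=4506,cidrlist='0.0.0.0/0')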


root@mymaster11:~# salt-key -L

Accepted Keys:

minion001

minion002

Unaccepted Keys:

minion003

Rejected Keys:

root@mymaster11:~# salt-key -A

The following keys are going to be accepted:

Unaccepted Keys:

minion003

Proceed? [n/Y] Y

Key for minion minion003 accepted.

Once the keys of your minions have been accepted by the master, you can
start sending commands to them and use SLS formulas to configure the
minions:

root@mymaster11:~# salt '*' test.ping

minion003:

True

minion002:

True

minion001:

True

Have fun with SaltStack in the cloud. You could also use Salt to install
CloudStack itself; some SLS formulas are in the works to do it.

Apache Whirr

============

[Apache Whirr](http://whirr.apache.org) is a set of libraries to run
cloud services. Internally it uses
[jclouds](http://jclouds.incubator.apache.org), which we introduced
earlier via the jclouds-cli interface to CloudStack; it is Java based
and of interest to provision clusters of virtual machines on cloud
providers. Historically it started as a set of scripts to deploy
[Hadoop](http://hadoop.apache.org) clusters on Amazon EC2. We introduce
Whirr as a potential CloudStack tool to provision Hadoop clusters on
CloudStack based clouds.

Installing Apache Whirr

-----------------------

To install Whirr you can follow the [Quick Start

Guide](http://whirr.apache.org/docs/0.8.1/quick-start-guide.html),

download a tarball or clone the git repository. In the spirit of this

document we clone the repo:

git clone git://git.apache.org/whirr.git

And build the source with maven that we now know and love...:

mvn install


The whirr binary will be available in the *bin* directory, which we can
add to our path:

export PATH=$PATH:/Users/sebgoa/Documents/whirr/bin

If all went well you should now be able to get the usage of *whirr*:

$ whirr --help

Unrecognized command '--help'

Usage: whirr COMMAND [ARGS]

where COMMAND may be one of:

launch-cluster Launch a new cluster running a service.

start-services Start the cluster services.

stop-services Stop the cluster services.

restart-services Restart the cluster services.

destroy-cluster Terminate and cleanup resources for a running

cluster.

destroy-instance Terminate and cleanup resources for a single

instance.

list-cluster List the nodes in a cluster.

list-providers Show a list of the supported providers

run-script Run a script on a specific instance or a group of

instances matching a role name

version Print the version number and exit.

help Show help about an action

Available roles for instances:

cassandra

elasticsearch

ganglia-metad

ganglia-monitor

hadoop-datanode

...

From the look of the usage you clearly see that *whirr* is about more

than just *hadoop* and that it can be used to configure *elasticsearch*

clusters, *cassandra* databases as well as the entire *hadoop* ecosystem

with *mahout*, *pig*, *hbase*, *hama*, *mapreduce* and *yarn*.

Using Apache Whirr

------------------

To get started with Whirr you need to set up the credentials and
endpoint of the CloudStack based cloud that you will be using. Edit the
*\~/.whirr/credentials* file to include a PROVIDER, IDENTITY, CREDENTIAL
and ENDPOINT. The PROVIDER needs to be set to *cloudstack*, the IDENTITY
is your API key, the CREDENTIAL is your secret key and the ENDPOINT is
the endpoint url. For instance:

PROVIDER=cloudstack

IDENTITY=mnHrjktn5Q

CREDENTIAL=Hv97W5fjhlicg


ENDPOINT=https://api.exoscale.ch/compute

With the credentials and endpoint defined you can create a *properties*

file that describes the cluster you want to launch on your cloud. The

file contains information such as the cluster name, the number of

instances and their type, the distribution of hadoop you want to use,

the service offering id and the template id of the instances. It also

defines the ssh keys to be used for accessing the virtual machines. In

the case of a cloud that uses security groups, you may also need to

specify it. A tricky point is the handling of DNS name resolution. You

might have to use the *whirr.store-cluster-in-etc-hosts* key to bypass

any DNS issues. For a full description of the whirr property keys, see
the
[documentation](http://whirr.apache.org/docs/0.8.1/configuration-guide.html).

$ more whirr.properties

#

# Setup an Apache Hadoop Cluster

#

# Change the cluster name here

whirr.cluster-name=hadoop

whirr.store-cluster-in-etc-hosts=true

whirr.use-cloudstack-security-group=true

# Change the name of cluster admin user

whirr.cluster-user=${sys:user.name}

# Change the number of machines in the cluster here

whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,3 hadoop-datanode+hadoop-tasktracker

# Uncomment out the following two lines to run CDH

whirr.env.repo=cdh4

whirr.hadoop.install-function=install_cdh_hadoop

whirr.hadoop.configure-function=configure_cdh_hadoop

whirr.hardware-id=b6cd1ff5-3a2f-4e9d-a4d1-8988c1191fe8

whirr.private-key-file=/path/to/ssh/key/

whirr.public-key-file=/path/to/ssh/public/key/

whirr.provider=cloudstack

whirr.endpoint=https://the/endpoint/url

whirr.image-id=1d16c78d-268f-47d0-be0c-b80d31e765d2

> **Warning**

>

> The example shown above is specific to a CloudStack
> [cloud](http://exoscale.ch) set up as a basic zone. This cloud uses
> security groups for isolation between instances. The proper rules had
> to be set up by hand. Also note the use of
> *whirr.store-cluster-in-etc-hosts*. If set to true, whirr will edit
> the */etc/hosts* file of the nodes and enter the IP addresses. This is
> handy in the case where DNS resolution is problematic.

> **Note**

>

> To use the Cloudera Hadoop distribution (CDH) like in the example

> above, you will need to copy the

> *services/cdh/src/main/resources/functions* directory to the root of

> your Whirr source. In this directory you will find the bash scripts

> used to bootstrap the instances. It may be handy to edit those

> scripts.

You are now ready to launch a hadoop cluster:

$ whirr launch-cluster --config hadoop.properties

Running on provider cloudstack using identity

mnH5EbKcKeJd456456345634563456345654634563456345

Bootstrapping cluster

Configuring template for bootstrap-hadoop-datanode_hadoop-tasktracker

Configuring template for bootstrap-hadoop-namenode_hadoop-jobtracker

Starting 3 node(s) with roles [hadoop-datanode, hadoop-tasktracker]

Starting 1 node(s) with roles [hadoop-namenode, hadoop-jobtracker]

>> running InitScript{INSTANCE_NAME=bootstrap-hadoop-datanode_hadoop-

tasktracker} on node(b9457a87-5890-4b6f-9cf3-1ebd1581f725)

>> running InitScript{INSTANCE_NAME=bootstrap-hadoop-datanode_hadoop-

tasktracker} on node(9d5c46f8-003d-4368-aabf-9402af7f8321)

>> running InitScript{INSTANCE_NAME=bootstrap-hadoop-datanode_hadoop-

tasktracker} on node(6727950e-ea43-488d-8d5a-6f3ef3018b0f)

>> running InitScript{INSTANCE_NAME=bootstrap-hadoop-namenode_hadoop-

jobtracker} on node(6a643851-2034-4e82-b735-2de3f125c437)

<< success executing InitScript{INSTANCE_NAME=bootstrap-hadoop-

datanode_hadoop-tasktracker} on node(b9457a87-5890-4b6f-9cf3-

1ebd1581f725): {output=This function does nothing. It just needs to exist

so Statements.call("retry_helpers") doesn't call something which doesn't

exist

Get:1 http://security.ubuntu.com precise-security Release.gpg [198 B]

Get:2 http://security.ubuntu.com precise-security Release [49.6 kB]

Hit http://ch.archive.ubuntu.com precise Release.gpg

Get:3 http://ch.archive.ubuntu.com precise-updates Release.gpg [198

B]

Get:4 http://ch.archive.ubuntu.com precise-backports Release.gpg [198

B]

Hit http://ch.archive.ubuntu.com precise Release

..../snip/.....

You can log into instances using the following ssh commands:

[hadoop-datanode+hadoop-tasktracker]: ssh -i

/Users/sebastiengoasguen/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o

StrictHostKeyChecking=no [email protected]


[hadoop-datanode+hadoop-tasktracker]: ssh -i

/Users/sebastiengoasguen/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o

StrictHostKeyChecking=no [email protected]

[hadoop-datanode+hadoop-tasktracker]: ssh -i

/Users/sebastiengoasguen/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o

StrictHostKeyChecking=no [email protected]

[hadoop-namenode+hadoop-jobtracker]: ssh -i

/Users/sebastiengoasguen/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o

StrictHostKeyChecking=no [email protected]

To destroy cluster, run 'whirr destroy-cluster' with the same options

used to launch it.

After the bootstrapping process finishes, you should be able to log in
to your instances and use *hadoop*, or, if you are running a proxy on
your machine, you will be able to access your hadoop cluster locally.
Testing of Whirr for CloudStack is still under
[investigation](https://issues.apache.org/jira/browse/WHIRR-725) and the
subject of a Google Summer of Code 2013 project. We currently identified
issues with the use of security groups. Moreover this was tested on a
basic zone; complete testing on an advanced zone is future work.

Running Map-Reduce jobs on Hadoop

---------------------------------

Whirr gives you the ssh commands to connect to the instances of your
hadoop cluster. Log in to the namenode and browse the hadoop file system
that was created:

$ hadoop fs -ls /

Found 5 items

drwxrwxrwx - hdfs supergroup 0 2013-06-21 20:11 /hadoop

drwxrwxrwx - hdfs supergroup 0 2013-06-21 20:10 /hbase

drwxrwxrwx - hdfs supergroup 0 2013-06-21 20:10 /mnt

drwxrwxrwx - hdfs supergroup 0 2013-06-21 20:11 /tmp

drwxrwxrwx - hdfs supergroup 0 2013-06-21 20:11 /user

Create a directory to put your input data:

$ hadoop fs -mkdir input

$ hadoop fs -ls /user/sebastiengoasguen

Found 1 items

drwxr-xr-x - sebastiengoasguen supergroup 0 2013-06-21

20:15 /user/sebastiengoasguen/input

Create a test input file and put it in the hadoop file system:

$ cat foobar

this is a test to count the words

$ hadoop fs -put ./foobar input

$ hadoop fs -ls /user/sebastiengoasguen/input

Found 1 items

-rw-r--r-- 3 sebastiengoasguen supergroup 34 2013-06-21

20:17 /user/sebastiengoasguen/input/foobar

Define the map-reduce environment. Note that the default Cloudera Hadoop
distribution installation uses MRv1; to use Yarn one would have to edit
the hadoop.properties file.

$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce

Start the map-reduce job:

$ hadoop jar $HADOOP_MAPRED_HOME/hadoop-examples.jar wordcount input

output

13/06/21 20:19:59 WARN mapred.JobClient: Use GenericOptionsParser for

parsing the arguments. Applications should implement Tool for the same.

13/06/21 20:20:00 INFO input.FileInputFormat: Total input paths to

process : 1

13/06/21 20:20:00 INFO mapred.JobClient: Running job:

job_201306212011_0001

13/06/21 20:20:01 INFO mapred.JobClient: map 0% reduce 0%

13/06/21 20:20:11 INFO mapred.JobClient: map 100% reduce 0%

13/06/21 20:20:17 INFO mapred.JobClient: map 100% reduce 33%

13/06/21 20:20:18 INFO mapred.JobClient: map 100% reduce 100%

13/06/21 20:20:21 INFO mapred.JobClient: Job complete:

job_201306212011_0001

13/06/21 20:20:22 INFO mapred.JobClient: Counters: 32

13/06/21 20:20:22 INFO mapred.JobClient: File System Counters

13/06/21 20:20:22 INFO mapred.JobClient: FILE: Number of bytes

read=133

13/06/21 20:20:22 INFO mapred.JobClient: FILE: Number of bytes

written=766347

...

And you can finally check the output:

$ hadoop fs -cat output/part-* | head

this 1

to 1

the 1

a 1

count 1

is 1

test 1

words 1

Conclusions

===========

The CloudStack API is very rich and easy to use. You can write your own
client by following the section on how to sign requests, or you can use
an existing client in the language of your choice. Well known libraries
developed by the community work well with CloudStack, such as Apache
libcloud and Apache jclouds. Configuration management systems also have
plugins to work transparently with CloudStack; in this little book we
presented SaltStack and knife-cs. Finally, going a bit beyond simple
clients, we presented Apache Whirr, which allows you to create Hadoop
clusters on-demand (elasticsearch and cassandra clusters also work).
Take your pick and write your applications on top of CloudStack using
one of those tools. Based on these tools you will be able to deploy
infrastructure easily, quickly and in a reproducible manner. Lately
CloudStack has seen the number of tools grow: just today I learned about
a Fluentd plugin, and last week a Cloudfoundry BOSH interface was
released. I also committed a straightforward dynamic inventory script
for Ansible, and a tweet just flew by about a vagrant-cloudstack plugin.
The list goes on; pick what suits you and answers your needs, then have
fun.


# CloudStack Installation from Source for Developers

This book is aimed at CloudStack developers who need to build the code.
These instructions are valid on Ubuntu 12.04 and CentOS 6.4 systems and
were tested with the 4.2 release of Apache CloudStack; please adapt them
if you are on a different operating system or using a newer/older
version of CloudStack. This book is composed of the following sections:

1. Installation of the prerequisites

2. Compiling and installation from source

3. Using the CloudStack simulator

4. Installation with DevCloud the CloudStack sandbox

5. Building your own packages

6. The CloudStack API

7. Testing the AWS API interface

# Prerequisites

In this section we'll look at installing the dependencies you'll need for

Apache CloudStack development.

## On Ubuntu 12.04

First update and upgrade your system:

apt-get update

apt-get upgrade

NTP might already be installed, check it with `service ntp status`. If

it's not then install NTP to synchronize the clocks:

apt-get install openntpd

Install `openjdk`. As we're using Linux, OpenJDK is our first choice.

apt-get install openjdk-6-jdk

Install `tomcat6`, note that the new version of tomcat on

[Ubuntu](http://packages.ubuntu.com/precise/all/tomcat6) is the 6.0.35

version.

apt-get install tomcat6

Next, we'll install MySQL if it's not already present on the system.

apt-get install mysql-server

Remember to set the correct `mysql` password in the CloudStack properties

file. Mysql should be running but you can check it's status with:

service mysql status


Developers wanting to build CloudStack from source will want to install

the following additional packages. If you don't want to build from source

just jump to the next section.

Install `git` to later clone the CloudStack source code:

apt-get install git

Install `Maven` to later build CloudStack

apt-get install maven

This should have installed Maven 3.0, check the version number with `mvn

--version`

A little bit of Python can be used (e.g simulator), install the Python

package management tools:

apt-get install python-pip python-setuptools

Finally install `mkisofs` with:

apt-get install genisoimage

## On CentOS 6.4

First update and upgrade your system:

yum -y update

yum -y upgrade

If not already installed, install NTP for clock synchronization:

yum -y install ntp

Install `openjdk`. As we're using Linux, OpenJDK is our first choice.

yum -y install java-1.6.0-openjdk java-1.6.0-openjdk-devel

Install `tomcat6`, note that the version of tomcat6 in the default CentOS

6.4 repo is 6.0.24, so we will grab the 6.0.35 version.

The 6.0.24 version will be installed anyway as a dependency to

cloudstack.

wget https://archive.apache.org/dist/tomcat/tomcat-6/v6.0.35/bin/apache-tomcat-6.0.35.tar.gz

tar xzvf apache-tomcat-6.0.35.tar.gz -C /usr/local

Setup tomcat6 system wide by creating a file `/etc/profile.d/tomcat.sh`

with the following content:

export CATALINA_BASE=/usr/local/apache-tomcat-6.0.35

export CATALINA_HOME=/usr/local/apache-tomcat-6.0.35


Next, we'll install MySQL if it's not already present on the system.

yum -y install mysql mysql-server

Remember to set the correct `mysql` password in the CloudStack properties

file. Mysql should be running but you can check it's status with:

service mysqld status

Install `git` to later clone the CloudStack source code:

yum -y install git

Install `Maven` to later build CloudStack. Grab the 3.0.5 release from

the Maven [website](http://maven.apache.org/download.cgi)

wget http://mirror.cc.columbia.edu/pub/software/apache/maven/maven-3/3.0.5/binaries/apache-maven-3.0.5-bin.tar.gz

tar xzf apache-maven-3.0.5-bin.tar.gz -C /usr/local

cd /usr/local

ln -s apache-maven-3.0.5 maven

Setup Maven system wide by creating a `/etc/profile.d/maven.sh` file with

the following content:

export M2_HOME=/usr/local/maven

export PATH=${M2_HOME}/bin:${PATH}

Log out and log in again and you will have maven in your PATH:

mvn --version

This should have installed Maven 3.0, check the version number with `mvn

--version`

A little bit of Python can be used (e.g simulator), install the Python

package management tools:

yum -y install python-setuptools

To install python-pip you might want to setup the Extra Packages for

Enterprise Linux (EPEL) repo

cd /tmp

wget http://mirror-fpt-telecom.fpt.net/fedora/epel/6/i386/epel-release-6-8.noarch.rpm

rpm -ivh epel-release-6-8.noarch.rpm

Then update your repository cache with `yum update` and install pip with
`yum -y install python-pip`

To install Marvin you will also need the Python development package `yum

-y install python-devel`

Finally install `mkisofs` with:


yum -y install genisoimage

# Installing from Source

CloudStack uses git for source version control; if you know little about
git, the [git book](http://book.git-scm.com/) is a good start. Once you
have git set up on your machine, pull the source with:

git clone https://git-wip-us.apache.org/repos/asf/cloudstack.git

To build the latest stable release:

git checkout 4.2

To compile Apache CloudStack, go to the cloudstack source folder and run:

mvn -Pdeveloper,systemvm clean install

If you want to skip the tests, add `-DskipTests` to the command above.

Make sure you have set the proper db password in
`utils/conf/db.properties`.

Deploy the database next:

mvn -P developer -pl developer -Ddeploydb

Run Apache CloudStack with jetty for testing. Note that `tomcat` may be
running on port 8080; stop it before you use `jetty`:

mvn -pl :cloud-client-ui jetty:run

Log Into Apache CloudStack:

Open your Web browser and use this URL to connect to CloudStack:

http://localhost:8080/client/

Replace `localhost` with the IP of your management server if need be.

**Note**: If you have iptables enabled, you may have to open the ports

used by CloudStack. Specifically, ports 8080, 8250, and 9090.

You can now start configuring a Zone and playing with the API. Of course
we did not set up any infrastructure; there is no storage, no
hypervisors, etc. However you can run tests using the simulator. The
following section shows you how to use the simulator so that you don't
have to set up a physical infrastructure.

# Using the Simulator


CloudStack comes with a simulator based on Python bindings called

*Marvin*. Marvin is available in the CloudStack source code or on Pypi.

With Marvin you can simulate your data center infrastructure by providing

CloudStack with a configuration file that defines the number of

zones/pods/clusters/hosts, types of storage etc. You can then develop and

test the CloudStack management server *as if* it was managing your

production infrastructure.

Do a clean build:

mvn -Pdeveloper -Dsimulator -DskipTests clean install

Deploy the database:

mvn -Pdeveloper -pl developer -Ddeploydb

mvn -Pdeveloper -pl developer -Ddeploydb-simulator

Install marvin. Note that you will need to have installed `pip` properly

in the prerequisites step.

pip install tools/marvin/dist/Marvin-0.1.0.tar.gz

Stop jetty (from any previous runs)

mvn -pl :cloud-client-ui jetty:stop

Start jetty

mvn -pl client jetty:run

Setup a basic zone with Marvin. In a separate shell:

mvn -Pdeveloper,marvin.setup -Dmarvin.config=setup/dev/basic.cfg -pl

:cloud-marvin integration-test

At this stage log in the CloudStack management server at

http://localhost:8080/client with the credentials admin/password, you

should see a fully configured basic zone infrastructure. To simulate an

advanced zone replace `basic.cfg` with `advanced.cfg`.

You can now run integration tests, use the API etc...

# Using DevCloud

The Installing from source section will only get you to the point of
running the management server; it does not get you any hypervisors.

The simulator section gets you a simulated datacenter for testing. With

DevCloud you can run at least one hypervisor and add it to your

management server the way you would a real physical machine.

[DevCloud](https://cwiki.apache.org/confluence/display/CLOUDSTACK/DevCloud)
is the CloudStack sandbox; the standard version is a VirtualBox based
image. There is also a KVM based image for it. Here we only show steps
with the VirtualBox image. For KVM see the
[wiki](https://cwiki.apache.org/confluence/display/CLOUDSTACK/devcloud-kvm).

## DevCloud Pre-requisites

1. Install [VirtualBox](http://www.virtualbox.org) on your machine

2. Run VirtualBox and under >Preferences create a *host-only interface*

on which you disable the DHCP server

3. Download the DevCloud
[image](http://people.apache.org/~bhaisaab/cloudstack/devcloud/devcloud2.ova)

4. In VirtualBox, under File > Import Appliance import the DevCloud

image.

5. Verify the settings under > Settings and check the `enable PAE` option

in the processor menu

6. Once the VM has booted try to `ssh` to it with credentials:

`root/password`

ssh [email protected]

## Adding DevCloud as a Hypervisor

Picking up from a clean build:

mvn -Pdeveloper,systemvm clean install

mvn -P developer -pl developer,tools/devcloud -Ddeploydb

At this stage, install marvin similarly to what was done for the simulator:

pip install tools/marvin/dist/Marvin-0.1.0.tar.gz

Start the management server

mvn -pl client jetty:run

Then you are going to configure CloudStack to use the running DevCloud

instance:

cd tools/devcloud

python ../marvin/marvin/deployDataCenter.py -i devcloud.cfg

If you are curious, check the `devcloud.cfg` file and see how the data center is defined: 1 Zone, 1 Pod, 1 Cluster, 1 Host, 1 Primary Storage, 1 Secondary Storage, all provided by DevCloud.

You can now log in to the management server at `http://localhost:8080/client` and start experimenting with the UI or the API.


Do note that the management server is running on your local machine and that DevCloud is used only as a hypervisor. You could potentially run the management server within DevCloud as well or, memory permitting, run multiple DevClouds.

# Building Packages

Working from source is necessary when developing CloudStack. As mentioned earlier, this is not primarily intended for users. However, some may want to modify the code for their own use and specific infrastructure. They may also need to build their own packages for security reasons or due to network connectivity constraints. This section shows you the gist of how to build packages. We assume that the reader will know how to create a repository to serve these packages. The complete documentation is available on the [website](http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/sect-source-builddebs.html).

To build Debian packages you will need a couple of extra packages that we did not need to install for source compilation:

apt-get install python-mysqldb

apt-get install debhelper

Then build the packages with:

dpkg-buildpackage -uc -us

One directory up from the CloudStack root dir you will find:

cloudstack_4.2.0_amd64.changes

cloudstack_4.2.0.dsc

cloudstack_4.2.0.tar.gz

cloudstack-agent_4.2.0_all.deb

cloudstack-awsapi_4.2.0_all.deb

cloudstack-cli_4.2.0_all.deb

cloudstack-common_4.2.0_all.deb

cloudstack-docs_4.2.0_all.deb

cloudstack-management_4.2.0_all.deb

cloudstack-usage_4.2.0_all.deb

Of course the community provides a repository for these packages and you

can use it instead of building your own packages and putting them in your

own repo. Instructions on how to use this community repository are

available in the installation book.
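If you nevertheless decide to serve your own packages, a minimal sketch of a flat apt repository (assuming `dpkg-dev` is installed; all paths here are placeholders) looks like:

cd /path/to/your/debs
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
# clients then add to their sources: deb http://your.repo.server/debs ./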

# The CloudStack API

The CloudStack API is a query-based API over HTTP that returns results in XML or JSON. It is used to implement the default web UI. This API is not a standard like [OGF OCCI](http://www.ogf.org/gf/group_info/view.php?group=occi-wg) or [DMTF CIMI](http://dmtf.org/standards/cloud) but it is easy to learn. A mapping exists between the AWS API and the CloudStack API, as will be seen in the next section. Recently a Google Compute Engine interface was also developed that maps the GCE REST API to the CloudStack API described here. The API [docs](http://cloudstack.apache.org/docs/api/) are a good start to learn the extent of the API. Multiple clients for this API exist on [github](https://github.com/search?q=cloudstack+client&ref=cmdform); you should be able to find one in your favorite language. The reference documentation for the API, and the changes that might occur from version to version, is available [on-line](http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.1/html/Developers_Guide/index.html). This short section aims to provide a quick summary and give you a basic understanding of how to use the API. As a quick start, a good way to explore the API is to navigate the dashboard with a firebug console (or a similar developer console) and study the queries.

In short, the CloudStack query API is used via HTTP GET requests made against your cloud endpoint (e.g. http://localhost:8080/client/api). The name of the API call is passed using the `command` key, and the various parameters for the call are passed as key/value pairs. The request is signed using the access key and secret key of the user making the call. Some calls are synchronous while others are asynchronous; this is documented in the API [docs](http://cloudstack.apache.org/docs/api/). Asynchronous calls return a `jobid`; the status and result of a job can be queried with the `queryAsyncJobResult` call. Let's get started with an example of calling the `listUsers` API in Python.

First you will need to generate keys to make requests. In the dashboard, go under `Accounts`, select the appropriate account, then click on `Show Users`, select the intended user and generate keys using the `Generate Keys` icon. You will see an `API Key` and a `Secret Key` field being generated. The keys will be of the form:

API Key : XzAz0uC0t888gOzPs3HchY72qwDc7pUPIO8LxC-

VkIHo4C3fvbEBY_Ccj8fo3mBapN5qRDg_0_EbGdbxi8oy1A

Secret Key: zmBOXAXPlfb-

LIygOxUVblAbz7E47eukDS_0JYUxP3JAmknOYo56T0R-

AcM7rK7SMyo11Y6XW22gyuXzOdiybQ

Open a Python shell and import the basic modules necessary to make the request. Do note that this request could be made in many different ways; this is just a low-level example. The `urllib*` modules are used to make the HTTP request and do the URL encoding. The `hashlib` module gives us the sha1 hash function, which is used to generate the `hmac` (keyed-hash message authentication code) using the secret key. The result is encoded using the `base64` module.

$python

Python 2.7.3 (default, Nov 17 2012, 19:54:34)

[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))]

on darwin

Type "help", "copyright", "credits" or "license" for more

information.

>>> import urllib2


>>> import urllib

>>> import hashlib

>>> import hmac

>>> import base64

Define the endpoint of the cloud, the command that you want to execute, the type of the response (i.e. XML or JSON) and the keys of the user. Note that we do not put the secret key in our request dictionary because it is only used to compute the hmac.

>>> baseurl='http://localhost:8080/client/api?'

>>> request={}

>>> request['command']='listUsers'

>>> request['response']='json'

>>> request['apikey']='plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg'
>>> secretkey='VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZsYBkoXkY9b7eq1EhwJaw7FF3akA3KBQ'

Build the base request string: the combination of all the key/value pairs of the request, URL-encoded and joined with ampersands.

>>> request_str='&'.join(['='.join([k,urllib.quote_plus(request[k])]) for k in request.keys()])
>>> request_str
'apikey=plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg&command=listUsers&response=json'

Compute the signature with hmac, then base64-encode and URL-encode it. The string used for the signature is similar to the base request string shown above, but the keys/values are lowercased and joined in sorted order:

>>> sig_str='&'.join(['='.join([k.lower(),urllib.quote_plus(request[k].lower().replace('+','%20'))]) for k in sorted(request.iterkeys())])
>>> sig_str
'apikey=plgwjfzk4gys3momtvmjuvg-x-jlwlnfauj9gabbbf9edm-kaymmailqzzq1elzlyq_u38zcm0bewzgudp66mg&command=listusers&response=json'

>>> sig=hmac.new(secretkey,sig_str,hashlib.sha1).digest()

>>> sig

'M:]\x0e\xaf\xfb\x8f\xf2y\xf1p\x91\x1e\x89\x8a\xa1\x05\xc4A\xdb'

>>> sig=base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest())

>>> sig

'TTpdDq/7j/J58XCRHomKoQXEQds=\n'

>>> sig=base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest()).strip()

>>> sig

'TTpdDq/7j/J58XCRHomKoQXEQds='

>>> sig=urllib.quote_plus(base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest()).strip())

Finally, build the entire string by joining the baseurl, the request string and the signature. Then do an HTTP GET:

>>> req=baseurl+request_str+'&signature='+sig
>>> req
'http://localhost:8080/client/api?apikey=plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg&command=listUsers&response=json&signature=TTpdDq%2F7j%2FJ58XCRHomKoQXEQds%3D'

>>> res=urllib2.urlopen(req)
>>> res.read()
'{ "listusersresponse" : { "count":1 ,"user" : [ {"id":"7ed6d5da-93b2-4545-a502-23d20b48ef2a","username":"admin","firstname":"admin","lastname":"cloud","created":"2012-07-05T12:18:27-0700","state":"enabled","account":"admin","accounttype":1,"domainid":"8a111e58-e155-4482-93ce-84efff3c7c77","domain":"ROOT","apikey":"plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg","secretkey":"VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZsYBkoXkY9b7eq1EhwJaw7FF3akA3KBQ","accountid":"7548ac03-af1d-4c1c-9064-2f3e2c0eda0d"}]}}'

All the clients that you will find on github implement this signature technique; you should not have to do it by hand. Now that you have explored the API through the UI and understand how to make low-level calls, pick your favorite client or use [CloudMonkey](https://pypi.python.org/pypi/cloudmonkey/). CloudMonkey is a sub-project of Apache CloudStack and gives operators/developers the ability to use any of the API methods. It has nice auto-completion and help features, as well as an API discovery mechanism since 4.2.
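For reference, the whole exchange above can be wrapped into a single helper; here is a minimal sketch (Python 2, using the same modules as the session above), where `make_request` is just an illustrative name, not part of any CloudStack library:

import base64
import hashlib
import hmac
import urllib
import urllib2

def make_request(baseurl, command, params, apikey, secretkey):
    params = dict(params)
    params['command'] = command
    params['response'] = 'json'
    params['apikey'] = apikey
    # Base request string: key=value pairs, URL-encoded, joined with '&'
    request_str = '&'.join(['='.join([k, urllib.quote_plus(params[k])]) for k in params.keys()])
    # Signature string: same pairs, lowercased, joined in sorted key order
    sig_str = '&'.join(['='.join([k.lower(), urllib.quote_plus(params[k].lower().replace('+', '%20'))]) for k in sorted(params.iterkeys())])
    sig = urllib.quote_plus(base64.encodestring(hmac.new(secretkey, sig_str, hashlib.sha1).digest()).strip())
    return urllib2.urlopen(baseurl + request_str + '&signature=' + sig).read()

# e.g. make_request('http://localhost:8080/client/api?', 'listUsers', {}, apikey, secretkey)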

# Testing the AWS API interface

While the native CloudStack API is not a standard, CloudStack provides an AWS EC2-compatible interface. It has the great advantage that existing tools written with EC2 libraries can be reused against a CloudStack-based cloud. In the installation books we described how to run this interface by installing packages. In this section we show you how to compile the interface with `maven` and test it with the Python boto module.


Starting from a running management server (with DevCloud for instance),

start the AWS API interface in a separate shell with:

mvn -Pawsapi -pl :cloud-awsapi jetty:run

Log into the CloudStack UI `http://localhost:8080/client`, go to *Service

Offerings* and edit one of the compute offerings to have the name

`m1.small` or any of the other AWS EC2 instance types.

With access and secret keys generated for a user, you should now be able to use the Python [Boto](http://docs.pythonboto.org/en/latest/) module:

import boto
import boto.ec2

accesskey="2IUSA5xylbsPSnBQFoWXKg3RvjHgsufcKhC1SeiCbeEc0obKwUlwJamB_gFmMJkFHYHTIafpUx0pHcfLvt-dzw"
secretkey="oxV5Dhhk5ufNowey7OVHgWxCBVS4deTl9qL0EqMthfPBuy3ScHPo2fifDxw1aXeL5cyH10hnLOKjyKphcXGeDA"

region = boto.ec2.regioninfo.RegionInfo(name="ROOT", endpoint="localhost")
conn = boto.connect_ec2(aws_access_key_id=accesskey, aws_secret_access_key=secretkey, is_secure=False, region=region, port=7080, path="/awsapi", api_version="2012-08-15")

images = conn.get_all_images()
print images

res = images[0].run(instance_type='m1.small', security_groups=['default'])

Note the new `api_version` number in the connection object, and also note that there is no user registration step to make, unlike in previous CloudStack releases.

# Conclusions

CloudStack is a mostly-Java application running with Tomcat and MySQL. It consists of a management server and, depending on the hypervisors being used, an agent installed on each hypervisor. To complete a cloud infrastructure, however, you will also need some zone-wide storage (a.k.a. Secondary Storage) and some cluster-wide storage (a.k.a. Primary Storage). The choice of hypervisor, storage solution and type of zone (i.e. Basic vs. Advanced) will dictate how complex your installation is. As a quick start, you might want to consider KVM+NFS and a Basic Zone.

If you've run into any problems with this, please ask on the cloudstack-

dev [mailing list](/mailing-lists.html).


CloudStack Tutorial: Clients and Tools

=====================================

These instructions aim to give an introduction to Apache CloudStack: accessing a production cloud based on CloudStack, getting a feel for it, then using a few tools to provision and configure machines in the cloud.

For a more complete guide see this [Little

Book](https://github.com/runseb/cloudstack-

books/blob/master/en/clients.markdown)

What we will do in this tutorial is:

0. Getting your feet wet with http://exoscale.ch

1. Using CloudMonkey

2. Discovering Apache libcloud

3. Using Vagrant boxes and deploying in the cloud

4. Using Ansible configuration management tool

Getting your feet wet with [exoscale.ch](http://exoscale.ch)

============================================================

Start an instance via the UI

----------------------------

1. Go to [exoscale.ch](http://exoscale.ch) and click on `free sign-up` in the Open Cloud section
2. Ask me for the voucher code, you will get an additional 15 CHF
3. Browse the UI, identify the `security groups` and `keypairs` sections
4. Create a rule in your default security group to allow inbound traffic on port 22 (ssh)
5. Create a keypair and store the private key on your machine
6. Start an instance
7. ssh to the instance

Find your API Keys and inspect the API calls

--------------------------------------------

8. Inspect the API requests with firebug or the dev console of your choice
9. Find your API keys (account section)

Get wordpress installed on an instance

---------------------------------------

Open port 80 on the default security group.

Start an Ubuntu 12.04 instance and in the User-Data tab input:

#!/bin/sh
set -e -x
apt-get --yes --quiet update
apt-get --yes --quiet install git puppet-common
#
# Fetch puppet configuration from public git repository.
#
mv /etc/puppet /etc/puppet.orig
git clone https://github.com/retrack/exoscale-wordpress.git /etc/puppet
#
# Run puppet.
#
puppet apply /etc/puppet/manifests/init.pp

Now open your browser on port 80 of your instance's IP address, and voilà!

CloudMonkey, an interactive shell to your cloud

===============================================

The exoscale cloud is based on CloudStack and exposes the CloudStack native API. Let's use CloudMonkey, the Apache CloudStack CLI:

pip install cloudmonkey

cloudmonkey

The full documentation for cloudmonkey is on the [wiki](https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+cloudmonkey+CLI).

set port 443

set protocol https

set path /compute

set host api.exoscale.ch

set apikey <yourapikey>

set secretkey <secretkey>

Explore the ACS native API with CloudMonkey and tab tab....

Tabular Output

--------------

The number of key/value pairs returned by the API calls can be large, resulting in very long output. To make the output easier to view, tabular formatting can be set up. You may enable tabular listing and even choose a set of column fields; this lets you build your own view using the `filter` parameter, which takes a comma-separated list of fields. If an argument contains a space, put it under double quotes. The resulting table will show the fields in the same order as the filters provided.

> set display table

> list zones filter=name,id

count = 1

zone:

+--------+--------------------------------------+
| name   | id                                   |
+--------+--------------------------------------+
| CH-GV2 | 1128bd56-b4d9-4ac6-a7b9-c715b187ce11 |
+--------+--------------------------------------+

Starting a Virtual Machine instance with CloudMonkey

----------------------------------------------------

To start a virtual machine instance we will use the

*deployvirtualmachine* call.

cloudmonkey>deploy virtualmachine -h

Creates and automatically starts a virtual machine based on a service

offering, disk offering, and template.

Required args: serviceofferingid templateid zoneid

Args: account diskofferingid displayname domainid group hostid

hypervisor ipaddress iptonetworklist isAsync keyboard keypair name

networkids projectid securitygroupids securitygroupnames

serviceofferingid size startvm templateid userdata zoneid

The required arguments are *serviceofferingid, templateid and zoneid*

In order to specify the template that we want to use, we can list all

available templates with the following call:

> list templates filter=id,displaytext templatefilter=executable

count = 36

template:

+--------------------------------------+------------------------------------------+
| id                                   | displaytext                              |
+--------------------------------------+------------------------------------------+
| 3235e860-2f00-416a-9fac-79a03679ffd8 | Windows Server 2012 R2 WINRM 100GB Disk  |
| 20d4ebc3-8898-431c-939e-adbcf203acec | Linux Ubuntu 13.10 64-bit 10 GB Disk     |
| 70d31a38-c030-490b-bca9-b9383895ade7 | Linux Ubuntu 13.10 64-bit 50 GB Disk     |
| 4822b64b-418f-4d6b-b64e-1517bb862511 | Linux Ubuntu 13.10 64-bit 100 GB Disk    |
| 39bc3611-5aea-4c83-a29a-7455298241a7 | Linux Ubuntu 13.10 64-bit 200 GB Disk    |
...<snipped>

Similarly to get the *serviceofferingid* you would do:

> list serviceofferings filter=id,name

count = 7

serviceoffering:

+--------------------------------------+-------------+
| id                                   | name        |
+--------------------------------------+-------------+
| 71004023-bb72-4a97-b1e9-bc66dfce9470 | Micro       |
| b6cd1ff5-3a2f-4e9d-a4d1-8988c1191fe8 | Tiny        |
| 21624abb-764e-4def-81d7-9fc54b5957fb | Small       |
| b6e9d1e8-89fc-4db3-aaa4-9b4c5b1d0844 | Medium      |
| c6f99499-7f59-4138-9427-a09db13af2bc | Large       |
| 350dc5ea-fe6d-42ba-b6c0-efb8b75617ad | Extra-large |
| a216b0d1-370f-4e21-a0eb-3dfc6302b564 | Huge        |
+--------------------------------------+-------------+

Note that we can use Linux pipes as well as standard Linux commands within the interactive shell. Finally, we would start an instance with the following call:

cloudmonkey>deploy virtualmachine templateid=20d4ebc3-8898-431c-939e-adbcf203acec zoneid=1128bd56-b4d9-4ac6-a7b9-c715b187ce11 serviceofferingid=71004023-bb72-4a97-b1e9-bc66dfce9470
id = 5566c27c-e31c-438e-9d97-c5d5904453dc
jobid = 334fbc33-c720-46ba-a710-182af31e76df

This is an asynchronous job; it therefore returns a `jobid`. You can query the state of the job with:

> query asyncjobresult jobid=334fbc33-c720-46ba-a710-182af31e76df

accountid = b8c0baab-18a1-44c0-ab67-e24049212925

cmd = com.cloud.api.commands.DeployVMCmd

created = 2014-03-05T13:40:18+0100

jobid = 334fbc33-c720-46ba-a710-182af31e76df

jobinstanceid = 5566c27c-e31c-438e-9d97-c5d5904453dc

jobinstancetype = VirtualMachine

jobprocstatus = 0

jobresultcode = 0

jobstatus = 0

userid = 968f6b4e-b382-4802-afea-dd731d4cf9b9

Once the machine is started you can list it:

> list virtualmachines filter=id,displayname
count = 1
virtualmachine:
+--------------------------------------+--------------------------------------+
| id                                   | displayname                          |
+--------------------------------------+--------------------------------------+
| 5566c27c-e31c-438e-9d97-c5d5904453dc | 5566c27c-e31c-438e-9d97-c5d5904453dc |
+--------------------------------------+--------------------------------------+

The instance would be stopped with:

> stop virtualmachine id=5566c27c-e31c-438e-9d97-c5d5904453dc

jobid = 391b4666-293c-442b-8a16-aeb64eef0246

> list virtualmachines filter=id,displayname,state
count = 1
virtualmachine:
+--------------------------------------+--------------------------------------+---------+
| id                                   | displayname                          | state   |
+--------------------------------------+--------------------------------------+---------+
| 5566c27c-e31c-438e-9d97-c5d5904453dc | 5566c27c-e31c-438e-9d97-c5d5904453dc | Stopped |
+--------------------------------------+--------------------------------------+---------+

The *ids* that you will use will differ from this example. Make sure you use the ones that correspond to your CloudStack cloud.

Try to create an `sshkeypair` with `create sshkeypair` and a `securitygroup` with `create securitygroup`, and add some rules to it. With CloudMonkey all the CloudStack APIs are available.
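For example (a sketch; the names `testkey` and `web` are arbitrary, and the full parameter list of each call is in the API docs):

> create sshkeypair name=testkey
> create securitygroup name=web
> authorize securitygroupingress securitygroupname=web protocol=TCP startport=22 endport=22 cidrlist=0.0.0.0/0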

Apache libcloud

===============

Libcloud is a Python module that abstracts the different APIs of most cloud providers. It offers a common API for the basic functionality of clouds: listing nodes, sizes and templates, creating nodes, and so on. Libcloud can be used with CloudStack, OpenStack, OpenNebula, GCE and AWS.

Check the CloudStack driver [documentation](https://libcloud.readthedocs.org/en/latest/compute/drivers/cloudstack.html).

Installation

------------

To install Libcloud refer to the libcloud

[website](http://libcloud.apache.org). Or simply do:

pip install apache-libcloud

Generic use of Libcloud with CloudStack

---------------------------------------

With libcloud installed, you can now open a Python interactive shell,

create an instance of a CloudStack driver

and call the available methods via the libcloud API.

First you need to import the libcloud modules and create a CloudStack

driver.

>>> from libcloud.compute.types import Provider

>>> from libcloud.compute.providers import get_driver


>>> Driver = get_driver(Provider.CLOUDSTACK)

Then, using your keys and endpoint, create a connection object. Note that this is a local test and thus not secured. If you use a CloudStack public cloud, make sure to use SSL properly (i.e. `secure=True`). Replace the host and path with the ones of your public cloud. For exoscale, use `host='api.exoscale.ch'` and `path='/compute'`.

>>> apikey='plgWJfZK4gyS3mlZLYq_u38zCm0bewzGUdP66mg'
>>> secretkey='VDaACYb0LV9eNjeq1EhwJaw7FF3akA3KBQ'
>>> host='http://localhost:8080'
>>> path='/client/api'
>>> conn=Driver(key=apikey,secret=secretkey,secure=False,host='localhost',port='8080',path=path)

With the connection object in hand, you can now use the libcloud base API to list such things as the templates (i.e. images), the service offerings (i.e. sizes) and the zones (i.e. locations):

>>> conn.list_images()
[<NodeImage: id=13ccff62-132b-4caf-b456-e8ef20cbff0e, name=tiny Linux, driver=CloudStack ...>]
>>> conn.list_sizes()
[<NodeSize: id=ef2537ad-c70f-11e1-821b-0800277e749c, name=tinyOffering, ram=100 disk=0 bandwidth=0 price=0 driver=CloudStack ...>,
<NodeSize: id=c66c2557-12a7-4b32-94f4-48837da3fa84, name=Small Instance, ram=512 disk=0 bandwidth=0 price=0 driver=CloudStack ...>,
<NodeSize: id=3d8b82e5-d8e7-48d5-a554-cf853111bc50, name=Medium Instance, ram=1024 disk=0 bandwidth=0 price=0 driver=CloudStack ...>]

>>> images=conn.list_images()

>>> offerings=conn.list_sizes()

The `create_node` method takes an instance name, a template and an instance type as arguments. It returns an instance of a *CloudStackNode* that has additional extension methods, such as `ex_stop` and `ex_start`.

>>> node=conn.create_node(name='toto',image=images[0],size=offerings[0])
>>> help(node)
>>> node.get_uuid()
'b1aa381ba1de7f2d5048e248848993d5a900984f'
>>> node.name
u'toto'
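As noted above, the returned *CloudStackNode* exposes extension methods, so you can stop and start the instance directly (a quick sketch, assuming the node is up):

>>> node.ex_stop()
>>> node.ex_start()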

libcloud with exoscale

----------------------

Libcloud also has an exoscale specific driver. For complete description

see this recent [post](https://www.exoscale.ch/syslog/2014/01/27/licloud-

guest-post/) from Tomaz Murauz the VP of Apache Libcloud.


To get you started quickly, save the following script in a .py file.

#!/usr/bin/env python

import sys
import os

from IPython.terminal.embed import InteractiveShellEmbed
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver
from libcloud.compute.deployment import ScriptDeployment
from libcloud.compute.deployment import MultiStepDeployment

apikey = os.getenv('EXOSCALE_API_KEY')
secretkey = os.getenv('EXOSCALE_SECRET_KEY')

Driver = get_driver(Provider.EXOSCALE)
conn = Driver(key=apikey, secret=secretkey)

print conn.list_locations()

def listimages():
    for i in conn.list_images():
        print i.id, i.extra['displaytext']

def listsizes():
    for i in conn.list_sizes():
        print i.id, i.name

def getimage(id):
    return [i for i in conn.list_images() if i.id == id][0]

def getsize(id):
    return [i for i in conn.list_sizes() if i.id == id][0]

script = ScriptDeployment("/bin/date")
image = getimage('2c8bede9-c3b6-4450-9985-7b715d8e58c5')
size = getsize('71004023-bb72-4a97-b1e9-bc66dfce9470')
msd = MultiStepDeployment([script])

# directly open the shell
shell = InteractiveShellEmbed(banner1="Hello from Libcloud Shell !!")
shell()

Set your API keys properly and execute the script. You can now explore the libcloud API interactively: try to start a node, and also deploy a node. For instance, type `list` and press the tab key.
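For example, reusing the helpers and the `msd` deployment defined in the script, you might try something like this from the embedded shell (a sketch; `deploy_node` runs the `ScriptDeployment` over ssh, so the instance must be reachable and you may need to pass `ssh_key=` with your private key path):

In [1]: listsizes()
In [2]: node = conn.deploy_node(name='libcloud-test', image=image, size=size, deploy=msd)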

Vagrant boxes

=============

Install Vagrant and create the exo boxes

----------------------------------------


[Vagrant](http://vagrantup.com) is a tool to create *lightweight, portable and reproducible development environments*. Specifically, it allows you to use configuration management tools to configure a virtual machine locally (via VirtualBox) and then deploy it *in the cloud* via Vagrant providers.

In this next exercise we are going to install Vagrant on our local machine and use Exoscale vagrant boxes to provision VMs in the cloud, with the configuration handled by Vagrant. For further reading, check this [post](http://sebgoa.blogspot.co.uk/2013/12/veewee-vagrant-and-cloudstack.html).

First install [Vagrant](http://www.vagrantup.com/downloads.html) and then

get the cloudstack plugin:

vagrant plugin install vagrant-cloudstack

Then we are going to clone a small github [project](https://github.com/exoscale/vagrant-exoscale-boxes) from exoscale. This project is going to give us *vagrant boxes*: fake virtual machine images that refer to the Exoscale templates available.

git clone https://github.com/exoscale/vagrant-exoscale-boxes

cd vagrant-exoscale-boxes

Edit the `config.py` script to specify your API keys, then run:

python ./make-boxes.py

If you are familiar with Vagrant this will be straightforward; if not, you need to add a box to your local installation, for instance:

vagrant box add Linux-Ubuntu-13.10-64-bit-50-GB-Disk /path/or/url/to/boxes/Linux-Ubuntu-13.10-64-bit-50-GB-Disk.box

Initialize a `Vagrantfile` and start an instance

----------------------------------------------

Now you need to create a *Vagrantfile*. In the directory of your choice, for example `/tutorial`, do:

vagrant init

Then edit the `Vagrantfile` created to contain this:

Vagrant.configure("2") do |config|
  config.vm.box = "Linux-Ubuntu-13.10-64-bit-50-GB-Disk"
  config.ssh.username = "root"
  config.ssh.private_key_path = "/Users/vagrant/.ssh/id_rsa.vagrant"

  config.vm.provider :cloudstack do |cloudstack, override|
    cloudstack.api_key = "AAAAAAAAAAAAAAAA-aaaaaaaaaaa"
    cloudstack.secret_key = "SSSSSSSSSSSSSSSS-ssssssssss"

    # Uncomment ONE of the following service offerings:
    cloudstack.service_offering_id = "71004023-bb72-4a97-b1e9-bc66dfce9470"  # Micro - 512 MB
    #cloudstack.service_offering_id = "b6cd1ff5-3a2f-4e9d-a4d1-8988c1191fe8" # Tiny - 1GB
    #cloudstack.service_offering_id = "21624abb-764e-4def-81d7-9fc54b5957fb" # Small - 2GB
    #cloudstack.service_offering_id = "b6e9d1e8-89fc-4db3-aaa4-9b4c5b1d0844" # Medium - 4GB
    #cloudstack.service_offering_id = "c6f99499-7f59-4138-9427-a09db13af2bc" # Large - 8GB
    #cloudstack.service_offering_id = "350dc5ea-fe6d-42ba-b6c0-efb8b75617ad" # Extra-large - 16GB
    #cloudstack.service_offering_id = "a216b0d1-370f-4e21-a0eb-3dfc6302b564" # Huge - 32GB

    # For SSH boxes, the name of the public key pushed to the machine
    cloudstack.keypair = "vagrant"
  end
end

Make sure to set your API keys and your keypair properly. Also edit the

`config.vm.box` line to set the name of the box you actually added with

`vagrant box add` and edit the `config.ssh.private_key_path` to point to

the private key you got from exoscale. In this configuration the default

security group will be used.

You are now ready to bring the box up:

vagrant up --provider=cloudstack

Don't forget the `--provider=cloudstack` flag or the box won't come up.

Check the exoscale dashboard to see the machine boot, try to ssh into the

box.

Add provisioning steps

----------------------

Once you have successfully started a machine with Vagrant, you are ready to specify a provisioning script. Create a `bootstrap.sh` bash script in your working directory and make it do whatever you want.
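For example, a minimal `bootstrap.sh` (purely illustrative; install whatever you like):

#!/usr/bin/env bash
# Illustrative provisioning: install a web server
apt-get --yes --quiet update
apt-get --yes --quiet install nginx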

Add this provisioning step in the `Vagrantfile` like so:

# Test bootstrap script

config.vm.provision :shell, :path => "bootstrap.sh"

Relaunch the machine with `vagrant up` or `vagrant reload --provision`. To tear a machine down, use `vagrant destroy`.

You are now ready to dig deeper into Vagrant provisioning. See the provisioner [documentation](http://docs.vagrantup.com/v2/provisioning/index.html) and pick your favorite configuration management tool. For example, with [chef](http://www.getchef.com) you would specify a cookbook like so:

config.vm.provision "chef_solo" do |chef|
  chef.add_recipe "mycookbook"
end

Puppet example

---------------

For [Puppet](http://docs.vagrantup.com/v2/provisioning/puppet_apply.html), remember the script that we put in the User-Data of the very first example. We are going to use the same Puppet configuration, but via Vagrant.

Edit the `Vagrantfile` to have:

config.vm.provision "puppet" do |puppet|
  puppet.module_path = "modules"
end

Vagrant will look for the manifest in the `manifests` directory and for

the modules in the `modules` directory.

Now simply clone the repository that we used earlier:

git clone https://github.com/retrack/exoscale-wordpress

You should now see the `modules` and `manifests` directories in the root of the working directory that contains the `Vagrantfile`.

Remove the shell provisioning step, make sure to use the Ubuntu 12.04

template id and start the instance like before:

vagrant up --provider=cloudstack

Open your browser and get back to Wordpress! Of course, the whole idea of Vagrant is that you can test all of these provisioning steps on your local machine using VirtualBox. Once you are happy with your recipes you can then move to provisioning in the cloud. Check out [Packer](http://packer.io), a related project which you can use to generate images for your cloud.

Playing with multi-machines configuration

-----------------------------------------

Vagrant is also very interesting because you can start multiple machines at [once](http://docs.vagrantup.com/v2/multi-machine/index.html). Edit the `Vagrantfile` to add a `web` and a `db` machine. Add the CloudStack-specific information and specify different bootstrap scripts.

config.vm.define "web" do |web|
  web.vm.box = "tutorial"
end

config.vm.define "db" do |db|
  db.vm.box = "tutorial"
end

You can control each machine separately with `vagrant up web` and `vagrant up db`, or all at once in parallel with `vagrant up`.

Let the fun begin. Pick your favorite configuration management tool,

decide what you want to provision, setup your recipes and launch the

instances.

Ansible

=======

Our last exercise for this tutorial is an introduction to [Ansible](http://ansibleworks.com). Ansible is a newer configuration management system based on ssh communication with the instances and a server-less setup. It is easy to install and get [started](http://docs.ansible.com/intro.html). Of course, it can be used in conjunction with Vagrant.

Install and remote execution

----------------------------

First install *ansible*:

pip install ansible

Or get it via packages (`yum install ansible`, `apt-get install ansible`) if you have set up the proper repositories.

If you kept the instances from the previous exercise running, create an *inventory* file `inv` with their IP addresses, like so:

Then run your first ansible command: `ping`:

ansible all -i inv -m ping

You should see the following output:

185.1.2.3 | success >> {

"changed": false,

"ping": "pong"

}

185.3.4.5 | success >> {

"changed": false,

"ping": "pong"

}

And see how you can use Ansible as a remote execution framework:


$ ansible all -i inv -a "/bin/echo hello"

185.1.2.3 | success | rc=0 >>

hello

185.3.4.5 | success | rc=0 >>

hello

Now check all the great Ansible [examples](https://github.com/ansible/ansible-examples), pick one, download it via github and try to configure your instances with `ansible-playbook`, as sketched below.
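For instance, with an example that has a top-level `site.yml`, you can run it against the inventory file created earlier (a sketch; most of the examples expect root or a sudo-capable user):

ansible-playbook -i inv -u root site.yml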

Provisioning with Playbooks

----------------------------

Clone the `ansible-examples` outside your Vagrant project:

cd ..

git clone https://github.com/ansible/ansible-examples.git

Pick the one you want to try, go easy first :) Maybe wordpress or a lamp stack. Copy its content to an `ansible` directory within the root of the Vagrant project:

cd ./tutorial

mkdir ansible

cd ansible

cp -R ../../ansible-examples/wordpress-nginx/ .

Go back to the Vagrant project directory we have been working on and edit

the `Vagrantfile`. Remove the Puppet provisioning or comment it out and

add:

# Ansible test
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "ansible/site.yml"
  ansible.verbose = "vvvv"
  ansible.host_key_checking = "false"
  ansible.sudo_user = "root"
end

And start the instance once again

vagrant up --provider=cloudstack

Watch the output from the Ansible provisioning and, once it finishes, access the WordPress application that was just configured.


Using Veewee and Vagrant in development cycle

=============================================

Automation is key to a reproducible, failure-tolerant infrastructure. Cloud administrators should aim to automate all steps of building their infrastructure and be able to re-provision everything with a single click. This is possible through a combination of configuration management, monitoring and provisioning tools. To get started creating appliances that will be automatically configured and provisioned, two tools stand out in the arsenal: Veewee and Vagrant.

Veewee

------

[Veewee](https://github.com/jedi4ever/veewee) is a tool to easily create appliances for different hypervisors. It fetches the .iso of the distribution you want and builds the machine with a kickstart file. It integrates with providers like VirtualBox so that you can build these appliances on your local machine. It supports most commonly used OS templates. Coupled with VirtualBox, it allows admins and devs to create reproducible base appliances. Getting started with Veewee is a 10-minute exercise. The README is great and there is also a very nice [post](http://cbednarski.com/articles/veewee/) that guides you through your first box build.

Most folks will have no issues cloning Veewee from github and building it; you will need Ruby 1.9.2 or above, which you can get via `rvm` or your favorite Ruby version manager.

git clone https://github.com/jedi4ever/veewee

gem install bundler

bundle install

Setting up an alias is handy at this point: `alias veewee="bundle exec veewee"`. You will need a virtual machine provider (e.g. VirtualBox, VMware Fusion, Parallels, KVM). I personally use VirtualBox, but pick one and install it if you don't have it already. You will then be able to start using `veewee` on your local machine. Check the sub-commands available (for VirtualBox):

$ veewee vbox
Commands:
  veewee vbox build [BOX_NAME]                    # Build box
  veewee vbox copy [BOX_NAME] [SRC] [DST]         # Copy a file to the VM
  veewee vbox define [BOX_NAME] [TEMPLATE]        # Define a new basebox starting from a template
  veewee vbox destroy [BOX_NAME]                  # Destroys the virtualmachine that was built
  veewee vbox export [BOX_NAME]                   # Exports the basebox to the vagrant format
  veewee vbox halt [BOX_NAME]                     # Activates a shutdown the virtualmachine
  veewee vbox help [COMMAND]                      # Describe subcommands or one specific subcommand
  veewee vbox list                                # Lists all defined boxes
  veewee vbox ostypes                             # List the available Operating System types
  veewee vbox screenshot [BOX_NAME] [PNGFILENAME] # Takes a screenshot of the box
  veewee vbox sendkeys [BOX_NAME] [SEQUENCE]      # Sends the key sequence (comma separated) to the box. E.g for testing the :boot_cmd_sequence
  veewee vbox ssh [BOX_NAME] [COMMAND]            # SSH to box
  veewee vbox templates                           # List the currently available templates
  veewee vbox undefine [BOX_NAME]                 # Removes the definition of a basebox
  veewee vbox up [BOX_NAME]                       # Starts a Box
  veewee vbox validate [BOX_NAME]                 # Validates a box against vagrant compliancy rules
  veewee vbox winrm [BOX_NAME] [COMMAND]          # Execute command via winrm

Options:
  [--debug]                  # enable debugging
  -w, --workdir, [--cwd=CWD] # Change the working directory. (The folder containing the definitions folder).
                             # Default: /Users/sebgoa/Documents/gitforks/veewee

Pick a template from the `templates` directory and `define` your first

box:

veewee vbox define myfirstbox CentOS-6.5-x86_64-minimal

You should see that a `definitions/` directory has been created; browse to it and inspect the `definition.rb` file. You might want to comment out some lines, like removing `chef` or `puppet`. If you don't change anything and just build the box, you will then be able to `validate` it with `veewee vbox validate myfirstbox`. To build the box simply do:

veewee vbox build myfirstbox

Everything should be successful, and you should see a running VM in your VirtualBox UI. To export it for use with `Vagrant`, `veewee` provides an export mechanism (really a VBoxManage command): `veewee vbox export myfirstbox`. At the end of the export, a .box file should be present in your directory.

Vagrant

-------

Picking up from where we left off with `veewee`, we can now add the box to [Vagrant](https://github.com/jedi4ever/veewee/blob/master/doc/vagrant.md) and customize it with shell scripts or, much better, with Puppet recipes or Chef cookbooks. First let's add the box file to Vagrant:

vagrant box add 'myfirstbox' '/path/to/box/myfirstbox.box'

Then in a directory of your choice, create the Vagrant "project":

vagrant init 'myfirstbox'

This will create a `Vagrantfile` that we will later edit to customize the box. You can boot the machine with `vagrant up` and, once it's up, you can ssh to it with `vagrant ssh`.

While `veewee` is used to create a base box with almost no [customization](https://github.com/jedi4ever/veewee/blob/master/doc/customize.md) (except potentially a chef and/or puppet client), `vagrant` is used to customize the box using the Vagrantfile. For example, to customize the `myfirstbox` that we just built, set the memory to 2 GB, add a host-only interface with IP 192.168.56.10, use the apache2 Chef cookbook and finally run a `bootstrap.sh` script, we would have the following `Vagrantfile`:

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "myfirstbox"

  config.vm.provider "virtualbox" do |vb|
    vb.customize ["modifyvm", :id, "--memory", 2048]
  end

  # Host-only network setup
  config.vm.network "private_network", ip: "192.168.56.10"

  # Chef solo provisioning
  config.vm.provision "chef_solo" do |chef|
    chef.add_recipe "apache2"
  end

  # Test script to install CloudStack
  #config.vm.provision :shell, :path => "bootstrap.sh"
end

The cookbook will be in a `cookbooks` directory and the bootstrap script will be in the root directory of this vagrant definition. For more information, check the Vagrant [website](http://www.vagrantup.com) and experiment.

Vagrant CloudStack

------------------

What is very interesting with Vagrant is that you can use various plugins to deploy machines on public clouds. There is a `vagrant-aws` plugin and of course a `vagrant-cloudstack` plugin. You can get the latest CloudStack plugin from [github](https://github.com/klarna/vagrant-cloudstack). You can install it directly with the `vagrant` command line:

vagrant plugin install vagrant-cloudstack

Or, if you are building it from source, clone the git repository, build the gem and install it in `vagrant`:

git clone https://github.com/klarna/vagrant-cloudstack.git
gem build vagrant-cloudstack.gemspec
gem install vagrant-cloudstack-0.1.0.gem
vagrant plugin install /Users/sebgoa/Documents/gitforks/vagrant-cloudstack/vagrant-cloudstack-0.0.7.gem

The only drawback I see is that one would want to upload a local box (created in the previous section) and use it. Instead, one has to create `dummy boxes` that use existing templates available on the public cloud. This is easy to do, but it creates a gap between local testing and production deployments. To build a dummy box, simply create a `Vagrantfile` and a `metadata.json` file like so:

$ cat metadata.json
{
    "provider": "cloudstack"
}
$ cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.provider :cloudstack do |cs|
    cs.template_id = "a17b40d6-83e4-4f2a-9ef0-dce6af575789"
  end
end

Here `cs.template_id` is the UUID of a CloudStack template in your cloud. CloudStack users will know how to easily get those UUIDs with `CloudMonkey`. Then create a `box` file with `tar cvzf cloudstack.box ./metadata.json ./Vagrantfile`. Simply add the box to `Vagrant` with:

vagrant box add ./cloudstack.box

You can now create a new `Vagrant` project:

mkdir cloudtest

cd cloudtest

vagrant init

And edit the newly created `Vagrantfile` to use the `cloudstack` box. Add additional parameters like the `ssh` configuration if the box does not use the Vagrant defaults, plus `service_offering_id`, etc. Remember to use your own API and secret keys and to change the name of the box to what you created. For example, on [exoscale](http://www.exoscale.ch):


# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "cloudstack"

  config.vm.provider :cloudstack do |cs, override|
    cs.host = "api.exoscale.ch"
    cs.path = "/compute"
    cs.scheme = "https"
    cs.api_key = "PQogHs2sk_3..."
    cs.secret_key = "...NNRC5NR5cUjEg"
    cs.network_type = "Basic"
    cs.keypair = "exoscale"
    cs.service_offering_id = "71004023-bb72-4a97-b1e9-bc66dfce9470"
    cs.zone_id = "1128bd56-b4d9-4ac6-a7b9-c715b187ce11"
    override.ssh.username = "root"
    override.ssh.private_key_path = "/path/to/private/key/id_rsa_example"
  end

  # Test bootstrap script
  config.vm.provision :shell, :path => "bootstrap.sh"
end

The machine is brought up with:

vagrant up --provider=cloudstack

You should see output similar to the following:

$ vagrant up --provider=cloudstack
Bringing machine 'default' up with 'cloudstack' provider...
[default] Warning! The Cloudstack provider doesn't support any of the Vagrant
high-level network configurations (`config.vm.network`). They will be silently ignored.
[default] Launching an instance with the following settings...
[default]  -- Service offering UUID: 71004023-bb72-4a97-b1e9-bc66dfce9470
[default]  -- Template UUID: a17b40d6-83e4-4f2a-9ef0-dce6af575789
[default]  -- Zone UUID: 1128bd56-b4d9-4ac6-a7b9-c715b187ce11
[default]  -- Keypair: exoscale
[default] Waiting for instance to become "ready"...
[default] Waiting for SSH to become available...
[default] Machine is booted and ready for use!
[default] Rsyncing folder: /Users/sebgoa/Documents/exovagrant/ => /vagrant
[default] Running provisioner: shell...
[default] Running: /var/folders/76/sx82k6cd6cxbp7_djngd17f80000gn/T/vagrant-shell20131203-21441-1ipxq9e
Tue Dec 3 14:25:49 CET 2013
This works

Which is a perfect execution of my amazing bootstrap script:

#!/usr/bin/env bash

/bin/date

echo "This works"

You can now start playing with Chef cookbooks or Puppet recipes and

automate the configuration of your cloud instances, thanks to

[Vagrant](http://vagrantup.com) and

[CloudStack](http://cloudstack.apache.org).