TRANSCRIPT
OPEN-SOURCE SOFTWARE TOOLKITS FOR CREATING AND MANAGING DISTRIBUTED HETEROGENEOUS CLOUD INFRASTRUCTURES
A.V. Pyarn
Lomonosov Moscow State University, Faculty of Computational Mathematics and Cybernetics
Agenda
• Aim of paper
• Virtualization
• Hypervisor architecture
• IaaS and cloud toolkits
• Requirements
• Hypervisor toolstacks
• Cloud platforms
• Comparison
Aim of paper
Show use cases of cloud toolkits as virtual testbeds ("polygons") for educational purposes.
Compare cloud toolkit design aspects, architectural features, functional capabilities, installation how-tos, and extension and support capabilities: the Xen and KVM toolkits, and the OpenNebula and OpenStack cloud toolkits.
Virtualization
Types of Virtualization
• Emulation: fully emulate the underlying hardware architecture
• Full virtualization: simulate the base hardware architecture (Oracle VirtualBox; VMware Player/Server; VMware ESXi, vSphere, Hyper-V, KVM, Xen)
• Paravirtualization: abstract the base architecture (Xen)
• OS-level virtualization: shared kernel (and architecture), separate user spaces (OpenVZ)
Hypervisor role
A thin, privileged abstraction layer between the hardware and operating systems.
Defines the virtual machine that guest domains see instead of physical hardware:
• Grants portions of physical resources to each guest
• Exports simplified devices to guests
• Enforces isolation among guests
Hypervisor architecture
Toolstack
Toolstack = standard Linux tools + specific third-party toolkits and API daemons: libvirt, XEND, XAPI, etc.
IaaS
IaaS = virtualization (hypervisor features) + "Amazon-style" self-service portal and convenient GUI management + billing + multitenancy
Hypervisor toolstacks and APIs vs. third-party open-source cloud toolkits (OpenNebula, the OpenStack "datacenter virtualization" platform, etc.)
What should we use?
Depends on requirements
Requirements
For educational testbeds:
• Open-source software: hypervisor and management subsystem
• NFS or iSCSI independent storage for virtual disks and images
• Easy installation and support
• GUI: management center; optionally a self-service portal, monitoring and accounting tools
Cloud platforms
Cloud platforms do not include hypervisors themselves; they play only a management role.
[Diagram: management server(s) — scheduler, authorization, monitoring, web interface, DB — connected via SSH to agentless worker nodes (hardware + hypervisor running guest OSes) and to shared NFS/iSCSI storage]
KVM-QEMU
• SMP hosts
• SMP guests (as of kvm-61, max 16 CPUs supported)
• Live migration of guests from one host to another
Emulated hardware:
Class                  | Device
Video card             | Cirrus CLGD 5446 PCI VGA card, or dummy VGA card with Bochs VESA extensions
PCI                    | i440FX host PCI bridge and PIIX3 PCI-to-ISA bridge
Input device           | PS/2 mouse and keyboard
Sound card             | Sound Blaster 16, ENSONIQ AudioPCI ES1370, Gravis Ultrasound GF1, CS4231A compatible
Ethernet network card  | AMD Am79C970A (Am7990), E1000 (Intel 82540EM, 82573L, 82544GC), NE2000, Realtek RTL8139
Watchdog timer         | Intel 6300ESB or IB700
RAM                    | 50 MB – 32 TB
CPU                    | 1–16 CPUs
KVM-QEMU
• Ease of use +
• Shared storage +
• Live migrations +
• Management GUI + (virtual machine manager)
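As a sketch of the live-migration feature mentioned above, assuming two KVM hosts that share an NFS datastore and a running guest named guest1 (all names are hypothetical), migration via libvirt's virsh looks like this:

```shell
# Live-migrate a running KVM guest to node2 over libvirt's qemu+ssh transport.
# "guest1" and "node2" are placeholder names for this sketch.
virsh migrate --live guest1 qemu+ssh://node2/system

# Verify the guest is now running on the destination host:
virsh --connect qemu+ssh://node2/system list
```

The same operation is available graphically through virt-manager, which is what makes KVM-QEMU score well on the "Management GUI" criterion.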
XEN
Virtualization in Xen
Xen can scale to >255 physical CPUs, 128 VCPUs per PV guest, 1 TB of RAM per host, and up to 1 TB of RAM per HVM guest or 512 GB of RAM per PV guest.
Paravirtualization:
• Uses a modified Linux kernel
• Guest loads Dom0's pygrub or Dom0's kernel
• Front-end and back-end virtual device model
• Cannot run Windows
• Guest "knows" it's a VM and cooperates with the hypervisor
Hardware-assisted full virtualization (HVM):
• Uses the same, normal OS kernel
• Guest contains grub and kernel
• Normal device drivers
• Can run Windows
• Guest doesn't "know" it's a VM, so the hardware manages it
Xen – Cold Relocation
Motivation: moving a guest between hosts without shared storage, or with different architectures or hypervisor versions.
Process:
1. Shut down the guest on the source host
2. Move the guest from one Domain0's file system to another's by manually copying the guest's disk image and configuration files
3. Start the guest on the destination host
Xen – Cold Relocation
Benefits:
• Hardware maintenance with less downtime
• Shared storage not required
• Domain0s can be different
• Multiple copies and duplications
Limitations:
• More manual process
• Service must be down during the copy
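The three-step cold-relocation process above might look like this in practice. Host names, image paths, and the use of the xl toolstack are assumptions for this sketch (xm works similarly on older xend-based setups):

```shell
# 1. Shut the guest down on the source host
xl shutdown vm1

# 2. Manually copy the disk image and configuration file
#    to the destination Domain0 ("dest" is a placeholder host name)
scp /var/lib/xen/images/vm1.img root@dest:/var/lib/xen/images/
scp /etc/xen/vm1.cfg root@dest:/etc/xen/

# 3. Start the guest on the destination host
ssh root@dest xl create /etc/xen/vm1.cfg
```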
Xen – Live Migration
Motivation: load balancing, hardware maintenance, and power management.
Process:
• Begins transferring the guest's state to the new host
• Repeatedly copies dirtied guest memory (due to continued execution) until complete
• Re-routes network connections; the guest continues executing with execution and network uninterrupted
Xen – Live Migration
Benefits:
• No downtime
• Network connections to and from the guest often remain active and uninterrupted
• Guest and its services remain available
Limitations:
• Requires shared storage
• Hosts must be on the same layer-2 network
• Sufficient spare resources needed on the target machine
• Hosts must be configured similarly
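When the shared-storage and same-subnet requirements are met, live migration reduces to a single command. Guest and host names are hypothetical:

```shell
# Live-migrate a running Xen guest to another host in the pool;
# with the xl toolstack, migration is performed live by default.
xl migrate vm1 dest-host
```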
Xen Cloud Platform (XCP)
XCP includes:
• Open-source Xen hypervisor
• Enterprise-level XenAPI (XAPI) management toolstack
• Support for Open vSwitch (an open-source, standards-compliant virtual switch)
Features:
• Fully signed Windows PV drivers
• Heterogeneous machine resource pool support
• Installation templates for many different guest OSes
XCP XenAPI Management Tool Stack
• VM lifecycle: live snapshots, checkpointing, migration
• Resource pools: live relocation, auto configuration, disaster recovery
• Flexible storage, networking, and power management
• Event tracking: progress, notification
• Upgrade and patching capabilities
• Real-time performance monitoring and alerting
XCP Installation
XCP Management Software
XenCenter
XCP Toolstack
Command-Line Interface (CLI) Tools

Toolstack | xl | XAPI | libvirt | xend
CLI tool  | xl | xe   | virsh   | xm
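The toolstack-to-CLI mapping above can be illustrated with the equivalent "list domains" operation in each tool (output formats differ between them):

```shell
xl list             # xl: purpose-built Xen toolstack
xe vm-list          # xe: the XAPI toolstack used by XCP
virsh list --all    # virsh: the libvirt CLI
xm list             # xm: the legacy xend toolstack
```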
XCP Toolstack
Toolstack Feature Comparison

Feature                                          | xl | xapi | libvirt
Purpose-built for Xen                            | X  | X    |
Basic VM operations                              | X  | X    | X
Managed domains                                  |    | X    | X
Live migration                                   | X  | X    | X
PCI passthrough                                  | X  | X    | X
Host pools                                       |    | X    |
Flexible, advanced storage types                 |    | X    |
Built-in advanced performance monitoring (RRDs)  |    | X    |
Host plugins (XAPI)                              |    | X    |
OpenNebula
OpenNebula
What are the Main Components?
• Interfaces & APIs: OpenNebula provides many different interfaces for interacting with the functionality offered to manage physical and virtual resources. There are two main ways to manage OpenNebula instances: the command-line interface and the Sunstone GUI. There are also several cloud interfaces that can be used to create public clouds, OCCI and EC2 Query, and a simple self-service portal for cloud consumers. In addition, OpenNebula features powerful integration APIs to enable easy development of new components (new virtualization drivers for hypervisor support, new information probes, etc.).
• Users and Groups
• Hosts: the main hypervisors are supported: Xen, KVM, and VMware.
• Networking
• Storage: OpenNebula is flexible enough to support many different image storage configurations. Support for multiple datastores in the storage subsystem provides great flexibility in planning the storage backend, with important performance benefits. The main storage configurations are supported: the file-system datastore, which stores disk images as files and transfers them over ssh or shared file systems (NFS, GlusterFS, Lustre, ...); iSCSI/LVM, which stores disk images as block devices; and the VMware datastore, specialized for the VMware hypervisor and its vmdk format.
• Clusters: pools of hosts that share datastores and virtual networks, used for load balancing, high availability, and high-performance computing.
OpenNebula - installation
• Front-end: executes the OpenNebula services.
• Hosts: hypervisor-enabled hosts that provide the resources needed by the VMs.
• Datastores: hold the base images of the VMs.
• Service network: physical network used to support basic services: interconnection of the storage servers and OpenNebula control operations.
• VM networks: physical network that will support VLANs for the VMs.
OpenNebula – installation front-end
Front-End
The machine that holds the OpenNebula installation is called the front-end. This machine needs access to the datastores (e.g. a direct mount or over the network) and network connectivity to each host. The base installation of OpenNebula takes less than 10 MB.
OpenNebula services include:
• Management daemon (oned) and scheduler (mm_sched)
• Monitoring and accounting daemon (onecctd)
• Web interface server (sunstone)
• Cloud API servers (ec2-query and/or occi)
Note that these components communicate through XML-RPC and may be installed on different machines for security or performance reasons.
Requirements for the front-end: ruby >= 1.8.7

sudo apt-get install opennebula
OpenNebula – installation hosts
Hosts
The hosts are the physical machines that will run the VMs. During the installation you will have to configure the OpenNebula administrative account so that it can ssh to the hosts, and, depending on your hypervisor, allow this account to execute commands with root privileges or make it part of a given group.
OpenNebula doesn't need to install any packages on the hosts; the only requirements are:
• ssh server running
• hypervisor properly configured and working
• Ruby >= 1.8.7
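Preparing a host might be sketched as follows, assuming the default oneadmin administrative account and a host named host01 (both assumptions; the onehost driver flags also vary between OpenNebula versions):

```shell
# On the front-end, as oneadmin: generate a key and push it
# so the account can ssh to the host without a password.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id oneadmin@host01

# Register the host with OpenNebula (KVM drivers shown as an example):
onehost create host01 --im kvm --vm kvm --net dummy
onehost list
```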
OpenNebula – installation storage
Storage
OpenNebula uses datastores to handle the VM disk images. VM images are registered, or created (empty volumes), in a datastore. In general, each datastore has to be accessible from the front-end using any suitable technology: NAS, SAN, or direct-attached storage. When a VM is deployed, the images are transferred from the datastore to the hosts. Depending on the actual storage technology used, this can mean a real transfer, a symbolic link, or setting up an iSCSI target.
There are two configuration steps needed for a basic setup:
1. Configure the system datastore to hold images for the running VMs.
2. Set up one or more datastores for the disk images of the VMs (see the Filesystem Datastore documentation). OpenNebula can work without a shared FS, but this forces the deployment to always clone the images, and only cold migrations are possible.
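The second step can be sketched as a datastore template. The attribute names (DS_MAD, TM_MAD) follow OpenNebula's datastore template format; the datastore name and file name are placeholders:

```shell
# Describe a shared file-system datastore in a template file...
cat > ds.conf <<EOF
NAME   = production
DS_MAD = fs
TM_MAD = shared
EOF

# ...then register it and check the result:
onedatastore create ds.conf
onedatastore list
```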
OpenNebula – installation networking
The network is needed by the OpenNebula front-end daemons to access the hosts, to manage and monitor the hypervisors, and to move image files. It is highly recommended to use a dedicated network for this purpose.
To offer network connectivity to the VMs across the different hosts, the default configuration connects the virtual machine's network interface to a bridge on the physical host.
You should create bridges with the same name on all the hosts. Depending on the network model, OpenNebula will dynamically create network bridges.
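Creating such a bridge by hand could look like this on each host. The bridge name br0 and interface eth0 are assumptions; Debian-style bridge-utils commands are shown:

```shell
brctl addbr br0        # create the bridge (must use the same name on every host)
brctl addif br0 eth0   # attach the physical interface to it
ip link set br0 up     # bring the bridge up
```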
OpenNebula – CLI
OpenNebula – Sunstone
OpenNebula – Sunstone
OpenNebula – Sunstone
OpenStack
A set of projects, written in Python:
• Compute• Storage• Networking• Dashboard (GUI)
OpenStack - compute
OpenStack - installation
1. Install Ubuntu 12.04 (Precise) or Fedora 16. To correctly install all the dependencies, a specific version of Ubuntu or Fedora is assumed, to make it as easy as possible. OpenStack works on other flavors of Linux (and some folks even run it on Windows!). A minimal install of Ubuntu Server, or a VM, is recommended if this is your first time.
2. Download DevStack:
git clone git://github.com/openstack-dev/devstack.git
The devstack repo contains a script that installs OpenStack, plus templates for configuration files.
3. Start the install:
cd devstack; ./stack.sh
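Before running stack.sh, one can optionally drop a minimal localrc file into the devstack directory so the script does not prompt for credentials. The variable names follow DevStack conventions of that era; the values are placeholders:

```shell
# Write a minimal DevStack configuration (run from inside the devstack directory)
cat > localrc <<EOF
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
SERVICE_TOKEN=token
EOF
```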
OpenStack - installation
OpenStack - dashboard
OpenStack - summary
1. Hard to install and maintain
2. Poor logical structure of the software
3. Not stable, many bugs
Conclusion
Xen
KVM/QEMU
OpenNebula
OpenStack