mycloud for $100k
DESCRIPTION
A simple setup to build a private or public cloud. A cloud at the IaaS layer is simply a cluster of hypervisors with some added storage infrastructure and software to orchestrate everything. In this presentation we show some straightforward Dell hardware that could be purchased to build a single rack as the basis for a private or public cloud. It totals $100k and, coupled with open source software (CloudStack, Ceph, GlusterFS, NFS, etc.), is the basis for your cloud. You will get an AWS-compatible cloud in no time and with limited acquisition cost.
TRANSCRIPT
My $100k Cloud
Sebastien Goasguen - Citrix
Michael Fenn - D. E. Shaw Research
Oct 1st 2012
Goal
• Build a rack that can act as a private/public cloud
• IaaS implementation from hardware to software
• Entry-level system for SME / Academic research / POC
• Main capability: Provision/Manage virtual machines on-demand, AWS compliant
Assumptions
• There is a machine room to put this rack
• We choose Dell as a vendor for no other reason than familiarity and our hope that we can get a 33% discount on the list price
• We are going to use CloudStack as the cloud platform solution.
• And use other open source software for configuration, storage and monitoring.
Head node
Head + storage node: Dell R720xd (2U)
2x Intel Xeon E5-2650 2.30 GHz
8x 8 GB RDIMM (64 GB RAM)
12x 2TB NL-SAS Hot-Plug (24 TB)
Quad-port Broadcom 5720 1Gb
Dual Hot-Plug Redundant Power Supply
Per node cost w/ discount: $9,500
Compute/Hypervisor Node
Compute node: Dell R420 (1U)
2x Intel Xeon E5-2430 2.20 GHz
4x 8 GB RDIMM (32 GB RAM)
4x 1TB SATA Hot-Plug
On-Board Dual Gigabit Network
Per node cost w/ discount: $3,500
Switch
Networking: Dell PowerConnect 7048
48 port Managed Switch, 1 GbE with 10 GbE and stacking capabilities
1x 10 GbE Uplink Module
Per switch cost w/ discount: $5,000
Rack and PDUs
• Standard air cooled rack. The DELL 4220 rack would be a good choice.
• The whole solution should draw around 6kW, so an 8kW UPS would be a good fit. APC has one called the Smart-UPS RT 10000.
Total Budget
• Networking (1 unit) = $5,000
• Head node (1 unit) = $9,500
• Compute nodes (21 units) = $73,500
• Rack + power infrastructure = $10,000
• Total: $98,000
• Total: 264 cores, 736 GB of RAM, and 108 TB of storage, in 25U
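The line items above can be checked with a few lines of arithmetic. This is just a tally of the discounted prices and per-node specs quoted on the earlier slides:

```python
# Tally the budget and aggregate capacity from the per-item figures
# quoted on the slides (discounted Dell list prices, Oct 2012).
switch = 5000            # 1x PowerConnect 7048
head = 9500              # 1x R720xd head/storage node
compute = 21 * 3500      # 21x R420 compute nodes
rack_power = 10000       # rack, PDUs, UPS

total = switch + head + compute + rack_power
print(total)             # 98000 -- under the $100k target

# Aggregate capacity: head node (64 GB RAM, 24 TB raw) plus
# 21 compute nodes (32 GB RAM, 4 TB raw each)
ram_gb = 64 + 21 * 32
storage_tb = 24 + 21 * 4
print(ram_gb, storage_tb)    # 736 GB RAM, 108 TB raw storage
```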
Software setup
• OS: RHEL-like; since we used to work in High Energy Physics we choose Scientific Linux 6.3. Not officially supported by CloudStack, but it does work.
• Hypervisor: KVM or Xen, depending on local expertise
• Cloud Platform: Apache CloudStack
Software setup
• Storage: NFS for the image store, for ease of setup. GlusterFS for primary storage, or a local mount point, depending on expertise.
• Configuration management: Puppet or Chef
• Monitoring: Zenoss Core with the CloudStack ZenPack
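For the NFS piece, the setup can be as small as one export on the head node. A minimal sketch follows; the paths, hostname, and 10.0.0.0/24 subnet are illustrative assumptions, not from the slides:

```shell
# /etc/exports on the head node (R720xd): export the secondary
# storage directory to the cluster's private subnet (illustrative)
/export/secondary  10.0.0.0/24(rw,async,no_root_squash)

# then, on the head node:
#   exportfs -ra                 # reload the export table
#   showmount -e localhost       # verify the export is visible
# and on a hypervisor, to test the mount (paths illustrative):
#   mount -t nfs headnode:/export/secondary /mnt/secondary
```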
CloudStack History
• Original company: VMOps (2008)
• Open sourced (GPLv3) as CloudStack
• Acquired by Citrix (July 2011)
• Relicensed under ASL v2, April 3, 2012
• Accepted as an Apache Incubating Project, April 16, 2012 (http://www.cloudstack.org)
• First Apache release (ACS 4.0) coming really soon!
Multiple Contributors
• SunGard: Seven developers have joined the incubating project
• Schuberg Philis: Big contribution to building/packaging and Nicira support
• Go Daddy: Maven building
• Caringo: Support for their own object store
• Basho: Support for Riak CS
Terminology
• Zone: Availability zone, aka region. Could be worldwide; different data centers
• Pod: Racks or aisles in a data center
• Cluster: Group of machines with a common type of hypervisor
• Host: A single server
• Primary Storage: Shared storage across a cluster
• Secondary Storage: Shared storage in a single zone
“Logical” CS deployment
• Farm of hypervisors, with primary storage available “cluster”-wide for running VMs
• Separate secondary storage to store VM images and data volumes.
Our deployment
Economy
• We have 252 cores of hypervisors
• If we consider overprovisioning of 2 VMs per core, full capacity is 504 VMs.
• At $0.10 per hour for small instances, we need 1M VM-hours to earn back our $100k.
• 1,000,000 / (504 × 24) ≈ 83
• 83 days to recover the capital investment
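The payback arithmetic above can be reproduced in a few lines, using the slides' assumptions (252 hypervisor cores, 2 small VMs per core, $0.10/hour, and full utilization):

```python
# Payback estimate from the slides' assumptions: 252 hypervisor
# cores, overprovisioned at 2 small VMs per core, $0.10/hour each.
cores = 252
vms = cores * 2                      # 504 VMs at full capacity
capital = 100000                     # approximate capital cost, $
price_per_hour = 0.10                # small-instance price, $/hr

vm_hours_needed = capital / price_per_hour   # 1,000,000 VM-hours
days = vm_hours_needed / (vms * 24)          # assuming full utilization
print(vms, round(days))              # 504 VMs, ~83 days
```

Note the estimate ignores power, cooling, and staff time; it is a best-case figure at 100% occupancy.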
Optional Setup
• Dual GigE cards allow us to do NIC bonding,
• or to create a separate management network or storage network if need be.
• A first deployment should use CloudStack security groups (to avoid having to configure VLANs on the switch). A second deployment could try VLANs.
• Run an OpenFlow controller on the head node and experiment with SDN, using Open vSwitch on the nodes.
Possible Expansion with more $$
• Fill the rack with nodes to be used as hypervisors (no change to the software setup, just add hosts in CloudStack).
• Fill the rack with GPU nodes for HPC (add hosts in CloudStack using the bare-metal PXE/IPMI component).
• Fill the rack with storage nodes set up as a Hadoop cluster on bare metal.
• Fill the rack with SSD-based storage nodes.
“Bare Metal” Hybrid deployment
• Hypervisor cluster, plus a bare-metal cluster with specialized hardware (e.g. GPUs) or software (Hadoop).
Info
• Apache incubator project
• http://www.cloudstack.org
• #cloudstack on irc.freenode.net
• @cloudstack on Twitter
• http://www.slideshare.net/cloudstack
• http://cloudstack.org/discuss/mailing-lists.html
Welcoming contributions and feedback. Join the fun!