
DESCRIPTION

Space ship depots continue to come online, with launches to the Moon occurring daily. The moon bases have been stabilized, and humans are beginning to settle in. Not surprisingly, many island nations fared better than expected during the outbreak. Their isolation could be very valuable if we face a third round of infection before the earth has been evacuated. We need to get them back on the grid as soon as possible. Japan, Madagascar, and Iceland are first on the list for building infrastructures. Local teams have managed to get some equipment, but all you’ll have to start with is one repository and blank hardware. As we’ve learned while building the depots, travel is dangerous and difficult. You will need to create your infrastructure in a lab first, to ensure it will be able to be quickly deployed by a local team. Once the process has been deemed successful, we will establish a satellite link to the islands to get everything we need to the local repositories.

TRANSCRIPT

Virtual Design Master Challenge 3

Geoff Wilmington - @vWilmo

Executive Summary

The world has been taken over by zombies, and they have ravaged nearly everyone on the planet. A billionaire is providing the investment to create a highly scalable and fully orchestrated environment which will be the backbone of a manufacturing facility. This facility is building ships to transport what’s left of the human race to the moon before sending them to Mars for colonization.

This system will be a lab-based deployment to determine whether the infrastructure is sound to build out for the other sites.

Business Requirements

Number Item

R01 Deployments must go through Horizon Dashboard

Business Constraints

Number Item

C01 VMware cluster

C02 Other hypervisor integration

C03 OpenStack

C04 1 Linux VM, 1 Windows VM

Business Assumptions

Number Item

A01 UPS Power is provided

A02 Air-conditioning is provided

A03 Limited professional support services for environment

Business Risks

Number Item


K01 Geoff building and understanding an OpenStack deployment in 5 days

K02 KVM failover is manual

Document Purpose

This document serves as the configuration document to lay out the deployment infrastructure and process for the local deployment teams.

Physical Datacenter Overviews

Datacenter rack layout

[Diagram: rack layout. Four 42U racks (Rack 1 through Rack 4) housing the Swift storage, ESXi compute, KVM compute, ESXi management, and KVM management nodes, plus the SolidFire storage.]

The following configuration would allow the best scaling of compute and storage nodes while taking into consideration fault domains within a datacenter.

o Networking
 Cisco 5548UP – 2
 Cisco 6248 – 2
o Compute
 Cisco C240 M3 – 12
o Storage
 Cisco C240 M3 – 6
 SolidFire – SF6010


Power configurations

The installed equipment will draw the following power requirements. Some numbers are maximums obtained from the vendor websites and some are actual tested numbers based on the vendor’s documentation. Each rack will consist of two power distribution units connected to separate uninterruptible power supplies. There is an assumption, A01, that the facility will have a generator capable of supplying power in case of main grid failure. Each rack will have connections to UPS-A and UPS-B. The numbers are a best representation of the data provided:

Rack  Item              Qty  UPS-A Watts  UPS-B Watts
3     Cisco 5548UP      1    193W         193W
4     Cisco 5548UP      1    193W         193W
3     Cisco 6248        1    193W         193W
4     Cisco 6248        1    193W         193W
1     Cisco C240 M3     8    375W each    375W each
2     Cisco C240 M3     10   375W each    375W each
1     SolidFire SF6010  5    150W each    150W each
2     SolidFire SF6010  5    150W each    150W each

Total Consumed Power         9022W        9022W

HVAC configurations

The equipment will need to be cooled during operation. There is an assumption, A02, that the facility will have a room for the datacenter equipment to reside in. Using the power table above, the amount of AC tonnage required to cool the equipment is as follows:

o Formula – Total Watts * 0.000283 = Tons
o Calculation – 18044 * 0.000283 = 5.106452 Tons
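As a sanity check for the local deployment teams, the sizing arithmetic above can be reproduced programmatically. The following is a minimal sketch; the equipment counts and per-device wattages are taken from the tables above, and the 0.000283 watts-to-tons factor is the conversion used in this document:

```python
# Sketch: reproduce the power and cooling sizing from the tables above.
# Per-device draw is per UPS feed; each device is dual-corded (UPS-A and UPS-B).

EQUIPMENT = [
    # (item, quantity, watts per UPS feed)
    ("Cisco 5548UP",      2, 193),
    ("Cisco 6248",        2, 193),
    ("Cisco C240 M3",    18, 375),
    ("SolidFire SF6010", 10, 150),
]

WATTS_TO_TONS = 0.000283  # AC tonnage conversion factor used in this design

per_ups_watts = sum(qty * watts for _, qty, watts in EQUIPMENT)
total_watts = per_ups_watts * 2          # both UPS feeds under load
cooling_tons = total_watts * WATTS_TO_TONS

print(f"Per-UPS draw: {per_ups_watts} W")        # 9022 W
print(f"Total draw:   {total_watts} W")          # 18044 W
print(f"Cooling:      {cooling_tons:.2f} tons")  # ~5.11 tons
```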

Physical Infrastructure Overviews

Network Infrastructure

[Diagram: the two Cisco 5596UP switches joined by a vPC peer link, with vPCs down to the Fabric Interconnects.]

The following configuration is composed of a pair of Cisco 5596UP switches. Cisco 6296UP Fabric Interconnects provide the UCS Manager and Service Profiles for the Cisco C240 M3s.


These machines connect into the fabric extenders, which then connect back to the 5596UPs. This configuration provides scale-out capability should the need to add more compute nodes arise.

Network Design

VLAN Purpose

A Management Network

B VM Network

C vMotion

D iSCSI

E Floating IP Network

Host Infrastructure

o Cisco C240 M3 x12

CPU: 2x 2.8GHz E5-2680
RAM: 256GB – 1866MHz
NIC: VIC 1225
Power: Dual 750W
Storage: Dual SD Card

Storage Infrastructure – Swift Object Storage

o Cisco C240 M3 x6

CPU: 2x 2.8GHz E5-2680
RAM: 256GB – 1866MHz
NIC: VIC 1225
Power: Dual 750W
Boot Storage: Dual SD Card
Internal Storage: 3x Intel S3700 DC 800GB

[Diagram: storage connectivity. SolidFire 10GbE storage links in a vPC to the 5596UPs; 1GbE management links to the 2232TM fabric extenders.]


The storage infrastructure physical layout is shown above. Certain components have been removed for clarity of connectivity. Only one of the SolidFire SF6010s is shown, as the others connect in the same fashion to the 5596UP for 10GbE storage and the 2232TM for 1GbE management. The two 10GbE connections from each of the SolidFire controllers will be set up in a vPC between the two 5596UPs. The management network connections will be plugged into both Fabric Extenders to allow connectivity in case of a Fabric Extender or 5596UP device failure. The C240 servers will be plugged into the Fabric Extenders via their VIC 1225 for their 10GbE uplinks. These uplinks will be carved into vNICs through the UCS Service Profiles, as described in the host network design sections below.

Virtualization Infrastructure Overviews

vSphere and Ubuntu KVM Definitions

One of the requirements was to scale down but still use the same software components, R01, with the constraint that the same software vendors had to be used, C03.

vSphere Component Description

VMware vSphere: The core products of the VMware vSphere environment include:

 ESXi – 2 instances will compose the management cluster and 6 hosts will comprise the compute cluster for VM consumption
 vCenter Server – 1 installed VM instance
 vCenter Server Database – 1 VM instance for the single instance of vCenter Server
 SSO – Single Sign-On component that is required for connecting to the vSphere Client and vSphere Web Client
 vSphere Client – Still needed to manage VMware Update Manager
 vSphere Web Client – Used to manage the vSphere environment
 vSphere Update Manager – Used to update hosts and virtual machines

Ubuntu KVM 14.04 LTS: KVM – 2 instances of KVM will provide an HA cluster for an additional copy of the Nova Controller VM, which runs:

 Horizon
 Glance
 Cinder
 Keystone
 RabbitMQ
 MySQL
 HAProxy
 Nova
 Swift Proxy

vSphere Component Architecture

Design Section: vSphere Components

vSphere Architecture – Management Cluster:
 vCenter Server and vCenter Database
 vCenter Cluster and ESXi hosts
 Single Sign-On
 vSphere Update Manager

vSphere Architecture – Compute Cluster:
 vCenter Cluster and ESXi hosts

Ubuntu KVM Component Architecture

Design Section: Ubuntu KVM Components

Ubuntu KVM – Management Cluster:
 HAProxy
 Nova-controller
 Keystone
 MySQL
 RabbitMQ
 Horizon
 Glance
 Cinder
 Swift

Ubuntu KVM Architecture – Compute Cluster:
 KVM
 Nova-compute

vSphere and Ubuntu Architecture Design Overview

High level Architecture


The vSphere components are being split out to facilitate ease of troubleshooting of the management components without disruption of the resource components. They are split out as follows:

Management cluster that will host the management components of the vSphere infrastructure as well as one of the redundant Ubuntu Nova Controller VMs. They are split out to ensure they have dedicated resources to consume. The Ubuntu Nova Controller VMs will be split across the hypervisors in an active/active configuration. In case of a VMware host failure, VMware HA will restart the controller on the other ESXi host. In case of a full failure of the VMware management cluster, the Ubuntu KVM nodes will take over servicing the compute nodes.

Compute cluster that will host the virtual machines that OpenStack spawns.

[Diagram: high-level architecture. A vSphere 5.5 management cluster on SolidFire shared storage runs vCenter, the vCenter DB, AD, and one Ubuntu Controller VM; a KVM management cluster on SolidFire shared storage runs the other Ubuntu Controller VM. The vSphere 5.5 and KVM compute clusters run the Linux and Windows virtual machines on shared storage (labeled SolidFire and NetApp in the diagram).]


[Diagram: management component resiliency. vCenter, Active Directory, the vCenter DB, and one Ubuntu Controller run on the ESXi hosts under vSphere HA; the second Ubuntu Controller runs on the KVM host. Each Ubuntu Controller runs Nova-controller, Keystone, RabbitMQ, Horizon, Glance, MySQL (replicated with Galera), Cinder, and a Swift Proxy, each fronted by HAProxy behind a shared VIP for OpenStack HA.]

o Site Considerations


The vSphere management and compute clusters both reside within the same facility. This provides the lowest latency for management as well as a consistent datacenter in which to manage the clusters.

There are no other sites in scope for this project.

o Design Specifications

vSphere Architecture Design – Management Cluster

o Compute Logical Design

Datacenter

One vCenter datacenter will be built to house the two clusters for the vSphere environment.

vSphere Cluster

Below is the cluster configuration for the management cluster for the environment.

Attributes Specification

Number of ESXi Hosts 2

DRS Configuration Fully Automated

DRS Migration Threshold Level 3

HA Enable Host Monitoring Enabled

HA Admission Control Policy Disabled

VM restart priority Medium – vCenter and vCenter DB set High priority

Host Isolation response Leave powered on

VM Monitoring Disabled
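So the local teams can rebuild the cluster consistently, these settings could be applied programmatically. Below is a minimal sketch using pyVmomi against vSphere 5.5-era APIs; the vCenter hostname, credentials, and cluster name are hypothetical placeholders:

```python
# Sketch: apply the management cluster DRS/HA settings above via pyVmomi.
# Host, credentials, and cluster name are placeholders; a self-signed cert
# may also require an SSL context on newer pyVmomi releases.
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.local", user="administrator", pwd="password")
content = si.RetrieveContent()

# Locate the management cluster (assumes it already exists).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Management")

spec = vim.cluster.ConfigSpecEx()
spec.drsConfig = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior="fullyAutomated",
    vmotionRate=3)  # DRS migration threshold level 3
spec.dasConfig = vim.cluster.DasConfigInfo(
    enabled=True,
    hostMonitoring="enabled",
    admissionControlEnabled=False,  # HA admission control disabled
    defaultVmSettings=vim.cluster.DasVmSettings(
        restartPriority="medium",   # vCenter and vCenter DB would get per-VM
        isolationResponse="none",   # High overrides via dasVmConfig entries;
                                    # "none" = leave powered on when isolated
        vmToolsMonitoringSettings=vim.cluster.VmToolsMonitoringSettings(
            vmMonitoring="vmMonitoringDisabled")))

cluster.ReconfigureComputeResource_Task(spec, modify=True)
```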

Host Logical Design

Attribute Specification

Host Type and Version VMware ESXi Installable

Processors X86 Compatible

Storage FlexFlash SD for local ESXi install, shared storage for VMs

Networking Connectivity to all needed VLANS

Memory Sized for workloads

Network Logical Design

Switch Name  Switch Type  Function                                              # of Physical Ports
vSwitch0     Standard     Management, vMotion, VM Network, Floating IP Network  2x 10GbE
vSwitch1     Standard     iSCSI                                                 2x 10GbE


The VIC 1225 in the Cisco C240 M3 allows the two 10GbE network connections to be split into up to 256 vNICs. The configuration is to create 4 vNICs through the Service Profile, two bound to Fabric Interconnect A and two to Fabric Interconnect B. A pair of vNICs, one going to each Fabric Interconnect, will compose the port group uplinks necessary for failover and redundancy within the vSphere environment. This is illustrated below:

[Diagram: vSwitch0 carries MGMT (VLAN A), vMotion (VLAN C), VM Network (VLAN B), and Floating IP Network (VLAN E) over vmnic0/vnicA-1 and vmnic1/vnicB-1; vSwitch1 carries iSCSI1 and iSCSI2 (VLAN D) over vmnic3/vnicA-2 and vmnic4/vnicB-2, with active/standby/unused assignments per the port group configurations below.]


Port group configurations

Attribute           Setting
Load balancing      Route based on originating virtual port ID
Failover Detection  Link Status Only
Notify Switches     Yes
Failover Order      MGMT – Active vmnic0 / Standby vmnic1
                    vMotion – Standby vmnic0 / Active vmnic1
                    VM Network – Active vmnic3 / Active vmnic4
                    Floating IP Network – Active vmnic0 / Standby vmnic1
                    iSCSI1 – Active vmnic3 / Unused vmnic4
                    iSCSI2 – Active vmnic4 / Unused vmnic3
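To make the teaming intent unambiguous for the deployment teams, the failover order can also be captured as data and sanity-checked. A minimal sketch, using the port group and vmnic names listed above:

```python
# Sketch: the NIC teaming policy above expressed as data, with a basic check
# that a port group only references the uplinks available on its vSwitch.

FAILOVER_ORDER = {
    # port group: (active uplinks, standby uplinks, unused uplinks)
    "MGMT":                (["vmnic0"], ["vmnic1"], []),
    "vMotion":             (["vmnic1"], ["vmnic0"], []),
    "VM Network":          (["vmnic3", "vmnic4"], [], []),
    "Floating IP Network": (["vmnic0"], ["vmnic1"], []),
    "iSCSI1":              (["vmnic3"], [], ["vmnic4"]),
    "iSCSI2":              (["vmnic4"], [], ["vmnic3"]),
}

VSWITCH_UPLINKS = {
    "vSwitch0": {"vmnic0", "vmnic1"},
    "vSwitch1": {"vmnic3", "vmnic4"},
}

def uplinks_valid(port_group: str, vswitch: str) -> bool:
    """True if the port group only references uplinks on its vSwitch."""
    active, standby, unused = FAILOVER_ORDER[port_group]
    return set(active + standby + unused) <= VSWITCH_UPLINKS[vswitch]

print(uplinks_valid("MGMT", "vSwitch0"))    # True
print(uplinks_valid("iSCSI1", "vSwitch1"))  # True
```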

Shared Storage Logical Design

Attribute Specification

Number of LUNs to start 1

LUN Size 1TB

VMFS Datastores per LUN 1

VMs per LUN 4

Management Components

This is the list of Management components that will be running on the management cluster:

vCenter Server

vCenter Database

Active Directory

Ubuntu Controller

Management Components Resiliency Considerations

Component HA Enabled?

vCenter Server Yes

vCenter Database Yes

Active Directory Yes

Ubuntu Controller Yes

Management Server Configurations

VM                 vCPUs  RAM   Disk1  Disk2  Disk3  Controller     Quantity
vCenter Server     2      16GB  40GB   200GB  N/A    LSI Logic SAS  1
vCenter Database   2      16GB  40GB   100GB  N/A    LSI Logic SAS  1
Active Directory   2      8GB   30GB   N/A    N/A    LSI Logic SAS  1
Ubuntu Controller  4      16GB  100GB  N/A    N/A    LSI Logic SAS  1

vSphere Architecture Design – Compute Cluster

o Compute Logical Design

Datacenter

One datacenter will be built to house the two clusters for the environment.

vSphere Cluster

Attributes Specification

Number of ESXi Hosts 3

DRS Configuration Fully Automated

DRS Migration Threshold Level 3

HA Enable Host Monitoring Enabled

HA Admission Control Policy Disabled

VM restart priority Medium

Host Isolation response Leave powered on

VM Monitoring Disabled

Host Logical Design

Attribute Specification

Host Type and Version VMware ESXi Installable

Processors X86 Compatible

Storage FlexFlash SD for local ESXi install, shared storage for VMs

Networking Connectivity to all needed VLANS

Memory Sized for workloads

Network Logical Design

Switch Name  Switch Type  Function                                              # of Physical Ports
vSwitch0     Standard     Management, vMotion, VM Network, Floating IP Network  2x 10GbE
vSwitch1     Standard     iSCSI                                                 2x 10GbE

The network configuration for the Compute Cluster follows the exact same pattern as that of the Management cluster for simplicity.


[Diagram: identical vSwitch layout to the management cluster, shown above.]

Port group configurations

Attribute           Setting
Load balancing      Route based on originating virtual port ID
Failover Detection  Link Status Only
Notify Switches     Yes
Failover Order      MGMT – Active vmnic0 / Standby vmnic1
                    vMotion – Standby vmnic0 / Active vmnic1
                    VM Network – Active vmnic3 / Active vmnic4
                    Floating IP Network – Active vmnic0 / Standby vmnic1
                    iSCSI1 – Active vmnic3 / Unused vmnic4
                    iSCSI2 – Active vmnic4 / Unused vmnic3

KVM Architecture Design – Management Cluster

o Host Logical Design

Attribute Specification

Host Type and Version Ubuntu 14.04 LTS

Processors X86 Compatible

Storage FlexFlash SD for local KVM install, shared storage for VMs

Networking Connectivity to all needed VLANS

Memory Sized for workloads

Network Logical Design

Switch Name  vBonds                     Function                                              # of Physical Ports
Bond Group1  Bond0.A, Bond0.B, Bond0.D  Management, vMotion, VM Network, Floating IP Network  2x 10GbE
Bond Group2  Bond1.E                    iSCSI                                                 2x 10GbE

The VIC 1225 in the Cisco C240 M3 allows the two 10GbE network connections to be split into up to 256 vNICs. The configuration is to create 4 vNICs through the Service Profile, two bound to Fabric Interconnect A and two to Fabric Interconnect B. A pair of vNICs, one going to each Fabric Interconnect, will compose the bond uplinks necessary for failover and redundancy within the KVM environment. This is illustrated below:


[Diagram: bond0 carries the MGMT, VM Network, and Floating IP Network VLAN sub-interfaces over eth0/vnicA-1 and eth1/vnicB-1; bond1 carries iSCSI1 and iSCSI2 over eth3/vnicA-2 and eth4/vnicB-2, with all links active.]

KVM uses a different type of connection binding called a bond. Each bond is split into a sub-interface per VLAN as necessary.
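A minimal sketch of that bond-to-VLAN layout as data; the VLAN "IDs" are the placeholder letters used in this document's network design tables, and a real deployment would substitute numeric VLAN IDs:

```python
# Sketch: model the KVM bond layout above. VLAN "IDs" are the placeholder
# letters (A, B, D, E) from this document's network design tables.

BONDS = {
    # bond: (slave interfaces, VLANs carried as sub-interfaces)
    "bond0": (["eth0", "eth1"], ["A", "B", "D"]),  # MGMT, VM Network, Floating IP
    "bond1": (["eth3", "eth4"], ["E"]),            # iSCSI
}

def sub_interfaces() -> list[str]:
    """Return the bond.VLAN sub-interface names implied by the layout."""
    return [f"{bond}.{vlan}" for bond, (_, vlans) in BONDS.items() for vlan in vlans]

print(sub_interfaces())  # ['bond0.A', 'bond0.B', 'bond0.D', 'bond1.E']
```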

Shared Storage Logical Design

Attribute Specification

Number of LUNs to start 1

LUN Size 200GB

VMs per LUN 1

Management Components

This is the list of Management components that will be running on the management cluster:

Ubuntu Controller

o HAProxy

o Nova-controller

o Keystone


o MySQL

o RabbitMQ

o Horizon

o Glance

o Cinder

o Swift Proxy

Management Components Resiliency Considerations

Component Live Migration

Ubuntu Controller Yes

The KVM hypervisor supports live migration; however, the process is a manual one, K02.
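When a KVM host needs maintenance, the controller VM would therefore be moved by hand. A minimal sketch of that manual step using the libvirt Python bindings; the host URIs and the domain name are hypothetical placeholders:

```python
# Sketch: manually live-migrate the Ubuntu Controller VM between KVM hosts
# (see risk K02). URIs and the domain name are placeholders.
import libvirt

src = libvirt.open("qemu+ssh://kvm-mgmt-01/system")
dst = libvirt.open("qemu+ssh://kvm-mgmt-02/system")

dom = src.lookupByName("ubuntu-controller")

# Live-migrate over shared storage (the SolidFire-backed LUN), so only
# memory state moves; the disk stays in place.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST,
            None, None, 0)

src.close()
dst.close()
```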

Management Server Configurations

VM                 vCPUs  RAM   Disk1  Disk2  Disk3  Controller     Quantity
Ubuntu Controller  4      16GB  100GB  N/A    N/A    LSI Logic SAS  1

KVM Architecture Design – Compute Cluster

Attributes Specification

Number of KVM Hosts 3

o Host Logical Design

Attribute Specification

Host Type and Version Ubuntu 14.04 LTS

Processors X86 Compatible

Storage FlexFlash SD for local KVM install, shared storage for VMs

Networking Connectivity to all needed VLANS

Memory Sized for workloads

o Network Logical Design

Switch Name  vBonds                     Function                                              # of Physical Ports
Bond Group1  Bond0.A, Bond0.B, Bond0.D  Management, vMotion, VM Network, Floating IP Network  2x 10GbE
Bond Group2  Bond1.E                    iSCSI                                                 2x 10GbE


The network configuration for the Compute Cluster follows the exact same pattern as that of the Management cluster for simplicity.

[Diagram: identical bond layout to the KVM management cluster, shown above.]

Shared Storage Logical Design

Attribute Specification

Number of LUNs to start 1

LUN Size 1TB

VMFS Datastores per LUN 1

VMs per LUN 4

SolidFire storage will be used for block storage. Swift is not able to provide block-based storage, and vSphere needs block-based access, which in this design only SolidFire can provide. Use of SolidFire storage addresses both needs.

o Compute Components

This is a list of the components that will be running on the compute cluster for the environment:


Nova-compute

o Storage Components

Swift – Object Storage Design

The Swift Object Storage access is controlled by the Swift Proxy. These proxies broker connections into the store. The Swift storage tier is composed of six of the following servers:

CPU: 2x 2.8GHz E5-2680
RAM: 256GB – 1866MHz
NIC: VIC 1225
Power: Dual 750W
Boot Storage: Dual SD Card
Internal Storage: 3x Intel S3700 DC 800GB


The SSDs provide the capacity for the distributed object store. They are not put into any form of RAID and are made resilient through the Swift rings.

[Diagram: the Swift Proxy fronts the ring, which maps objects across the storage nodes.]
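For illustration, a ring for this six-node tier could be built with Swift's Python RingBuilder. This is a minimal sketch; the node IPs, zone layout, and partition power are assumptions, and the exact API may vary by Swift release:

```python
# Sketch: build an object ring for the six Swift storage nodes.
# IPs, zones, and partition power are hypothetical; 3 replicas across the
# non-RAIDed SSDs provide the resiliency described above.
from swift.common.ring import RingBuilder

builder = RingBuilder(part_power=10, replicas=3, min_part_hours=1)

for i in range(6):                      # six C240 M3 storage nodes
    for disk in ("sdb", "sdc", "sdd"):  # the 3x Intel S3700 SSDs per node
        builder.add_dev({
            "region": 1,
            "zone": i % 3,              # spread replicas across zones
            "ip": f"10.0.30.{10 + i}",
            "port": 6000,
            "device": disk,
            "weight": 100.0,
        })

builder.rebalance()
builder.save("object.builder")
```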

vSphere Security

o Host Security

Hosts will be placed into lockdown mode to prevent direct root access. This would ensure that host access can only be performed through the DCUI.
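A minimal sketch of enabling lockdown mode across the hosts with pyVmomi, assuming an existing service instance connection `si` as in the cluster sketch earlier:

```python
# Sketch: place every ESXi host into lockdown mode (see Host Security above).
# Assumes "si" is a connected pyVmomi service instance via vCenter.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    if not host.config.adminDisabled:  # skip hosts already in lockdown
        host.EnterLockdownMode()
        print(f"Lockdown enabled on {host.name}")
```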

o Network Security

All virtual switches will have the following settings:

Attribute            Setting
Promiscuous Mode     Management Cluster – Reject; Compute Cluster – Reject
MAC Address Changes  Management Cluster – Reject; Compute Cluster – Reject
Forged Transmits     Management Cluster – Reject; Compute Cluster – Reject
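These policies could likewise be applied in code so every host matches. A hedged sketch for the standard vSwitches, again assuming the same `si` connection:

```python
# Sketch: apply the Reject/Reject/Reject security policy to the standard
# vSwitches on each host. Assumes "si" is a connected pyVmomi service instance.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

policy = vim.host.NetworkPolicy(
    security=vim.host.NetworkPolicy.SecurityPolicy(
        allowPromiscuous=False,   # Promiscuous Mode: Reject
        macChanges=False,         # MAC Address Changes: Reject
        forgedTransmits=False))   # Forged Transmits: Reject

for host in view.view:
    ns = host.configManager.networkSystem
    for vswitch in ns.networkInfo.vswitch:
        spec = vswitch.spec
        spec.policy = policy
        ns.UpdateVirtualSwitch(vswitchName=vswitch.name, spec=spec)
```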

o vCenter Security

By default, when vCenter is added to an Active Directory domain, the Domain Administrators group is granted local administrator permissions on the vCenter Server. A new vCenter Admins group will be created, appropriate users will be added to the group, and that group will become the new local administrators on the vCenter Server. The Domain Administrators group will then be removed.
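A hedged sketch of that permission swap through the vSphere API; the domain and group names are placeholders, and roleId -1 is the built-in Administrator role:

```python
# Sketch: grant the new "vCenter Admins" AD group the Administrator role at
# the root folder, then remove the Domain Admins entry. Names are placeholders.
from pyVmomi import vim

content = si.RetrieveContent()
authz = content.authorizationManager
root = content.rootFolder

new_admins = vim.AuthorizationManager.Permission(
    principal="LAB\\vCenter Admins",  # hypothetical AD group
    group=True,
    roleId=-1,                        # built-in Administrator role
    propagate=True)

authz.SetEntityPermissions(entity=root, permission=[new_admins])

# Remove the default Domain Admins permission once the new group is verified.
authz.RemoveEntityPermission(entity=root, user="LAB\\Domain Admins", isGroup=True)
```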

Appendix A – Bill of Materials

Equipment         Quantity
Cisco 5596UP      2
Cisco 6296UP      2
Cisco C240 M3     18
SolidFire SF6010  10
Racks             4
PDUs              8