VMware and Customer Confidential
VMware vSphere: Design Workshop [V5.0] Enterprise Lab Scenario
VMware vSphere: Design Workshop Course Lab
2011 VMware, Inc. All rights reserved.
Page 2 of 34
Version History
Date | Ver. | Author | Description | Reviewers
19 Jan 2010 | V1 | Ben Lin, Shridhar Deuskar | Initial Draft | Mahesh Rajani
2 Mar 2010 | V2 | Ben Lin | Updated | Rupen Sheth
11 Nov 2011 | V5 | Mike Sutton | Final | Mike Sutton
2011 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more patents listed at http://www.vmware.com/download/patents.html.
VMware, VMware vSphere, VMware vCenter, the VMware boxes logo and design, Virtual SMP and VMotion are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
VMware, Inc 3401 Hillview Ave Palo Alto, CA 94304 www.vmware.com
Contents
1. Overview
   1.1 Summary
   1.2 Current State Analysis
   1.3 Rack Consolidation Scenario
   1.4 Business Critical Servers
   1.5 Requirements
   1.6 Constraints
   1.7 Assumptions
2. Host
   2.1 Requirements
   2.2 Design Patterns
   2.3 Logical Design
   2.4 Physical Design
3. Virtual Datacenter
   3.1 Requirements
   3.2 Design Patterns
   3.3 Logical Design
   3.4 Physical Design
4. Network
   4.1 Requirements
   4.2 Design Patterns
   4.3 Logical Design
   4.4 Physical Design
5. Storage
   5.1 Requirements
   5.2 Design Patterns
   5.3 Logical Design
   5.4 Physical Design
6. Virtual Machine
   6.1 Requirements
   6.2 Design Patterns
7. Management / Monitoring
   7.1 Requirements
   7.2 Design Patterns
1. Overview
1.1 Summary
ACME Energy Corporation engages in the acquisition, development, and operation of utility-scale renewable energy generation projects. It focuses on wind and solar energy, selling the energy it produces to regulated utility companies. The company is headquartered in Phoenix, AZ and maintains remote offices in Bakersfield, CA and Ft. Worth, TX.
As part of a datacenter optimization project, IT has been asked to virtualize all x86 based servers onto the VMware vSphere platform. The primary datacenter is in Phoenix with smaller datacenters in the other locations. After consolidation, all servers will be located in the primary datacenter in Phoenix, AZ. There is sufficient network bandwidth to support operational requirements. Remote users are on LAN or campus network.
ACME Energy's server environment has three zones: Production, Dev/Test, and QA.
From the preliminary virtualization assessment, it was determined that ACME Energy can consolidate a considerable number of existing and expected future workloads. This increases average server utilization and lowers the overall hardware footprint and associated costs.
The virtualization assessment shows that 1000 physical servers can be virtualized. The consolidation ratio depends on which of two target platforms is chosen:
Target Platform | Production | Dev/Test | QA
Blade server: 2 socket, Quad Core 2.93GHz CPUs, 64GB of RAM, 2x CNAs (10Gb/s) | 20:1 | 50:1 | 50:1
Rack server: 4 socket, Quad Core 2.93GHz CPUs, 96GB of RAM, 2x NICs (1Gb/s), 2x HBAs (8Gb/s) | 30:1 | 60:1 | 60:1
Additional interface cards can be added via mezzanine adaptor if required. Assume that 8 half-height blade servers fit in 1 blade chassis. The blade chassis is 6U in height; the rack server is 4U in height. Several existing servers are powerful enough to be reused as ESX/ESXi hosts. Maximum availability is a requirement for business critical servers. The production workloads must be highly available, with the ability to tolerate the loss of multiple ESX hosts in a cluster. Separation of management and production virtual machines is desired.
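As a quick sanity check on the rack-space figures above, the two form factors can be compared in a short sketch. The host count of 19 used in the example is the number derived later in section 1.3; everything else comes from the scenario's half-height blade and rack server figures.

```python
import math

# Rack-space comparison for the two target platforms, using the scenario's
# figures: 8 half-height blades per 6U chassis, rack servers at 4U each.

def blade_rack_units(num_hosts, blades_per_chassis=8, chassis_height_u=6):
    """Rack units consumed when hosts are half-height blade servers."""
    chassis_needed = math.ceil(num_hosts / blades_per_chassis)
    return chassis_needed * chassis_height_u

def rack_server_units(num_hosts, server_height_u=4):
    """Rack units consumed when hosts are 4U rack servers."""
    return num_hosts * server_height_u

print(blade_rack_units(19))   # 3 chassis x 6U = 18U
print(rack_server_units(19))  # 19 servers x 4U = 76U
```

The parameters are kept explicit so the comparison can be rerun with the full-height chassis figures listed in the constraints section.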
The 1000 physical servers consist of 400 Linux servers and 600 Windows servers.
Linux server distribution:
- 100 servers: Production
- 200 servers: Dev/Test
- 100 servers: QA
Windows server distribution:
- 300 servers: Production
- 200 servers: Dev/Test
- 100 servers: QA
On average, each Windows server is provisioned with a 15GB OS drive (average used 10GB) and 40GB (average used 25GB) data drive. Each Linux server is configured with 60GB total storage (40GB average used). ACME Energy expects 10% annual server growth over the next three years.
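The per-server figures above translate into an overall storage estimate. The sketch below is illustrative only: it assumes the 10% annual growth applies uniformly to server counts over three years, which the scenario does not state explicitly.

```python
# Rough provisioned- and used-storage estimate from the scenario's figures.
# Growth model (10%/year applied to server counts for 3 years) is an assumption.
WINDOWS = {"count": 600, "provisioned_gb": 15 + 40, "used_gb": 10 + 25}
LINUX   = {"count": 400, "provisioned_gb": 60,      "used_gb": 40}

def total_gb(profile, key, growth=0.10, years=3):
    """Total GB across the fleet after applying annual growth to counts."""
    future_count = profile["count"] * (1 + growth) ** years
    return future_count * profile[key]

provisioned = total_gb(WINDOWS, "provisioned_gb") + total_gb(LINUX, "provisioned_gb")
used = total_gb(WINDOWS, "used_gb") + total_gb(LINUX, "used_gb")
print(round(provisioned / 1024, 1), "TB provisioned")  # ~74.1 TB
print(round(used / 1024, 1), "TB used")                # ~48.1 TB
```

Comparing these totals against the array capacities in the next paragraph is a useful first step when sizing tiers.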
An existing high performance fibre channel storage array (active/passive) will be leveraged. The array has 256 GB of mirrored cache (expandable to 512 GB) with 120 disks in the central system bay. One adjacent storage bay holds 120 disks. Additional storage bays can be purchased with 240 disks per bay, expanding the system to upwards of 1,920 disk drives. The disk drives are a mix of 73GB SSD, 146GB FC, 300GB FC, and 500GB SATA: 3.6TB of SSD, 30TB of FC, and 35TB of SATA in total. Currently only the production servers are SAN-attached. The storage network infrastructure includes several Cisco Nexus 5000 series switches.
A majority of the servers have 4 CPUs. The production servers must be segregated from the Dev/Test/QA servers. Once virtualized, a production VM cannot share the same ESXi host as Dev/Test or QA VMs. This is a mandatory requirement.
Due to security and network infrastructure requirements, production network traffic must be isolated from Dev/Test and QA network traffic. The security team at ACME Energy has insisted that the IDS software used by their team requires each server's networking port to have consistent properties. This requires the networking properties of each VM's virtual networking port to be preserved after a VMotion.
The network infrastructure consists of multiple VLANs to provide separation for network traffic. ACME Energy would like to reduce the number of VLANs required to improve manageability. LAN infrastructure includes multiple Access Switches to provide redundancy and load balancing. There is no DMZ in the environment.
Current VLAN configuration
VLAN 10 - Management
VLAN 20 - IP Storage
VLAN 30 - Production
VLAN 40 - Dev/Test
VLAN 50 - QA
VLAN 60 - Voice
VLAN 70 - Replication
VLAN 80 - Backup
TASK: Develop an architecture design for ACME Energy's virtualization project.
1.2 Current State Analysis
To determine the number of hosts required to consolidate the existing datacenter's 1000 physical x86 servers, the performance and utilization of the existing servers was analyzed using VMware Capacity Planner for 30 days. The analysis captured the resource utilization for each system, including average and peak CPU and RAM utilization.
A total of 616 candidates were selected for this first virtualization initiative. Over the sampling period, the following metrics were observed:
CPU Resource Requirements
Metric Amount
Average number of CPUs per physical system 4
Average CPU MHz 2800 MHz
Average normalized CPU MHz 11200
Average CPU utilization per physical system 2.7% (302.4 MHz)
Average peak CPU utilization per physical system 5% (560 MHz)
Total CPU resources required for 1,000 VMs at peak 560,000 MHz
RAM Resource Requirements
Metric Amount
Average amount of RAM per physical system 4096 MB
Average memory utilization 37% (1515.52 MB)
Average peak memory utilization 70% (2867.2 MB)
Total RAM required for 1,000 VMs at peak before memory sharing 2,867,200 MB
Anticipated memory sharing benefit when virtualized 50%
Total RAM required for 1,000 VMs at peak with memory sharing 1,433,600 MB
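The RAM table above can be reproduced with simple arithmetic; all inputs are the scenario's Capacity Planner figures.

```python
# Peak RAM aggregation from the table above.
num_vms = 1000
avg_ram_mb = 4096          # average RAM per physical system
peak_util = 0.70           # average peak memory utilization
sharing_benefit = 0.50     # anticipated memory sharing benefit when virtualized

peak_ram_per_vm = avg_ram_mb * peak_util           # ~2,867.2 MB
total_peak_ram = num_vms * peak_ram_per_vm         # ~2,867,200 MB
after_sharing = total_peak_ram * (1 - sharing_benefit)
print(round(after_sharing))  # 1433600
```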
1.3 Rack Consolidation Scenario
Capacity estimation using the rack server option as the target platform:
Proposed ESXi Host CPU Logical Design Specifications
Attribute Specification
Number of CPUs (sockets) per host 4
Number of cores per CPU 4
MHz per CPU core 2,930
Total CPU MHz per CPU 11,720
Total CPU MHz per host 46,880
Proposed maximum host CPU utilization 80%
Available CPU MHz per host 37,504 MHz
Proposed ESXi Host RAM Logical Design Specifications
Attribute Specification
Total RAM per host 98,304 MB (96 GB)
Proposed maximum host RAM utilization 80%
Available RAM per host 78,643 MB
Estimation assumptions:
- Hosts sized for peak utilization levels, rather than average, to support all systems running at their observed peak resource levels simultaneously
- CPU and memory utilization for each host capped at 80% (allowing 20% for overhead and headroom)
- Memory sharing: 50% (achieved by running the same guest OS across the majority of VMs)
The following formula was used to calculate the estimated host capacity required to support the peak CPU utilization of the anticipated VM workloads:
# of ESXi Hosts Required = Total CPU required for total VMs at peak / Available CPU per ESX/ESXi Host
Using this formula, the following estimated required host capacity was calculated for the planned vSphere infrastructure:
560,000 MHz (Total CPU) / 37,504 MHz (CPU per Host) = 14.9 ESXi Hosts
The following formula was used to calculate the number of hosts required to support the anticipated peak RAM utilization:
# of ESXi Hosts Required = Total RAM required for total VMs at peak / Available RAM per ESX/ESXi Host
Using this formula, the following estimated required host capacity was calculated for the planned vSphere infrastructure:
1,433,600 MB (Total RAM) / 78,643 MB (RAM per Host) = 18.23 ESXi Hosts
From a CPU workload perspective, 15 VMware ESXi hosts are needed. From a memory workload perspective, 19 hosts are needed. The higher value is used since that is the limiting factor.
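The two host-count formulas can be combined in one short sketch; the inputs are taken from the logical design tables above for the 4-socket rack server.

```python
import math

total_cpu_mhz = 560_000    # total CPU required for 1,000 VMs at peak
total_ram_mb = 1_433_600   # total RAM required at peak, after 50% sharing

# 4 sockets x 4 cores x 2,930 MHz, capped at 80% utilization = 37,504 MHz
cpu_per_host = 4 * 4 * 2930 * 0.80
# 96 GB (98,304 MB), capped at 80% utilization = ~78,643 MB
ram_per_host = 96 * 1024 * 0.80

hosts_for_cpu = total_cpu_mhz / cpu_per_host   # ~14.9
hosts_for_ram = total_ram_mb / ram_per_host    # ~18.2
hosts_required = math.ceil(max(hosts_for_cpu, hosts_for_ram))
print(hosts_required)  # 19
```

Memory is the limiting factor here, so the RAM-driven figure of 19 hosts determines the design.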
This provides substantial consolidation ratios:
VMware vSphere Consolidation Ratios
# of Virtualization Candidates: 1000
# of ESX/ESXi Hosts Required: 19
Consolidation Ratio (VMs per Host): 52.63
Consolidation Ratio (VMs per Core*): 3.3
Max Host CPU/RAM Utilization: 80%
* each VM has one vCPU
In actuality, since 1,000 VMs can be supported by 18.23 hosts, the true consolidation ratio is 54.85 VMs per host. Extrapolating to 19 hosts, the infrastructure should therefore be able to support not just 1,000 VMs but 1,042 VMs.
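The extrapolation in the paragraph above works out as follows:

```python
# 18.23 hosts support 1,000 VMs, so 19 hosts support proportionally more.
hosts_needed = 18.23
vms_supported = 1000

vms_per_host = vms_supported / hosts_needed   # ~54.85, the "true" ratio
capacity_with_19 = int(19 * vms_per_host)
print(capacity_with_19)  # 1042
```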
CPU Count
1 CPU: 2%; 2 CPUs: 28%; 4 CPUs: 58%; 8 CPUs: 12%; 16 CPUs: 1%
CPU MHz Graph
501-1000 MHz: 2%; 1001-1500 MHz: 2%; 1501-2000 MHz: 0%; 2001-2500 MHz: 10%; 2501-3000 MHz: 48%; 3001-3500 MHz: 38%; 3501-4000 MHz: 1%
CPU Utilization
(Chart: number of servers, 0 to 1000, per 10% CPU utilization bucket from 0%-10% through 90%-100%, plotted for Peak and Prime intervals.)
Memory Utilization
(Chart: number of servers, 0 to 300, per 10% memory utilization bucket from 0%-10% through 90%-100%.)
Memory Summary
512-1023 MB: 1%; 1024-1535 MB: 4%; 1536-2047 MB: 2%; 2048-2559 MB: 14%; 2560-3071 MB: 2%; 3072-3583 MB: 9%; 3584-4095 MB: 2%; 4096-4607 MB: 47%; 4608-5119 MB: 0%; 5120-5631 MB: 1%; 6144-6655 MB: 1%; 8192-8703 MB: 5%; 8704-9215 MB: 0%; 10240-10751 MB: 0%; 12288-12799 MB: 1%; 16384-16895 MB: 3%; 24576-25087 MB: 1%; 25088-25599 MB: 0%; 32768-33279 MB: 2%; 33280-33791 MB: 0%; 65536-66047 MB: 4%
OS Chart
Red Hat Enterprise Linux AS: 100; Red Hat Linux: 200; Red Hat Enterprise Linux ES: 100; Microsoft Windows Server 2003: 19; Microsoft Windows 2000: 3; Microsoft Windows Server 2003, Standard Edition: 386; Microsoft Windows 2000 Server: 63; Microsoft Windows 2000 Advanced Server: 2; Microsoft Windows Server 2003, Enterprise Edition: 60
1.4 Business Critical Servers
Host Name | OS Name | Model | # of CPUs | CPU Speed (MHz) | Total RAM (MB) | Disk Space (GB) | Average CPU %
LINQA55 | Red Hat Enterprise Linux ES | AT/AT COMPATIBLE | 8 | 2666 | 3583 | 0 | 2.39
LINQA78 | Red Hat Enterprise Linux ES | AT/AT COMPATIBLE | 4 | 3400 | 2048 | 0 | 30.58
LINQA88 | Red Hat Enterprise Linux ES | AT/AT COMPATIBLE | 4 | 3400 | 3584 | 0 | 27.07
SQLPROD2 | Microsoft Windows Server 2003 | HP ProLiant DL380 G4 | 4 | 3400 | 3584 | 0 | 13.57
SQLPROD13 | Microsoft Windows 2000 Server | HP ProLiant DL380 G3 | 2 | 3189 | 2048 | 312 | 28.56
SQLPROD16 | Microsoft Windows 2000 Server | HP ProLiant DL380 G3 | 2 | 2790 | 1024 | 312 | 19.46
ORAPROD5 | Microsoft Windows Server 2003, Standard Edition | HP ProLiant DL380 G3 | 1 | 3056 | 4096 | 36 | 20.97
IISPROD1 | Microsoft Windows Server 2003, Standard Edition | HP ProLiant DL380 G3 | 1 | 3049 | 4096 | 36 | 17.83
WINPROD23 | Microsoft Windows 2000 Server | HP ProLiant ML370 G3 | 2 | 2783 | 1536 | 36 | 75.56
WINPROD26 | Microsoft Windows Server 2003, Standard Edition | HP ProLiant DL380 G4 | 4 | 3400 | 4096 | 330 | 7.37
WINPROD186 | Microsoft Windows Server 2003, Standard Edition | HP ProLiant DL380 G4 | 4 | 3400 | 4096 | 73 | 34.92
WINPROD187 | Microsoft Windows Server 2003, Standard Edition | HP ProLiant DL380 G4 | 4 | 3400 | 4096 | 73 | 33.42
ECRAIG1 | Microsoft Windows Server 2003, Standard Edition | HP ProLiant DL380 G5 | 4 | 2666 | 4096 | 147 | 0.93
RUPEN1 | Microsoft Windows Server 2003, Standard Edition | HP ProLiant DL380 G5 | 4 | 2666 | 4096 | 367 | 0.47
LINPROD23 | Red Hat Enterprise Linux AS | AT/AT COMPATIBLE | 1 | 3056 | 4096 | 0 | 2.56
LINPROD24 | Red Hat Enterprise Linux AS | AT/AT COMPATIBLE | 1 | 3056 | 4096 | 36 | 2.69
1.5 Requirements
Requirements describe, in business or technical terms, the necessary properties, qualities, and characteristics of a solution. These are provided by the client and used as a basis for the design.
Number Description
R001 Virtualize existing 1000 servers as virtual machines with no degradation in performance, compared to current physical workloads
R002 Establish a sound and best practice architecture design while addressing ACME Energy specific requirements and constraints
R003 Design should address security zone requirements for Production, Dev/Test, and QA workloads
R004 Design should be scalable and the implementation easily repeatable
R005 Design should be resilient and provide high levels of availability where possible
R006 Operations should help facilitate automated deployment of systems and services
R007 Overall anticipated cost of ownership should be reduced after deployment
R008 Business-critical applications should be given higher priority to network resources than noncritical virtual machines.
R009 Business-critical applications should be given higher priority to storage resources than noncritical virtual machines.
1.6 Constraints
Constraints can limit the design features as well as the implementation of the design.
Number Description
C001 Storage array will be high performance fibre channel array
C002 Target Platform Option 1: Blade Server, 2x quad core CPU, 32GB RAM
C003 Target Platform Option 2: Rack Server, 4x quad core CPU, 96GB RAM
C004 8 full height blade servers can fit in 1 blade chassis. Blade chassis is 10U.
1.7 Assumptions
Assumptions are expectations regarding the implementation and usage of a system. These assumptions cannot be confirmed at the design phase and are used to provide guidance within the design.
Number Description
A001 All required upstream dependencies will be present during the implementation phase. ACME Energy will determine which dependencies sit outside of the virtual infrastructure.
A002 All VLANs and subnets required will be configured prior to implementation.
A003 There is sufficient network bandwidth to support operational requirements. Users are on LAN or campus network.
A004 ACME will maintain a change management database (CMDB) to track all objects in the virtual infrastructure.
A005 Storage will be provisioned and presented to the ESX hosts accordingly.
2. Host
2.1 Requirements
- Host capacity must accommodate the planned virtualization of 1000 physical servers
- Size capacity to ensure that there is no significant change in performance or stability, compared to current physical workloads
- Expected 10% annual server growth
2.2 Design Patterns
Blade or Rack Servers
Design Choice
Justification
Impact
Server Consolidation (minimum # hosts required)
Design Choice
Justification
Impact
Server Containment (# additional hosts required)
Design Choice
Justification
Impact
Hypervisor Selection
Design Choice
Justification
Impact
2.3 Logical Design
Attribute Specification
Host type and version
Number of CPU sockets
Number of cores per CPU
Total number of cores
Processor speed
Memory
Number of NIC ports
Number of HBA ports
2.4 Physical Design
Attribute Specification
Vendor and model
Processor type
Total CPU sockets
Cores per CPU
Total number of cores
Processor speed
Memory
Onboard NIC vendor and model
Onboard NIC ports x speed
Number of attached NICs
NIC vendor and model
Number of ports/NIC x speed
Total number of NIC ports
Storage HBA vendor and model
Storage HBA type
Number of HBAs
Number of HBA ports
Total number of HBA ports
Number and type of local drives
RAID level
Total storage
System monitoring
3. Virtual Datacenter
3.1 Requirements
- Site
  o 1 primary datacenter (1000 VMs, growing quickly)
  o 10 branch offices (fewer than 10 servers per site)
- Availability
  o Design for maximum availability
  o An existing highly available SQL database system can be leveraged
- Management
  o All components must use corporate authentication (Active Directory)
  o Some VM administrators run Mac OS
- Compute
  o Production VMs cannot reside on the same ESX/ESXi host as Dev/Test or QA VMs
  o Maximum agility (stateless preferred)
3.2 Design Patterns
vCenter Server Physical or Virtual (VM or Virtual Appliance)
Design Choice
Justification
Impact
vCenter Server Shared or Dedicated
Design Choice
Justification
Impact
vCenter Server Database Shared or Dedicated
Design Choice
Justification
Impact
vCenter Update Manager Location
Design Choice
Justification
Impact
vSphere Management Assistant (vMA)
Design Choice
Justification
Impact
vSphere Auto Deploy
Design Choice
Justification
Impact
vSphere Syslog Collector
Design Choice
Justification
Impact
vSphere ESXi Dump Collector
Design Choice
Justification
Impact
vSphere Authentication Proxy
Design Choice
Justification
Impact
vCLI & PowerCLI
Design Choice
Justification
Impact
Web Client
Design Choice
Justification
Impact
Cluster Architecture
Design Choice
Justification
Impact
Resource Pools
Design Choice
Justification
Impact
Branch office design (cluster, resource pool, vCenter)
Design Choice
Justification
Impact
vSphere License Edition
Design Choice
Justification
Impact
3.3 Logical Design
Draw Cluster Logical Design
Attribute Specification
vCenter Server version
Physical or virtual system
Number of CPUs
Processor type
Processor speed
Memory
Number of NIC and ports
Number of disks and disk size(s)
Operating System Type
3.4 Physical Design
Attribute Specification
Vendor and model
Processor type
NIC vendor and model
Number of ports
Network
Local disk
4. Network
4.1 Requirements
- Production traffic should be isolated from Dev/Test and QA.
- Network properties and statistics of each VM must be preserved after a VMotion.
- Virtual networking must be configured for availability, security, and performance.
4.2 Design Patterns
vNetwork Standard Switch or vNetwork Distributed Switch
Design Choice
Justification
Impact
vSwitch VLAN Configuration
Design Choice
Justification
Impact
References
vSwitch Private VLAN (PVLAN) Configuration
Design Choice
Justification
Impact
vSwitch Load Balancing Configuration
Design Choice
Justification
Impact
vShield Zones
Design Choice
Justification
Impact
vSwitch Security Settings
Design Choice
Justification
Impact
4.3 Logical Design
Draw Network Logical Design
Shading denotes active physical adapter to port group mapping. The vmnics shaded in the same color as a given port group will be configured as active, with all other vmnics designated as standby.
4.4 Physical Design
dvSwitch vmnic NIC / Slot Port Function
vSwitch | Port Group Name | VLAN ID | Primary VLAN ID | VM Type | PVLAN Type | Secondary VLAN ID
5. Storage
5.1 Requirements
- High performance fibre channel storage array will be used (active/active)
- Average Windows server is provisioned with a 15GB OS drive (average used 10GB) and a 40GB data drive (average used 25GB)
- Average Linux server is configured with 60GB total storage (40GB average used)
- Must optimize for performance
5.2 Design Patterns
LUN Sizing
Design Choice
Justification
Impact
Storage Load Balancing
Design Choice
Justification
Impact
VMFS or RDM
Design Choice
Justification
Impact
Host Zoning
Design Choice
Justification
Impact
LUN Presentation
Design Choice
Justification
Impact
Thin vs. Thick Provisioning
Design Choice
Justification
Impact
5.3 Logical Design
Attribute Specification
Storage type
Number of storage processors
Number of FC switches
Number of ports per host per switch
LUN size
Total LUNs
VMFS datastores per LUN
Draw Logical SAN Design
5.4 Physical Design
Attribute Specification
Vendor and model
Type
ESXi host multipathing policy
Min./Max. speed rating of switch ports
6. Virtual Machine
6.1 Requirements
Requirement 1
Requirement 2
Requirement 3
6.2 Design Patterns
VM Deployment Considerations
Design Choice
Justification
Impact
Swap and OS Paging File Location
Design Choice
Justification
Impact
7. Management / Monitoring
7.1 Requirements
Requirement 1
Requirement 2
Requirement 3
7.2 Design Patterns
Server, Network, SAN Infrastructure Monitoring
Design Choice
Justification
Impact
vSphere Management
Design Choice
Justification
Impact
Backup / Restore Considerations
Design Choice
Justification
Impact