BRKUCC-2782 Planning and Designing Virtual UC Solutions on UCS Platform - Joseph Bassaly

Uploaded by cisco-canada, posted 12-Nov-2014

DESCRIPTION

If you are planning to run Cisco Unified Communications applications as virtual machines, then this session is for you. It discusses how to enable a new or existing UC solution to run on Cisco UCS B- and C-Series servers in a VMware-based virtualized environment. It provides a systems-level overview, the requirements, and the design caveats of the virtual UC on UCS architecture, along with discussion of adjacent technologies such as VMware, SAN, QoS, and Nexus 1000V, and the differences between physical and virtual UC deployments. Attendees should have a working knowledge of the Cisco UC product portfolio, especially CUCM; introductory-level knowledge of VMware, UCS, and SAN concepts is also required. The session is aimed at UC and Data Center solution architects, consulting/systems/design engineers, and administrators. It does not cover configuration and troubleshooting details of the virtual UC on UCS solution, but touches on them briefly where necessary.

TRANSCRIPT

Page 1: Planning and Designing Virtual UC Solutions on UCS Platform- Joseph Bassaly

BRKUCC-2782

Planning and Designing Virtual UC Solutions on UCS Platform

Presenter
Presentation Notes
Start strong: Hi, I'm Shahzad. It's really great to be here, and thank you so much for coming to my session. Today we're going to talk about "Planning and Designing Virtual UC Solutions on UCS"; in simple words, "UC on UCS".
Page 2

© 2010 Cisco and/or its affiliates. All rights reserved. Cisco PublicBRKUCC-2782_syali 2

Housekeeping

We value your feedback; please don't forget to complete the session evaluation

You might have visited “World of Solutions” already

Please remember this is a 'non-smoking' venue!

Please switch off your mobile phones

Please make use of the recycling bins provided

Please remember to wear your badge at all times

Page 3

Abstract

Attendees should have a working knowledge of the Cisco UC product portfolio, especially CUCM

Knowing VMware, UCS and SAN concepts is a must for this session

UC applications on Cisco UCS B and C Series, as VMs; a 90-minute design session based on UC 8.5; will not cover configuration and troubleshooting details

Q/A policy: questions may be asked during the session

But due to the time limit, flow, and respect for everyone's interest, some questions might be deferred to the end

Page 4

Virtual DC, UCS Server Virtualization Architecture; UCS hardware, LAN/SAN interconnect

Deployment models, scalability and capacity planning; VM sizing and placement; SAN and VMware design best practices; network considerations, QoS, redundancy / high availability

Case study; monitoring/diagnostics; management options

Page 5

Fundamentals of UC on UCS Architecture

Page 6

Prerequisites

Do you know the basics of... | Answer
CUCM or CallManager | Yes, that's why I am here
Virtualization/VMware | Running server apps as VMs; VMware ESX vs. ESXi hypervisor; vSphere
Storage Area Networking | Separation of storage from the compute; FC, iSCSI and FCoE protocols
Cisco UCS | A new computing architecture from Cisco; besides other components it offers B- and C-Series x86-based servers

Presenter
Presentation Notes
There are different protocols and designs for accessing storage that is separated from the compute; I will only cover the major ones here. First we have DAS (Direct Attached Storage), which everyone is familiar with: the hard drive is attached directly to your server, like an MCS server, and you load your OS and applications right next to your compute. It gives you a single management point, which is good; you don't need to worry about local hard drives failing, and you have a pool of backup drives available. The drawback is that it is typically not shared, so you cannot, for example, run multiple OSes or applications on it. Then we have the iSCSI protocol, which allows access to SCSI data over the existing IP network. This is great because you can now share the medium and access the data at the same time, and SAN storage companies like EMC and NetApp support this protocol in almost all of their storage solutions. But there is some overhead in this approach when you look at the protocol stack, and Ethernet links were not very fast (mainly 1 Gig); with the IP overhead this is not ideal, especially since the same Ethernet wire carries both SCSI data and IP. Now that 10 Gig is becoming popular we might see a shift. The solution the industry settled on was 4 Gig FC, or Fibre Channel, which is secure, fast and scalable and allows the compute to access SCSI data with little overhead compared to iSCSI; 8 Gig FC is also under consideration by the standards bodies. But one challenge remains: cabling, because each server needs two cards, an Ethernet NIC plus an FC HBA to carry the FC SCSI data traffic. So everyone looked for consolidation and unification, and the answer was FCoE.
Page 7

DAS, iSCSI and SAN Comparison

DAS (computer system): Application > File System > Volume Manager > SCSI Device Driver > SCSI Bus Adapter > SCSI bus to local drives

iSCSI (computer system): Application > File System > Volume Manager > SCSI Device Driver > iSCSI Driver > TCP/IP Stack > NIC > IP network > (storage side: NIC > TCP/IP Stack > iSCSI Layer > Bus Adapter)

SAN (computer system): Application > File System > Volume Manager > SCSI Device Driver > FC HBA > FC fabric to block storage devices

All three present block I/O to the host; they differ in the storage transport and storage media attachment.

DAS (Direct Attached Storage): Rack Mount HP/IBM MCS Servers

Popular DAS Protocol: SCSI

iSCSI: Access SCSI storage media using IP network

SAN: Storage (Hard Drives) away from physical server or compute

Popular SAN Protocol: FC
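The layered comparison above can be sketched as data. This is an illustrative sketch only (the structure and helper name are mine, not from the session); it encodes each stack top-down and pulls out the layers below the SCSI device driver, which is where the three approaches actually differ.

```python
# Hypothetical sketch: the per-protocol host stacks from the comparison
# slide, expressed as ordered layer lists (top of stack first).
STACKS = {
    "DAS":    ["Application", "File System", "Volume Manager",
               "SCSI Device Driver", "SCSI Bus Adapter"],
    "iSCSI":  ["Application", "File System", "Volume Manager",
               "SCSI Device Driver", "iSCSI Driver", "TCP/IP Stack", "NIC"],
    "FC SAN": ["Application", "File System", "Volume Manager",
               "SCSI Device Driver", "FC HBA"],
}

def transport_layers(protocol):
    """Layers below the SCSI device driver, i.e. what moves the blocks."""
    layers = STACKS[protocol]
    return layers[layers.index("SCSI Device Driver") + 1:]
```

Everything above the SCSI device driver is identical in all three cases, which is why applications are unaware of which transport is underneath.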

Page 8

Storage Area Network (SAN)

SAN: Way to access the storage sitting away from compute

Remove Hard Drive and put it 1km away

High-performance interconnect providing high I/O throughput

Lower TCO relative to direct attached storage, storage can be shared

Separation of Storage from the Server

(Diagram: clients connect over the LAN to servers, which connect over the SAN to block storage devices)

Page 9

Unified Fabric – FCoE Standard

Protocol stacks over the physical wire:
DAS: SCSI
iSCSI: SCSI > iSCSI > TCP > IP > Ethernet
FC: SCSI > FCP > FC
FCoE: SCSI > FCP > FCoE > Ethernet

Page 10

Unified I/O Architecture Consolidation

Today: separate Ethernet links to the LAN and FC links to SAN A and SAN B

I/O consolidation with FCoE: a single FCoE link from the server to a Nexus 5000, which splits out the LAN, SAN A and SAN B

Page 11

UC on UCS - Virtual DC Overview: Network vs. Server Virtualization

UC applications run as virtual machines on VMware ESXi on the Unified Computing System

Presenter
Presentation Notes
* Virtual Data Center * OS BOX: ESXi specific * UC Focused Virtual Data Center
Page 12

Compute/Server

Page 13

Compute: Cisco UCS B-Series Blade Server Examples

Blade | CPU | Size | Memory | Disks | VMs
UCS B440 M1 | 4x Intel 7500 | Full width | 32 DIMM, 256 GB | 4x 3.5" SAS/SATA drives | N/A
UCS B250 | 2x Intel 5540, or 5640 for M2 | Full width | 48 DIMM, 384 GB | 2x 3.5" SAS drives | N/A
UCS B230 | 2x Intel 6500, or 7500 for M2 | Half width | 32 DIMM, 256 GB | 2x 3.5" SSD drives | N/A
UCS B200 | 2x Intel 5540, or 5640 for M2 | Half width | 12 DIMM, 96 GB | 2x 3.5" SAS drives | 4

Presenter
Presentation Notes
Extended memory server 2S/2U, 384 GB memory 8 SFF SAS/SATA drives 5 PCIe adapters Storage-intensive server 2S/2U, 96 GB memory 16 SFF SAS/SATA drives 5 PCIe adapters General purpose server 2S/1U, 96 GB memory 4 x 3.5” SAS/SATA drives 2 PCIe adapters
Page 14

Compute: UCS B-Series Components

UCS Manager (embedded)
Fabric Interconnect Switch
Fabric Extender (up to 2)
UCS 5108 Blade Server Chassis
Blades: UCS B200 M1 (half width), UCS B250 M1 (full width)
CPU: Intel Xeon E5540; I/O: M71KR-Q/E CNA and M81KR VIC; memory and hard drives

Presenter
Presentation Notes
UCS 2104XP Fabric Extender UCS 5108 Blade Server Chassis (up to 40 per system and up to 320 half-size blades)
Page 15

Compute: Cisco UCS C-Series Rack-Mount Server Examples

Rack Server | CPU | Size | Memory | Disks | Adapters | VMs
UCS C460 M1 | 4x Intel 7500 | 4RU | 64 DIMM, 512 GB | 12 SAS/SATA drives | 10 PCIe | N/A
UCS C250 M1 (memory intensive) | 2x Intel 5540 | 2RU | 48 DIMM, 384 GB | 8 SFF SAS/SATA drives | 5 PCIe | N/A
UCS C210 M1 (M2) | 2x Intel 5540 (5640) | 2RU | 12 DIMM, 96 GB | 16 SFF SAS/SATA drives | 5 PCIe | 4
UCS C200 M2 | 2x Intel 5506 | 1RU | 12 DIMM, 96 GB | 4x 3.5" SAS/SATA drives | 2 PCIe | 4

Page 16

UC Server Virtualization

Page 17

New UC Deployment Building Blocks: Thinking Outside the (MCS) Box

Deployments have been based on single-application MCS servers

Virtualization allows multiple virtual machines to access common hardware resources

Solution capacity and deployment models do not change

Building blocks change from physical "servers" with CPU/MEM/HDD to VMs

The number of required "servers" remains the same, but the hardware will vary

Presenter
Presentation Notes
In the past, when planning a UC deployment, the basic building block for the solution was the MCS server. This dedicated, single-application hardware would greatly increase the capital costs associated with the deployment; in fact, deployments were often shorted on redundancy to keep the hardware costs down. When deploying in a virtual environment, the building-block units change from MCS to virtual resources (vCPU, vMemory, vLAN, etc.), so when planning a virtual deployment one must consider the virtual requirements, not the physical requirements.
Page 18

Virtual UC on UCS Architecture

Physical servers being replaced: MCS 7816/25/28, MCS 7835/45 (or UCS C210/C200)

Components in the architecture (diagram):
- UCS 5108 chassis with UCS B200 blades (with CNA) running the hypervisor and virtual UC apps
- UCS 2100 fabric extender (FEX) in the chassis
- UCS 6100XP Fabric Interconnect switches, uplinked over 10GbE to the LAN (Catalyst/Nexus) and over FC (via MDS) to the SAN
- Storage array (for UC apps) on the SAN
- LAN connectivity at 10/100/1GbE to the rest of the intranet and PSTN/PTT
- Management: UCS Manager, CIMC for UCS, vSphere/vCenter

Presenter
Presentation Notes
This is what we are going to discuss in this session. Roadmap notes: the UCS C-Series C210 M1 is good to go ("Goddard1", 1 app/server); UCS C210 M1 (1 app/server) is coming in May/June; UCS C210 M1 with multiple apps per server is coming in August.
Page 19

Virtual Design Differences


Page 20

Cluster Based on Virtual Resources

MCS servers (CPU/Mem/HDD) become virtual machines (OVA) with vCPU/vMem and SAN or DAS storage

The same cluster nodes exist on either side: Unified CM, Unity Connection, CUCCX

Presenter
Presentation Notes
So instead of having four physical machines with dedicated CPU/memory/HDD, a virtual deployment has the same "server" components but deploys them on virtual hardware that provides the vCPU/vMemory/vHDD the application requires. Different applications have different vCPU/vMemory/vHDD requirements, but those requirements are abstracted from the physical server and allocated via an OVA definition.
Page 21

Virtual Machine Specification

The number of VMs is typically the same as the number of physical MCS servers

But virtual machines are measured by: vCPU, vRAM, vDisk, vNICs

A VM solution can be deployed on any "supported" hardware mix that meets the specified resources

Multiple VMs can share the same physical hardware

Presenter
Presentation Notes
Any HW base can be used as long as it has capacity. vCPU right now will be the biggest bottleneck since all VTG applications (except Unity) require dedicated vCPU’s.
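The shift from part numbers to resource requests can be sketched in a few lines. This is an illustrative sketch (the class and names are mine, not Cisco's); the resource figures come from the UC 8.5(1) OVA table later in the deck (a 7,500-user CUCM node: 2 vCPU, 6 GB vRAM, 2 x 80 GB vDisk).

```python
from dataclasses import dataclass

# A virtual "server" is described by the four resources the slide lists,
# instead of by an MCS model number.
@dataclass(frozen=True)
class VMSpec:
    name: str
    vcpu: int
    vram_gb: float
    vdisk_gb: int   # total across virtual disks
    vnics: int = 1

# A two-node CUCM cluster expressed as resource requests (7,500-user OVA).
cluster = [
    VMSpec("CUCM-PUB", vcpu=2, vram_gb=6, vdisk_gb=160),
    VMSpec("CUCM-SUB", vcpu=2, vram_gb=6, vdisk_gb=160),
]
total_vcpu = sum(vm.vcpu for vm in cluster)
```

Planning then becomes a matter of summing these requests against what the chosen hardware provides, rather than counting boxes.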
Page 22

UC on UCS Tested Reference Configurations (TRC)

Page 23

Cisco UCS B200 M2 (UCS-B200M2-VCS1): B200 M2 TRC Example

(Diagram: UCS 5108 chassis, UCS 2100 fabric extender, UCS 6100XP Fabric Interconnect switches, FC to the SAN (MDS) and 10GbE to the LAN (Catalyst/Nexus), storage array for the UC apps, rest of intranet and PSTN/PTT)

Configuration (M1):- 32GB RAM- 2 x E5540 CPU- 2 x 146GB SAS Drives- M71KR-Q CNA Adapter- Supports multiple VMs

Configuration (M2):- 48GB RAM- 2 x E5640 CPU- 2 x 146GB SAS Drives- UCS M81KR VIC- Supports multiple VMs

Management:- UCS Manager- vSphere/vCenter

Presenter
Presentation Notes
M1 has 10K SAS drives, M2 has 15K SAS drives
Page 24

Cisco UCS C200 M2 (UCS-C200M2-VCD2): C200 M2 TRC Example

UCS C200 M2

(Diagram: 10/100/1GbE to the LAN (Catalyst), rest of intranet and PSTN/PTT)

Management:- CIMC for UCS- vSphere/vCenter

Configuration:- Dual Quad Core E5506- 4 x 1TB SAS Drives- 24 GB RAM- 2 x 1GB NICs Ethernet- 1 x 1GB NIC for CIMC- Supports multiple VMs

Page 25

Tested Reference Configurations Summary

Server Model/Generation & TRC | Collaboration SKU | Notes
B200 M2 TRC #1 | UCS-B200M2-VCS1 | Co-res, SAN
B200 M1 TRC #1 | UCS-B200M1-VCS1 | Co-res, SAN
C210 M2 TRC #1 | UCS-C210M2-VCD2 | Co-res, DAS
C210 M2 TRC #2 | DC SKU only | Co-res, SAN
C210 M1 TRC #1 | UCS-C210M1-VCD1 | Single VM, DAS
C210 M1 TRC #2 | UCS-C210M1-VCD2 | Co-res, DAS
C210 M1 TRC #3 | DC SKU only | Co-res, SAN
C200 M2 TRC #1 | UCS-C200M2-VCD2 | Co-res, DAS (1K users)

For B-Series: DAS (for ESXi) and SAN (for the VMs) are required

For C-Series: C210 supports DAS and SAN options; C200 supports the DAS option only

Page 26

Planning and Design


Page 27

UC Deployment Model (Application)

All UC deployment models are supported

No change in the current deployment models; the base deployment models (Single Site, Centralized Call Processing, etc.) are not changing

NO software checks for design rules: no rules or restrictions are in place in the UC apps to check, for example, whether you are running the primary and subscriber on the same blade

Clustering over WAN (COW), Mega-Cluster

Mixed/Hybrid Cluster

vBlock

http://www.cisco.com/go/ucsrnd

Presenter
Presentation Notes
FAQ: Q: Two Unity Connection nodes on one blade. Does that need 2 extra CPU cores? A: No, only 1.
Page 28

High Availability Design Rules

Current Business Continuity and Disaster Recovery strategies are still applicable

The UC apps redundancy rules are same

Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact

Primary/secondary on different blades, chassis, sites

On the same blade, mix Subscribers with TFTP/MoH rather than placing just Subscribers together

Redundancy of UCS components (blade, chassis, FEX links, Interconnect switching)

Redundancy of “new” network types (10GbE, SAN multi-pathing, etc.)
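The distribution rule above lends itself to a simple placement sanity check. This is a minimal sketch under assumptions I'm making about the data shapes (a dict of VM-to-blade assignments and a list of primary/secondary pairs); it is not a Cisco tool, just the rule expressed as code.

```python
# Flag redundant pairs that violate the HA rule by landing on the
# same blade (the same idea extends to chassis and sites).
def violates_ha(placement, pairs):
    """placement: {vm_name: blade_id}; pairs: [(primary, secondary), ...].
    Returns the pairs whose two members share a blade."""
    return [(p, s) for p, s in pairs if placement[p] == placement[s]]

placement = {"CUCM-PUB": "blade-1", "CUCM-SUB1": "blade-2", "CUCM-SUB2": "blade-1"}
ok = violates_ha(placement, [("CUCM-PUB", "CUCM-SUB1")])   # empty list: rule satisfied
```

Running the same check per chassis and per site catches failure domains that a per-blade view alone would miss.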

Page 29

Virtual Machine Sizing and Placement

Page 30

Virtual Machine Sizing

Virtual machine characteristics are stored in an OVA

An OVF package consists of several files placed in one directory; a one-file alternative is the OVA package

Each product has one or more defined OVAs

The OVA defines: vCPU, vRAM, vDisk, vNICs, OS type; network and storage traffic profiles

The OVA naming scheme includes product, user count and revision:
CUCM_7500_user_v1.0_vmv7.ova
CUC_5000_user_v1.0_vmv7.ova

Cisco UC OVAs include partition alignment

http://en.wikipedia.org/wiki/Open_Virtualization_Format
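The naming scheme above is regular enough to parse. The following is a hedged sketch (the regex and helper are mine, inferred only from the two filenames shown; real OVA names may vary), extracting product, user count, revision and VM hardware version.

```python
import re

# Pattern inferred from names like CUCM_7500_user_v1.0_vmv7.ova:
# <product>_<users>_user_v<rev>_vmv<hw-version>.ova
OVA_RE = re.compile(
    r"^(?P<product>[A-Za-z]+)_(?P<users>\d+)_user"
    r"_v(?P<rev>[\d.]+)_vmv(?P<vmv>\d+)\.ova$"
)

def parse_ova(filename):
    """Return the name's fields as a dict, or None if it doesn't match."""
    m = OVA_RE.match(filename)
    if not m:
        return None
    return {k: (int(v) if v.isdigit() else v) for k, v in m.groupdict().items()}
```

For example, `parse_ova("CUC_5000_user_v1.0_vmv7.ova")` identifies a 5,000-user Unity Connection template at revision 1.0 on VM hardware version 7.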

Page 31

UC VM Configurations 8.5(1)

Product | Scale (users) | vCPU | vRAM (GB) | vDisk (GB)
CUCM | 7,500 | 2 | 6 | 2 x 80
CUCM | 2,500 | 1 | 2.25 | 1 x 80
Unity Connection | 20,000 | 7 | 8 | 2 x 300
Unity Connection | 10,000 | 4 | 4 | 2 x 146
Unity Connection | 5,000 | 2 | 4 | 1 x 200
Unity Connection | 500 | 1 | 2 | 1 x 160
Unity | 15,000 | 4 | 4 | 4 x 24
Unity | 5,000 | 2 | 4 | 4 x 24
CUP | 5,000 | 4 | 4 | 2 x 80
CUP | 2,500 | 2 | 4 | 1 x 80
UCCX/IPIVR | 400 agents | 4 | 8 | 2 x 146
UCCX/IPIVR | 300 agents | 2 | 4 | 2 x 146

Plus 1 vCPU for the Unity Connection ESXi scheduler. SME = CUCM Session Management Edition.

Presenter
Presentation Notes
2.25 Reserved on ESXi
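The OVA table can drive a rough fit check against a candidate server. This sketch rests on assumptions I'm making explicit: the `OVA_8_5_1` dict holds a few (vCPU, vRAM) pairs copied from the table, the rule dedicates one physical core per vCPU plus a reserved core for ESXi, and it allows no RAM oversubscription. `fits` is a hypothetical helper, not a Cisco sizing tool.

```python
# (vCPU, vRAM GB) per OVA, taken from the UC 8.5(1) table above.
OVA_8_5_1 = {
    "CUCM-7500": (2, 6),
    "CUC-5000":  (2, 4),
    "CUP-2500":  (2, 4),
}

def fits(server_cores, server_ram_gb, vms, reserved_cores=1):
    """Can this set of co-resident VMs run on the given server?
    Assumes dedicated vCPU-to-core mapping and no RAM oversubscription."""
    need_cpu = sum(OVA_8_5_1[v][0] for v in vms) + reserved_cores  # ESXi overhead
    need_ram = sum(OVA_8_5_1[v][1] for v in vms)
    return need_cpu <= server_cores and need_ram <= server_ram_gb
```

For a blade with two quad-core CPUs and 48 GB RAM, a CUCM + Unity Connection + CUP trio needs 7 of the 8 cores and 14 GB of RAM, so it fits with nothing to spare on the CPU side.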
Page 32

CMBE 6000 (C200) VM Configurations 8.5(1)

* R = Reserved

Product | Scale (users) | vCPU cores | vRAM (GB) | vDisk (GB)
CUCM | 1,000 | 2 (600 MHz *R) | 4 (*R) | 1 x 80
Unity Connection | 1,000 | 1 | 4 | 1 x 160
CUP | 1,000 | 1 (800 MHz *R) | 2 | 1 x 80
UCCX/IPIVR | 100 | 2 | 4 | 2 x 146

UC VM Configurations 8.5(1)

Product | Scale (users) | vCPU cores | vRAM (GB) | vDisk (GB)
CER | 30,000 | 2 | 6 | 2 x 80
CER | 20,000 | 1 | 2.25 | 1 x 80
CER | 12,000 | 1 | 2.25 | 1 x 80

Presenter
Presentation Notes
2.25 Reserved on ESXi
Page 33

UCCE/CVP VM Configurations 8.5(1)

UCCE
Component & Scale * | vCPU | vRAM (GB) | vDisk (GB) | vNIC
Router, 8000 agents | 2 | 4 | 1 x 80 | 2
Logger, 8000 users | 4 | 4 | 1 x 150 | 2
Agent PG, 2000 users | 2 | 4 | 1 x 80 | 2
VRU PG, 9600 ports, 10 PIMs | 2 | 2 | 1 x 80 | 2
AW Server (25 clients) | 1 | 2 | 1 x 40 | 1

CVP
Component & Scale * | vCPU | vRAM (GB) | vDisk (GB) | vNIC
Call+VXML Server (900 calls) | 4 | 4 | 1 x 146 | 1
Reporting (Large) (840 msg/sec) | 4 | 4 | 1 x 72 + 1 x 438 | 1
OAMP Server | 2 | 2 | 1 x 20 | 1

Page 34

Virtualized Server Placement: Hypothetical C-Series Layout

Small site: 500 users with voicemail and 50 contact center agents; consolidate 7 servers into 2 C-Series

Rack Server #1 (ESXi, CPU-1 and CPU-2, 4 cores each): PUB/TFTP, CCX-1, UCxn (Active), CUP
Rack Server #2 (ESXi, CPU-1 and CPU-2, 4 cores each): SUB, UCxn (Standby), CCX-2

PROs: 7:2 server consolidation, 4 RUs, application redundancy, extra "server" at no HW cost

CONs: Extra server

Page 35

UCS Server Selection

Page 36

Server design considerations: which UCS servers should be deployed?

Does the customer already have a data center with SAN?
- ROI realized much earlier
- SAN/data center knowledge simplifies deployment

Is UC a driver for implementing SAN?
- SAN/data center knowledge is key to a successful deployment
- Much lower ROI due to SAN costs

UCS chassis management:
- B-Series chassis have centralized management via UCS Manager
- C-Series are managed individually via CIMC

Presenter
Presentation Notes
An MCS 7845-I3 costs $24K. A SAN for the DC costs $150K-$200K; add in the Nexus FC switch and the cost can reach $250K. That is approximately another 10 servers, hence 20-22 servers for UCS + SAN. ROI = financial break-even.
Page 37

Server Selection Guideline

Decision flow (reconstructed from the flowchart):

Start: Do you already have a DC/SAN? If not, are you building a DC for UC (added $$)?

1. Large: with a DC/SAN available (or planned) and more than 10 "servers": more than 24 vCPU points to UCS B200, otherwise C210Mx on SAN
2. Medium: without SAN, more than 8 vCPU or more than 1,000 users points to C210Mx with DAS
3. Small: otherwise, C200M2/BE6000

Presenter
Presentation Notes
The C200 server is targeted for < 1K users. The 1K templates for CUCM and VM use only 1 vCPU.
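The flowchart above can be rendered as a function. This is one plausible reading of the chart, not an official selection tool: the thresholds come from the slide, the return strings are my shorthand, and real selection involves more factors (TRC support, co-residency rules, growth).

```python
# Hypothetical decision helper mirroring the server selection flowchart.
def select_server(has_dc_san, building_dc_for_uc, servers, vcpu, users):
    if has_dc_san or building_dc_for_uc:          # SAN available, or planned at a cost
        if servers > 10 and vcpu > 24:
            return "UCS B200 (SAN)"               # Large deployment on blades
        return "UCS C210Mx (SAN)"                 # SAN-attached rack servers
    if vcpu > 8 or users > 1000:
        return "UCS C210Mx (DAS)"                 # Medium, local storage
    return "UCS C200M2 / BE6000"                  # Small (< 1K users)
```

A small shop with no SAN, 4 vCPU of requirements and 500 users lands on the C200M2/BE6000 branch, matching the note that the C200 is targeted at fewer than 1,000 users.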
Page 38

VMware/vCenter Design Best Practices

Page 39

Virtual Machine Planning/Designing

ESXi 4.0 only; ESX is not supported

vSphere Hypervisor (free ESXi) and the Standard through Enterprise Plus editions

VMware feature support varies by application; Partial = limited support (see the following URL for details)
http://docwiki.cisco.com/wiki/Unified_Communications_VMWare_Requirements

ESXi 4.0 Feature | CUCM | Unity Connection
VM Templates (OVAs) | Yes | Yes
Copy Virtual Machine | Yes | Yes
Resize Virtual Machine | Partial | Partial
VMware Hot Add | No | No
VMware HA | Yes | Yes
VMware vMotion | Partial | Partial
VMware DRS | No | No
P2V or vCenter Converter | No | No

Presenter
Presentation Notes
Good to have: vMotion works but is not supported because it has not been fully tested. During the vMotion cutover the system is paused; for a real-time streaming media application such as Cisco Unity Connection messaging, this creates a service interruption. While testing shows that calls are not dropped during a vMotion migration, voice calls in progress experience degraded voice quality after the migration.

SRM accelerates recovery for the virtual environment through automation, e.g. programming "wait for OS heartbeat" before powering on a VM. VMware Distributed Resource Scheduler (DRS) continuously monitors utilization across resource pools and intelligently allocates available resources among virtual machines according to business needs.

We know this is not an ideal situation for us or for our customers and partners, but we are not supporting the majority of the VMware features, at least in this first release, because our approach with this offering is to test the waters before swimming at full speed. It also comes back to the testing resources we have available. We did not want to dictate to our customers, but at the same time we did not want many TAC cases and CAPs opened against features we had not tested and were not fully comfortable with. So as a customer or partner, if you use these features, there are no software checks that will prevent you and we won't stop you, but we also cannot support you. The good news is that all of them are on the roadmap and will be delivered based on priorities and business needs in coming UC releases.

vMotion allows a running application to move from one ESX server to another with virtually no downtime. VMware Fault Tolerance (FT) provides continuous availability by creating a live shadow instance of a virtual machine, allowing instantaneous failover between the two instances in the event of hardware failure and eliminating even the smallest data loss or disruption. VMware High Availability (HA) provides easy-to-use, cost-effective high availability for applications running in virtual machines: in the event of a physical server failure, affected virtual machines are automatically restarted on other production servers with spare capacity; in the case of an operating system failure, VMware HA restarts the affected virtual machine on the same physical server. VMware DRS balances computing workloads with available resources in a virtualized environment. VMware Consolidated Backup provides a centralized backup facility for virtual machines that works in conjunction with many leading backup software providers, enabling third-party backup software to protect virtual machines and their contents through a centralized backup server rather than directly on an ESX server.
Page 40

VMware Management

Cisco is not dictating management of VMware images; the customer's strategy will depend on the number of images

VMware vSphere Client:
- Thick client downloaded to a Windows PC
- Directly manages each VMware ESXi host (Model A)
- Can connect to VMware vCenter Server to centrally manage all ESXi hosts (Model B)

VMware vCenter Server:
- Windows server running vCenter 4.x
- Provides a central point of configuration, provisioning and management
- Only way to get chassis hardware failure notification

Page 41

LAN/SAN Design Best Practices

Page 42

Virtual Software Switch Options

(Diagram: on a UCS B200, VMs attach via vNICs to a software switch in the ESXi hypervisor, which uplinks via vmNICs and the CNA, carrying FCoE, toward the LAN and SAN)

Feature | VMware vSwitch | VMware dvSwitch | Cisco Nexus 1000V
Scope | Host based (local) | Distributed | Distributed
VLAN tagging | IEEE 802.1Q | IEEE 802.1Q | IEEE 802.1Q
VLAN visibility | Local ESXi host only | All ESXi hosts | All ESXi hosts
EtherChannel | Yes | Yes | Yes
Virtual PortChannel | -- | -- | Yes
QoS marking (DSCP/CoS) | -- | -- | Yes
ACL | -- | -- | Yes
SPAN, RADIUS/TACACS+ | -- | -- | Yes
VM needed | No | No | Yes
vCenter needed | No | Yes | Yes

Presenter
Presentation Notes
Virtual networking concepts similar with all virtual switches http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/solution_overview_c22-526262.pdf
Page 43

Physical switch maps L3 DSCP to L2 CoS

UC QoS Concepts – With C-Series or MCS

The CUCM VM or MCS is connected to a switch

CUCM marks traffic based on L3 DSCP values

The physical switch (e.g. a Cat6k) maps L3 DSCP to L2 CoS (if needed):

dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5

Example: a CTL packet leaves CUCM with L2 CoS 0 and L3 DSCP CS3; after the Cat6k mapping it carries L2 CoS 3 and L3 CS3.

Presenter
Presentation Notes
The UC apps like CUCM and Unity mark traffic with the appropriate DSCP values, and UCS honors those markings and carries them all the way to the northbound interface toward the upstream switch (such as a Nexus 7K or CAT6500). The Menlo adapter does not re-write those markings, and a UCS 6120XP in end-host mode won't either: UCS does not look into the IP header, so it does not re-write DSCP values and trusts the UC application OS. By default only two QoS classes are enabled in the UCS Fabric Interconnect: FCoE (no-drop policy) and everything else (match-any, best effort). CoS markings are only meaningful when CoS classes are enabled on the UCS Fabric Interconnect (UCS 6120XP), so by default all CoS values are honored and unchanged. http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/securecldg.html#wp335477 IP ToS and DSCP have no effect on UCS internal QoS and cannot be copied to the internal 802.1p CoS; however, the DSCP/ToS set in the IP header is not altered by UCS.
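The DSCP-to-CoS mappings configured above follow the conventional default rule: CoS is the three most-significant bits of the 6-bit DSCP. A small sketch (mine, for illustration) makes the arithmetic behind the two `mls qos map dscp-cos` commands explicit.

```python
# Conventional default L3-to-L2 mapping: take the top 3 bits of DSCP.
def dscp_to_cos(dscp):
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit field")
    return dscp >> 3

assert dscp_to_cos(24) == 3   # CS3, call signalling -> CoS 3
assert dscp_to_cos(46) == 5   # EF, voice media -> CoS 5
```

The Catalyst commands set these two mappings explicitly, which matches what the bit-shift produces for CS3 and EF.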
Page 44

UC QoS Concepts – With B-Series: 6100 switch in the middle, L2-based prioritization

The DSCP value in the IP header is not altered by the 6100

The 6100 sends the packet to the physical Ethernet switch

Default QoS settings on UCS: FCoE (“match cos 3”) – no-drop policy; Rest (“match any”) – Best Effort queue

vSwitch & UCS 6100 cannot map L3 DSCP to L2 CoS

(Diagram: CUCM → UCS 6100 → CAT6K; the packet stays L2:0 / L3:CS3 end to end.)

Presenter
Presentation Notes
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/ucs_xd_xenserver_ntap.pdf http://topic.cisco.com/news/cisco/mktg/ask-ucs-pm/msg04316.html For instance voice signaling traffic with L3 DSCP value of CS3 is mapped to L2 CoS value of 3 by Nexus 1000V. All FCoE (Fibre Channel over Ethernet) traffic is marked with L2 CoS value of 3 by UCS. When voice signaling and FCoE traffic enter in the UCS 6100 Fabric Interconnect Switch, both will carry CoS value of 3. In this situation voice signaling traffic will share queues and scheduling with Fibre Channel priority class and will be given lossless behavior (Fibre Channel priority class for CoS 3 in the UCS 6100 Fabric Interconnect Switch does not imply that the class cannot be shared with other types of traffic.)   On the other hand, L2 CoS value for FCoE traffic can be changed from its default value of 3 to another value and CoS 3 could be reserved exclusively for the voice signaling traffic. However this approach is not suggested or recommended at all because some CNAs (Converged Network Adapters) would cause issues when FCoE CoS value is not set to value of 3.
IP traffic competing with FC traffic

B-Series – Potential Congestion Scenario

(Diagram: UC apps with disk space on a UCS B200 blade server send converged FCoE and 10 Gbps Ethernet traffic through the UCS-6100 FI switch, which supports up to 20 UCS 5108 chassis; Fibre Channel continues to the SAN disk array, Ethernet to the Catalyst switch.)

Presenter
Presentation Notes
Why are we discussing all this. Why are we focusing on QoS? Well because there might be a congestion scenario although highly unlikely. But as a best practice we have to keep QoS considered for the worst case scenario. FCoE Traffic might cause congestion at the UCS 6100 level.

B-Series QoS Best Practices

• UC blades: network adapter QoS policy set to Platinum (CoS = 5; no drop)
• Non-UC blades: network adapter QoS policy set to Best Effort

N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed

Without N1Kv, the caveats are:
• All traffic types from the virtual UC app will get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable

(Diagram: CUCM (L2:0 / L3:CS3) → N1KV marks CoS (L2:3 / L3:CS3) → UCS 6100 → CAT6K (L2:3 / L3:CS3).)
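The presenter notes that the Nexus 1000V performs the L3-DSCP-to-L2-CoS mapping, e.g. voice signaling at CS3 (DSCP 24) to CoS 3. A hedged NX-OS-style sketch of such a marking policy; the class and policy names are invented for illustration:

```
class-map type qos match-any UC-SIGNALING
  match dscp 24
policy-map type qos UC-MARKING
  class UC-SIGNALING
    set cos 3
```

The policy would then be attached to the UC port profile with a service-policy statement.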

Presenter
Presentation Notes
VMware vSwitch. All other non-UC blades should set the QoS policy to best effort. N1Kv will give end-to-end QoS. This is only for B-Series. Second slide: risk of congestion; design dependent.

FC Network Best Practices/Guidelines – Compute Layer and SAN/Storage Layer (Cisco SRND)

(Diagram: UCS 5100 blade servers running the Nexus 1000V connect over 4x10GE links to a redundant pair of Cisco UCS 6100 Fabric Interconnects; FC links run through Cisco SAN switches to the FC storage array (SP-A / SP-B, 3rd-party layer).)

3rd Party SAN Example
• CUCM VM ≈ 200 IOPS; 200 IOPS × 4 KB ≈ 6.4 Mbps per VM
• 1 rack; 12 DAEs
• Total capacity 28,000 IOPS; 14,000 IOPS per controller
• 4 KByte block size
• 14,000 IOPS × 4 KB ≈ 448 Mbps, against 600 Mbps of throughput per controller

Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
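The throughput arithmetic above can be checked directly (taking 1 KB as 1,000 bytes, which matches the 6.4 Mbps per-VM figure):

```python
# Convert an IOPS figure at a fixed block size into Mbps of FC throughput.
def iops_to_mbps(iops: int, block_bytes: int = 4000) -> float:
    return iops * block_bytes * 8 / 1_000_000

print(iops_to_mbps(200))     # 6.4 Mbps per CUCM VM
print(iops_to_mbps(14_000))  # 448.0 Mbps per controller, well under a 4 Gbps FC link
```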

Presenter
Presentation Notes
http://topic.cisco.com/news/cisco/mktg/ask-ucs-pm/msg04316.html In this example we are using EMC CX4-240

SAN Array LUN Best Practices / Guidelines
• HD recommendation: FC class (e.g., 450 GB 15K, 300 GB 15K) ≈ 180 IOPS per HDD
• LUN size restriction: must never be greater than 2 TB
• UC VM apps per LUN: between 3 and 8 (different UC apps have different space requirements based on their OVAs)
• LUN size recommendation: between 500 GB and 1.5 TB

(Diagram: five 450 GB 15K RPM hard disks form a single RAID5 group with 1.4 TB of usable space.)

(The RAID group is carved into two 720 GB LUNs: LUN 1 holds VMs PUB, SUB1, and UCCX1; LUN 2 holds UCCX2, CUP1, and CUP2.)
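A quick way to sanity-check a proposed LUN layout against these guidelines; the per-VM vDisk sizes below are illustrative assumptions, not Cisco-published OVA figures:

```python
# Check one LUN against the guidelines: 3-8 UC VMs per LUN, LUN <= 2 TB,
# and the VMs' combined vDisk space must fit inside the LUN.
LUN_SIZE_GB = 720
MAX_LUN_GB = 2000          # "must never be greater than 2 TB"
MIN_VMS, MAX_VMS = 3, 8

def lun_ok(vm_disk_gb: dict) -> bool:
    total = sum(vm_disk_gb.values())
    return (MIN_VMS <= len(vm_disk_gb) <= MAX_VMS
            and total <= LUN_SIZE_GB <= MAX_LUN_GB)

# Assumed vDisk sizes (GB) for the VMs on LUN 1 -- hypothetical values.
lun1 = {"PUB": 110, "SUB1": 110, "UCCX1": 146}
print(lun_ok(lun1))  # True
```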

Presenter
Presentation Notes
LUN size is always less than 2 terabytes (TB).
IOPS per HDD is approximately 180.
Number of UC VMs per LUN: between 4 and 8 (different UC apps require different space based on OVA requirements).
Create LUNs of between 500 GB and 1.5 TB.

Implement/Operate

3

Remember! OVA is your MCS Server Now


Case Study


Deployment Model: Distributed Call Processing
• CUCM and applications located at each site
• Up to 30,000 lines per site
• 100+ sites
• Transparent use of the PSTN if the IP WAN is unavailable

(Diagram: three sites with 12,000, 2,000, and 500 IP phones, each with its own CUCM cluster and applications (UnityC, UCCX, CUP), interconnected over the IP WAN through a CUSP SIP proxy, with the PSTN as backup.)

Presenter
Presentation Notes
This is an example of a large company that has its HQ in San Jose. One subsidiary is in Irvine, CA – think of it as Linksys (Branch-A). And the second one is in San Francisco (Pure Digital) as Branch-B.

12K Phones

10K Messaging Users

10K CUPC Clients

240 UCCX Agents, 10 Supervisors

Input → Server Requirements (MCS servers)
• CUCM: 11
• Unity Connection (UCxn): 2
• CUP: 2
• UCCX: 2
• Total: 17

Presenter
Presentation Notes
Above 1250 users, Cisco recommends a dedicated publisher and separate servers for primary and backup call processing subscribers. The Cisco TFTP service that provides this functionality can be enabled on any server in the cluster. However, in a cluster with more than 1250 users, other services might be impacted by configuration changes that can cause the TFTP service to regenerate configuration files. Therefore, Cisco recommends that you dedicate a specific subscriber node to the TFTP service, as shown in Figure 8-1, for a cluster with more than 1250 users or any features that cause frequent configuration changes.

12K Phones

10K Messaging Users

10K CUPC Clients

240 UCCX Agents, 10 Supervisors

Input → Server Requirements – 12K Devices/Users
• MCS servers: Total 17 (CUCM 11, UCxn 2, CUP 2, UCCX 2)
• B200 servers: Total 6

(Diagram: VM placement across six B200 blades, each with two quad-core CPUs. B200-1 through B200-4 host the CUCM publisher (PUB), subscribers SUB-1 through SUB-8, TFTP-1 and TFTP-2, CUP-1 and CUP-2, and UCCX-1 and UCCX-2; B200-5 and B200-6 each run ESXi dedicated to an active Unity Connection node, UCxn-1 and UCxn-2.)

VM sizing: CUCM 2 vCPU / 6 GB RAM (7.5K users); CUC 4 vCPU / 4 GB RAM (10K users); CUP 4 vCPU / 4 GB RAM (5K users); UCCX 2 vCPU / 4 GB RAM (300 agents)
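As a cross-check on the six-blade answer, the total vCPU demand from the slide's VM counts can be compared with the cores available, assuming the 1 vCPU : 1 physical core mapping used for virtual UC sizing:

```python
# (count, vCPUs per VM) taken from the slide above.
vm_specs = {
    "CUCM": (11, 2),   # PUB + 8 SUBs + 2 dedicated TFTP nodes
    "UCxn": (2, 4),
    "CUP":  (2, 4),
    "UCCX": (2, 2),
}
total_vcpus = sum(count * vcpus for count, vcpus in vm_specs.values())

blades, sockets_per_blade, cores_per_socket = 6, 2, 4   # B200: two quad-core CPUs
available_cores = blades * sockets_per_blade * cores_per_socket

print(total_vcpus, available_cores)  # 42 48
```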

Presenter
Presentation Notes
* vCenter and N1KV are also sitting here, plus a spare blade

Management Layers

vCenter vs. standalone – depends on the vendor/DC team

Virtual KVM over IP, CIMC, UCS Manager (UCS C210 M1)

Web GUI/CLI (CUCM/UnityC/etc.); Windows apps (UCCE/CVP/etc.)


Deploying UC Virtual Machine – B&C Series

UC VM OVF Templates: http://www.cisco.com/go/uc-virtualized

OVF Templates Provided by Cisco

(Diagram: four-step flow – the Cisco-provided OVF template is deployed to a UCS C210 M1 or B200 M1 host, and the application is then installed from its ISO.)
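The OVF deployment in the flow above can also be scripted with VMware's ovftool; the VM name, datastore, host name, and OVA file name below are hypothetical placeholders:

```
ovftool --name=CUCM-PUB --datastore=UC-LUN1 \
  cucm_template.ova vi://root@esxi-host.example.com/
```

The vSphere Client equivalent is File > Deploy OVF Template.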

Presenter
Presentation Notes
So this is what the outcome looks like. So you either do it manually or by using the OVA files to configure the virtual machines. OVF files will configure the partition appropriately too
Page 56: Planning and Designing Virtual UC Solutions on UCS Platform- Joseph Bassaly

© 2010 Cisco and/or its affiliates. All rights reserved. Cisco PublicBRKUCC-2782_syali 57

Managing Virtual UC Application (B & C Series)

Virtual UC apps are NOT aware of the type of server hardware or storage being used.

At login to the CLI and GUI, the VM configuration is displayed.

No VM BIOS management; no hardware management and monitoring.

New iostat information has been added to RTMT and is logged (perfmon counters) to help debug disk-I/O-related issues on the SAN.

Presenter
Presentation Notes
“show environment fans/temp” CLI command → “This is a VM”
“show hardware” → VM information
“show memory size” → “This is a VM”
“show tech system bus page” → shows VM info

Complete Your Online Session Evaluation

Give us your feedback and you could win fabulous prizes. Winners announced daily.

Receive 20 Cisco Preferred Access points for each session evaluation you complete.

Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center.

Don’t forget to activate your Cisco Live and Networkers Virtual account for access to all session materials, communities, and on-demand and live activities throughout the year. Activate your account at any internet station or visit www.ciscolivevirtual.com.

Only 3 Slides Left Now :- )


Questions ?

Presenter
Presentation Notes
I appreciate your attention today. Thank You!

External Resources
• Unified Communications Network Design Guide (SRND): http://www.cisco.com/go/ucdesign
• UC Virtualization: http://www.cisco.com/go/uc-virtualized
• Supported UCS Hardware Specs: http://www.cisco.com/go/swonly
• UC on UCS Solution Overview and Ordering Information: http://www.cisco.com/go/uconucs
• Customer Success Story – Station Casino (12K Phones): http://newsroom.cisco.com/dlls/2010/prod_071310.html
• Virtual UC Deployment Guide on B-Series by Shahzad Ali: https://supportforums.cisco.com/docs/DOC-6158

Presenter
Presentation Notes
Well, I guess I told you everything you need to know then. [heh heh] I'll be around after if you think of anything. Thanks again!

Recommended Reading
Please browse the on-site Cisco Store for suitable reading.


Presenter
Presentation Notes
* NMTG Products * Network Virtualization * Sample Security and QoS Policies for UC * it is more than template