
Page 1: Technologies Transforming the Data Center - Terena

BRKDCT-2399

Technologies Transforming the Data Center

Page 2: Technologies Transforming the Data Center - Terena

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2399 2

§  Virtualization, Consolidation, Cloud: these IT innovations can mean different things depending on your organization or your perspective. Data Center thought leaders and experts will dissect the key networking, storage and compute technologies that are transforming the data center from disparate technology islands into integrated, cohesive resource pools, which can be provisioned and re-provisioned as quickly as end users' needs change.

§  We will cover the latest technology and architectural innovations in the Nexus portfolio, including consolidation with Unified Fabric and unified ports, scalability with the FEX-Link architecture and FabricPath, virtual-server awareness and mobility with VN-Link and virtual instance services, and high availability with In-Service Software Upgrades.

§  We will also analyze Cisco’s differentiation in the server market, leading a data center transition by optimizing compute, networking, virtualization and storage access into one cohesive system, with integrated management. We will articulate how this innovation has led to 25 world records for performance, to date.

§  This session will break down the building blocks of our data center architecture across our networking, storage networking, compute and virtualization portfolios, so that no matter where you are with virtualization, consolidation or defining a Cloud strategy, you can leverage these technologies in a practical way to enhance your data center planning, design and operations.

Technologies Transforming the Data Center Abstract – Hopefully why you are here

Page 3: Technologies Transforming the Data Center - Terena

3 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Technologies Transforming the Data Center Agenda

§  The Evolving Data Centre §  Transition Points

§  Evolution of the Fabric §  Scaling the Fabric

§  Connecting the Server - VN-Link

§  Mobility and Storage - Unified I/O

§  Evolution of Compute §  UCS Building Blocks

§  UCS Automation and Management

§  UCS Integrated Solutions

§  Next Steps: The Data Center Focus for 2011

1K Cisco Nexus

x86

Page 4: Technologies Transforming the Data Center - Terena

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023 4

The Evolving Data Centre Architecture Technology Disruptor - Virtualization

[Chart: 0 to 20,000,000 servers, 2005–2014]

Virtualized Non-Virtualized Source: IDC, Nov 2010

Tipping Point

Traditional Virtualized

[Diagram: Traditional – 1 Application … 1 Server; Virtualized – many Apps, or "VMs" … 1 Server, or "Host", via a Hypervisor]

Transition

Page 5: Technologies Transforming the Data Center - Terena

5 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

2010 2000 1990 1980 1970 1960

Cloud

Web

Client Server

Virtualization

Mainframe

Minicomputer

The Evolving Data Centre Architecture Compute & Fabric Technology Transitions

FabricPath, VNLink, Unified Fabric

Scaling the Virtualized Fabric

Internet – Massive Scaled TCP/IP

Networks and Fabrics

TCP/IP & SCSI

Page 6: Technologies Transforming the Data Center - Terena

6 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Data Center Row 1

The Evolving Data Centre Architecture Challenges for the Classical Network Design

Data Center Row 2

§  Hypervisor based server virtualization and the associated capabilities (vMotion, …) are changing multiple aspects of the Data Center design

§  Where is the server now?

§  Where is the access port?

§  Where does the VLAN exist?

§  Any VLAN Anywhere?

§  How large do we need to scale Layer 2?

§  What are the capacity planning requirements for flexible workloads?

§  Where are the policy boundaries with flexible workloads (Security, QoS, WAN acceleration, …)?

Page 7: Technologies Transforming the Data Center - Terena

7 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023 7 7

The Evolving Data Centre Fabric The Pillars of Change

FY08

FY10

FabricPath

OTV

FEX-link

VN-Link

DCB/FCoE

vPC

VDC

Architectural Flexibility / Scale

Workload Mobility

Simplified Management w/ Scale

VM-Aware Networking

Consolidated I/O

Active-Active Uplinks

Virtualizes the Switch

Deployment Flexibility Unified Ports

CONVERGENCE

SCALE

INTELLIGENCE

Page 8: Technologies Transforming the Data Center - Terena

8 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Technologies Transforming the Data Center Agenda

§  The Evolving Data Centre §  Transition Points

§  Evolution of the Fabric §  Scaling the Fabric

§  Connecting the Server - VN-Link

§  Mobility and Storage - Unified I/O

§  Evolution of Compute §  UCS Building Blocks

§  UCS Automation and Management

§  UCS Integrated Solutions

§  Next Steps: The Data Center Focus for 2011

1K Cisco Nexus

x86

Page 9: Technologies Transforming the Data Center - Terena

9 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Servers, FCoE attached Storage

Scaling the Data Centre Fabric Larger Distributed Topologies

FC Attached Storage

Unified Compute Pod

[Diagram: Unified Compute Pod – six blade chassis, each with slots 1–8 and blades 1–8]

§  Server, Storage, Application and Facilities are driving Layer 2 Scalability requirements

§  Server Virtualization and Clustering driving the need for every VLAN everywhere based design

§  Facilities requirements defining the network topology “No watt shall be left behind”

•  VM requirements along with Data Storage growth mandating a need for more efficient and pervasive network based storage

§  Technology changes will impact any cabling plant design

§  Migration to 10GE as the default LoM technology

Virtualized Edge/Access Layer – Nexus LAN and SAN Core: Optimized for Data Centre

Nexus Switching Fabric: Optimizing Compute, Storage, Application Workload with Improved High Availability and Management

Page 10: Technologies Transforming the Data Center - Terena

10 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Cisco FEXlink: Virtualized Access Switch Nexus 2000 Fabric Extender

Nexus Parent Switch (Nexus 5000/5500/7000)

Nexus 2000 Fabric Extender

10GE Fabric Links

§  Architectural Flexibility
 - Rack or Blade servers (with pass-through)
 - 100M to 1GE to 10GE to FCoE

§  Highly Scalable Server Access
 - Up to 512 lossless 10GE/FCoE ports or 1536 100/1000 ports per management domain for Nexus

§  Simplified Operations
 - Single point of mgmt & policy enforcement
 - Plug-and-play mgmt with auto-configuration

§  Increased Business Agility & Resilience
 - Increased resilience for server connectivity with ISSU & vPC
 - VM aware network services
 - Quick expansion of network capacity with ToR Nexus 2000 (see the configuration sketch below)
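A minimal parent-switch configuration sketch for bringing a Nexus 2000 online over FEX-link; the FEX number, interface numbers and VLAN are illustrative, not from the slides:

! Nexus 5000/5500 parent switch
feature fex
fex 100
  description Rack-1-N2232
! fabric links from the parent switch to the FEX
interface ethernet 1/1-2
  switchport mode fex-fabric
  fex associate 100
! host-facing ports then appear on the parent as ethernet 100/1/x
interface ethernet 100/1/1
  switchport access vlan 10

Once associated, the FEX behaves like a remote line card: all configuration and policy stay on the parent switch.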

Page 11: Technologies Transforming the Data Center - Terena

11 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Nexus 2000 Fabric Extender Network Interface Virtualization Architecture (NIV)

Bridges that support Interface Virtualization (IV) ports must support VNTag and the VIC protocol

NIV uplink ports must connect to an NIV capable bridge or an NIV Downlink

Hypervisor

NIV downlink ports may be connected to an NIV uplink port, bridge or NIC

NIV may be cascaded, extending the port extension one additional level.

NIV downlink ports are assigned a virtual identifier (VIF) that corresponds to a virtual interface on the bridge and is used to forward frames through NIVs.

LIF

NIV-capable adapters may also extend the port extension.

VIF

VIF

§  The Network Interface Virtualization (NIV) Architecture provides the ability to extend the bridge (switch) interface to downstream devices

§  NIV associates the Logical Interface (LIF) to a Virtual Interface (VIF)

LIF

Note: Not All Designs Supported in the NIV Architecture Are Currently Implemented

Page 12: Technologies Transforming the Data Center - Terena

12 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Virtualized Access Switch Parent Switch ~= Supervisor

§  Nexus parent switch provides the forwarding functionality for the entire Virtualized Access Switch

§  Upgrading the parent switch upgrades the capabilities of the entire virtualized Access switch

Nexus 5500 Parent Switch 16 FEX, DCB, Ethernet, FC, FCoE, NIV, Layer 2 & Layer 3, FabricPath

Nexus 5000 Parent Switch 12 FEX, DCB, Layer 2 Ethernet, FCoE, FC

Nexus 7000 Parent Switch M1 Line Card - 32 FEX, Layer 2 & 3 Ethernet

Migrating Parent Switches

Nexus 7000 Parent Switch F2 Line Card - DCB, Ethernet, FC, FCoE, NIV, Layer 2 & Layer 3, FabricPath (2HCY11)

Future Evolution of Parent Switches

Page 13: Technologies Transforming the Data Center - Terena

13 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Nexus 2000: Virtualized Access Switch Changing the Device Paradigm §  De-Coupling of the Layer 1 and Layer 2 Topologies

§  Simplified Management Model, plug and play provisioning, centralized configuration

§  Line Card Portability (N2K supported with Multiple Parent Switches – N5K, 6100, N7K)

§  Unified access for any server (100M → 1GE → 10GE → FCoE): Scalable Ethernet, HPC, unified fabric or virtualization deployment

Virtualized Switch

. . .

Page 14: Technologies Transforming the Data Center - Terena

14 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

SAN LAN

§  Architectural view requires some form of a re-usable building block in the DC
§  Network teams view this as a layer 2 design issue; the server and application team views it as compute workload

Nexus 2000: Virtualized Access Switch The Compute Pod: Domain of Workload

Page 15: Technologies Transforming the Data Center - Terena

15 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Virtualized Access Switch Supporting a Compute Domain

L3 L2

Virtualized Access Switch Mapping the Compute Pod to the Physical Network

§  De-Coupling of the Layer 1 and Layer 2 Topologies allows for more efficient alignment of compute resources to network devices

§  Define the Layer 2 boundary at the switch boundary if the compute workload maps to the scale of the virtualized switch (up to 2 x 1536 ports today)

Virtualized Access Switch

. . .

Page 16: Technologies Transforming the Data Center - Terena

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2399 16

Data Center Switching Architecture vPC – MultiChassisEtherChannel

§  vPC is a Port-channeling concept extending link aggregation to two separate physical switches

§  Allows the creation of resilient L2 topologies based on Link Aggregation
 - Enables loop-free Layer 2 topologies with physical network redundancy

§  Provides increased bandwidth
 - All links are actively forwarding

§  vPC maintains independent control planes (a configuration sketch follows below)

Virtual Port Channel

[Diagram: non-vPC vs. vPC – physical topology vs. logical topology; increased BW with vPC]
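A minimal vPC configuration sketch for one of the two peer switches; the domain id, keepalive addresses and port-channel numbers are illustrative, not from the slides:

feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
! inter-switch peer-link carrying the vPC VLANs
interface port-channel 1
  switchport mode trunk
  vpc peer-link
! downstream port-channel toward an access switch or server
interface port-channel 20
  switchport mode trunk
  vpc 20

The second peer carries a mirror-image configuration; the downstream device simply sees one LACP port-channel, so both uplinks forward with no blocked STP ports.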

Page 17: Technologies Transforming the Data Center - Terena

17 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Fabric: Scaling the New Pod – vPC: Scaling with Reduced Layer 2

§  Scaling of both VM and non-VM based workloads is driving increased density of compute and growth of layer 2 fabrics
§  Nexus designs currently leverage vPC to increase capacity and scale of layer 2 fabrics
 § Removing physical loops from the layer 2 topology
 § Reducing the STP state on the access and aggregation layer
§  Scaling the aggregate bandwidth
 § Nexus 5000/5500, when combined with F1 line cards on Nexus 7000, can support port channels of up to 32 x 10G interfaces = 320 Gbps between access and aggregation
 § Nexus 5500/2000 virtualized access switch can support MCEC-based port channels of up to 16 x 10G links = 160 Gbps between server and virtualized access switch

[Diagram: vPC topologies – up to 32 x 10G links = 320 Gbps between access and aggregation, up to 16 x 10G links between server and virtualized access switch; VMs #2–#4]

Fabric Extension: STP Free

Physical and Control Plane Redundancy

Page 18: Technologies Transforming the Data Center - Terena

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2399 18

Scaling the Fabric (Pod) FabricPath

Scenario: an application grows beyond current compute capacity and allocated rack space, causing network disruptions and physical changes

VLAN 1, 2, 3 VLAN 1 Rack 1

VLAN 2 Rack 2

VLAN 3 Rack 3

§ VLAN Extensibility – any VLAN anywhere!

§  Location independence for workloads

§  Consistent, predictable bandwidth and latency with FabricPath.

§ Adding additional server capacity while maintaining layer 2 adjacencies in same VLAN

§ Disruptive - Requires physical move to free adjacent rack space

Page 19: Technologies Transforming the Data Center - Terena

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2399 19

Classical Ethernet Mac Address Table

FabricPath Routing Table

FabricPath is Routing
§  Forwarding decision based on the 'FabricPath Routing Table'
§  FabricPath header is imposed by the ingress switch
§  Only switch addresses are used to make "routing" decisions
§  No MAC learning required inside the L2 Fabric

[Diagram: host A behind edge switch S11, host B behind edge switch S42, spine switches S1–S4 in the FabricPath core]

Classical Ethernet MAC Address Table: MAC A → IF 1/1; MAC B → IF S42
FabricPath Routing Table: Switch S42 → IF L1, L2, L3, L4

A frame from A to B is encapsulated S11 → S42: a single MAC address lookup at the edge, then FabricPath routing on switch addresses (a configuration sketch follows below).
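A minimal FabricPath configuration sketch for an edge switch; the switch-id, VLAN and interface numbers are illustrative, not from the slides:

install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 11
! VLANs carried across the FabricPath core
vlan 100
  mode fabricpath
! core-facing links run FabricPath instead of classic trunking and STP
interface ethernet 1/1-4
  switchport mode fabricpath

Host-facing edge ports stay classical Ethernet; only the core links and the extended VLANs are switched to FabricPath mode.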

Page 20: Technologies Transforming the Data Center - Terena

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2399 20

FabricPath is Scalable
§  Safe data plane, conversational learning
§  TTL and RPF checks in the data plane protect against loops
§  L2 can be extended in the data center (while STP is segmented)
§  Conversational learning allows scaling MAC address tables at the edge

[Diagram: A behind S11 sends to B behind S42 across the FabricPath core (no MAC address learning in the Fabric). Classical Ethernet MAC address tables: at S11, A → 1/1 and B → S42; at S42, A → S11 and B → 1/1; an uninvolved edge switch such as S22 learns nothing for this conversation]

Page 21: Technologies Transforming the Data Center - Terena

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2399 21

Scaling the Fabric (Pod)

TRILL Status

Base protocol (Inv → Dev → Appr → Pub): completed in March 2010, awaiting publication. This is the part of the protocol affecting the hardware.

IS-IS extensions (Inv → Dev → Appr → Pub): stable for a long time, now entering the approval phase. This is control plane (i.e., software) only.

Technically Stable

Page 22: Technologies Transforming the Data Center - Terena

© 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2399 22

Technologies Transforming the Data Center Agenda

§  The Evolving Data Centre §  Transition Points

§  Evolution of the Fabric §  Scaling the Fabric

§  Connecting the Server - VN-Link

§  Mobility and Storage - Unified I/O

§  Evolution of Compute §  UCS Building Blocks

§  UCS Automation and Management

§  UCS Integrated Solutions

§  Next Steps: The Data Center Focus for 2011

1K Cisco Nexus

x86

Page 23: Technologies Transforming the Data Center - Terena

23 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Data Center Architecture Evolution VN-Link – Virtual Machine Aware Fabric

§ VN-Link extends the network to the virtualization layer

§ VN-Link leverages innovation within networking equipment

Virtual Ethernet Interface Port Profiles Virtual Interface mobility Consistent operations model

§ Network and Compute Control Planes are actively synchronized

vETH vETH

Page 24: Technologies Transforming the Data Center - Terena

24 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Data Center Architecture Evolution VN-Link – Virtual Machine Aware Fabric

SWITCH VM

VM

SERVER

SWITCH VM

VM

SERVER NETWORK DEVICE

Network Interface Virtualization (VNTag technology, IEEE 802.1Qbh pre-standard)

Nexus 1000V - IEEE 802.1Q standard-based

vETH vETH

Page 25: Technologies Transforming the Data Center - Terena

25 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Data Center Architecture Evolution VN-Link Today – Nexus 1000V

Nexus 1000V VSM

vCenter

Virtual Supervisor Module (VSM)
§  Virtual or physical appliance running Cisco NX-OS (supports HA)
§  Performs management, monitoring, & configuration
§  Tight integration with VMware vCenter

Virtual Ethernet Module (VEM)
§  Enables advanced networking capability on the hypervisor
§  Provides each VM with a dedicated "switch port"
§  Collection of VEMs = 1 vNetwork Distributed Switch

Cisco Nexus 1000V Installation
§  ESX & ESXi
§  VUM & Manual Installation
§  VEM is installed/upgraded like an ESX patch

vSphere

Nexus 1000V VEM

vSphere vSphere

Nexus 1000V VEM

Nexus 1000V VEM

VM VM VM VM VM VM VM VM VM VM VM VM

Page 26: Technologies Transforming the Data Center - Terena

26 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

vCenter

n1000v(config)# port-profile WebServers
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 100
n1000v(config-port-prof)# no shut
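For context, a fuller Nexus 1000V port-profile typically also publishes itself to vCenter as a port-group and is explicitly enabled; a sketch extending the example above (the profile name is the same illustrative one):

n1000v(config)# port-profile type vethernet WebServers
n1000v(config-port-prof)# vmware port-group
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 100
n1000v(config-port-prof)# no shutdown
n1000v(config-port-prof)# state enabled

Once enabled, the profile appears in vCenter as a port-group that server administrators assign to VM vNICs, while the network team keeps ownership of the policy.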

VSM

Data Center Architecture Evolution VN-Link – Virtual Machine Aware Fabric

VM #4

VM #3

VM #2

§  Coordinated Management State between Network and Compute

§  Coordinated Control Plane state between Network and Compute

§  Transition to real time coordination between fabric and compute

ESX & VEM

Page 27: Technologies Transforming the Data Center - Terena

27 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

What is VN-Link Deployment 1 - Nexus 1000V + Generic Adapter

§  Nexus 1000V: 802.1Q standards-based bridge
 - Performs packet forwarding and applies advanced networking features
 - Policy-based port profiles apply port security, VLANs, ACLs, and QoS policy maps for all system traffic, including VM traffic, Console & vMotion/VMkernel

§  Generic adapter on generic x86 server

§  802.1Q-based upstream switch
 - An MCEC-capable upstream switch provides increased granularity of traffic load sharing, but is not required

Hypervisor

Nexus 1000V VEM

VM VM VM VM

Generic Adapter

802.1Q Switch

VETH

VNIC

Page 28: Technologies Transforming the Data Center - Terena

28 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

What is VN-Link Deployment 2 - Nexus 1000V + NIV

§  Nexus 1000V
 - Performs packet forwarding and applies advanced networking features
 - Uses port security, VLANs, ACLs, and QoS policy maps for VM traffic, Console & vMotion/VMkernel

§  NIV (Cisco VIC)
 - Deployed in Cisco UCS; scheduled for 1HCY11 on Nexus 5500
 - Under standardization – 802.1Qbh

§  Upstream switch is NIV capable

§  NIV interface is a direct replacement for the physically independent NIC

NIV

Hypervisor

Nexus 1000V VEM

VM VM VM VM

UCS 6100 or Nexus 5500 (1HCY11)

VETH

VNIC

Page 29: Technologies Transforming the Data Center - Terena

29 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

What is VN-Link Deployment 3 – Direct NIV

§  Hypervisor: pass-thru within the hypervisor

§  NIV (Cisco VIC): set of individual I/O queues for each vNIC

§  UCS 6100 (current) and Nexus 5500 (CY11) perform packet forwarding and apply networking features

§  Direct mapping of the VM vNIC to the extended NIV port on the parent device

Cisco VIC

Hypervisor

VM VM VM VM

UCS 6100 or Nexus 5500 (1HCY11)

VNIC

VETH

Page 30: Technologies Transforming the Data Center - Terena

30 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

NIV & VN-Link Summary

[Diagram: VN-Link in SW, VN-Link in HW, and VN-Link in HW with VMDirectPath, positioned against virtualized and non-virtualized applications]

Page 31: Technologies Transforming the Data Center - Terena

31 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

NIV & VN-Link Virtual Networking Standards

[Diagram: virtual networking standards – 802.1Q Virtual Embedded Bridge, 802.1Qbg Reflective Relay, 802.1Qbg Multichannel, 802.1Qbh Port Extension – grouped by tagless vs. with-tag operation and offload to the upstream switch]

Page 32: Technologies Transforming the Data Center - Terena

32 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

NIV & VN-Link Virtual Networking Standards

[Diagram: the same standards – 802.1Q Virtual Embedded Bridge, 802.1Qbg Reflective Relay, 802.1Qbg Multichannel, 802.1Qbh Port Extension – characterized by what they introduce: a hypervisor-resident bridge, new behavior of an existing bridge, a new bridge, or a new device]

Page 33: Technologies Transforming the Data Center - Terena

33 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

NIV & VN-Link

Virtual Bridging Standardization Status

Technically Stable

Edge Virtual Bridging (Qbg): Inv → Dev → Appr → Pub
Bridge Port Extension (Qbh): Inv → Dev → Appr → Pub

Page 34: Technologies Transforming the Data Center - Terena

34 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Technologies Transforming the Data Center Agenda §  The Evolving Data Centre §  Transition Points

§  Evolution of the Fabric §  Scaling the Fabric

§  Connecting the Server - VN-Link

§  Mobility and Storage - Unified I/O

§  Evolution of Compute §  UCS Building Blocks

§  UCS Automation and Management

§  UCS Integrated Solutions

§  Next Steps: The Data Center Focus for 2011

1K Cisco Nexus

x86

Page 35: Technologies Transforming the Data Center - Terena

35 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

The Goals of Unified Fabric What we already know

Reduce overall Data Center power consumption by up to 8%. Extend the lifecycle of current data center.

Wire hosts once to connect to any network - SAN, LAN, HPC. Faster rollout of new apps and services.

Every host will be able to mount any storage target. Increase SAN attach rate. Drive storage consolidation and improve utilization.

Rack, Row, and Cross-Data Center VM portability become possible.

Page 36: Technologies Transforming the Data Center - Terena

36 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

FCoE Frame Format (Bit 0 … Bit 31, Byte 0 … Byte 2197)

Ethernet Header | FCoE Header | FC Header | FC Payload | CRC | EOF | FCS

Fields: Destination MAC Address; Source MAC Address; (IEEE 802.1Q Tag); ET = FCoE; Ver; Reserved; SOF; Encapsulated FC Frame (with CRC); EOF; Reserved; FCS

§  Fibre Channel over Ethernet provides a high capacity and lower cost transport option for block based storage

§  Two protocols defined in the standard

§  FCoE – Data Plane Protocol

§  FIP – Control Plane Protocol

§  FCoE is a standard - June 3rd 2009, the FC-BB-5 working group of T11 completed its work and unanimously approved a final standard for FCoE

What is FCoE? Fibre Channel over Ethernet (FCoE)

Page 37: Technologies Transforming the Data Center - Terena

37 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

FCoE §  Data Plane

§  It is used to carry most of the FC frames and all the SCSI traffic

§  Uses Fabric Assigned MAC address (dynamic) : FPMA

§  IEEE-assigned Ethertype for FCoE traffic is 0x8906

FIP (FCoE Initialization Protocol)

§  It is the control plane protocol

§  It is used to discover the FC entities connected to an Ethernet cloud

§  It is also used to login to and logout from the FC fabric

§  Uses unique BIA on CNA for MAC

§  IEEE-assigned Ethertype for FIP traffic is 0x8914

http://www.cisco.biz/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-560403.html

FC-BB-5 defines two protocols required for an FCoE enabled Fabric

What is FCoE? Protocol Organization
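A minimal Nexus 5000/5500 sketch of what enabling FCoE for one server-facing port looks like in practice; the VSAN, VLAN and interface numbers are illustrative, not from the slides:

feature fcoe
! map an FCoE VLAN to the VSAN it carries
vlan 100
  fcoe vsan 10
vsan database
  vsan 10
! virtual Fibre Channel interface bound to the converged Ethernet port
interface vfc 11
  bind interface ethernet 1/11
  no shutdown
vsan database
  vsan 10 interface vfc 11
! the Ethernet port trunks the FCoE VLAN alongside the data VLANs
interface ethernet 1/11
  switchport mode trunk
  switchport trunk allowed vlan 1,100

FIP then handles fabric discovery and FLOGI over VLAN 100, while the vfc interface behaves like a regular FC port for zoning and the name server.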

Page 38: Technologies Transforming the Data Center - Terena

38 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

[Diagram: block and file I/O stacks compared across five computer systems – iSCSI appliance, iSCSI gateway, NAS appliance, NAS gateway, and FCoE SAN. Each stack shows the Application, File System, Volume Manager, SCSI device driver or NFS/CIFS I/O redirector, iSCSI/TCP/IP stack or FCoE driver, and NIC or FC HBA, down through the storage transport (IP, SAN) to the storage media]

Why FCoE? All Data accessed over a common fabric

Page 39: Technologies Transforming the Data Center - Terena

39 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Ethernet Economic Model §  Embedded on Motherboard

§  Integrated into O/S

§  Many Suppliers

§  Mainstream Technology

§  Widely Understood

§  Interoperability by Design

FC Economic Model §  Always a Stand-Up Card

§  Specialized Drivers

§  Few Suppliers

§  Specialized Technology

§  Special Expertise

§  Interoperability by Test

Ethernet Model has Proven Benefits

Why FCoE? It’s Ethernet!!

Page 40: Technologies Transforming the Data Center - Terena

40 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Ethernet Enhancements IEEE DCB

Standard / Feature – Status of the Standard
- IEEE 802.1Qbb Priority-based Flow Control (PFC) – Completed (awaiting publication)
- IEEE 802.3bd Frame Format for PFC – Completed (awaiting publication)
- IEEE 802.1Qaz Enhanced Transmission Selection (ETS) and Data Center Bridging eXchange (DCBX) – Currently in sponsor ballot, the last stage of approval; sponsor ballot being re-circulated for final closure ("everything at this point is all editorial")
- IEEE 802.1Qau Congestion Notification – Complete, published March 2010
- IEEE 802.1Qbh Port Extender – In its first task group ballot

§  Developed by IEEE 802.1 Data Center Bridging Task Group (DCB)

§  All technically stable

§  Final standards expected by late 2010 to early 2011

CEE (Converged Enhanced Ethernet) is an informal group of companies that submitted initial inputs to the DCB WGs.

Page 41: Technologies Transforming the Data Center - Terena

41 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Ethernet Enhancements DCB “Virtual Links”

VL1 VL2 VL3

LAN/IP Gateway

VL1 – LAN Service – LAN/IP
VL2 – No-Drop Service – Storage

Ability to support different forwarding behaviours, e.g. QoS, MTU, … queues within the "lanes"

Campus Core/ Internet

Storage Area Network

Page 42: Technologies Transforming the Data Center - Terena

42 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Unified Fabric Design Designing for both Ethernet ‘and’ Fibre Channel

[Diagram: an Ethernet/IP network of switches with open design questions (?) at many points, contrasted with a Fibre Channel fabric of switches whose services (DNS, FSPF, Zoning, RSCN) are embedded in every switch; initiators I0–I5 and targets T0–T2 attach to the fabric]

§  Ethernet/IP: bandwidth and services are separate layers, offered by separate entities

§  Fibre Channel: bandwidth and services are collapsed, offered by the fabric

§  Unified Fabric design has to incorporate the super-set of requirements
 - QoS – lossless 'and' lossful fabrics
 - High Availability – highly redundant network topology 'and' redundant fabrics
 - Bandwidth – FC fan-in and fan-out ratios 'and' Ethernet/IP oversubscription
 - Security – FC controls (zoning, port security, …) 'and' IP controls (CISF, ACL, …)
 - Manageability and visibility – hop-by-hop visibility for FC 'and' Ethernet/IP

Page 43: Technologies Transforming the Data Center - Terena

43 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Nexus 2232 10GE FEX

Unified Fabric Design Today What is really being deployed

Nexus 5000 as FCF or as NPV device

Nexus 5000/5500

Generation 2 CNAs

§  The first phase of the Unified Fabric evolution design focused on the fabric edge

§  Unified the LAN Access and the SAN Edge by using FCoE

§  Consolidated Adapters, Cabling and Switching at the first hop in the fabrics

§  The Unified Edge supports multiple LAN and SAN topology options

§  Virtualized Data Center LAN designs

§  Fibre Channel edge with direct attached initiators and targets

§  Fibre Channel edge-core and edge-core-edge designs

§  Fibre Channel NPV edge designs

Fabric A Fabric B

FC

FCoE FC

Page 44: Technologies Transforming the Data Center - Terena

44 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Nexus 4000, 5000, 5500, 7000 & MDS FCoE Support

§ Nexus 7000 § Supports FCoE, iSCSI and NAS § Loss-Less Ethernet: DCBX, PFC, ETS

§  MDS 9500 §  8 FCoE ports at 10GE full rate in MDS 9506, 9509, 9513 §  80-Gbps front panel bandwidth

§  Nexus 5000 & 5500 §  Shipping for 3 years

§ FCoE Multi-hop supported § Unified Ports (Nexus 5500)
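On the Nexus 5500, any port can be provisioned as Ethernet or native Fibre Channel. A minimal unified-port sketch (the slot and port range are illustrative; the change takes effect only after the module is reloaded):

slot 1
  port 41-48 type fc

After the reload, the selected ports come up as native FC interfaces (fc1/41 …) while the remaining ports stay 10GE/FCoE.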

Page 45: Technologies Transforming the Data Center - Terena

45 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Servers, FCoE attached Storage

Unified Fabric Design – CY 2011 Larger Fabric Multi-Hop Topologies

§  Multi-hop edge/core/edge topology
 •  Supported by Nexus 5000 & 5500 in Q4CY10

§  Core SAN switches supporting FCoE
 •  N7K with F1 line cards
 •  MDS with FCoE line cards

§  Edge FC switches supporting either
 •  N5K – FCoE-NPV with FCoE uplinks to the FCoE-enabled core (VNP to VF)
 •  N5K or N7K – FC switch with FCoE ISL uplinks (VE to VE, see the sketch after this slide)

§  Fully compatible with the virtualized access switch and will co-exist with FabricPath and/or Layer 3 designs

N7K or MDS FCoE enabled Fabric Switches

FC Attached Storage

Servers

VE

Edge FCF Switch Mode

VE

VF

VNP VE

VE
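A sketch of what an FCoE ISL (VE_Port to VE_Port) between two FCoE switches can look like on Nexus gear; the interface numbers are illustrative and exact support depends on platform and NX-OS release:

interface vfc 100
  switchport mode E
  bind interface ethernet 1/1
  no shutdown
interface ethernet 1/1
  switchport mode trunk
  switchport trunk allowed vlan 1,100

The same vfc construct in the default F mode faces hosts and targets, while an FCoE-NPV edge switch instead brings up a VNP_Port toward the core.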

Page 46: Technologies Transforming the Data Center - Terena

46 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Backbone

Servers, FCoE attached Storage

Cisco Data Center Architecture Fabric in 2011

FC Attached Storage

§  Layer 2 Scale and Stability
 •  vPC will continue to be the predominant design
 •  Customers are rapidly starting to explore FabricPath (TRILL)

§  FCoE is under serious consideration at many customers (reaching early phases of general adoption)

§  VM and Compute Pod driving the customer business decisions

§  Data Center architecture focusing on the need to maximize the flexibility of the architecture
 •  Common edge/core/edge for both NAS and FC/FCoE storage
 •  vPC/L2MP/OTV provides potential for very large layer 2 capacity designs
 •  Virtual Machine aware fabric provides a more cost-effective and manageable architecture

Servers Leveraging Block and File Based Storage

[Diagram: six blade chassis, each with slots 1–8 and blades 1–8]

Page 47: Technologies Transforming the Data Center - Terena

47 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Technologies Transforming the Data Center Agenda §  The Evolving Data Centre §  Transition Points

§  Evolution of the Fabric §  Scaling the Fabric §  Connecting the Server - VN-Link §  Mobility and Storage - Unified I/O

§  Evolution of Compute §  UCS Building Blocks

§  UCS Automation and Management

§  UCS Integrated Solutions

§  Next Steps: The Data Center Focus for 2011

1K Cisco Nexus

x86

Page 48: Technologies Transforming the Data Center - Terena

48 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Unified Computing Building Blocks Unified Fabric Introduced with the Cisco Nexus Series

Physical
§  Wire-once infrastructure (Nexus 5500)
§  Fewer switches, adapters, cables

Virtual
§  VN-Link (Nexus 1000V)
§  Manage virtual the same as physical

Scale
§  Fabric Extender (Nexus 2000)
§  Scale without increasing points of management

Virtual

Physical

Ethernet Fibre Channel

Page 49: Technologies Transforming the Data Center - Terena

49 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Building Blocks of Cisco UCS An Integrated System Optimizes Data Center Efficiency

UCS Manager Service Profiles Virtualization integration

UCS Fabric Interconnect 10GE unified fabric switch One per 320 blades

UCS Fabric Extender Remote line card One per chassis

UCS Blade Server Chassis Flexible bay configurations

UCS Blade and Rack Servers x86 industry standard Patented extended memory

UCS I/O Adapters Choice of multiple adapters

Page 50: Technologies Transforming the Data Center - Terena

50 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

From ad hoc and inconsistent…

…to structured, but siloed, complicated and costly…

…to simple, optimized and automated

What does your Data Center organization look like?

Cisco Unified Computing System (UCS) A New Approach to Server Infrastructure

Page 51: Technologies Transforming the Data Center - Terena

51 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Cisco Unified Computing System A Differentiated Approach to Compute

STATUS QUO

Vendor protecting its legacy more than customer needs

Systems complexity

Vendor chooses eco-system partner

CISCO DIFFERENTIATED VALUE…

Pre-integration reduces need for extensive and expensive services

Architected to meet today’s and tomorrow’s data center needs

Systems simplicity through innovation

Customer choice: Open eco-system; open standards

Complex service intensive approach

Page 52: Technologies Transforming the Data Center - Terena

52 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Software/Platform/Infrastructure as a Service

§  Software as a Service •  Applications already hosted

•  Users/Groups/Orgs provisioned

§  Platform as a Service •  Operating System and below already hosted

•  Golden Images in data stores ready for clone and customize

§  Infrastructure as a Service •  LAN, SAN, Storage, Security, Access, etc. pre-provisioned

•  Customer can lay down an OS on top

§  Cloud models host multiples of these •  Consumer operates within that domain

§  More than CoLo cage with “Ping, Power, & Pipe”

SaaS

PaaS IaaS

Page 53: Technologies Transforming the Data Center - Terena

53 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Elastic Compute and UCS Virtualizing More than the Server

§ One way is to scale out a compute platform on cheap commodity servers and implement services on this platform.

• The DC network provides L2/L3 connectivity between compute nodes and end-users. • The value is in the APIs the end-user has access to within the compute layer (not network). • Built for application developers and Internet companies using web services based applications.

§ The other is looking ahead and implementing a Next-Gen computing platform

• Support Enterprise Class features. • Treating network and compute resources as equally important to scale, secure and provide differentiated services. • Built for Enterprises looking to adapt to Cloud infrastructure automation with option of sharing/bursting IT services to a Cloud Compute Platform.


Page 54: Technologies Transforming the Data Center - Terena

54 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Elastic Compute and UCS

§  Servers •  Cost effective x86 multi-core CPUs, >96G Memory •  UCS Rack Optimized or UCS Blades •  Programmable server instances

§  Network •  Virtualized access to servers •  3-Tiered Ethernet Fabric •  IP and VPN connectivity

§  Storage Virtualized access to servers •  SAS, FC, SATA based Arrays •  NAS attached •  SAN attached •  Drives Distributed across servers

§  Virtualization •  Hypervisor •  Clustering •  Network access within server

§  Software/API •  Provisioning and Automation •  Provider interface •  End-user self-service interface •  Policy Management •  Usage based billing

Internet VPN

Virtualizing More than the Server

Software Capabilities / Application Programming Interface


Page 55: Technologies Transforming the Data Center - Terena

55 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

What is “stateless” computing architecture?

§ Stateless client computing is where every compute node has no inherent state pertaining to the services it may host.

§  In this respect, a compute node is just an execution engine for any application (CPU, memory, and disk – flash or hard drive).

§ The core concept of a stateless computing environment is to separate state of a server that is built to host an application, from the hardware it can reside on.

§ The servers can easily then be deployed, cloned, grown, shrunk, de-activated, archived, re-activated, etc.


Page 56: Technologies Transforming the Data Center - Terena

56 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Technologies Transforming the Data Center Agenda §  The Evolving Data Centre §  Transition Points

§  Evolution of the Fabric §  Scaling the Fabric

§  Connecting the Server - VN-Link

§  Mobility and Storage - Unified I/O

§  Evolution of Compute §  UCS Building Blocks

§  UCS Automation and Management

§  UCS Integrated Solutions

§  Next Steps: The Data Center Focus for 2011

1K Cisco Nexus

x86

Page 57: Technologies Transforming the Data Center - Terena

57 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Traditional Server Blade System Today = Too Complex, Too Much Management

Mgmt Server

[Diagram: many separate management points (M) spread across blades, chassis and embedded switches]

Traditional architectures have many points of management:

•  Each blade

•  Each chassis

•  Each LAN switch in chassis

•  Each SAN switch in chassis

•  LAN and SAN switches at TOR/EOR

= No single management across technologies

= No system-wide audit, fault & diagnostic, etc.

M = Examples of Management:

• IP Address • RBAC • Firmware

• Firmware Settings • BIOS • BIOS Settings

Page 58: Technologies Transforming the Data Center - Terena

58 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

UCS Technologies for Elasticity An Embedded Device Manager for UCS

UCS Manager

§  Unifies all of UCS HW components into a single, cohesive system

• Adapters • Blades • Chassis • Fabric extenders • Fabric interconnects

•  No external server or management software •  No extra software licenses •  No extra management or maintenance device

= Reduced Complexity = Simplified Management = Lower Management and Maintenance Cost

M M

Page 59: Technologies Transforming the Data Center - Terena

59 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

UCS Technologies for Elasticity An Embedded Device Manager for UCS

UCS Manager

§  Scale out without increased complexity
 - Automatic discovery and inventory
 - Policy-driven management
 - Scale out on demand
 - Centralized firmware management

= No additional management = Increase capacity when needed

M M

Page 60: Technologies Transforming the Data Center - Terena

60 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

UCS Technologies for Elasticity UCS Manager An Embedded Device Manager

UCS Manager

§  A Single Manager for: • Discovery • Inventory • Auditing • Monitoring • Diagnostics • Fault Management • Statistics Collection • Configuration • Firmware • RBAC

GUI

CLI

•  A true single point of management •  Shared across network, storage, & servers

= Simplified troubleshooting & diagnostics = Unifies “language” & interaction across groups = No changes in responsibility within groups

Page 61: Technologies Transforming the Data Center - Terena

61 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

UCS Technologies for Elasticity UCS Manager Integration and Development

UCS Manager

§  One XML API Interface • Single Data Model •  Transactional • Order agnostic • UCS Platform Emulator tool

GUI

CLI

= Reduced time for development work = Less code to maintain and support = Reduced number of development systems

Programmatic Interfaces

§  One API to learn §  No need to develop “undo” sequence §  No need for developer to understand order of events §  Easy to automate and orchestrate

Page 62: Technologies Transforming the Data Center - Terena

62 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

UCS Technologies for Elasticity

§  Profile Name = vmhost-cluster1-1
§  UUID = 12345678-ABCD-9876-5432-ABCDEF123456
§  Description = ESX4u1 – 1st Host in Cluster 1
§  LAN Config
 vNIC0: Switch = Switch A; Pin Group = SwitchA-pingroupA; VLAN Trunking = Enabled; Native VLAN = VLAN 100; MAC Address = 00:25:B5:00:01:01; Hardware Failover Enabled = No; QoS policy = VMware-QoS-policy
 vNIC1: Switch = Switch B; Pin Group = SwitchB-pingroupA; VLAN Trunking = Enabled; Native VLAN = VLAN 100; MAC Address = 00:25:B5:00:01:02; Hardware Failover Enabled = No; QoS policy = VMware-QoS-policy
§  Local Storage Profile = no-local-storage
§  Scrub Policy = Scrub local disks only
§  SAN Config: Node ID = 20:00:00:25:B5:2b:3c:01:0f
 vHBA0: Switch = Switch A; VSAN = VSAN1-FabricA; WWPN = 20:00:00:25:B5:2b:3c:01:01
 vHBA1: Switch = Switch B; VSAN = VSAN1-FabricB; WWPN = 20:00:00:25:B5:2b:3c:01:02
§  Boot Policy = boot-from-ProdVMax; Boot order =
 1. Virtual CD-ROM
 2. vHBA0, 50:00:16:aa:bb:cc:0a:01, LUN 00, primary
 3. vHBA1, 50:00:16:aa:bb:cc:0b:01, LUN 00, secondary
 4. vNIC0
§  Host Firmware Policy = VIC-EMC-vSphere4
§  Management Firmware Policy = 1-3-mgmt-fw
§  IPMI Profile = standard-IPMI
§  Serial-over-LAN policy = VMware-SOL
§  Monitoring Threshold Policy = VMware-Thresholds

Service Profiles: A UCS Server (not a blade, but an XML object)

Page 63: Technologies Transforming the Data Center - Terena

63 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Network has Complete Visibility to Servers

SAN

LAN Chassis-1/Blade-5

Chassis-9/Blade-2

Server Name: SP-A UUID: 56 4d cd 3f 59 5b 61… MAC : 08:00:69:02:01:FC WWN: 5080020000075740 Boot Order: SAN, LAN

§ Service Profiles capture more than MAC & WWN: MAC, WWN, boot order, firmware, network & storage policy

§ Stateless compute where network & storage see all movement: better diagnosability and QoS from network to blade; policy follows

Service Profiles deliver Service Agility regardless of Physical or Virtual Machine

Page 64: Technologies Transforming the Data Center - Terena

64 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Server Availability

[Diagram: racks of blades grouped by workload (Web, Oracle, VMware), shown before and after applying server profiles]

§ Today’s Deployment: • Provisioned for peak capacity • Spare node per workload

§ With Server Profiles: •  Resources provisioned as needed

•  Same availability with fewer spares

Burst capacity HA spare

Page 65: Technologies Transforming the Data Center - Terena

65 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

UCS Technologies for Elasticity

§  Supports fast turnaround business requirements

§  No need to maintain cache of servers for each service type

§  Can assign business priority to each server § Temporarily disassociate lower priority services when compute needed

§  Can assign project length to each server or group of servers

§ Can disassociate after sign-up period (and appropriate governance) § Make reclaimed compute available for other projects § Preservation of boot/data images (disk/LUN) needed if project restoral needed later

§  Boundary for services is not a chassis or rack

§ Server Pools and Qualifications allow more intelligent infrastructure

Infrastructure Automation from Profiles or Templates

Oracle-RAC-Node UUID, MAC,WWN Boot info firmware LAN, SAN Config Firmware…

ESX-DRS-Node UUID, MAC,WWN Boot info firmware LAN, SAN Config Firmware…

Exchange-Node UUID, MAC,WWN Boot info firmware LAN, SAN Config Firmware…

4 Needed 16 Needed 2 Needed


Page 66: Technologies Transforming the Data Center - Terena

66 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Access Layer Shifts Stateless NICs and HBAs with Unified I/O

§  Mapping of Ethernet and FC Wires over Ethernet

§  Service Level enforcement

§  Multiple data types (jumbo, lossless, FC)

§  Individual link-states

§  Fewer Cables Multiple Ethernet traffic co-exist on same cable

§  Fewer adapters needed

§  Overall less power

§  Interoperates with existing Models

Management remains constant for system admins and LAN/SAN admins

§  Possible to take these links further upstream for aggregation

Individual Ethernets

DCB Ethernet

Individual Storage (iSCSI, NFS, FC)

Page 67: Technologies Transforming the Data Center - Terena

67 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

UCS Technologies Impacting the DC Unified I/O and QoS within UCS

•  Enables lossless fabrics for each class of service
•  PAUSE sent per virtual lane when the buffer limit is exceeded

[Diagram: eight virtual lanes between transmit queues and receive buffers on an Ethernet link; a PAUSE (STOP) is issued on one lane only]

•  Enables intelligent sharing of bandwidth between traffic classes, with control of bandwidth per class
•  802.1Qaz Enhanced Transmission Selection

[Chart: Offered Traffic vs. Realized Traffic Utilization on a 10GE link at t1/t2/t3 – HPC, Storage and LAN traffic classes share the link, with LAN traffic expanding into bandwidth the other classes are not using]

Among the tools used are aggregate shapers at the vNICs (VIC), ETS, Policers at the switch for each vNIC.

Priority Flow Control CoS Bandwidth Mgmt
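A minimal Nexus 5000-style sketch of the two mechanisms together, PFC no-drop for the FCoE class and ETS bandwidth sharing; the percentages and policy names are illustrative, and class-fcoe is the predefined FCoE class:

! no-drop (PFC) behaviour and FCoE MTU for the storage class
policy-map type network-qos nq-unified
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
! ETS: guarantee bandwidth per class; unused share can be borrowed
policy-map type queuing q-unified
  class type queuing class-fcoe
    bandwidth percent 50
  class type queuing class-default
    bandwidth percent 50
system qos
  service-policy type network-qos nq-unified
  service-policy type queuing output q-unified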

Page 68: Technologies Transforming the Data Center - Terena

68 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

UCS Technologies Impacting the DC Unified I/O within UCS

§  Run a Virtual Interface Card in the PCI slots of a server

§  We are not imposing “State” on the server

§  Consumes far fewer ports on the DC switching infrastructure (and cables)

§  NOT Single-Root I/O Virtualization

§  PCI Bus Structure Virtualization

•  Virtualize Adapters and PCI-PCI Bridges

•  No local traffic switching to manage
•  Per-virtual-interface link-state
•  Operating system support wherever the OS supports PCI

•  Rate-Shaping, QoS Marking, etc. per Virtual Interface

•  UCS sends only VLANs and VSANs to Virtual Interface as Needed – to ease L2 scale in designs

§  Central Administration can group definitions and updates to configuration

[Diagram: Cisco VIC in a PCIe x16 slot with 10GbE/FCoE uplinks and user-definable vNICs/vHBAs (Eth and FC interfaces 0 … 56)]

Page 69: Technologies Transforming the Data Center - Terena

69 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Interface Virtualization VM-FEX

OS See’s Administratively definable (MAC, QoS, VLAN, VSAN, WWPN, etc.) Ethernet and Fiber Channel interfaces which connects to a Cisco virtual interface (VIF)

UCS Technologies for Elasticity

Hypervisor see’s unconfigured (no MAC, VLAN, etc.) Ethernet interfaces which are configured by the external VMM and connects to a Cisco virtual interface (VIF)


Page 70: Technologies Transforming the Data Center - Terena

70 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Extending FEX architecture to Virtual Machines – Cascading Port Extender

[Diagram: Baseline architecture – Switch and FEX form one Logical Switch; a hypervisor vSwitch connects App/OS virtual machines to the LAN]

[Diagram: VM-FEX – the switch port is extended over cascaded Fabric Extenders to the Virtual Machine; Switch, FEX and VM-FEX form one Logical Switch connecting App/OS virtual machines directly to the LAN]

Collapse virtual and physical networking tiers!!!

Page 71: Technologies Transforming the Data Center - Terena

71 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

UCS VM-FEX Extend FEX Architecture to the Virtual Access Layer

= Distributed Modular System

VM-FEX: Single Virtual-Physical Access Layer
§ Collapse virtual and physical switching into a single access layer
§ VIC is a Virtual Line Card to the UCS Fabric Interconnect
§ Fabric Interconnect maintains all management & configuration
§ Virtual and physical traffic treated the same

[Diagram: UCS 6100 Fabric Interconnect (parent switch, access-layer ports 1 … 160) uplinked to the LAN (N7000/C6500) and SAN (MDS); UCS IOM-FEX modules and Cisco UCS VICs extend the fabric down to App/OS virtual machines]

Page 72: Technologies Transforming the Data Center - Terena

72 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

VM-FEX: One Network

[Diagram: a UCS 6100 connected to two UCS servers, each with a UCS VIC; hypervisor VMs attach through vNICs to vEth ports on the fabric interconnect]

VM-FEX Basics
§ Fabric Extender for VMs
§ Hypervisor vSwitch removed
§ Each VM assigned a PCIe device
§ Each VM gets a virtual port on the physical switch

VM-FEX: One Network
§ Collapses virtual and physical switching layers
§ Dramatically reduces network management points by eliminating the per-host vSwitch
§ Virtual and physical traffic treated the same

Host CPU Cycles Relief
§ Host CPU cycles relieved from VM switching
§ I/O throughput improvements

Page 73: Technologies Transforming the Data Center - Terena

73 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

UCS Technologies Impacting the DC What does VM-FEX change?

§  Each Virtual Machine vNIC now “connected” to the data network edge

§  1:1 mapping between a virtual adapter and the upstream network port

§  Helps with Payment Card Industry (example) requirements for VMs to have separate adapter (no soft switches)

§  As Virtual Machines move around infrastructure, the network edge port moves along with the virtual adapter

Page 74: Technologies Transforming the Data Center - Terena

74 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

VMdirectPath with VMotion: Native Performance, Network-Awareness, and Mobility

[Chart: Bandwidth (Gbps, 0–12) vs. Time (0–70 sec) during the vMotion test]

Temporary transition from VMDP to standard I/O

vMotion to secondary host

• 8GB VM, sending UDP stream using pckgen (1500MTU)

•  UCS B200 blades with UCS VIC card •  vSphere technology preview

Page 75: Technologies Transforming the Data Center - Terena

75 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

[Chart: throughput (Mbps, 0–10,000) per vNIC as the number of vNICs scales from 1 to 48]

Bandwidth Availability per vNIC

Port Bandwidth Utilization

§  Port Bandwidth Utilization in a contained range (7000–9000 Mbps). Port Utilization is above 70% across the range.

§  Bandwidth Availability is evenly shared across multiple vNICS.

§  No optimizations used

Performance in Single OS: BW across Multiple vNICs High Throughput and Evenly Shared among vNICs

Page 76: Technologies Transforming the Data Center - Terena

76 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

UCS Technologies Impacting the DC

§  Traditional DC designs have Access Layer • Middle of Row modular switching • End of Row modular switching • Top of Rack fixed switching

§  Traditional DC designs have Aggregation Layer • Centralized modular switching • Services layer for common network services

§  With UCS, Access Layer is now ToR • Networking setup on UCS for blades and VM’s (100’s of Servers) • NX-OS manageability for visibility • UCSM configurability of network attributes

§  Aggregation Layer unchanged

Where is the Network Boundary?

Page 77: Technologies Transforming the Data Center - Terena

77 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

UCS Technologies Impacting the DC Fabric Failover with Blade Servers

§  Two Fabric Interconnects in the system going to two FEX in each chassis

§  All links are active

[Diagram: two UCS Fabric Interconnects uplinked to the LAN, SAN A and SAN B; two Fabric Extenders per chassis; each half-width blade carries an adapter (VIC or Menlo) with vNICs plus a BMC]

Fabric Based LAN vNIC Failover:

§  OS sees single or multiple vNICs

§  IO Fabric provides Active-Passive Failover per server adapter

§  No Teaming Driver to qualify and install

§  Failover happens under OS layer

Page 78: Technologies Transforming the Data Center - Terena

78 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

Where is the SAN Boundary? UCS Technologies Impacting the DC

§  Traditional DC designs have a dual SAN access layer: edge modular switching, edge fixed switching, or collapsed-core modular switching

§  Traditional DC designs have a dual SAN core layer: core modular switching plus a services layer for common SAN services

§  With UCS, the SAN edge layer is now ToR: the SAN edge configuration is set up on UCS for blades and VMs (100's of servers), with NX-OS manageability for visibility and UCSM configurability of SAN attributes

§  Core layer unchanged (with the exception of NPIV enablement) and multi-vendor

Page 79: Technologies Transforming the Data Center - Terena

79 © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public BRKDCT-2023

UCS Technologies Impacting the DC SAN Multipathing

§  Two Fabric Interconnects in the system going to two FEX in each chassis

§  All links are active

[Diagram: two UCS Fabric Interconnects uplinked to the LAN, SAN A and SAN B; two Fabric Extenders per chassis; each half-width blade carries an adapter with vHBAs plus a BMC]

SAN multipathing failover: §  OS sees multiple vHBAs

§  The IO fabric provides no failover per server adapter

§  No new FLOGI is issued on a fabric other than the one to which the vHBA was originally assigned

§  Typically, OS vendors want customers to disable multipathing during installs

§  The multipathing software and driver are added after the initial install
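In contrast to LAN fabric failover, the storage failover logic lives in the host multipathing layer. A minimal sketch (illustrative Python, not a real MPIO driver) of round-robin path selection with dead-path skipping across two vHBAs:

# Conceptual host-side multipathing across vHBAs on fabrics A and B.
# Illustrative only; real behaviour depends on the OS multipath driver.
from itertools import cycle

class MultipathLun:
    def __init__(self, paths=("vhba0/fabricA", "vhba1/fabricB")):
        self.paths = list(paths)
        self.alive = {p: True for p in self.paths}
        self._rr = cycle(self.paths)

    def next_path(self):
        """Round-robin over paths, skipping any marked dead."""
        for _ in range(len(self.paths)):
            path = next(self._rr)
            if self.alive[path]:
                return path
        raise RuntimeError("all paths down")

    def path_failed(self, path):
        self.alive[path] = False

lun = MultipathLun()
print(lun.next_path())            # vhba0/fabricA
lun.path_failed("vhba0/fabricA")
print(lun.next_path())            # vhba1/fabricB (the driver fails over, not the fabric)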

Page 80: Technologies Transforming the Data Center - Terena


Optimizing Memory with the Xeon 5500

Typical system, either: •  12 DIMMs @ 1066 MHz •  Max 96 GB; or: •  18 DIMMs @ 800 MHz •  Max 144 GB at lower performance

Intel Xeon 5500 Series with UCS: •  48 DIMMs @ 1333 MHz •  Max 384 GB per blade at full performance

Benefit: •  4x capacity •  Lower costs •  Standard DIMMs, CPUs, OS

[Diagram: Typical Memory vs. Cisco UCS Memory - the Xeon 5500 can address only a fixed number of DIMMs; with UCS extended memory, each DIMM the CPU looks for is made of 4 standard DIMMs.]
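A quick sanity check of the numbers above (a sketch; the 8 GB DIMM size is an assumption consistent with the 12-DIMM / 96 GB figure, not stated explicitly on the slide):

# Extended-memory arithmetic. Assumes 8 GB standard DIMMs (an assumption,
# implied by 12 DIMMs -> 96 GB); the CPU still addresses 12 DIMM slots,
# but each slot is backed by four standard DIMMs.
DIMM_SIZE_GB = 8

typical_slots = 12             # what the Xeon 5500 addresses natively at 1066 MHz
ucs_logical_slots = 12         # what the CPU still "sees"
ucs_physical_per_logical = 4   # standard DIMMs behind each logical DIMM

typical_max = typical_slots * DIMM_SIZE_GB
ucs_max = ucs_logical_slots * ucs_physical_per_logical * DIMM_SIZE_GB

print(f"Typical 2-socket Xeon 5500: {typical_max} GB")   # 96 GB
print(f"UCS extended-memory blade:  {ucs_max} GB")        # 384 GB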


Page 81: Technologies Transforming the Data Center - Terena


C-Series Deployment With UCS Management


Page 82: Technologies Transforming the Data Center - Terena


Compute Options

Blade:
•  B200 M2: 2-socket Intel 5600, 2 SFF disks, 12 DIMMs
•  B230 M1: 2-socket Intel 6500/7500, 2 SSDs (7 mm), 32 DIMMs
•  B250 M2: 2-socket Intel 5600, 2 SFF disks, 48 DIMMs
•  B440 M1: 4-socket Intel 7500, 4 SFF disks, 32 DIMMs

Rack Mount:
•  C200 M2: 2-socket Intel 5600, 4 disks, 12 DIMMs, 2 PCIe, 1U
•  C210 M2: 2-socket Intel 5600, 16 disks, 12 DIMMs, 5 PCIe, 2U
•  C250 M2: 2-socket Intel 5600, 8 disks, 48 DIMMs, 5 PCIe, 2U
•  C460 M1: 4-socket Intel 7500, 12 disks, 64 DIMMs, 10 PCIe, 4U
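For readers who want to compare the lineup programmatically, the same data can be expressed as a small lookup structure (an illustrative sketch; the figures are transcribed from the list above):

# UCS compute lineup as data, transcribed from the list above (illustrative).
SERVERS = {
    "B200 M2": {"type": "blade", "sockets": 2, "cpu": "Intel 5600",      "dimms": 12},
    "B230 M1": {"type": "blade", "sockets": 2, "cpu": "Intel 6500/7500", "dimms": 32},
    "B250 M2": {"type": "blade", "sockets": 2, "cpu": "Intel 5600",      "dimms": 48},
    "B440 M1": {"type": "blade", "sockets": 4, "cpu": "Intel 7500",      "dimms": 32},
    "C200 M2": {"type": "rack",  "sockets": 2, "cpu": "Intel 5600",      "dimms": 12},
    "C210 M2": {"type": "rack",  "sockets": 2, "cpu": "Intel 5600",      "dimms": 12},
    "C250 M2": {"type": "rack",  "sockets": 2, "cpu": "Intel 5600",      "dimms": 48},
    "C460 M1": {"type": "rack",  "sockets": 4, "cpu": "Intel 7500",      "dimms": 64},
}

# Example query: which models offer 32 or more DIMM slots?
memory_heavy = [model for model, spec in SERVERS.items() if spec["dimms"] >= 32]
print(memory_heavy)   # ['B230 M1', 'B250 M2', 'B440 M1', 'C250 M2', 'C460 M1']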

Page 83: Technologies Transforming the Data Center - Terena


Technologies Transforming the Data Center Agenda

§  The Evolving Data Centre §  Transition Points

§  Evolution of the Fabric §  Scaling the Fabric

§  Connecting the Server - VN-Link

§  Mobility and Storage - Unified I/O

§  Evolution of Compute §  UCS Building Blocks

§  UCS Automation and Management

§  UCS Integrated Solutions

§  Next Steps: The Data Center Focus for 2011

1K Cisco Nexus

x86

Page 84: Technologies Transforming the Data Center - Terena


Record-setting results were published across Q4 FY09/Q1 FY10 (May-Oct '09), Q2 FY10 (Nov '09-Jan '10), Q3 FY10 (Feb-Apr '10), and Q4 FY10 (May-Jul '10):

•  SPECint_rate2006: B200 M1 (X5570/50/40/20); C200 M1 and C210 M1 (X5570/50/40/20); B200 M2 (X5680/70/50/40); C460 M1 (X7560); C200 M2 and C210 M2 (X5670/50/40); B440 M1 (X7560/50/40); C460 M1 (X7550/40)

•  SPECfp_rate2006: B200 M1 (X5570/50/40/20); C200 M1 and C210 M1 (X5570/50/40/20); B200 M2 (X5680/70/50/40); C460 M1 (X7560); C200 M2 and C210 M2 (X5670/50/40); B440 M1 (X7560/50/40); C460 M1 (X7550/40)

•  SPECjAppServer2004: C250 M2 (single node)

•  SPECjbb2005: B200 M1 (X5570/50/40/20); C200 M1, C210 M1, and B250 M1 (X5570); B200 M2 (X5680); C460 M1 (X7560); B440 M1 (X7560)

•  VMmark: B200 M1 (X5570); B250 M2 (X5680); B440 M1 (X7560); C460 M1 (X7560)

•  SAP-SD 2-Tier: B200 M1 (X5570); B200 M2 (X5680)

•  SPEC OMP2001: B200 M1 (X5570, M and L); C200 M1 and C210 M1 (X5570); B200 M2 (X5680, M and L); C460 M1 (X7560, M and L); C200 M2 and C210 M2 (X5670/50/40); B440 M1 (X7560/50/40); C460 M1 (X7550/40)

•  SPECpower_ssj2008: B200 M1 (X5570/50/40/20); C200 M1, C210 M1, and B250 M1 (X5570); B200 M2 (X5680); C460 M1 (X7560); B440 M1 (X7560)

•  Prime95/mPrime: B200 M1 (X5570/50/40/20); C200 M1, C210 M1, and B250 M1 (X5570); B200 M2 (X5680); C460 M1 (X7560); B440 M1 (X7560)

•  Linpack: B200 M1 (X5570); C200 M1 and C210 M1 (X5570); B200 M2 (X5680/70/50/40); C460 M1 (X7560); C200 M2 and C210 M2 (X5670/50/40); B440 M1 (X7560/50/40); C460 M1 (X7550/40)

•  LS-Dyna: C460 M1 (X7560) - 3 Cars, Car2Car, Neon_refined

•  Stream (different memory configurations): B200 M1 (X5570/50/40/20); C200 M1, C210 M1, and B250 M1 (X5570); B200 M2 (X5680); C460 M1 (X7560); B440 M1 (X7560)

(Table legend: one or more new world records; featured in press releases/keynotes.)

UCS platforms set 25+ new world records on highly competitive industry-standard benchmarks in FY2010

UCS Performance Benchmarks at a glance FY2010

Page 85: Technologies Transforming the Data Center - Terena


Current Cisco UCS Performance Records Continuing the Trend of Record Setting Performance…

#1 Two-Socket 2-Node Record SPECjAppServer*2004 11,283.80 JOPS@Standard

#1 Two-Socket Record SPECjbb*2005 1,015,802 BOPS

#1 Two-Socket x86 Record SPECompM*base2001 52,314 base score*

#1 Two-Socket Record SPECompL*base2001 278,603 base score**

#1 Two-Socket Record Oracle eBiz Suite Payroll Batch: 581,846 Employees/Hour (Large), 368,098 Employees/Hour (Medium)

1st ever to publish on the new cloud benchmark VMmark* 2.0: 6.51 @ 6 tiles*

Four-Socket X86 Blade Record SPECint*_rate_base2006 720 base score

Four-Socket x86 Record SPECjbb*2005 2,021,525 BOPS

Four-Socket Record SPECompM*2001 100,258 base score*

Single-Node Record 4S LS-Dyna* Crash Simulation: 41,727 seconds car2car

Results as of Jan 24, 2011. Two-socket comparisons are based on x86 volume servers (Intel Xeon 5600 series and AMD Opteron 6100 series); four-socket comparisons are based on x86 servers (Intel Xeon 7500 series and AMD Opteron 6100 series).

Page 86: Technologies Transforming the Data Center - Terena


One Infrastructure, Many Solutions

Vblocks - Imagine: 30 racks reduced down to 3 racks; provisioning applications in hours instead of weeks

Secure Multi-Tenancy - Imagine: securely sharing servers between multiple users/groups without having to add another server

FlexPod - Imagine: predesigned, validated, flexible infrastructure that can grow and scale to meet cloud computing requirements

Virtual Desktop - Imagine: over 4,000 desktops in a single rack, with savings of up to 60+% per PC per year and significant savings in operations

Cisco's network-centric virtualized data center is best positioned to enable the journey to the networked cloud

Page 87: Technologies Transforming the Data Center - Terena


Virtual Computing Environment (VCE)

§  Integrated Pre-Sales, Services and Support Vblock Unified Customer Engagement. Dedicated pre-sales, professional services and single support experience to provide a seamless, end-to-end customer experience.

§  Technology Innovations Vblock Infrastructure Packages. Integrated best-of-breed packages from Cisco and EMC, together with VMware – engineered, tested, and validated to deliver revolutionary TCO and pervasive virtualization at scale in today’s most demanding use cases.

§  Partner Ecosystem Leverage Vblock Partner Ecosystem. A select group of partners, growing over time, which augment, sell and deliver Virtual Computing Environment solutions to enable the journey to pervasive virtualization and private cloud.

§  Solutions Venture and Investment VCE. A Cisco-EMC joint venture to build, operate, and transfer Vblock infrastructure to organizations that want to accelerate their journey.

[Diagram: the four VCE pillars - Integrated Sales, Services and Support; Technology Innovations; Partner Ecosystem Leverage; Services Venture and Investment - linked by extensive and ongoing collaboration.]

Page 88: Technologies Transforming the Data Center - Terena


VCE Vblock: Accelerating the Virtualization of IT Infrastructure

Vblock 2: 3000-6000 VMs - large-scale, greenfield virtualization

Vblock 1: 800-3000 VMs - consolidation and optimization initiatives

Vblock 0: 300-800 VMs - entry-level offer for medium business; test/dev for SIs and SPs
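The published VM ranges make the sizing guidance easy to express; a trivial sketch (the ranges come from the slide, while the selection function itself is illustrative, not an official sizing tool):

# Pick a Vblock package from the published VM ranges (illustrative only).
def pick_vblock(vm_count):
    if 300 <= vm_count <= 800:
        return "Vblock 0 (entry-level; test/dev for SIs and SPs)"
    if 800 < vm_count <= 3000:
        return "Vblock 1 (consolidation, optimization initiatives)"
    if 3000 < vm_count <= 6000:
        return "Vblock 2 (large-scale, greenfield virtualization)"
    return "outside the published Vblock ranges"

print(pick_vblock(500))    # Vblock 0 ...
print(pick_vblock(4500))   # Vblock 2 ...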

Page 89: Technologies Transforming the Data Center - Terena


Secure Multi-Tenancy: Three Companies, One Architecture

Overview: •  Validated design for end-to-end secure multi-tenancy •  Isolate applications across network, servers, and storage •  Separate confidential information between business units, customers, departments, or security zones

[Diagram: HR, BU, and APP tenants running on VMware vSphere, vShield Zones, and vCenter; Nexus 1000V, Nexus 2000/5000/7000, UCS, and 10GbE; MultiStore and Data Motion over NFS/iSCSI.]

Business Benefits: §  Meet service-level agreements for mission-critical applications §  Quickly respond to changing business needs §  Streamline operations and improve efficiencies across the data center §  Reduce the costs and resources needed to achieve isolation and compliance

Page 90: Technologies Transforming the Data Center - Terena


FlexPod: Compute, Network, and Storage

§  Standard, prevalidated, best-in-class infrastructure building blocks

§  Flexible: one platform scales to fit many environments and mixed workloads • Add applications and workload • Scale up and out

§  Simplified management and repeatable deployments

§  Design and sizing guides

§  Services: facilitate deployment of different environments

Components: Cisco® UCS B-Series Blade Servers and UCS Manager; Cisco Nexus® Family Switches; NetApp FAS with 10 GE and FCoE

Page 91: Technologies Transforming the Data Center - Terena


Flex with Confidence

[Diagram: FlexPod sizing scenarios around a balanced production infrastructure (IOPS, CPU, capacity, memory) - Dev/Test: more compute, less storage; DP/Backup: less compute, more storage; Starting out: entry system, then scale up; VDI: higher-performance blades, more IOPS.]

Page 92: Technologies Transforming the Data Center - Terena


FlexPod Benefits

§  Simplify the journey to virtualization and cloud with one platform from three industry leaders

§  Reduce OpEx and increase resource utilization

§  Gain transparent integration with existing technology

§  Gain open integration with third-party management tools

§  Predesigned, validated infrastructure removes guesswork and speeds deployment

§  Provide proactive, predictive, centralized management

§  Identify and resolve problems with 24-hours-a-day cooperative support

(Benefit categories: Data Center Efficiency, Reduced Risk, Business Agility)

§  Flexible platform adapts to a wide range of application workloads and use cases

§  Flexible infrastructure can grow and scale to meet cloud computing requirements

Page 93: Technologies Transforming the Data Center - Terena


Cisco Desktop Virtualization Solution

[Diagram: the Cisco Data Center Business Advantage Framework applied to desktop virtualization - clients connect over the WAN to a virtualized data center built from Unified Network Services (Cisco WAAS, Cisco ACE, Cisco ASA), Unified Computing (Cisco UCS platform), Unified Fabric (Cisco Nexus and MDS 9000 Family), and storage, running a hypervisor (VMware/Citrix/Microsoft), desktop virtualization software (VMware/Citrix), and the desktop OS with its apps and data, alongside partner solution elements.]

§  Removes VDI deployment barriers

§  Combined joint partner solutions with industry leaders

§  Cisco Validated Designs & Services to accelerate customer success

Page 94: Technologies Transforming the Data Center - Terena


A Scalable Architecture

[Diagram: UCS chassis and blades connect to the UCS Fabric Interconnects, which uplink to Nexus 5000 access switches toward the LAN and to MDS 9xxx switches toward storage.]

Page 95: Technologies Transforming the Data Center - Terena


XenDesktop and View Scalability Results: 9.16 VMs/core with the B250 and 192 GB of memory

[Chart: number of virtual desktops vs. number of UCS blades - linear scale from 1 to 16 blades and beyond.]

Desktop profile: •  Windows 7, 32-bit •  1.5 GB RAM •  1 vCPU •  3 GB write-back cache on NFS

UCS blade profile: •  B250 M2 •  192 GB memory •  Dual Xeon 5680 CPUs
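A back-of-the-envelope check on what that density implies per blade (a sketch; the 6-cores-per-socket figure for the Xeon 5680 is public Intel data, not stated on the slide):

# VDI density arithmetic for the B250 M2 result above.
vms_per_core = 9.16
cores_per_socket = 6          # Xeon 5680 is a 6-core part (Intel spec, not on the slide)
sockets_per_blade = 2

desktops_per_blade = vms_per_core * cores_per_socket * sockets_per_blade
memory_ceiling = 192 / 1.5    # 192 GB blade, 1.5 GB per desktop

print(f"~{desktops_per_blade:.0f} desktops per blade (CPU-based)")          # ~110
print(f"{memory_ceiling:.0f} desktops per blade (memory ceiling)")          # 128
print(f"~{desktops_per_blade * 16:.0f} desktops across the 16-blade test")  # ~1759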

Page 96: Technologies Transforming the Data Center - Terena


Cisco-on-Cisco Results: ROI Achieved by Cisco IT - the progression through virtualization to unified computing and automation:

•  100% physical, legacy compute platform: average TCO baseline; speed of delivery 6-8 weeks; IT maintenance/innovation 70/30

•  40% physical, 60% virtual, legacy compute platform: average TCO -32%; speed of delivery 2-3 weeks; IT maintenance/innovation 60/40

•  35% physical, 65% virtual, Unified Computing platform, 100% automated: average TCO -37%; speed of delivery 15 minutes; IT maintenance/innovation 40/60
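Reading the -32% and -37% figures as TCO reductions relative to the 100%-physical baseline (an interpretation of the flattened slide, not a statement of Cisco's methodology), the relative cost works out as follows:

# TCO comparison, assuming the -32% / -37% figures are reductions versus
# the 100%-physical baseline (an interpretation, not from the slide text).
baseline = 100.0
stages = {
    "100% physical, legacy platform (6-8 weeks to deliver)": 0.00,
    "40% physical / 60% virtual, legacy platform (2-3 weeks)": 0.32,
    "35% physical / 65% virtual, UCS, 100% automated (15 minutes)": 0.37,
}
for label, reduction in stages.items():
    print(f"{label}: relative TCO {baseline * (1 - reduction):.0f}")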

Page 97: Technologies Transforming the Data Center - Terena


Technologies Transforming the Data Center Agenda §  The Evolving Data Centre §  Transition Points

§  Evolution of the Fabric §  Scaling the Fabric

§  Connecting the Server - VN-Link

§  Mobility and Storage - Unified I/O

§  Evolution of Compute §  UCS Building Blocks

§  UCS Automation and Management

§  UCS Integrated Solutions

§  Next Steps: The Data Center Focus for 2011

1K Cisco Nexus

x86

Page 98: Technologies Transforming the Data Center - Terena


Summary: The Data Center At the Heart of Business Innovation

[Diagram: data center/cloud transformation connects IT initiatives to business value. IT initiatives: virtualization, consolidation, application integration, compliance, cloud services. Business value: new service creation and new business models, cost reduction and revenue generation, governance and risk management. Source: IT initiatives from Goldman Sachs CIO Study.]

Page 99: Technologies Transforming the Data Center - Terena


Summary What we have covered

§ Technologies used within UCS are affecting production deployments

§ The customer data network does not end one or two hops from the actual server (nor do the 3 a.m. calls)

§ Storage network operations (FC, FCoE, iSCSI, or NFS) are becoming more prevalent than local-disk deployments

§ Solution implementations with UCS

Page 100: Technologies Transforming the Data Center - Terena