
www.vce.com

VCE Vblock® and VxBlock™ Systems 340 Architecture Overview

Document revision 3.11

April 2016

© 2013-2016 VCE Company, LLC. All Rights Reserved.


Revision history

April 2016, revision 3.11
• Added support for the Cisco Nexus 3172TQ Switch.

December 2015, revision 3.10
• Updated to include the 16 Gb SLIC.
• Added support for the Cisco MDS 9148S Multilayer Fabric Switch.
• Added support for unified (NAS) configuration for EMC VNX5800, EMC VNX7600, and EMC VNX8000.
• Updated support for mixed internal and external access in a configuration with more than two X-Blades.
• Updated power options.
• Updated VCE System with EMC VNX5800 elevations for the VxBlock System 340.
• Updated VCE System with EMC VNX5800 (ACI ready) elevations for the Cisco MDS 9148S Multilayer Fabric Switch and the VxBlock System 340.

October 2015, revision 3.9
• Updated graphics.

August 2015, revision 3.8
• Updated to include the VxBlock System 340. Added support for VMware vSphere 6.0 with VMware VDS on the VxBlock System and for existing Vblock Systems.
• Added information on the Intelligent Physical Infrastructure (IPI) appliance.

February 2015, revision 3.7
• Added support for the Cisco B200 M4 Blade.

December 2014, revision 3.6
• Added support for AMP-2HA.

September 2014, revision 3.5
• Modified elevations and removed the aggregate section.

July 2014, revision 3.4
• Added support for VMware VDS.

May 2014, revision 3.3
• Updated for the Cisco Nexus 9396 Switch and 1500 drives for EMC VNX8000.
• Added support for VMware vSphere 5.5.

January 2014, revision 3.2
• Updated elevations for the AMP-2 reference.

November 2013, revision 3.1
• Updated the network connectivity management illustration.

October 2013, revision 3.0
• Gen 3.1 release.


Contents

Introduction

Accessing VCE documentation

System overview
  System architecture and components
  Base configurations and scaling
  Connectivity overview
    Segregated network architecture
    Unified network architecture

Compute layer overview
  Compute overview
  Cisco Unified Computing System
  Cisco Unified Computing System fabric interconnects
  Cisco Trusted Platform Module
  Scaling up compute resources
  VCE bare metal support policy
  Disjoint layer 2 configuration

Storage layer
  Storage overview
  EMC VNX series storage arrays
  Replication
  Scaling up storage resources
  Storage features support

Network layer
  Network overview
  IP network components
  Port utilization
    Cisco Nexus 5548UP Switch - segregated networking
    Cisco Nexus 5596UP Switch - segregated networking
    Cisco Nexus 5548UP Switch - unified networking
    Cisco Nexus 5596UP Switch - unified networking
    Cisco Nexus 9396PX Switch - segregated networking
  Storage switching components

Virtualization layer
  Virtualization overview
  VMware vSphere Hypervisor ESXi
  VMware vCenter Server

Management
  Management components overview
  Management hardware components
  Management software components
  Management network connectivity

Configuration descriptions
  VCE Systems with EMC VNX8000
  VCE Systems with EMC VNX7600
  VCE Systems with EMC VNX5800
  VCE Systems with EMC VNX5600
  VCE Systems with EMC VNX5400

Sample configurations
  Sample VCE System with EMC VNX8000
  Sample VCE System with EMC VNX5800
  Sample VCE System with EMC VNX5800 (ACI ready)

System infrastructure
  VCE Systems descriptions
  Cabinets overview
  Intelligent Physical Infrastructure appliance
  Power options

Additional references
  Virtualization components
  Compute components
  Network components
  Storage components

Introduction

This document describes the high-level design of the VCE System, and the hardware and software components that VCE includes in it.

In this document, the Vblock System and VxBlock System are referred to as VCE Systems.

The VCE Glossary provides terms, definitions, and acronyms that are related to VCE.

To suggest documentation changes and provide feedback on this book, send an e-mail to [email protected]. Include the name of the topic to which your feedback applies.

Related information

Accessing VCE documentation


Accessing VCE documentation

Select the documentation resource that applies to your role.

Role: Customer
Resource: support.vce.com. A valid username and password are required. Click VCE Download Center to access the technical documentation.

Role: Cisco, EMC, or VMware employee, or VCE Partner
Resource: partner.vce.com. A valid username and password are required.

Role: VCE employee
Resource: sales.vce.com/saleslibrary or vblockproductdocs.ent.vce.com


System overview

System architecture and components

VCE Systems are modular platforms with defined scale points that meet the higher performance and availability requirements of an enterprise's business-critical applications.

Refer to the VCE Systems Physical Planning Guide for information about cabinets and their components, the Intelligent Physical Infrastructure solution, and environmental, security, power, and thermal management.

The VCE Systems include the following architecture features:

• Optimized, fast delivery configurations based on the most commonly purchased components

• Standardized cabinets with multiple North American and international power solutions

• Block (SAN) and unified storage options (SAN and NAS)

• Support for multiple features of the EMC operating environment for EMC VNX arrays

• Granular, but optimized compute and storage growth by adding predefined kits and packs

• Second generation of the Advanced Management Platform (AMP-2) for management

• Unified network architecture with the option to leverage Cisco Nexus switches to support IP and SAN without the use of Cisco MDS switches

VCE Systems contain the following key hardware and software components:

VCE Systems management:
• VCE Vision™ Intelligent Operations System Library
• VCE Vision™ Intelligent Operations Plug-in for vCenter
• VCE Vision™ Intelligent Operations Compliance Checker
• VCE Vision™ Intelligent Operations API for System Library
• VCE Vision™ Intelligent Operations API for Compliance Checker

Virtualization and management:
• VMware vSphere Server Enterprise Plus
• VMware vSphere ESXi
• VMware vCenter Server
• VMware vSphere Web Client
• VMware Single Sign-On (SSO) Service (version 5.1 and higher)
• Cisco UCS C220 Server for AMP-2
• EMC PowerPath/VE
• Cisco UCS Manager
• EMC Unisphere Manager
• EMC VNX Local Protection Suite
• EMC VNX Remote Protection Suite
• EMC VNX Application Protection Suite
• EMC VNX Fast Suite
• EMC VNX Security and Compliance Suite
• EMC Secure Remote Support (ESRS)
• EMC PowerPath Electronic License Management Server (ELMS)
• Cisco Data Center Network Manager for SAN

Compute:
• Cisco UCS 5108 Server Chassis
• Cisco UCS B-Series M3 Blade Servers with Cisco UCS VIC 1240, optional port expander, or Cisco UCS VIC 1280
• Cisco UCS B-Series M4 Blade Servers with Cisco UCS VIC 1340, optional port expander, or Cisco UCS VIC 1380
• Cisco UCSB-MLOM-PT-01 port expander for the VIC 1240
• Cisco UCS 2208XP or Cisco UCS 2204XP fabric extenders
• Cisco UCS 2208XP or Cisco UCS 2204XP fabric extenders with FET Optics
• Cisco UCS 6248UP or Cisco UCS 6296UP Fabric Interconnects

Network:
• Cisco Nexus 3172TQ or Cisco Nexus 3048 Switches. Refer to the appropriate RCM for a list of what is supported on your VCE System.
• Cisco Nexus 5548UP, Cisco Nexus 5596UP, or Cisco Nexus 9396PX Switches
• (Optional) Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch. Refer to the appropriate RCM for a list of what is supported on your VCE System.
• (Optional) Cisco Nexus 1000V Series Switches
• (Optional) VMware vSphere Distributed Switch (VDS) (VMware vSphere version 5.5 and higher)
• (Optional) VMware NSX Virtual Networking

Storage:
• EMC VNX storage array (5400, 5600, 5800, 7600, 8000) running the VNX Operating Environment
• (Optional) EMC unified storage (NAS)


VCE Systems have different scale points based on compute and storage options, and can support block and/or unified storage protocols.

The VCE Release Certification Matrix provides a list of the certified versions of components for VCE Systems. For information about VCE System management, refer to the VCE Vision™ Intelligent Operations Technical Overview.

The VCE Integrated Data Protection Guide provides information about available data protection solutions.

Related information

Accessing VCE documentation

EMC VNX series storage arrays

Base configurations and scaling

VCE Systems have base configurations that contain a minimum set of compute and storage components, and fixed network resources that are integrated in one or more 19-inch, 42U cabinets.

In the base configuration, you can customize the following hardware:

Hardware: Compute blades
How it can be customized: Cisco UCS B-Series blade types include all supported VCE blade configurations.

Hardware: Compute chassis
How it can be customized: Cisco UCS Server Chassis. Sixteen chassis maximum for VCE Systems with EMC VNX8000, EMC VNX7600, and EMC VNX5800; eight chassis maximum for VCE Systems with EMC VNX5600; two chassis maximum for VCE Systems with EMC VNX5400.

Hardware: Edge servers (with optional VMware NSX)
How it can be customized: Four to six Cisco UCS B-Series Blade Servers, including the B200 M4 with VIC 1340 and VIC 1380. For more information, see the VCE VxBlock™ Systems for VMware NSX Architecture Overview.

Hardware: Storage hardware
How it can be customized: Drive flexibility for up to three tiers of storage per pool, drive quantities in each tier, the RAID protection for each pool, and the number of disk array enclosures (DAEs).

Hardware: Storage
How it can be customized: EMC VNX storage, block only or unified (SAN and NAS).


Hardware: Supported disk drives
How it can be customized:
• FAST Cache: 100/200 GB SLC SSD
• Tier 0: 100/200 GB SLC SSD; 100/200/400 GB eMLC SSD
• Tier 1: 300/600 GB 15K SAS; 600/900 GB 10K SAS
• Tier 2: 1/2/3/4 TB 7.2K NL-SAS

Hardware: Supported RAID types
How it can be customized:
• Tier 0: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1)
• Tier 1: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1), RAID 6 (6+2), (12+2)*, (14+2)**
• Tier 2: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1), RAID 6 (6+2), (12+2)*, (14+2)**
* file virtual pool only
** block virtual pool only

Hardware: Management hardware options
How it can be customized: The second generation of the Advanced Management Platform (AMP-2) centralizes management of VCE System components. AMP-2 offers minimum physical, redundant physical, and highly available models. The standard option for this platform is the minimum physical model. The optional VMware NSX feature requires AMP-2HA Performance.

Hardware: Data Mover enclosure (DME) packs
How it can be customized: Available on all VCE Systems. Additional enclosure packs can be added for additional X-Blades on VCE Systems with EMC VNX8000, EMC VNX7600, and EMC VNX5800.

Together, the components offer balanced CPU, I/O bandwidth, and storage capacity relative to the compute and storage arrays in the system. All components have N+N or N+1 redundancy.

These resources can be scaled up as necessary to meet increasingly stringent requirements. The maximum supported configuration differs from model to model. To scale up compute resources, add blade packs and chassis activation kits.

To scale up storage resources, add RAID packs, DME packs, and DAE packs. Optionally, expansion cabinets with additional resources can be added.

VCE Systems are designed to keep hardware changes to a minimum if the storage protocol is changed after installation (for example, from block storage to unified storage). Cabinet space can be reserved for all components that are needed for each storage configuration (Cisco MDS switches, X-Blades, and so on), ensuring that network and power cabling capacity for these components is in place.

Related information

EMC VNX series storage arrays

Scaling up compute resources

Scaling up storage resources

Management components overview

Replication

Connectivity overview

The interconnectivity between VCE Systems components depends on the network architecture.

These components and interconnectivity are conceptually subdivided into the following layers:

Layer: Compute
Description: Contains the components that provide the computing power within a VCE System. The Cisco UCS blade servers, chassis, and fabric interconnects belong to this layer.

Layer: Storage
Description: Contains the EMC VNX storage component.

Layer: Network
Description: Contains the components that provide switching between the compute and storage layers within a VCE System, and between a VCE System and the external network. The Cisco MDS switches and Cisco Nexus switches belong to this layer.

All components incorporate redundancy into the design.

Segregated network architecture and unified network architecture

In the segregated network architecture, LAN and SAN connectivity is segregated into separate switches within the VCE System. LAN switching uses the Cisco Nexus switches. SAN switching uses the Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch. Refer to the appropriate RCM for a list of what is supported on your VCE System.

In the unified network architecture, LAN and SAN switching is consolidated onto a single network device (Cisco Nexus 5548UP or Cisco Nexus 5596UP switches) within the VCE System. This removes the need for a Cisco MDS SAN switch.

Note: The optional VMware NSX feature uses the Cisco Nexus 9396 switches for LAN switching. For more information, see the VCE VxBlock™ Systems for VMware NSX Architecture Overview.

All management interfaces for infrastructure power outlet unit (POU), network, storage, and compute devices are connected to redundant Cisco Nexus 3172TQ or Cisco Nexus 3048 switches. Refer to the appropriate RCM for a list of what is supported on your VCE System. These switches provide connectivity for the Advanced Management Platform (AMP-2) and egress points into the management stacks for the VCE System components.


Related information

Accessing VCE documentation

Management components overview

Segregated network architecture

Unified network architecture


Segregated network architecture

This topic shows the VCE Systems segregated network architecture for block, SAN boot, and unified storage.

Block storage configuration

The following illustration shows a block-only storage configuration for VCE Systems with no EMC X-Blades in the cabinets. You can reserve space in the cabinets for these components (including optional EMC RecoverPoint Appliances). This design makes it easier to add the components later if there is an upgrade to unified storage.


SAN boot storage configuration

In all VCE Systems configurations, the VMware vSphere ESXi blades boot over the Fibre Channel (FC) SAN. In block-only configurations, block storage devices (boot and data) are presented over FC through the SAN. In a unified storage configuration, the boot devices are presented over FC, and data services can be either block devices (SAN) or NFS data stores (NAS). In a file-only configuration, the boot devices are presented over FC and the data devices are presented through NFS shares. Storage can also be presented directly to the VMs as CIFS shares.

The following illustration shows the components (highlighted with a red dotted line) that are leveraged to support SAN booting in the VCE Systems:


Unified storage configuration

In a unified storage configuration, the storage processors also connect to the X-Blades over FC. The X-Blades connect to the Cisco Nexus switches in the network layer over 10 GbE, as shown in the following illustration:

Related information

Connectivity overview

Unified network architecture


Unified network architecture

This topic provides an overview of the block storage, SAN boot storage, and unified storage configurations for the unified network architecture.

With the unified network architecture, access to both block and file services on the EMC VNX is provided using the Cisco Nexus 5548UP Switch or Cisco Nexus 5596UP Switch. The Cisco Nexus 9396PX Switch is not supported in the unified network architecture.

Block storage configuration

The following illustration shows a block-only storage configuration in the VCE Systems:

In this example, there are no X-Blades providing NAS capabilities. However, space can be reserved in the cabinets for these components (including the optional EMC RecoverPoint Appliance). This design makes it easier to add the components later if there is an upgrade to unified storage.


In a unified storage configuration for block and file, the storage processors also connect to the X-Blades over FC. The X-Blades connect to the Cisco Nexus switches within the network layer over 10 GbE.

SAN boot storage configuration

In all VCE Systems configurations, VMware vSphere ESXi blades boot over the FC SAN. In block-only configurations, block storage devices (boot and data) are presented over FC through the Cisco Nexus unified switch. In a unified storage configuration, the boot devices are presented over FC, and data devices can be either block devices (SAN) or NFS data stores (NAS). In a file-only configuration, boot devices are presented over FC and data devices over NFS shares; the remainder of the storage can be presented either as NFS or as VMFS datastores. Storage can also be presented directly to the VMs as CIFS shares.
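The presentation rules in the last two paragraphs can be restated compactly. The sketch below is illustrative only (the configuration names and data shape are ours, not a VCE tool); it simply tabulates how boot and data devices are presented in each configuration:

```python
# Illustrative restatement of the presentation rules above (not VCE tooling).
PRESENTATION = {
    "block-only": {"boot": "FC SAN", "data": ["FC block devices"]},
    "unified":    {"boot": "FC SAN", "data": ["FC block devices (SAN)",
                                              "NFS data stores (NAS)"]},
    "file-only":  {"boot": "FC SAN", "data": ["NFS shares",
                                              "NFS or VMFS datastores"]},
}

for cfg, p in PRESENTATION.items():
    # CIFS shares can additionally be presented directly to VMs in any case.
    print(f"{cfg}: boot via {p['boot']}; data via {', '.join(p['data'])}")
```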


The following illustration shows the components that are leveraged to support SAN booting in the VCE Systems:

Unified storage configuration

In a unified storage configuration, the storage processors also connect to the X-Blades over FC. The X-Blades connect to the Cisco Nexus switches within the network layer over 10 GbE.


The following illustration shows a unified storage configuration for the VCE Systems:

Related information

Connectivity overview

Management components overview

Segregated network architecture


Compute layer

Compute overview

This topic provides an overview of the compute components for the VCE System.

Cisco UCS B-Series Blade Servers installed in the Cisco UCS chassis provide the computing power within the VCE System.

Fabric extenders (FEX) within the Cisco UCS chassis connect to Cisco fabric interconnects over converged Ethernet. Up to eight 10 GbE ports on each Cisco UCS fabric extender connect northbound to the fabric interconnects, regardless of the number of blades in the chassis. These connections carry IP and storage traffic.

VCE reserves some of these ports to connect to upstream access switches within the VCE System. These connections are formed into a port channel to the Cisco Nexus switch and carry IP traffic destined for the external network over 10 GbE links. In a unified storage configuration, this port channel can also carry NAS traffic to the X-Blades within the storage layer.

Each fabric interconnect also has multiple ports that VCE reserves for Fibre Channel (FC). These ports connect to Cisco SAN switches and carry FC traffic between the compute layer and the storage layer. In a unified storage configuration, port channels carry IP traffic to the X-Blades for NAS connectivity. For SAN connectivity, SAN port channels carrying FC traffic are configured between the fabric interconnects and the upstream Cisco MDS or Cisco Nexus switches.

Cisco Unified Computing System

This topic provides an overview of the Cisco Unified Computing System (UCS), a data center platform that unites compute, network, and storage access.

Optimized for virtualization, the Cisco UCS integrates a low-latency, lossless 10 Gb Ethernet unified network fabric with enterprise-class, x86-based servers (the Cisco B-Series).

VCE Systems powered by Cisco UCS offer the following features:

• Built-in redundancy for high availability

• Hot-swappable components for serviceability, upgrade, or expansion

• Fewer physical components than in a comparable system built piece by piece

• Reduced cabling

• Improved energy efficiency over traditional blade server chassis

The Vblock System Blade Pack Reference provides a list of supported Cisco UCS blades.


Related information

Accessing VCE documentation

Cisco Unified Computing System fabric interconnects

The Cisco Unified Computing System (UCS) fabric interconnects provide network connectivity and management capabilities to the Cisco UCS blades and chassis.

The Cisco UCS fabric interconnects provide the management and communication backbone for the blades and chassis, as well as LAN and SAN connectivity for all blades within their domain. They are used for boot functions and offer line-rate, low-latency, lossless 10 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE) functions.

VCE Systems use Cisco UCS 6248UP and Cisco UCS 6296UP Fabric Interconnects. The Cisco UCS 6248UP Fabric Interconnects support single-domain uplinks of 2, 4, or 8 links between the fabric interconnects and each chassis; the Cisco UCS 6296UP Fabric Interconnects support single-domain uplinks of 4 or 8 links.

The optional VMware NSX feature uses Cisco UCS 6296UP Fabric Interconnects to accommodate the port count needed for VMware NSX external connectivity (edges). For more information, see the VCE VxBlock™ Systems for VMware NSX Architecture Overview.

Related information

Accessing VCE documentation

Cisco Trusted Platform Module

The Cisco Trusted Platform Module (TPM) provides authentication and attestation services for safer computing in all environments. The Cisco TPM is a computer chip that securely stores artifacts such as passwords, certificates, or encryption keys that are used to authenticate the VCE System.

The Cisco TPM is available by default in the VCE System as a component of the Cisco UCS B-Series M3 and M4 Blade Servers, and is shipped disabled. The Vblock System Blade Pack Reference contains additional information about the Cisco TPM.

VCE supports only the Cisco TPM hardware; VCE does not support the Cisco TPM functionality. Because making effective use of the Cisco TPM involves a software stack from a vendor with significant experience in trusted computing, VCE defers to the software stack vendor for configuration and operational considerations relating to the Cisco TPM.

Related information

www.cisco.com


Scaling up compute resources

This topic describes what you can add to your VCE System to scale up compute resources.

To scale up compute resources, you can add uplinks, blade packs, and chassis activation kits to enhance Ethernet and Fibre Channel (FC) bandwidth, either when VCE Systems are built or after they are deployed.

The following table shows the maximum chassis and blade quantities supported for VCE Systems with EMC VNX5400, EMC VNX5600, EMC VNX5800, EMC VNX7600, and EMC VNX8000. Values are maximum chassis (blades); the column headings give the per-chassis uplink count, the Cisco UCS fabric interconnect model, and the Cisco UCS IOM model:

VCE Systems with   2-link 6248UP/   4-link 6248UP/   4-link 6296UP/   8-link 6248UP/   8-link 6296UP/
                   2204XP IOM       2204XP IOM       2204XP IOM       2208XP IOM       2208XP IOM

EMC VNX8000        16 (128)         8 (64)           16 (128)         4 (32)           8 (64)
EMC VNX7600        16 (128)         8 (64)           16 (128)         4 (32)           8 (64)
EMC VNX5800        16 (128)         8 (64)           16 (128)         4 (32)           8 (64)
EMC VNX5600        N/A              8 (64)           8 (64)           4 (32)           8 (64)
EMC VNX5400        N/A              2 (16)           N/A              N/A              N/A
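As a worked reading of the table, the sketch below derives the blade maxima from the chassis counts, assuming 8 blades per Cisco UCS 5108 chassis (consistent with the 16-chassis/128-blade entries above). The names are illustrative, not VCE tooling:

```python
# Maximum chassis per link option, read from the table above; blade maxima
# follow from 8 blades per Cisco UCS 5108 chassis (16 chassis -> 128 blades).
BLADES_PER_CHASSIS = 8

# (model, link option) -> maximum chassis; options absent from the table (N/A)
# are simply omitted here.
MAX_CHASSIS = {
    ("VNX8000", "2-link 6248UP/2204XP"): 16,
    ("VNX8000", "8-link 6248UP/2208XP"): 4,
    ("VNX5400", "4-link 6248UP/2204XP"): 2,
    # ... remaining entries follow the same pattern
}

def max_blades(model: str, link_option: str) -> int | None:
    chassis = MAX_CHASSIS.get((model, link_option))
    return None if chassis is None else chassis * BLADES_PER_CHASSIS

print(max_blades("VNX8000", "2-link 6248UP/2204XP"))  # 128, i.e., 16 (128)
```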

Ethernet and FC I/O bandwidth enhancement

For VCE Systems with EMC VNX5600, EMC VNX5800, EMC VNX7600, and EMC VNX8000, the Ethernet I/O bandwidth enhancement increases the number of Ethernet uplinks from the Cisco UCS 6296UP fabric interconnects to the network layer to reduce oversubscription. To enhance Ethernet I/O bandwidth performance, increase the uplinks between the Cisco UCS 6296UP fabric interconnects and the Cisco Nexus 5548UP Switch for segregated networking, or the Cisco Nexus 5596UP Switch for unified networking.

The FC I/O bandwidth enhancement increases the number of FC links between the Cisco UCS 6248UP or Cisco UCS 6296UP fabric interconnects and the SAN switch, and from the SAN switch to the EMC VNX storage array. The FC I/O bandwidth enhancement feature is supported on VCE Systems with EMC VNX5800, EMC VNX7600, and EMC VNX8000.
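The benefit of extra uplinks is easiest to see as arithmetic. The sketch below uses an assumed definition of oversubscription (server-facing FEX bandwidth divided by northbound uplink bandwidth) purely for illustration; it is not a VCE sizing formula:

```python
# Rough illustration: more fabric-interconnect uplinks lower the ratio of
# server-facing bandwidth to northbound bandwidth (assumed definition).
def oversubscription(chassis: int, links_per_chassis: int,
                     uplinks: int, port_gbps: float = 10.0) -> float:
    server_facing = chassis * links_per_chassis * port_gbps  # FEX-to-FI links
    northbound = uplinks * port_gbps                         # FI-to-Nexus links
    return server_facing / northbound

# Example: 8 chassis at 4 links each; doubling the uplinks halves the ratio.
print(oversubscription(8, 4, 8))   # 4.0
print(oversubscription(8, 4, 16))  # 2.0
```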

Blade packs

Cisco UCS blades are sold in packs of two identical blades. The base configuration of each VCE System includes two blade packs. The maximum number of blade packs depends on the type of VCE System. Each blade type must have a minimum of two blade packs as a base configuration, and can then be increased in single blade pack increments.

Each blade pack is added along with the following license packs:

• VMware vSphere ESXi


• Cisco Nexus 1000V Series Switches (Cisco Nexus 1000V Advanced Edition only)

• EMC PowerPath/VE

Note: License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switches, and EMC PowerPath are not available for bare metal blades.

The Vblock System Blade Pack Reference provides a list of supported Cisco UCS blades.

Chassis activation kits

The power supplies and fabric extenders for all chassis are populated and cabled, and all required Twinax cables and transceivers are populated.

As more blades are added and additional chassis are required, chassis activation kits (CAKs) are automatically added to an order. The kit contains software licenses to enable additional fabric interconnect ports.

Only enough port licenses are ordered for the minimum number of chassis needed to contain the blades. Chassis activation kits can be added up front to allow for flexibility in the field, or to initially spread the blades across a larger number of chassis.
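The ordering arithmetic above reduces to a few lines. In this sketch, the pack minimums come from the text, the 8-blades-per-chassis figure is inferred from the maxima table earlier in this topic, and the function names are illustrative only:

```python
import math

BLADES_PER_PACK = 2      # blades are sold in packs of two identical blades
BLADES_PER_CHASSIS = 8   # inferred from the chassis/blade maxima table
MIN_PACKS_PER_TYPE = 2   # each blade type starts at two packs

def packs_needed(blades: int) -> int:
    return max(math.ceil(blades / BLADES_PER_PACK), MIN_PACKS_PER_TYPE)

def min_chassis(blades: int) -> int:
    # Chassis activation kits license only this minimum by default.
    return math.ceil(blades / BLADES_PER_CHASSIS)

print(packs_needed(10), min_chassis(10))  # 5 packs, 2 chassis
```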

Related information

Accessing VCE documentation

VCE bare metal support policy

Since many applications cannot be virtualized due to technical and commercial reasons, VCE Systems support bare metal deployments, such as non-virtualized operating systems and applications.

While it is possible for VCE Systems to support these workloads (with the caveats noted below), due to the nature of bare metal deployments, VCE is able to provide only "reasonable effort" support for systems that comply with the following requirements:

• VCE Systems contain only VCE published, tested, and validated hardware and software components. The VCE Release Certification Matrix provides a list of the certified versions of components for VCE Systems.

• The operating systems used on bare metal deployments for compute and storage components must comply with the published hardware and software compatibility guides from Cisco and EMC.

• For bare metal configurations that include other hypervisor technologies (Hyper-V, KVM, and so on), those hypervisor technologies are not supported by VCE. VCE support is provided only on VMware hypervisors.

VCE reasonable effort support includes VCE acceptance of customer calls, a determination of whether a VCE System is operating correctly, and assistance in problem resolution to the extent possible.


VCE is unable to reproduce problems or provide support for the operating systems and applications installed on bare metal deployments. In addition, VCE does not provide updates to, or test, those operating systems or applications. Contact the OEM support vendor directly for issues and patches related to those operating systems and applications.

Related information

Accessing VCE documentation

Disjoint layer 2 configuration

In the disjoint layer 2 configuration, traffic is split between two or more different networks at the fabric interconnect to support two or more discrete Ethernet clouds. The Cisco UCS servers connect to two different clouds.

Upstream disjoint layer 2 networks allow two or more Ethernet clouds that never connect to be accessed by servers or VMs located in the same Cisco UCS domain.


The following illustration provides an example implementation of disjoint layer 2 networking in a Cisco UCS domain:

Virtual port channels (vPCs) 101 and 102 are production uplinks that connect to the network layer of the VCE Systems. Virtual port channels 105 and 106 are external uplinks that connect to other switches.

If you use Ethernet performance port channels (103 and 104 by default), port channels 101 through 104 are assigned to the same VLANs.
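The defining rule of disjoint layer 2, that the clouds never connect, can be expressed as a check: all uplinks carrying a given VLAN must belong to a single cloud. The sketch below uses an assumed data model (not Cisco UCS Manager code) with the port channel numbers from the example above:

```python
# Assumed data model: which cloud each uplink port channel belongs to.
UPLINK_CLOUD = {101: "production", 102: "production",
                103: "production", 104: "production",  # Ethernet performance
                105: "external", 106: "external"}

def check_disjoint(vlan_uplinks: dict[int, list[int]]) -> None:
    """Assert that each VLAN rides uplinks into exactly one Ethernet cloud."""
    for vlan, pcs in vlan_uplinks.items():
        clouds = {UPLINK_CLOUD[pc] for pc in pcs}
        assert len(clouds) == 1, f"VLAN {vlan} spans disjoint clouds: {clouds}"

# VLAN 100 stays on production uplinks; VLAN 200 stays on external uplinks.
check_disjoint({100: [101, 102, 103, 104], 200: [105, 106]})
```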


Storage layer

Storage overview

The EMC VNX series is a fourth-generation storage platform that delivers industry-leading capabilities. It offers a unique combination of flexible, scalable hardware design and advanced software capabilities that enable it to meet the diverse needs of today's organizations.

EMC VNX series platforms support block storage and unified storage, and are optimized for VMware virtualized applications. They feature flash drives for extendable cache and high performance in the virtual storage pools. Automation features include self-optimized storage tiering and application-centric replication.

Regardless of the storage protocol implemented at startup (block or unified), VCE Systems can include cabinet space, cabling, and power to support the hardware for all of these storage protocols. This arrangement makes it easier to move from block storage to unified storage with minimal hardware changes.

VCE Systems are available with:

• EMC VNX5400

• EMC VNX5600

• EMC VNX5800

• EMC VNX7600

• EMC VNX8000

Note: In all VCE Systems, all EMC VNX components are installed in VCE cabinets in a VCE-specific layout.

EMC VNX series storage arrays

The EMC VNX series storage arrays contain common components across all models.

The EMC VNX series storage arrays use dual storage processors (SPs) connected over 6 Gb/s four-lane serial attached SCSI (SAS). Each storage processor connects to one side of each of the two, four, eight, or sixteen (depending on the VCE System) redundant pairs of four-lane 6 Gb/s SAS buses, providing continuous drive access to hosts in the event of a storage processor or bus fault. Fibre Channel (FC) expansion cards within the storage processors connect to the Cisco MDS switches in the network layer over FC.


The storage layer in the VCE System consists of an EMC VNX storage array. Each EMC VNX model contains some or all of the following components:

• The disk processor enclosure (DPE) houses the storage processors for the EMC VNX5400, EMC VNX5600, EMC VNX5800, and EMC VNX7600. The DPE provides slots for two storage processors, two battery backup units (BBUs), and an integrated 25-slot disk array enclosure (DAE) for 2.5" drives. Each storage processor provides support for up to 5 SLICs (small I/O cards).

• The EMC VNX8000 uses a storage processor enclosure (SPE) and standby power supplies (SPSs). The SPE is a 4U enclosure with slots for two storage processors, each supporting up to 11 SLICs. Each EMC VNX8000 includes two 2U SPSs that power the SPE and the vault DAE. Each SPS contains two Li-ion batteries that require special shipping considerations.

• X-Blades (also known as Data Movers) provide file-level storage capabilities and are housed in Data Mover enclosures (DMEs). Each X-Blade connects to the network switches using 10 G links (either Twinax or 10 G fibre).

• DAEs contain the individual disk drives and are available in the following configurations:

— 2U model that can hold 25 2.5" disks

— 3U model that can hold 15 3.5" disks

EMC VNX5400

The EMC VNX5400 is a DPE-based array with two back-end SAS buses, up to four slots for front-end connectivity, and support for up to 250 drives. It is available in both unified (NAS) and block configurations.

EMC VNX5600

The EMC VNX5600 is a DPE-based array with up to six back-end SAS buses, up to five slots for front-end connectivity, and support for up to 500 drives. It is available in both unified (NAS) and block configurations.

EMC VNX5800

The EMC VNX5800 is a DPE-based array with up to six back-end SAS buses, up to five slots for front-end connectivity, and support for up to 750 drives. It is available in both unified (NAS) and block configurations.

EMC VNX7600

The EMC VNX7600 is a DPE-based array with six back-end SAS buses, up to four slots for front-end connectivity, and support for up to 1000 drives. It is available in both unified (NAS) and block configurations.


EMC VNX8000

The EMC VNX8000 comes in a different form factor from the other EMC VNX models. It is an SPE-based model with up to 16 back-end SAS buses, up to nine slots for front-end connectivity, and support for up to 1500 drives. It is available in both unified (NAS) and block configurations.

Related information

Storage features support

Replication

This section describes how VCE Systems can be upgraded to include EMC RecoverPoint.

For block storage configurations, the VCE System can be upgraded to include EMC RecoverPoint. This replication technology provides continuous data protection and continuous remote replication for on-demand protection and recovery to any point in time. EMC RecoverPoint advanced capabilities include policy-based management, application integration, and bandwidth reduction. EMC RecoverPoint is included in the EMC Local Protection Suite and the EMC Remote Protection Suite.

To implement EMC RecoverPoint within a VCE System, add two or more EMC RecoverPoint Appliances (RPAs) in a cluster to the VCE System. This cluster can accommodate approximately 80 MBps of sustained throughput through each EMC RPA.

To ensure proper sizing and performance of an EMC RPA solution, VCE works with an EMC Technical Consultant to collect information about the data to be replicated, as well as data change rates, data growth rates, network speeds, and other information needed to ensure that all business requirements are met.
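Using the figures above (roughly 80 MBps of sustained throughput per RPA, and a minimum cluster of two appliances), a first-cut appliance count can be sketched as follows. This is back-of-envelope arithmetic only; actual sizing is performed with an EMC Technical Consultant:

```python
import math

MBPS_PER_RPA = 80  # approximate sustained throughput per appliance
MIN_RPAS = 2       # a RecoverPoint cluster starts at two appliances

def rpas_for_throughput(required_mbps: float) -> int:
    return max(MIN_RPAS, math.ceil(required_mbps / MBPS_PER_RPA))

print(rpas_for_throughput(300))  # 4 appliances for ~300 MBps sustained
```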

Scaling up storage resources

You can scale up storage resources in the VCE System.

To scale up storage resources, you can expand block I/O bandwidth between the compute and storage resources, add RAID packs, and add disk array enclosure (DAE) packs. I/O bandwidth and packs can be added when VCE Systems are built and after they are deployed.

I/O bandwidth expansion

You can increase Fibre Channel (FC) bandwidth in the VCE Systems with EMC VNX8000, EMC VNX7600, and EMC VNX5800. An I/O bandwidth expansion adds four FC interfaces per fabric between the fabric interconnects and the Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch with the segregated network architecture, or the Cisco Nexus 5548UP Switch or Cisco Nexus 5596UP Switch with the unified network architecture. The expansion also includes four additional FC ports from the EMC VNX to each SAN fabric. Refer to the appropriate RCM for a list of what is supported on your VCE System.


This option is available for environments that require high-bandwidth, block-only configurations. This configuration requires the use of four storage array ports per storage processor that are normally reserved for unified connectivity of the X-Blades.
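The added fabric bandwidth is simple to tally: four FC interfaces per fabric across two fabrics. The per-link speed below is an assumption for illustration only (8 Gb/s shown); refer to the RCM for the link speeds supported on your system:

```python
# Back-of-envelope arithmetic for the I/O bandwidth expansion above.
def added_fc_bandwidth_gbps(links_per_fabric: int = 4, fabrics: int = 2,
                            link_gbps: float = 8.0) -> float:
    # link_gbps is an assumed example speed, not a statement of what ships.
    return links_per_fabric * fabrics * link_gbps

print(added_fc_bandwidth_gbps())  # 64.0 Gb/s of additional FC bandwidth
```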

RAID packs

Storage capacity can be increased by adding RAID packs. Each pack contains a number of drives of a given type, speed, and capacity. The number of drives in a pack depends upon the RAID level that it supports.

The number and types of RAID packs to include in VCE Systems are based upon the following:

• The number of storage pools that are needed.

• The storage tiers that each pool contains, and the speed and capacity of the drives in each tier.

The following table lists tiers, supported drive types, and supported speeds and capacities.

Note: The speed and capacity of all drives within a given tier in a given pool must be the same.

Tier 1 (solid-state Enterprise Flash drives (EFD)): 100 GB SLC EFD, 200 GB SLC EFD, 100 GB eMLC EFD, 200 GB eMLC EFD, 400 GB eMLC EFD

Tier 2 (serial attached SCSI (SAS)): 300 GB 10K RPM, 600 GB 10K RPM, 900 GB 10K RPM, 300 GB 15K RPM, 600 GB 15K RPM

Tier 3 (nearline SAS): 1 TB 7.2K RPM, 2 TB 7.2K RPM, 3 TB 7.2K RPM
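The note above can be enforced mechanically. This sketch uses an assumed drive data model (ours, not EMC's) to check that every drive within a given tier of a pool shares one speed and capacity:

```python
from collections import defaultdict

def validate_pool(drives: list[dict]) -> None:
    """Assert that all drives in each tier share one speed and capacity."""
    variants = defaultdict(set)
    for d in drives:
        variants[d["tier"]].add((d["speed"], d["capacity"]))
    for tier, combos in variants.items():
        assert len(combos) == 1, f"Tier {tier} mixes drive types: {combos}"

validate_pool([
    {"tier": 2, "speed": "15K RPM", "capacity": "600 GB"},
    {"tier": 2, "speed": "15K RPM", "capacity": "600 GB"},
    {"tier": 3, "speed": "7.2K RPM", "capacity": "2 TB"},
])
```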

• The RAID protection level for the tiers in each pool. The RAID protection level can vary across pools. The following list describes each supported RAID protection level.

RAID 1/0:
• A set of mirrored drives.
• Offers the best overall performance of the three supported RAID protection levels.
• Offers robust protection. Can sustain double-drive failures that are not in the same mirror set.
• Lowest economy of the three supported RAID levels, since usable capacity is only 50% of raw capacity.

RAID 5:
• Block-level striping with a single parity block, where the parity data is distributed across all of the drives in the set.
• Offers the best mix of performance, protection, and economy.
• Has a higher write performance penalty than RAID 1/0, because multiple I/Os are required to perform a single write.
• With single parity, can sustain a single drive failure with no data loss, but is vulnerable to data loss or unrecoverable read errors on a track during a drive rebuild.
• Highest economy of the three supported RAID levels. Usable capacity is 80% of raw capacity or better.

RAID 6:
• Block-level striping with two parity blocks, distributed across all of the drives in the set.
• Offers increased protection, and read performance comparable to RAID 5.
• Has a significant write performance penalty, because multiple I/Os are required to perform a single write.
• Economy is very good. Usable capacity is 75% of raw capacity or better.
• EMC best practice for SATA and NL-SAS drives.
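The usable-capacity figures quoted above follow directly from the ratio of data drives to total drives in each layout. The short calculation below reproduces them for the supported configurations:

```python
# Usable fraction = data drives / total drives for each supported layout.
LAYOUTS = {
    "RAID 1/0 (4+4)": (4, 4),   # (data drives, mirror/parity drives)
    "RAID 5 (4+1)":   (4, 1),
    "RAID 5 (8+1)":   (8, 1),
    "RAID 6 (6+2)":   (6, 2),
    "RAID 6 (14+2)":  (14, 2),
}

for name, (data, protection) in LAYOUTS.items():
    print(f"{name}: {data / (data + protection):.0%} usable")
# RAID 1/0 gives 50%; RAID 5 gives 80% or better; RAID 6 gives 75% or
# better, matching the economy notes in the list above.
```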

There are RAID packs for each RAID protection level and tier type combination. The RAID level dictates the number of drives included in the pack: RAID 5 or RAID 1/0 is used for the performance and extreme performance tiers, and RAID 6 is used for the capacity tier. The following table lists the RAID protection levels and the number of drives per RAID pack for each level:

RAID 1/0: 8 drives (4 data + 4 mirrors)
RAID 5: 5 drives (4 data + 1 parity) or 9 drives (8 data + 1 parity)
RAID 6: 8 drives (6 data + 2 parity), 14 drives (12 data + 2 parity)*, or 16 drives (14 data + 2 parity)**

* file virtual pool only
** block virtual pool only

Disk array enclosure packs

If the number of RAID packs in VCE Systems is expanded, more disk array enclosures (DAEs) might be required. DAEs are added in packs, and the number of DAEs in each pack is equivalent to the number of back-end buses in the EMC VNX array in the VCE System. The following table lists the number of buses in the array and the number of DAEs in the DAE pack for each VCE System:

EMC VNX8000: 8 or 16 buses; 8 or 16 DAEs in the DAE pack
EMC VNX7600: 6 buses; 6 DAEs in the DAE pack
EMC VNX5800: 6 buses; 6 DAEs in the DAE pack
EMC VNX5600: 2 or 6 buses; 2 or 6 DAEs in the DAE pack (base includes the DPE as the first DAE)
EMC VNX5400: 2 buses; 2 DAEs in the DAE pack (base includes the DPE as the first DAE)

There are two types of DAEs:

• 2U 25-slot DAE for 2.5" disks

• 3U 15-slot DAE for 3.5" disks

A DAE pack can contain a mix of DAE sizes, if the total number of DAEs in the pack equals the number of buses. To ensure that loads are balanced, physical disks are spread across the DAEs in accordance with best practice guidelines.
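The pack rule can be checked mechanically: the 2U and 3U enclosure counts must sum to a valid bus count for the array. The model-to-bus mapping below is taken from the table above; the function itself is an illustrative sketch, not VCE tooling:

```python
# Valid back-end bus counts per array model, from the DAE pack table above.
BUSES = {"VNX8000": (8, 16), "VNX7600": (6,), "VNX5800": (6,),
         "VNX5600": (2, 6), "VNX5400": (2,)}

def valid_dae_pack(model: str, daes_2u: int, daes_3u: int) -> bool:
    """A pack may mix 2U and 3U DAEs if the total matches a bus count."""
    return (daes_2u + daes_3u) in BUSES[model]

print(valid_dae_pack("VNX5800", 4, 2))  # True: 6 DAEs matches 6 buses
print(valid_dae_pack("VNX5800", 3, 2))  # False: 5 DAEs does not
```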

Storage features support

This topic presents additional storage features available on the VCE Systems.

Support for array hardware or capabilities

The following table provides an overview of the EMC VNX operating environment support for new array hardware or capabilities:

Feature: NFS Virtual X-Blades - VDM (Multi-LDAP support)
Description: Provides security and segregation for clients in service provider environments.

Feature: Data-in-place block compression
Description: When compression is enabled, thick LUNs are converted to thin LUNs and compressed in place. RAID group LUNs are migrated into a pool during compression. There is no need for additional space to start compression. Decompression temporarily requires additional space, since it is a migration and not an in-place decompression.


Feature: Compression for file / display compression capacity savings
Description: Available file compression types:
• Fast compression (default)
• Deep compression (up to 30% more space efficient, but slower and with higher CPU usage)
Displays the capacity savings due to compression to allow a cost/benefit comparison (space savings versus performance impact).

Feature: EMC VNX snapshots
Description: EMC VNX snapshots are only for storage pools, not for RAID groups. Storage pools can use EMC SnapView snapshots and EMC VNX snapshots at the same time.
Note: This feature is optional. VCE relies on guidance from EMC best practices for different use cases of EMC SnapView snapshots versus EMC VNX snapshots.

Hardware features

VCE supports the following hardware features:

• Dual 10 GE Optical/Active Twinax IP IO/SLIC for X-Blades

• 2.5 inch vault drives

• 2.5 inch DAEs and drive form factors

• 3.5 inch DAEs and drive form factors

File deduplication

File deduplication is supported, but is not enabled by default. Enabling this feature requires knowledge of capacity and storage requirements.

Block compression

Block compression is supported, but is not enabled by default. Enabling this feature requires knowledge of capacity and storage requirements.

External NFS and CIFS access

The VCE Systems can present CIFS and NFS shares to external clients provided that the following guidelines are followed (a small validation sketch appears after the list):

• VCE Systems shares cannot be mounted internally by VCE Systems hosts and externally to the VCE Systems at the same time. In a configuration with two X-Blades, mixed internal and external access is not supported. The following configurations are supported:

— External NFS and external CIFS only

— Internal NFS and internal CIFS only


• In a configuration with more than two X-Blades, mixed internal and external access is supported.

• In a configuration with more than two X-Blades, external NFS and CIFS access can run on one or more X-Blades that are physically separate from the X-Blades serving VMFS data stores to the VCE System compute layer.
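These guidelines reduce to a single rule on the X-Blade count. The following minimal sketch (illustrative only; it encodes just the guideline above, not EMC's full support matrix):

```python
# Illustrative only: mixed internal and external NFS/CIFS access requires
# more than two X-Blades; otherwise access must be all-internal or
# all-external.
def file_access_supported(x_blades: int, internal: bool, external: bool) -> bool:
    if internal and external:
        return x_blades > 2
    return True  # purely internal or purely external is always supported

assert not file_access_supported(2, internal=True, external=True)
assert file_access_supported(4, internal=True, external=True)
```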

Snapshots

EMC VNX snapshots are only for storage pools, not for RAID groups. Storage pools can use EMC SnapView snapshots and EMC VNX snapshots at the same time.

Note: EMC VNX snapshot is an optional feature. VCE relies on guidance from EMC best practices for different use cases of EMC SnapView snapshots versus EMC VNX snapshots.

Replicas

For VCE Systems NAS configurations, EMC VNX Replicator is supported. This software can create local clones (full copies) and replicate file systems asynchronously across IP networks. EMC VNX Replicator is included in the EMC VNX Remote Protection Suite.


Network layer

Network overview

The network components are switches that provide connectivity to different components in the VCE System.

The Cisco Nexus Series Switches in the network layer provide 10 or 40 GbE IP connectivity between the VCE System and the external network. In unified storage architecture, the switches also connect the fabric interconnects in the compute layer to the X-Blades in the storage layer.

In the segregated architecture, the Cisco MDS 9000 series switches in the network layer provide Fibre Channel (FC) links between the Cisco fabric interconnects and the EMC VNX array. These FC connections provide block level devices to blades in the compute layer. In unified network architecture, there are no Cisco MDS series storage switches. FC connectivity is provided by the Cisco Nexus 5548UP Switches or Cisco Nexus 5596UP Switches.

Ports are reserved or identified for special services such as backup, replication, or aggregation uplink connectivity.

The VCE System contains two Cisco Nexus 3172TQ or Cisco Nexus 3048 Switches to provide management network connectivity to the different components of the VCE System. Refer to the appropriate RCM for a list of what is supported on your VCE System. These connections include the EMC VNX service processors, Cisco UCS fabric interconnects, Cisco Nexus 5500UP switches or Cisco Nexus 9396PX switches, and power output unit (POU) management interfaces.

IP network components

VCE Systems use the following IP network components.

VCE Systems use Cisco UCS 6200 series fabric interconnects. VCE Systems with EMC VNX5400 use the Cisco UCS 6248UP fabric interconnects. All other VCE Systems use the Cisco UCS 6248UP fabric interconnects or the Cisco UCS 6296UP fabric interconnects.

VCE Systems include two Cisco Nexus 5548UP switches, Cisco Nexus 5596UP switches, or Cisco Nexus 9396PX switches to provide 10 or 40 GbE connectivity:

• Between the VCE Systems internal components

• To the site network

• To the second generation Advanced Management Platform (AMP-2) through redundant connections between AMP-2 and the Cisco Nexus 5548UP switches, Cisco Nexus 5596UP switches, or Cisco Nexus 9396PX switches


To support the Ethernet and SAN requirements in the traditional, segregated network architecture, two Cisco Nexus 5548UP switches or Cisco Nexus 9396PX switches provide Ethernet connectivity, and a pair of Cisco MDS switches provide Fibre Channel (FC) connectivity.

The Cisco Nexus 5548UP Switch is available as an option for all segregated network VCE Systems. It is also an option for unified network VCE Systems with EMC VNX5400 and EMC VNX5600.

Cisco Nexus 5500 series switches

The two Cisco Nexus 5500 series switches support low latency, line-rate, 10 Gb Ethernet and FC over Ethernet (FCoE) connectivity for up to 96 ports. Unified port expansion modules are available and provide an extra 16 ports of 10 GbE or FC connectivity. The FC ports are licensed in packs of eight on an on-demand basis.

The Cisco Nexus 5548UP switches have 32 integrated, low-latency, unified ports. Each port provides line-rate, 10 Gb Ethernet or 8 Gbps FC connectivity. The Cisco Nexus 5548UP switches have one expansion slot that can be populated with a 16-port unified port expansion module. The Cisco Nexus 5548UP Switch is the only network switch supported for data connectivity in VCE Systems (5400).

The Cisco Nexus 5596UP switches have 48 integrated, low-latency, unified ports. Each port provides line-rate 10 Gb Ethernet or 8 Gbps FC connectivity. The Cisco Nexus 5596UP switches have three expansion slots that can be populated with 16-port unified port expansion modules. The Cisco Nexus 5596UP Switch is available as an option for both network topologies for all VCE Systems except VCE Systems (5400).

Cisco Nexus 9396PX Switch

The Cisco Nexus 9396PX Switch supports both 10 Gbps SFP+ ports and 40 Gbps QSFP+ ports. The Cisco Nexus 9396PX Switch is a two rack unit (2RU) appliance with all ports licensed and available for use. There are no expansion modules available for the Cisco Nexus 9396PX Switch.

The Cisco Nexus 9396PX Switch provides 48 integrated, low-latency SFP+ ports. Each port provides line-rate 1/10 Gbps Ethernet. There are also 12 QSFP+ ports that provide line-rate 40 Gbps Ethernet.

Related information

Management hardware components

Management software components

Port utilization

This section describes the switch port utilization for the Cisco Nexus 5548UP Switch and Cisco Nexus 5596UP Switch in segregated networking and unified networking configurations, as well as for the Cisco Nexus 9396PX Switch in a segregated networking configuration.


Cisco Nexus 5548UP Switch - segregated networking

This section describes port utilization for a Cisco Nexus 5548UP Switch segregated networking configuration.

The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1G or 10G connectivity for LAN traffic.

The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module) with segregated networking:

Feature | Used ports | Port speeds | Media
Uplinks from fabric interconnect (FI) | 8* | 10G | Twinax
Uplinks to customer core | 8** | Up to 10G | SFP+
Uplinks to other Cisco Nexus 5000 Series Switches | 2 | 10G | Twinax
AMP-2 ESX management | 3 | 10G | SFP+

*VCE Systems with VNX5400 only support four links between the Cisco UCS FIs and Cisco Nexus 5548UP switches.

**VCE Systems with VNX5400 only support four links between the Cisco Nexus 5548UP Switch and the customer core network.

The remaining ports in the base Cisco Nexus 5548UP Switch (no module) provide support for the following additional connectivity option:

Feature | Available ports | Port speeds | Media
Customer IP backup | 3 | 1G or 10G | SFP+

If an optional 16 unified port module is added to the Cisco Nexus 5548UP Switch, there are 28 additional ports (beyond the core connectivity requirements) available to provide additional feature connectivity. Actual feature availability and port requirements are driven by the model that is selected.

The following table shows the additional connectivity for the Cisco Nexus 5548UP Switch with a 16UP module:

Feature | Available ports | Port speeds | Media
Customer IP backup | 4 | 1G or 10G | SFP+
Uplinks from Cisco UCS FI for Ethernet bandwidth (BW) enhancement | 8 | 10G | Twinax
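A quick way to reason about these budgets is to treat the core connections as fixed and subtract them from the switch's port count. The sketch below is illustrative only, not a VCE sizing tool: the counts restate the tables above, and real configurations also reserve ports that this simple arithmetic ignores.

```python
# Illustrative only: compare planned port usage against a Cisco Nexus
# 5548UP port budget (32 base unified ports; the optional expansion
# module adds 16). Core counts are from the segregated-networking table.
CORE_SEGREGATED = {
    "uplinks from fabric interconnects": 8,
    "uplinks to customer core": 8,
    "uplinks to other Nexus 5000 switches": 2,
    "AMP-2 ESX management": 3,
}

def ports_remaining(features=None, base_ports=32, expansion_ports=0):
    """Upper bound on free ports; real designs also hold ports in reserve."""
    used = sum(CORE_SEGREGATED.values()) + sum((features or {}).values())
    remaining = base_ports + expansion_ports - used
    if remaining < 0:
        raise ValueError("planned features exceed the switch port count")
    return remaining

print(ports_remaining({"customer IP backup": 3}))  # -> 8 ports left
```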

Cisco Nexus 5596UP Switch - segregated networking

This section describes port utilization for a Cisco Nexus 5596UP Switch segregated networking configuration.


The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1G or 10G connectivity for LAN traffic.

The following table shows core connectivity for the Cisco Nexus 5596UP Switch (no module) with segregated networking:

Feature | Used ports | Port speeds | Media
Uplinks from Cisco UCS FI | 8 | 10G | Twinax
Uplinks to customer core | 8 | Up to 10G | SFP+
Uplinks to other Cisco Nexus 5000 Series Switches | 2 | 10G | Twinax
AMP-2 ESX management | 3 | 10G | SFP+

The remaining ports in the base Cisco Nexus 5596UP Switch (no module) provide support for the following additional connectivity option:

Feature | Available ports | Port speeds | Media
Customer IP backup | 3 | 1G or 10G | SFP+

If an optional 16 unified port module is added to the Cisco Nexus 5596UP Switch, additional ports (beyond the core connectivity requirements) are available to provide additional feature connectivity. Actual feature availability and port requirements are driven by the model that is selected.

The following table shows the additional connectivity for the Cisco Nexus 5596UP Switch with one 16UP module:

Note: The Cisco Nexus 5596UP Switch with two or three 16UP modules is not supported with segregated networking.

Feature | Available ports | Port speeds | Media
Customer IP backup | 4 | 1G or 10G | SFP+
Uplinks from Cisco UCS FIs for Ethernet BW enhancement | 8 | 10G | Twinax

Cisco Nexus 5548UP Switch – unified networking

This section describes port utilization for a Cisco Nexus 5548UP Switch unified networking configuration.

The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1G or 10G connectivity for LAN traffic or 2/4/8 Gbps FC traffic.


The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module) with unified networking for VCE Systems with EMC VNX5400 only.

Feature | Used ports | Port speeds | Media
Uplinks from Cisco UCS FI | 4 | 10G | Twinax
Uplinks to customer core | 4 | Up to 10G | SFP+
Uplinks to other Cisco Nexus 5K | 2 | 10G | Twinax
AMP-2 ESX management | 3 | 10G | SFP+
FC uplinks from Cisco UCS FI | 4 | 8G | SFP+
FC links to EMC VNX array | 6 | 8G | SFP+

The following table shows the core connectivity for the Cisco Nexus 5548UP Switch with unified networking for VCE Systems with EMC VNX5600:

Feature | Used ports | Port speeds | Media
Uplinks from Cisco UCS FI | 8 | 10G | Twinax
Uplinks to customer core | 8 | Up to 10G | SFP+
Uplinks to other Cisco Nexus 5K | 2 | 10G | Twinax
AMP-2 ESX management | 3 | 10G | SFP+
FC uplinks from Cisco UCS FI | 4 | 8G | SFP+
FC links to EMC VNX array | 6 | 8G | SFP+

The remaining ports in the base Cisco Nexus 5548UP Switch (no module) provide support for the following additional connectivity options for VCE Systems with EMC VNX5400 only.

Feature | Available ports | Port speeds | Media
X-Blade connectivity | 2 | 10G | EMC Active Twinax
X-Blade NDMP connectivity | 2 | 8G | SFP+
Customer IP backup | 3 | 1G or 10G | SFP+

The remaining ports in the base Cisco Nexus 5548UP Switch provide support for the following additional connectivity options for the other VCE Systems:

Feature | Available ports | Port speeds | Media
EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair) | 2 | 1G | GE_T SFP+
X-Blade connectivity | 2 | 10G | EMC Active Twinax
Customer IP backup | 2 | 1G or 10G | SFP+


If an optional 16 unified port module is added to the Cisco Nexus 5548UP Switch, additional ports (beyond the core connectivity requirements) are available to provide additional feature connectivity. Actual feature availability and port requirements are driven by the model that is selected.

The following table shows the additional connectivity for the Cisco Nexus 5548UP Switch with one 16UP module:

Feature | Available ports | Port speeds | Media
EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair) | 4 | 1G | GE_T SFP+
X-Blade connectivity | 8 | 10G | EMC Active Twinax
Customer IP backup | 4 | 1G or 10G | SFP+
Uplinks from Cisco UCS FIs for Ethernet BW enhancement | 8 | 10G | Twinax

Cisco Nexus 5596UP Switch - unified networking

This section describes port utilization for a Cisco Nexus 5596UP Switch unified networking configuration.

The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1/10G connectivity for LAN traffic or 2/4/8 Gbps Fibre Channel (FC) traffic.

The following table shows the core connectivity for the Cisco Nexus 5596UP Switch (no module):

Feature | Used ports | Port speeds | Media
Uplinks from Cisco UCS FI | 8 | 10G | Twinax
Uplinks to customer core | 8 | Up to 10G | SFP+
Uplinks to other Cisco Nexus 5K | 2 | 10G | Twinax
AMP-2 ESX management | 3 | 10G | SFP+
FC uplinks from Cisco UCS FI | 4 | 8G | SFP+
FC links to EMC VNX array | 6 | 8G | SFP+

The remaining ports in the base Cisco Nexus 5596UP Switch (no module) provide support for the following additional connectivity options:

Feature | Minimum ports required for feature | Port speeds | Media
X-Blade connectivity | 4 | 10G | EMC Active Twinax
X-Blade NDMP connectivity | 2 | 8G | SFP+
IP backup solutions | 4 | 1 or 10G | SFP+
EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair) | 2 | 1G | GE_T SFP+
EMC RecoverPoint SAN links (two per EMC RecoverPoint Appliance) | 4 | 8G | SFP+

Up to three additional 16 unified port modules can be added to the Cisco Nexus 5596UP Switch (depending on the selected VCE System). Each module has 16 ports to enable additional feature connectivity. Actual feature availability and port requirements are driven by the model that is selected.

The following table shows the connectivity options for the Cisco Nexus 5596UP Switch for slots 2-4:

Feature | Ports available for feature | Port speeds | Media | Default module
Uplinks from Cisco UCS FI for Ethernet BW enhancement | 8 | 10G | Twinax | 1
EMC VPLEX SAN connections (4 per engine) | 8 | 8G | SFP+ | 1
X-Blade connectivity | 12 | 10G | EMC Active Twinax | 3
X-Blade NDMP connectivity | 6 | 8G | SFP+ | 3, 4
EMC RecoverPoint WAN links (1 per EMC RecoverPoint Appliance pair) | 2 | 1G | GE_T SFP+ | 4
EMC RecoverPoint SAN links (2 per EMC RecoverPoint Appliance) | 4 | 8G | SFP+ | 4
FC links from Cisco UCS fabric interconnect for FC BW enhancement | 4 | 8G | SFP+ | 4
FC links from EMC VNX array for FC BW enhancement | 4 | 8G | SFP+ | 4
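Since each optional feature lands on a default expansion module, slot planning is essentially a grouping exercise. A small illustrative sketch (the feature names abbreviate the rows above; this is a reading aid, not VCE planning tooling):

```python
from collections import defaultdict

# Illustrative only: default-module placements from the table above.
# A tuple lists every module a feature may occupy.
DEFAULT_MODULE = {
    "Cisco UCS FI Ethernet BW enhancement uplinks": (1,),
    "EMC VPLEX SAN connections": (1,),
    "X-Blade connectivity": (3,),
    "X-Blade NDMP connectivity": (3, 4),
    "EMC RecoverPoint WAN links": (4,),
    "EMC RecoverPoint SAN links": (4,),
    "Cisco UCS FI FC BW enhancement links": (4,),
    "EMC VNX FC BW enhancement links": (4,),
}

by_module = defaultdict(list)
for feature, modules in DEFAULT_MODULE.items():
    for module in modules:
        by_module[module].append(feature)

for module in sorted(by_module):
    print(f"module {module}: {', '.join(sorted(by_module[module]))}")
```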

Cisco Nexus 9396PX Switch - segregated networking

This section describes port utilization for a Cisco Nexus 9396PX Switch segregated networking configuration.

The base Cisco Nexus 9396PX Switch provides 48 SFP+ ports used for 1G or 10G connectivity and 12 40G QSFP+ ports for LAN traffic.


The following table shows core connectivity for the Cisco Nexus 9396PX Switch with segregated networking:

Feature | Used ports | Port speeds | Media
Uplinks from fabric interconnect (FI) | 8* | 10G | Twinax
Uplinks to customer core*** | 8 (10G)** / 2 (40G) | Up to 40G | SFP+/QSFP+
VPC peer links | 2 | 40G | Twinax
AMP-2 ESX management | 3 | 10G | SFP+

*VCE Systems with EMC VNX5400 only support four links between the Cisco UCS FIs and Cisco Nexus 9396PX switches.

**VCE Systems with EMC VNX5400 only support four links between the Cisco Nexus 9396PX Switch and the customer core network.

***VCE Systems and the Cisco Nexus 9396PX support 40G or 10G SFP+ uplinks to the customer core.

The remaining ports in the Cisco Nexus 9396PX Switch provide support for a combination of the following additional connectivity options:

Feature | Available ports | Port speeds | Media
EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair) | 4 | 1G | GE_T SFP+
Customer IP backup | 8 | 1G or 10G | SFP+
X-Blade connectivity | 8 | 10G | EMC Active Twinax
Uplinks from Cisco UCS FIs for Ethernet BW enhancement* | 8 | 10G | Twinax

*Not supported on VCE Systems with EMC VNX5400

Storage switching components

The storage switching components consist of redundant Cisco SAN fabric switches.

In a segregated networking model, there are two Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches. Refer to the appropriate RCM for a list of what is supported on your VCE System. In a unified networking model, Fibre Channel (FC) based features are provided by the two Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches that are also used for LAN traffic.


In VCE Systems, these switches provide:

• FC connectivity between the compute layer components and the storage layer components

• Connectivity for backup, business continuity (EMC RecoverPoint Appliance), and storage federation requirements when configured.

Note: Inter-Switch Links (ISL) to the existing SAN are not permitted.

The Cisco MDS 9148 Multilayer Fabric Switch provides from 16 to 48 line-rate ports (in 8-port increments) for non-blocking 8 Gbps throughput. The port groups are enabled on an as-needed basis.

The Cisco MDS 9148S Multilayer Fabric Switch provides from 12 to 48 line-rate ports (in 12-port increments) for non-blocking 16 Gbps throughput. The port groups are enabled on an as-needed basis.
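The two license models can be compared with a small rounding calculation. The sketch below is illustrative only (it restates the increments above; it is not Cisco licensing tooling):

```python
import math

# Illustrative only: round a required port count up to each model's
# on-demand licensing increment, within its minimum and maximum.
LICENSING = {
    "MDS 9148":  {"min": 16, "max": 48, "step": 8},   # 8 Gbps line rate
    "MDS 9148S": {"min": 12, "max": 48, "step": 12},  # 16 Gbps line rate
}

def licensed_ports(model: str, needed: int) -> int:
    rule = LICENSING[model]
    if needed > rule["max"]:
        raise ValueError(f"{model} tops out at {rule['max']} ports")
    return max(rule["min"], math.ceil(needed / rule["step"]) * rule["step"])

print(licensed_ports("MDS 9148", 18))   # -> 24 (three 8-port groups)
print(licensed_ports("MDS 9148S", 18))  # -> 24 (two 12-port groups)
```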

The Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches provide a number of line-rate ports for non-blocking 8 Gbps throughput. Expansion modules can be added to the Cisco Nexus 5596UP Switch to provide 16 additional ports operating at line rate.

The following tables define the port utilization for the SAN components when using a Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch. Refer to the appropriate RCM for a list of what is supported on your VCE System.

Feature | Used ports | Port speeds | Media
FC uplinks from Cisco UCS FI | 4 | 8G | SFP+
FC links to EMC VNX array | 6 | 8G or 16G** | SFP+

**16 Gb Fibre Channel SLICs are available on the EMC VNX storage arrays.

Feature | Available ports
Backup | 2
FC links from Cisco UCS fabric interconnect (FI) for FC bandwidth (BW) enhancement | 4
FC links from EMC VNX storage array for FC BW enhancement | 4
FC links to EMC VNX storage array dedicated for replication | 2
EMC RecoverPoint SAN links (two per EMC RecoverPoint Appliance) | 8
SAN aggregation | 2
EMC VPLEX SAN connections (four per engine) | 8
EMC X-Blade network data management protocol (NDMP) connectivity | 2


Virtualization layer

Virtualization components

VMware vSphere is the virtualization platform that provides the foundation for the private cloud. The core VMware vSphere components are VMware vSphere ESXi and VMware vCenter Server for management. Depending on the version that you are running, VMware vSphere 5.x includes a Single Sign-On (SSO) component as a standalone Windows server or as an embedded service on the vCenter server. VMware vSphere 6.0 includes a pair of Platform Services Controller Linux appliances to provide the Single Sign-On (SSO) service.

The hypervisors are deployed in a cluster configuration. The cluster allows dynamic allocation of resources, such as CPU, memory, and storage. The cluster also provides workload mobility and flexibility with the use of VMware vMotion and Storage vMotion technology.

VMware vSphere Hypervisor ESXi

This topic describes the VMware vSphere Hypervisor ESXi that runs on the second generation of the Advanced Management Platform (AMP-2) and in a VCE System utilizing VMware vSphere Enterprise Plus.

This lightweight hypervisor requires very little space to run (less than 6 GB of storage required to install) and has minimal management overhead.

VMware vSphere ESXi does not contain a console operating system. The VMware vSphere Hypervisor ESXi boots from Cisco FlexFlash (SD card) on AMP-2. For the compute blades, ESXi boots from the SAN through an independent Fibre Channel (FC) LUN presented from the EMC VNX storage array. The FC LUN also contains the hypervisor's locker for persistent storage of logs and other diagnostic files to provide stateless computing within VCE Systems. The stateless hypervisor is not supported.

Cluster configuration

VMware vSphere ESXi hosts and their resources are pooled together into clusters. These clusters contain the CPU, memory, network, and storage resources available for allocation to virtual machines (VMs). Clusters can scale up to a maximum of 32 hosts for VMware vSphere 5.1/5.5 and 64 hosts for VMware vSphere 6.0. Clusters can support thousands of VMs.

The clusters can also support a variety of Cisco UCS blades running inside the same cluster.

Note: Some advanced CPU functionality might be unavailable if more than one blade model is running in a given cluster.

Data stores

VCE Systems support a mixture of data store types: block level storage using VMFS or file level storage using NFS.


The maximum size per VMFS5 volume is 64 TB (50 TB VMFS3 @ 1 MB). Beginning with VMware vSphere 5.5, the maximum VMDK file size is 62 TB. Each host/cluster can support a maximum of 255 volumes.

VCE optimizes the advanced settings for VMware vSphere ESXi hosts that are deployed in VCE Systems to maximize the throughput and scalability of NFS data stores. VCE Systems support a maximum of 256 NFS data stores per host.
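These maximums lend themselves to a simple design check. The following sketch is illustrative only; the limits are the figures cited in this document, not values queried from a live vCenter API:

```python
# Illustrative only: sanity-check a design against the vSphere maximums
# cited above.
LIMITS = {
    "5.1": {"hosts_per_cluster": 32, "volumes_per_host": 255,
            "nfs_datastores_per_host": 256},
    "5.5": {"hosts_per_cluster": 32, "volumes_per_host": 255,
            "nfs_datastores_per_host": 256},
    "6.0": {"hosts_per_cluster": 64, "volumes_per_host": 255,
            "nfs_datastores_per_host": 256},
}

def check(version, hosts, volumes, nfs_datastores):
    lim = LIMITS[version]
    problems = []
    if hosts > lim["hosts_per_cluster"]:
        problems.append("too many hosts in the cluster")
    if volumes > lim["volumes_per_host"]:
        problems.append("too many VMFS volumes per host")
    if nfs_datastores > lim["nfs_datastores_per_host"]:
        problems.append("too many NFS data stores per host")
    return problems or ["ok"]

print(check("6.0", hosts=48, volumes=200, nfs_datastores=256))  # ['ok']
```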

Virtual networks

Virtual networking in the Advanced Management Platform (AMP-2) uses standard virtual switches. Virtual networking in VCE Systems is managed by the Cisco Nexus 1000V Series Switch. The Cisco Nexus 1000V Series Switch ensures consistent, policy-based network capabilities to all servers in the data center by allowing policies to move with a VM during live migration. This provides persistent network, security, and storage compliance.

Alternatively, virtual networking in VCE Systems is managed by a VMware vCenter Virtual Distributed Switch (version 5.5 or higher) with comparable features to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of both a VMware Standard Switch (VSS) and a VMware vSphere Distributed Switch (VDS) and uses a minimum of four uplinks presented to the hypervisor.

The implementations of the Cisco Nexus 1000V Series Switch for VMware vSphere 5.1/5.5 and VMware VDS for VMware vSphere 5.5 use intelligent network Class of Service (CoS) marking and Quality of Service (QoS) policies to appropriately shape network traffic according to workload type and priority. With VMware vSphere 6.0, QoS is set to Default (Trust Host). The vNICs are equally distributed across all available physical adapter ports to ensure redundancy and maximum bandwidth where appropriate. This provides general consistency and balance across all Cisco UCS blade models, regardless of the Cisco UCS Virtual Interface Card (VIC) hardware. Thus, VMware vSphere ESXi has a predictable uplink interface count. All applicable VLANs, native VLANs, MTU settings, and QoS policies are assigned to the virtual network interface cards (vNICs) to ensure consistency in case the uplinks need to be migrated to the VMware vSphere Distributed Switch (VDS) after manufacturing.
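The equal distribution of vNICs can be pictured as a simple round-robin over the physical adapter ports. The sketch below is purely illustrative: the vNIC and vmnic names are hypothetical placeholders, and the real mapping is fixed by the VCE build process rather than computed this way.

```python
# Illustrative only: round-robin assignment of vNICs to uplinks so each
# physical adapter port carries a similar number of vNICs.
def distribute(vnics, uplinks):
    return {vnic: uplinks[i % len(uplinks)] for i, vnic in enumerate(vnics)}

vnics = ["vnic0-mgmt", "vnic1-vmotion", "vnic2-nfs", "vnic3-vm-data"]
uplinks = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]  # hypothetical names
for vnic, uplink in distribute(vnics, uplinks).items():
    print(f"{vnic} -> {uplink}")
```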

Related information

Management hardware components

Management software components

VMware vCenter Server

This topic describes VMware vCenter Server, which is a central management point for the hypervisors and VMs.

VMware vCenter Server is installed on a 64-bit Windows Server. VMware Update Manager is also installed on a 64-bit Windows Server and runs as a service to assist with host patch management.


The second generation of the Advanced Management Platform with redundant physical servers (AMP-2RP) and the VCE System each have a unified VMware vCenter Server Appliance instance. Each of these systems resides in the AMP-2RP.

VMware vCenter Server provides the following functionality:

• Cloning of VMs

• Creating templates

• VMware vMotion and VMware Storage vMotion

• Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere high-availability clusters

VMware vCenter Server provides monitoring and alerting capabilities for hosts and VMs. VCE System administrators can create and apply the following alarms to all managed objects in VMware vCenter Server:

• Data center, cluster, and host health, inventory, and performance

• Data store health and capacity

• VM usage, performance, and health

• Virtual network usage and health

Databases

The backend database that supports VMware vCenter Server and VMware Update Manager (VUM) is a remote Microsoft SQL Server 2008 instance (vSphere 5.1) or Microsoft SQL Server 2012 instance (vSphere 5.5/6.0). The SQL Server service requires a dedicated service account.

Authentication

VCE Systems support the VMware Single Sign-On (SSO) Service, which is capable of integrating multiple identity sources including Active Directory, OpenLDAP, and local accounts for authentication. VMware SSO is available in VMware vSphere 5.1 and higher. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and Update Manager run as separate Windows services, which can be configured to use a dedicated service account depending on the security and directory services requirements.

VCE supported features

VCE supports the following VMware vCenter Server features:

• VMware Single Sign-On (SSO) Service (version 5.1 and higher)

• VMware vSphere Web Client (used with VCE Vision™ Intelligent Operations)

• VMware vSphere Distributed Switch (VDS)


• VMware vSphere High Availability

• VMware DRS

• VMware Fault Tolerance

• VMware vMotion

• VMware Storage vMotion

— Layer 3 capability available for compute resources (version 6.0 and higher)

• Raw Device Mappings

• Resource Pools

• Storage DRS (capacity only)

• Storage driven profiles (user-defined only)

• Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)

• VMware Syslog Service

• VMware Core Dump Collector

• VMware vCenter Web Client


Management

Management components overview

This topic describes the second generation of the Advanced Management Platform (AMP-2) components.

AMP-2 provides a single management point for VCE Systems and provides the ability to:

• Run the core and VCE Optional Management Workloads

• Monitor and manage VCE System health, performance, and capacity

• Provide network and fault isolation for management

• Eliminate resource overhead on VCE Systems

The Core Management Workload is the minimum required set of management software to install, operate, and support a VCE System. This includes all hypervisor management, element managers, virtual networking components (Cisco Nexus 1000V or VMware vSphere Distributed Switch (VDS)), and VCE Vision™ Intelligent Operations software.

The VCE Optional Management Workload consists of non-Core Management Workloads that are directly supported and installed by VCE and whose primary purpose is to manage components within a VCE System. The list includes, but is not limited to, data protection, security, or storage management tools such as EMC Unisphere for EMC RecoverPoint or EMC VPLEX, Avamar Administrator, EMC InsightIQ for Isilon, and VMware vCNS appliances (vShield Edge/Manager).

Related information

Connectivity overview

Unified network architecture

Management hardware components

This topic describes the second generation of the Advanced Management Platform (AMP-2) hardware.

AMP-2 is available with one to three physical servers. All options use their own resources to run management workloads without consuming VCE System resources:

AMP-2 option | Physical servers | Description
AMP-2P | One Cisco UCS C220 server | Default configuration for VCE Systems that use a dedicated Cisco UCS C220 Server to run management workload applications.
AMP-2RP | Two Cisco UCS C220 servers | Adds a second Cisco UCS C220 Server to support application and hardware redundancy.
AMP-2HA Baseline | Two Cisco UCS C220 servers | Implements VMware vSphere HA/DRS with shared storage provided by EMC VNXe3200 storage.
AMP-2HA Performance | Three Cisco UCS C220 servers | Adds a third Cisco UCS C220 Server and additional storage for EMC FAST VP.

Management software components

This topic describes the software that is delivered pre-configured with the second generation of the Advanced Management Platform (AMP-2).

AMP-2 is delivered pre-configured with the following software components, which are dependent on the selected VCE Release Certification Matrix:

• Microsoft Windows Server 2008 R2 SP1 Standard x64

• Microsoft Windows Server 2012 R2 Standard x64

• VMware vSphere Enterprise Plus

• VMware vSphere Hypervisor ESXi

• VMware Single Sign-On (SSO) Service

• VMware vSphere Web Client Service

• VMware vSphere Inventory Service

• VMware vCenter Server

• VMware vCenter Database using Microsoft SQL Server Standard Edition

• VMware vCenter Update Manager

• VMware vSphere client

• VMware vSphere Syslog Service (optional)

• VMware vSphere Core Dump Service (optional)

• VMware vCenter Server Appliance (AMP-2RP) - a second instance of VMware vCenter Server is required to manage the replication instance separate from the production VMware vCenter Server

• VMware vSphere Replication Appliance (AMP-2RP)

• VMware vSphere Distributed Switch (VDS) or Cisco Nexus 1000V virtual switch (VSM)


• EMC PowerPath/VE Electronic License Management Server (ELMS)

• EMC Secure Remote Support (ESRS)

• Array management modules, including but not limited to, EMC Unisphere Client, EMC Unisphere Service Manager, EMC VNX Initialization Utility, EMC VNX Startup Tool, EMC SMI-S Provider, EMC PowerPath Viewer

• Cisco Prime Data Center Network Manager and Device Manager

• (Optional) EMC RecoverPoint management software that includes EMC RecoverPoint Management Application and EMC RecoverPoint Deployment Manager

Management network connectivity

This topic provides the second generation of the Advanced Management Platform network connectivity and server assignment illustrations.


AMP-2HA network connectivity

The following illustration provides an overview of the network connectivity for the AMP-2HA:


AMP-2HA server assignments

The following illustration provides an overview of the VM server assignment for AMP-2HA:

VCE Systems that use VMware vSphere Distributed Switch (VDS) do not include Cisco Nexus 1000V VSM VMs.

The Performance option of AMP-2HA leverages the DRS functionality of VMware vCenter to optimize resource usage (CPU/memory) so that VM assignment to a VMware vSphere ESXi host is managed automatically.


AMP-2P server assignments

The following illustration provides an overview of the VM server assignment for AMP-2P:


AMP-2RP server assignments

The following illustration provides an overview of the VM server assignment for AMP-2RP:

VCE Systems that use VMware VDS do not include Cisco Nexus 1000V VSM VMs.


Configuration descriptions

VCE Systems with EMC VNX8000

VCE Systems with EMC VNX8000 support various array types and features, disk array enclosure and SLIC configurations, and compute and connectivity for fabric interconnects.

Array options

VCE Systems (8000) are available as block only or unified storage. Unified storage VCE Systems (8000) support up to eight X-Blades and ship with two X-Blades and two control stations. Each X-Blade provides four 10G front-end network connections. An additional data mover enclosure (DME) supports the connection of two additional X-Blades with the same configuration as the base data movers.

The following table shows the available array options:

Array | Bus | Supported X-Blades
Block | 8/16 | N/A
Unified | 8/16 | 2
Unified | 8/16 | 3
Unified | 8/16 | 4
Unified | 8/16 | 5
Unified | 8/16 | 6
Unified | 8/16 | 7
Unified | 8/16 | 8

Each X-Blade contains:

• One 6 core 2.8 GHz Xeon processor

• 24 GB RAM

• One Fibre Channel (FC) storage line card (SLIC) for connectivity to array

• Two 2-port 10 Gb SFP+ compatible SLICs

Feature options

VCE Systems (8000) support both Ethernet and FC bandwidth (BW) enhancement. Ethernet BW enhancement is available with Cisco Nexus 5596UP switches only. FC BW enhancement requires that SAN connectivity is provided by Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, or Cisco Nexus 5596UP switches, depending on topology. Refer to the appropriate RCM for a list of what is supported on your VCE System.


The following table shows the feature options:

Array | Topology | FC BW enhancement | Ethernet BW enhancement
Block | Segregated | Y | Y
Unified | Segregated | Y | Y
Block | Unified network | Y | Y
Unified | Unified network | Y | Y

Unified networking is supported only on VCE Systems (8000) with Cisco Nexus 5596UP switches. Ethernet BW enhancement is supported only on VCE Systems (8000) with Cisco Nexus 5596UP switches.

Disk array enclosure configuration

VCE Systems (8000) include two 25 slot 2.5" disk array enclosures (DAEs). An additional six DAEs are required beyond the two base DAEs. Additional DAEs can be added as either 15 slot 3.5" DAEs or 25 slot 2.5" DAEs. Additional DAEs (after the initial eight) are added in multiples of eight. If there are 16 buses, then DAEs must be added in multiples of 16. DAEs are interlaced when racked, and all 2.5" DAEs are first racked on the buses, then 3.5" DAEs.

SLIC configuration

The EMC VNX8000 provides slots for 11 SLICs in each service processor (SP).

• Two slots in each SP are populated with back-end SAS bus modules by default.

• Two additional back-end SAS bus modules support up to 16 buses. If this option is chosen, all DAEs are purchased in groups of 16.

• VCE Systems (8000) support two FC SLICs per SP for host connectivity. Additional FC SLICs are included to support unified storage.

• If FC BW enhancement is configured, an additional FC SLIC is added to the array.

• The remaining SLIC slots are reserved for future VCE configuration options.

• VCE only supports the four port FC SLIC for host connectivity.

• By default, six FC ports per SP are connected to the SAN switches for VCE Systems host connectivity. The addition of FC BW enhancement provides four additional FC ports per SP.

As the VCE System with EMC VNX8000 has multiple CPUs, balance the SLIC arrangements across CPUs.


The following table shows the SLIC configurations per SP (eight bus):

Array | FC BW enhancement | SL 0 | SL 1 | SL 2 | SL 3 | SL 4 | SL 5 | SL 6 | SL 7 | SL 8 | SL 9 | SL 10
Block | Y | FC | Res | Res | FC | Res | Bus | Res | Res | Res | FC | Bus
Unified | Y | FC | Res | Res | FC | Res | Bus | Res | Res | FC/U | FC | Bus
Block | N | FC | Res | Res | Res | Res | Bus | Res | Res | Res | FC | Bus
Unified | N | FC | Res | Res | Res | Res | Bus | Res | Res | FC/U | FC | Bus
Unified -> 4 DM | N | FC | Res | FC/U | Res | Res | Bus | Res | Res | FC/U | FC | Bus
Unified -> 4 DM | Y | FC | Res | FC/U | FC | Res | Bus | Res | Res | FC/U | FC | Bus

Res: slot reserved for future VCE configuration options.

FC: 4xFC port input/output module (IOM): provides four 16 Gb FC connections (segregated networking) or four 8 Gb FC connections (unified networking).

FC/U: 4xFC port IOM dedicated to unified X-Blade connectivity: provides four 8 Gb FC connections.

Bus: four port, 4x lane/port 6 Gbps SAS: provides additional back-end bus connections.

The following table shows the SLIC configurations per SP (16 bus):

Array | FC BW enhancement | SL 0 | SL 1 | SL 2 | SL 3 | SL 4 | SL 5 | SL 6 | SL 7 | SL 8 | SL 9 | SL 10
Block | Y | FC | Res | Res | FC | Bus | Bus | Bus | Res | Res | FC | Bus
Unified | Y | FC | Res | Res | FC | Bus | Bus | Bus | Res | FC/U | FC | Bus
Block | N | FC | Res | Res | Res | Bus | Bus | Bus | Res | Res | FC | Bus
Unified | N | FC | Res | Res | Res | Bus | Bus | Bus | Res | FC/U | FC | Bus
Unified -> 4 DM | N | FC | Res | FC/U | Res | Bus | Bus | Bus | Res | FC/U | FC | Bus
Unified -> 4 DM | Y | FC | Res | FC/U | FC | Bus | Bus | Bus | Res | FC/U | FC | Bus

Res: slot reserved for future VCE configuration options.

FC: 4xFC port IOM: provides four 16 Gb FC connections (segregated networking) or four 8 Gb FC connections (unified networking).

FC/U: 4xFC port IOM dedicated to unified X-Blade connectivity: provides four 8 Gb FC connections.

Bus: four port, 4x lane/port 6 Gbps SAS: provides additional back-end bus connections.
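The slot tables lend themselves to a compact encoding. The following sketch (illustrative only, a reading aid rather than configuration tooling) captures four of the eight-bus rows above as slot lists so a layout can be inspected programmatically:

```python
# Illustrative only: eight-bus SLIC layouts per SP (slots 0-10), keyed by
# (array type, whether FC BW enhancement is configured).
SLOTS_8BUS = {
    ("block",   True):  ["FC", "Res", "Res", "FC", "Res", "Bus",
                         "Res", "Res", "Res", "FC", "Bus"],
    ("unified", True):  ["FC", "Res", "Res", "FC", "Res", "Bus",
                         "Res", "Res", "FC/U", "FC", "Bus"],
    ("block",   False): ["FC", "Res", "Res", "Res", "Res", "Bus",
                         "Res", "Res", "Res", "FC", "Bus"],
    ("unified", False): ["FC", "Res", "Res", "Res", "Res", "Bus",
                         "Res", "Res", "FC/U", "FC", "Bus"],
}

def count_modules(layout, kind):
    return sum(1 for slot in layout if slot == kind)

layout = SLOTS_8BUS[("unified", True)]
print(count_modules(layout, "FC"), "host-facing FC SLICs per SP")  # -> 3
```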


Two additional back-end SAS bus modules are available to support up to 16 buses. If this option is chosen, all DAEs are purchased in groups of 16.

Compute

VCE Systems (8000) support between two and 16 chassis, and up to 128 half-width blades. Each chassis can be connected with two links (Cisco UCS 2204XP fabric extenders IOM only), four links (Cisco UCS 2204XP fabric extenders IOM only), or eight links (Cisco UCS 2208XP fabric extenders IOM only) per IOM.

The following table shows the compute options that are available for the fabric interconnects:

Fabric interconnect | Min chassis (blades) | 2-link max chassis (blades) | 4-link max chassis (blades) | 8-link max chassis (blades)
Cisco UCS 6248UP | 2 (2) | 16 (128) | 8 (64) | 4 (32)
Cisco UCS 6296UP | 2 (2) | N/A | 16 (128) | 8 (64)
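The table follows a simple pattern: the maximum chassis count halves as the per-IOM link count doubles, capped at 16 chassis, with up to eight half-width blades per chassis. A minimal sketch restating it (illustrative only):

```python
# Illustrative only: chassis/blade scaling from the table above.
MAX_CHASSIS = {
    "6248UP": {2: 16, 4: 8, 8: 4},
    "6296UP": {4: 16, 8: 8},  # 2-link is N/A on the 6296UP
}
BLADES_PER_CHASSIS = 8  # half-width blades

def max_blades(fabric_interconnect, links_per_iom):
    chassis = MAX_CHASSIS[fabric_interconnect].get(links_per_iom)
    if chassis is None:
        raise ValueError("link count not supported on this fabric interconnect")
    return chassis, chassis * BLADES_PER_CHASSIS

print(max_blades("6248UP", 2))  # -> (16, 128)
```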

Connectivity

VCE Systems (8000) support the Cisco UCS 6248UP fabric interconnects and Cisco UCS 6296UP fabric interconnects. These uplink to the Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 Series Switches, or by the Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, based on the topology. Refer to the appropriate RCM for a list of what is supported on your VCE System.

The following table shows the switch combinations that are available for the fabric interconnects:

Fabric interconnect | Topology | Ethernet | SAN
Cisco UCS 6248UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch
Cisco UCS 6248UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch
Cisco UCS 6248UP | Unified | Cisco Nexus 5596UP switches | Cisco Nexus 5596UP switches
Cisco UCS 6296UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch
Cisco UCS 6296UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch
Cisco UCS 6296UP | Unified | Cisco Nexus 5596UP switches | Cisco Nexus 5596UP switches

Refer to the appropriate RCM for a list of what is supported on your VCE System.

Note: The default is a unified network with Cisco Nexus 5596UP switches.

VCE Systems with EMC VNX7600

VCE Systems with EMC VNX7600 support various array types and features, disk array enclosure and SLIC configurations, and compute and connectivity for fabric interconnects.

Array options

VCE Systems (7600) are available as block only or unified storage. Unified storage VCE Systems (7600) support up to eight X-Blades and ship with two X-Blades and two control stations. Each X-Blade provides four 10G front-end connections to the network. An additional data mover enclosure (DME) supports the connection of two additional X-Blades with the same configuration as the base X-Blades.

The following table shows the available array options:

Array | Bus | Supported X-Blades
Block | 6 | N/A
Unified | 6 | 2*
Unified | 6 | 3*
Unified | 6 | 4*
Unified | 6 | 5*
Unified | 6 | 6*
Unified | 6 | 7*
Unified | 6 | 8*

*VCE supports two to eight X-Blades in VCE Systems (7600).


Each X-Blade contains:

• One 4 core 2.4 GHz Xeon processor

• 12 GB RAM

• One Fibre Channel (FC) storage line card (SLIC) for connectivity to array

• Two 2-port 10 Gb SFP+ compatible SLICs

Feature options

VCE Systems (7600) support both the Ethernet and FC bandwidth (BW) enhancement. The Ethernet BW enhancement is available with Cisco Nexus 5596UP switches only. The FC BW enhancement requires that SAN connectivity is provided by Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, or the Cisco Nexus 5596UP switches, depending on topology. Refer to the appropriate RCM for a list of what is supported on your VCE System. Both block and unified arrays use FC BW enhancement.

The following table shows the feature options:

Array | Topology | FC BW enhancement | Ethernet BW enhancement
Block | Segregated | Y | Y
Unified | Segregated | Y | Y
Block | Unified network | Y | Y
Unified | Unified network | Y | Y

Unified networking is only supported on VCE Systems (7600) with Cisco Nexus 5596UP switches.

Disk array enclosure configuration

VCE Systems (7600) have two 25 slot 2.5" disk array enclosures (DAEs). The EMC VNX7600 data processor enclosure (DPE) provides the DAE for bus 0, and the second provides the first DAE on bus 1. An additional four DAEs are required beyond the two base DAEs. Additional DAEs can be added as either 15 slot 3.5" DAEs or 25 slot 2.5" DAEs. Additional DAEs (after the initial six) are added in multiples of six. DAEs are interlaced when racked, and all 2.5" DAEs are racked first on the buses, then 3.5" DAEs.

SLIC configuration

The EMC VNX7600 provides slots for five SLICs in each service processor (SP). Slot 0 in each SP is populated with a back-end SAS bus module. VCE Systems (7600) support two FC SLICs per SP for host connectivity. A third is reserved to support unified storage. If FC BW enhancement is configured, an additional FC SLIC is added to the array. VCE only supports the four-port FC SLIC for host connectivity. By default, six FC ports per SP are connected to the SAN switches for VCE Systems host connectivity. The addition of FC BW enhancement provides four additional FC ports per SP.


The following table shows the SLIC configurations per SP:

Array | FC BW enhancement | SLIC 0 | SLIC 1 | SLIC 2 | SLIC 3 | SLIC 4
Block | Y | Bus | FC | FC | FC | N/A
Unified (<5 DM)* | Y | Bus | FC | FC | FC | FC/U
Block | N | Bus | FC | FC | N/A | N/A
Unified | N | Bus | FC | FC | FC/U | FC/U

*Greater than four X-Blades prohibits the FC BW enhancement feature.

N/A: not available for this configuration.

FC: 4xFC port I/O module (IOM): provides four 16 Gb FC connections (segregated networking) or four 8 Gb FC connections (unified networking).

FC/U: 4xFC port IOM dedicated to unified X-Blade connectivity: provides four 8 Gb FC connections.

Bus: four port, 4x lane/port 6 Gbps SAS: provides additional back-end bus connections.

Compute

VCE Systems (7600) support two to 16 chassis, and up to 128 half-width blades. Each chassis can be connected with two links (Cisco UCS 2204XP fabric extenders input/output module (IOM) only), four links (Cisco UCS 2204XP fabric extenders IOM only), or eight links (Cisco UCS 2208XP fabric extenders IOM only) per IOM.

The following table shows the compute options available for the fabric interconnects:

Fabric interconnect | Min chassis (blades) | 2-link max chassis (blades) | 4-link max chassis (blades) | 8-link max chassis (blades)
Cisco UCS 6248UP | 2 (2) | 16 (128) | 8 (64) | 4 (32)
Cisco UCS 6296UP | 2 (2) | N/A | 16 (128) | 8 (64)

Connectivity

VCE Systems (7600) support the Cisco UCS 6248UP fabric interconnects and Cisco UCS 6296UP fabric interconnects. These uplink to the Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 Series Switches, or by the Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, based on the topology. Refer to the appropriate RCM for a list of what is supported on your VCE System.


The following table shows the switch combinations available for the fabric interconnects:

Fabric interconnect | Topology | Ethernet | SAN
Cisco UCS 6248UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch
Cisco UCS 6248UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch
Cisco UCS 6248UP | Unified | Cisco Nexus 5596UP switches | Cisco Nexus 5596UP switches
Cisco UCS 6296UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch
Cisco UCS 6296UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch
Cisco UCS 6296UP | Unified | Cisco Nexus 5596UP switches | Cisco Nexus 5596UP switches

Refer to the appropriate RCM for a list of what is supported on your VCE System.

Note: The default is unified network with Cisco Nexus 5596UP switches.

VCE Systems with EMC VNX5800

VCE Systems with EMC VNX5800 support various array types and features, disk array enclosure and SLIC configurations, and compute and connectivity for fabric interconnects.

Array options

VCE Systems (5800) are available as block only or unified storage. Unified storage VCE Systems (5800) support up to six X-Blades and ship with two X-Blades and two control stations. Each X-Blade provides four 10G front-end connections to the network. An additional data mover enclosure (DME) supports the connection of one additional X-Blade with the same configuration as the base data movers.


The following table shows the available array options:

Array | Bus | Supported X-Blades
Block | 6 | N/A
Unified | 6 | 2
Unified | 6 | 3*
Unified | 6 | 4*
Unified | 6 | 5*
Unified | 6 | 6*

*VCE supports two to six X-Blades in VCE Systems (5800).

Each X-Blade contains:

• One 4 core 2.13 GHz Xeon processor

• 12 GB RAM

• One Fibre Channel (FC) storage line card (SLIC) for connectivity to array

• Two 2-port 10 Gb SFP+ compatible SLICs

Feature options

VCE Systems (5800) support both Ethernet and FC bandwidth (BW) enhancement. Ethernet BW enhancement is available with Cisco Nexus 5596UP switches only. FC BW enhancement requires that SAN connectivity is provided by Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, or the Cisco Nexus 5596UP switches, depending on the topology. Refer to the appropriate RCM for a list of what is supported on your VCE System. Both block and unified arrays use FC BW enhancement.

The following table shows the feature options.

Array | Topology | FC BW enhancement | Ethernet BW enhancement
Block | Segregated | Y | Y
Unified | Segregated | Y | Y
Block | Unified network | Y | Y
Unified | Unified network | Y | Y

Note: Unified networking is supported only on VCE Systems (5800) with Cisco Nexus 5596UP switches.

Disk array enclosure configuration

VCE Systems (5800) have two 25 slot 2.5" disk array enclosures (DAEs). The EMC VNX5800 data processor enclosure (DPE) provides the DAE for bus 0, and the second provides the first DAE on bus 1.


An additional four DAEs are required beyond the base two DAEs. Additional DAEs can be added as either 15 slot 3.5" DAEs or 25 slot 2.5" DAEs. Additional DAEs (after the initial six) are added in multiples of six. DAEs are interlaced when racked, and all 2.5" DAEs are racked first on the buses, then 3.5" DAEs.

SLIC configuration

The EMC VNX5800 provides slots for five SLICs in each service processor. Slot 0 is populated with a back-end SAS bus module. VCE Systems (5800) support two FC SLICs per SP for host connectivity. A third is reserved to support unified storage. If FC BW enhancement is configured, an additional FC SLIC is added to the array. VCE only supports the four-port FC SLIC for host connectivity. By default, six FC ports per SP are connected to the SAN switches for VCE Systems host connectivity. The addition of FC BW enhancement provides four additional FC ports per SP.

The following table shows the SLIC configurations per SP:

Array              FC BW enhancement   SLIC 0   SLIC 1   SLIC 2   SLIC 3   SLIC 4
Block              Y                   Bus      FC       FC       FC       N/A
Unified (<5 DM)*   Y                   Bus      FC       FC       FC       FC/U
Block              N                   Bus      FC       FC       N/A      N/A
Unified            N                   Bus      FC       FC       FC/U     FC/U

*Greater than four X-Blades prohibits FC BW enhancement.

N/A: not available for this configuration.

FC: 4-port FC I/O module (IOM); provides four 16 Gb FC connections (segregated networking) or four 8 Gb FC connections (unified networking).

FC/U: 4-port FC IOM dedicated to unified X-Blade connectivity; provides four 8 Gb FC connections.

Bus: four-port, 4x lanes/port 6 Gbps SAS module; provides additional back-end bus connections.
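As a worked illustration of the port counts just described, this minimal sketch (function and constant names are illustrative, not VCE tooling) computes the FC ports per SP that are cabled to the SAN switches:

# Hedged sketch of the VNX5800 SAN port math described above.
DEFAULT_HOST_FC_PORTS_PER_SP = 6  # six FC ports per SP are cabled by default
BW_ENHANCEMENT_EXTRA_PORTS = 4    # FC BW enhancement adds four ports per SP

def san_ports_per_sp(fc_bw_enhancement: bool, x_blades: int = 2) -> int:
    """FC ports per service processor connected to the SAN switches."""
    if fc_bw_enhancement and x_blades > 4:
        # Per the table note: more than four X-Blades prohibits enhancement.
        raise ValueError("FC BW enhancement requires four or fewer X-Blades")
    return DEFAULT_HOST_FC_PORTS_PER_SP + (
        BW_ENHANCEMENT_EXTRA_PORTS if fc_bw_enhancement else 0)

print(san_ports_per_sp(False))             # 6
print(san_ports_per_sp(True, x_blades=4))  # 10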

Compute

VCE Systems (5800) support two to 16 chassis and up to 128 half-width blades. Each chassis can be connected with two links (Cisco UCS 2204XP fabric extender IOMs only), four links (Cisco UCS 2204XP fabric extender IOMs only), or eight links (Cisco UCS 2208XP fabric extender IOMs only) per IOM. The lookup sketch after the table encodes the supported combinations.

The following table shows the compute options that are available for the fabric interconnects:

Fabric interconnect   Min chassis (blades)   2-link max chassis (blades)   4-link max chassis (blades)   8-link max chassis (blades)
Cisco UCS 6248UP      2 (2)                  16 (128)                      8 (64)                        4 (32)
Cisco UCS 6296UP      2 (2)                  N/A                           16 (128)                      8 (64)
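A minimal lookup sketch that encodes the table above; the dictionary values come straight from the table, while the helper name and the eight-half-width-blades-per-chassis multiplier reflect the chassis capacity described elsewhere in this document:

# Maximum chassis per (fabric interconnect, links per IOM), from the table.
MAX_CHASSIS = {
    ("Cisco UCS 6248UP", 2): 16,
    ("Cisco UCS 6248UP", 4): 8,
    ("Cisco UCS 6248UP", 8): 4,
    ("Cisco UCS 6296UP", 4): 16,
    ("Cisco UCS 6296UP", 8): 8,  # 2-link is N/A on the 6296UP
}

def max_half_width_blades(fabric_interconnect: str, links: int) -> int:
    """Blade ceiling for a combination, at 8 half-width blades per chassis."""
    chassis = MAX_CHASSIS.get((fabric_interconnect, links))
    if chassis is None:
        raise ValueError("combination not supported on this VCE System")
    return chassis * 8

print(max_half_width_blades("Cisco UCS 6248UP", 2))  # 128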


Connectivity

VCE Systems (5800) support the Cisco UCS 6248UP and Cisco UCS 6296UP fabric interconnects. These uplink to the Cisco Nexus 5548UP or Cisco Nexus 5596UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 switches, or by Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, depending on the topology. Refer to the appropriate RCM for a list of what is supported on your VCE System.

The following table shows the switch combinations that are available for the fabric interconnects:

Fabric interconnect   Topology     Ethernet                      SAN
Cisco UCS 6248UP      Segregated   Cisco Nexus 5548UP switches   Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch*
Cisco UCS 6248UP      Segregated   Cisco Nexus 5596UP switches   Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch*
Cisco UCS 6248UP      Unified      Cisco Nexus 5596UP switches (Ethernet and SAN)
Cisco UCS 6296UP      Segregated   Cisco Nexus 5548UP switches   Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch*
Cisco UCS 6296UP      Segregated   Cisco Nexus 5596UP switches   Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch*
Cisco UCS 6296UP      Unified      Cisco Nexus 5596UP switches (Ethernet and SAN)

*Refer to the appropriate RCM for a list of what is supported on your VCE System.

Note: The default is a unified network with Cisco Nexus 5596UP switches.

VCE Systems with EMC VNX5600

VCE Systems with EMC VNX5600 support various array types and features, disk array enclosure and SLIC configurations, and compute and connectivity for fabric interconnects.


Array options

VCE Systems (5600) are available as block-only or unified storage. Unified storage VCE Systems (5600) support one to four X-Blades and two control stations. Each X-Blade provides two 10 Gb front-end connections to the network.

The following table shows the available array options:

Array     Bus      Supported X-Blades
Block     2 or 6   N/A
Unified   2 or 6   1
Unified   2 or 6   2*
Unified   2 or 6   3*
Unified   2 or 6   4*

*VCE supports one to four X-Blades in VCE Systems (5600).

Each X-Blade contains:

• One 4-core 2.13 GHz Xeon processor

• 6 GB RAM

• One Fibre Channel (FC) storage line card (SLIC) for connectivity to the array

• One 2-port 10 Gb SFP+ compatible SLIC

Feature options

VCE Systems (5600) use the Cisco Nexus 5596UP switches. VCE Systems (5600) do not support FC bandwidth (BW) enhancement in block or unified arrays.

The following table shows the feature options:

Array     Topology         Ethernet BW enhancement
Block     Segregated       Y
Unified   Segregated       Y
Block     Unified network  Y
Unified   Unified network  Y

DAE configuration

VCE Systems (5600) have two 25-slot 2.5" disk array enclosures (DAEs). The EMC VNX5600 disk processor enclosure (DPE) provides the DAE for bus 0, and the second provides the first DAE on bus 1. Additional DAEs can be either 15-slot 3.5" DAEs or 25-slot 2.5" DAEs and are added in multiples of two. DAEs are interlaced when racked, and all 2.5" DAEs are racked first on the buses, then 3.5" DAEs.

An additional four-port SAS bus expansion SLIC is an option with VCE Systems (5600). If more than 19 DAEs are required, the four-port expansion bus card must be added. If the card is added, DAEs are purchased in groups of six. The sketch below illustrates this rule.
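A minimal sketch of this expansion rule, assuming an illustrative helper name (this is not VCE tooling):

# Hedged sketch of the VNX5600 DAE expansion rule above.
def vnx5600_dae_plan(total_daes: int) -> dict:
    """Decide whether the 4-port SAS bus expansion SLIC is required and
    which DAE purchase group size applies, per the VNX5600 rules above."""
    needs_expansion_slic = total_daes > 19
    group_size = 6 if needs_expansion_slic else 2
    return {"expansion_slic": needs_expansion_slic,
            "purchase_group_size": group_size}

print(vnx5600_dae_plan(12))  # {'expansion_slic': False, 'purchase_group_size': 2}
print(vnx5600_dae_plan(24))  # {'expansion_slic': True, 'purchase_group_size': 6}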

SLIC configuration

The EMC VNX5600 provides slots for five SLICs in each service processor (SP). VCE Systems (5600) have two FC SLICs per SP for host connectivity. A third FC SLIC can be ordered to support unified storage. The remaining SLIC slots are reserved for future VCE configuration options. VCE only supports the four-port FC SLIC for host connectivity. Six FC ports per SP are connected to the SAN switches for VCE Systems host connectivity.

The following table shows the SLIC configurations per SP:

Array     FC BW enhancement   SLIC 0   SLIC 1   SLIC 2   SLIC 3   SLIC 4
Block     N                   Bus      FC       FC       N/A      N/A
Unified   N                   Bus      FC       FC       N/A      FC/U

FC: 4-port FC I/O module (IOM); provides four 16 Gb FC connections (segregated networking) or four 8 Gb FC connections (unified networking).

FC/U: 4-port FC IOM dedicated to unified X-Blade connectivity; provides four 8 Gb FC connections.

Bus: four-port, 4x lanes/port 6 Gbps SAS module; provides additional back-end bus connections.

Compute

VCE Systems (5600) support two to eight chassis and up to 64 half-width blades. Each chassis can be connected with four links (Cisco UCS 2204XP fabric extender IOMs only) or eight links (Cisco UCS 2208XP fabric extender IOMs only) per IOM.

The following table shows the compute options that are available for the fabric interconnects:

Fabric interconnect   Min chassis (blades)   2-link max chassis (blades)   4-link max chassis (blades)   8-link max chassis (blades)
Cisco UCS 6248UP      2 (2)                  N/A                           8 (64)                        4 (32)
Cisco UCS 6296UP      2 (2)                  N/A                           16 (128)                      8 (64)


Connectivity

VCE Systems (5600) support the Cisco UCS 6248UP and Cisco UCS 6296UP fabric interconnects. These uplink to the Cisco Nexus 5548UP or Cisco Nexus 5596UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 Series Switches, or by Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, depending on the topology. Refer to the appropriate RCM for a list of what is supported on your VCE System.

The following table shows the switch options that are available for the fabric interconnects:

Fabric interconnect   Topology         Ethernet                      SAN
Cisco UCS 6248UP      Segregated       Cisco Nexus 5548UP switches   Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch*
Cisco UCS 6248UP      Unified network  Cisco Nexus 5548UP switches (Ethernet and SAN)
Cisco UCS 6248UP      Segregated       Cisco Nexus 5596UP switches   Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch*
Cisco UCS 6248UP      Unified network  Cisco Nexus 5596UP switches (Ethernet and SAN)
Cisco UCS 6296UP      Segregated       Cisco Nexus 5548UP switches   Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch*
Cisco UCS 6296UP      Unified network  Cisco Nexus 5548UP switches (Ethernet and SAN)
Cisco UCS 6296UP      Segregated       Cisco Nexus 5596UP switches   Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch*
Cisco UCS 6296UP      Unified network  Cisco Nexus 5596UP switches (Ethernet and SAN)

*Refer to the appropriate RCM for a list of what is supported on your VCE System.

Note: The default is a unified network with Cisco Nexus 5596UP switches.


VCE Systems with EMC VNX5400

VCE Systems with EMC VNX5400 support various array types and features, disk array enclosure and SLIC configurations, and compute and connectivity for fabric interconnects.

Array options

VCE Systems (5400) are available as block-only or unified storage. Unified storage VCE Systems (5400) support one to four X-Blades and two control stations. Each X-Blade provides two 10 Gb front-end connections to the network.

The following table shows the available array options:

Array     Bus   Supported X-Blades
Block     2     N/A
Unified   2     1*
Unified   2     2*
Unified   2     3*
Unified   2     4*

*VCE supports one to four X-Blades in VCE Systems (5400).

Each X-Blade contains:

• One 4-core 2.13 GHz Xeon processor

• 6 GB RAM

• One Fibre Channel (FC) storage line card (SLIC) for connectivity to the array

• One 2-port 10 Gb SFP+ compatible SLIC

Feature options

VCE Systems (5400) use the Cisco UCS 6248UP fabric interconnects. VCE Systems (5400) do not support FC bandwidth (BW) enhancement or Ethernet BW enhancement in block or unified arrays.

Disk array enclosure configuration

VCE Systems (5400) have two 25-slot 2.5" disk array enclosures (DAEs). The EMC VNX5400 disk processor enclosure (DPE) provides the DAE for bus 0, and the second provides the first DAE on bus 1. Additional DAEs can be either 15-slot 3.5" DAEs or 25-slot 2.5" DAEs and are added in multiples of two. DAEs are interlaced when racked, and all 2.5" DAEs are racked first on the buses, then 3.5" DAEs.


SLIC configuration

The EMC VNX5400 provides slots for five SLICs in each service processor (SP), although only four are enabled. VCE Systems (5400) have two FC SLICs per SP for host connectivity. A third FC SLIC can be ordered to support unified storage. The remaining SLIC slots are reserved for future VCE configuration options. VCE only supports the four-port FC SLIC for host connectivity. Six FC ports per SP are connected to the SAN switches for VCE Systems host connectivity.

The following table shows the SLIC configurations per SP:

Array     FC BW enhancement   SLIC 0   SLIC 1   SLIC 2   SLIC 3   SLIC 4
Block     N                   N/A      FC       FC       N/A      N/A
Unified   N                   N/A      FC       FC       N/A      FC/U

• FC: 4-port FC I/O module (IOM); provides four 16 Gb FC connections (segregated networking) or four 8 Gb FC connections (unified networking).

• FC/U: 4-port FC IOM dedicated to unified X-Blade connectivity; provides four 8 Gb FC connections.

Compute

VCE Systems (5400) are configured with two chassis that support up to 16 half-width blades. Each chassis is connected with four links per fabric extender I/O module (IOM). VCE Systems (5400) support Cisco UCS 2204XP fabric extender IOMs only.

The following table shows the compute options that are available for the Cisco UCS 6248UP fabric interconnects:

Fabric interconnect   Min chassis (blades)   2-link max chassis (blades)   4-link max chassis (blades)   8-link max chassis (blades)
Cisco UCS 6248UP      2 (2)                  N/A                           2 (16)                        N/A

Connectivity

VCE Systems (5400) contain the Cisco UCS 6248UP fabric interconnects, which uplink to Cisco Nexus 5548UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5548UP switches, or by Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches. Refer to the appropriate RCM for a list of what is supported on your VCE System.


The following table shows the switch options that are available for the fabric interconnects:

Fabric interconnect   Topology         Ethernet                      SAN
Cisco UCS 6248UP      Segregated       Cisco Nexus 5548UP switches   Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch*
Cisco UCS 6248UP      Unified network  Cisco Nexus 5548UP switches (Ethernet and SAN)
Cisco UCS 6248UP      Segregated       Cisco Nexus 5596UP switches   Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch*
Cisco UCS 6248UP      Unified network  Cisco Nexus 5596UP switches (Ethernet and SAN)

*Refer to the appropriate RCM for a list of what is supported on your VCE System.

Note: The default is a unified network with Cisco Nexus 5596UP switches.


Sample configurations

Sample Vblock® System 340 and VxBlock™ System 340 with EMC VNX8000

VCE™ Systems with EMC VNX8000 cabinet elevations vary based on the specific configuration requirements.

These elevations are provided for sample purposes only. For specifications for a specific VCE System design, consult your vArchitect.


Front view


Rear view


Cabinet 1


Cabinet 2

Sample VCE System with EMC VNX5800

VCE™ Systems with EMC VNX5800 cabinet elevations vary based on the specific configuration requirements.

These elevations are provided for sample purposes only. For specifications for a specific VCE System design, consult your vArchitect.


Front view


Rear view


Cabinet 1


Cabinet 2


Cabinet 3

Sample Vblock® System 340 and VxBlock™ System 340 with EMC VNX5800 (ACI ready)

VCE™ Systems with EMC VNX5800 elevations for a cabinet that is Cisco Application Centric Infrastructure (ACI) ready vary based on the specific configuration requirements.

These elevations are provided for sample purposes only. For specifications for a specific VCE System design, consult your vArchitect.


Front view


Rear view


Cabinet 1


Cabinet 2


System infrastructure

VCE Systems descriptions

A comparison of the compute, network, and storage architecture describes the differences among the VCE Systems.

The following table shows a comparison of the compute architecture:

VCE Systems with:              EMC VNX8000/7600/5800                 EMC VNX5600                        EMC VNX5400
Cisco B-Series blade chassis   16 maximum                            8 maximum                          2 maximum
B-Series blades (maximum)      Half-width = 128, Full-width = 64     Half-width = 64, Full-width = 32   Half-width = 16, Full-width = 8
Fabric interconnects           Cisco UCS 6248UP or Cisco UCS 6296UP  Cisco UCS 6248UP or Cisco UCS 6296UP   Cisco UCS 6248UP

The following table shows a comparison of the network architecture:

VCE Systems with:   EMC VNX8000/7600/5800/5600                 EMC VNX5400
Network             Cisco Nexus 5548UP or Cisco Nexus 5596UP   Cisco Nexus 5548UP
SAN (all models)    Cisco MDS 9148 or Cisco MDS 9148S (segregated). Refer to the appropriate RCM for a list of what is supported on your VCE System.

The following table shows a comparison of the storage architecture:

VCE Systems with:          EMC VNX8000   EMC VNX7600   EMC VNX5800   EMC VNX5600   EMC VNX5400
Storage access             Block or unified (all models)
Back-end SAS buses         8 or 16       6             6             2 or 6        2
Storage protocol (block)   FC (all models)
Storage protocol (file)    NFS and CIFS (all models)
Data store type (block)    VMFS (all models)
Data store type (file)     NFS (all models)
Boot path                  SAN (all models)
Maximum drives             1500          1000          750           500           250
X-Blades (min/max)         2/8           2/4           2/3           2/2           2/2

Cabinets overview

In each VCE System, the compute, storage, and network layer components are distributed within the cabinets. Distributing the components in this manner balances out the power draw and reduces the size of the power distribution units (PDUs) that are required.

Each cabinet has capacity limits for weight, heat dissipation, power draw, RU space, and receptacle count. This design improves flexibility when upgrading or expanding VCE Systems as capacity needs increase.

For some configurations, VCE preinstalls all wiring based on the predefined layouts.

VCE cabinets are designed to be installed contiguously to one another within the data center. If the base and expansion cabinets must be physically separated, customized cabling is needed, which incurs additional cost and delivery delays.

Note: The cable length is not the same as the distance between cabinets. The cable must route through the cabinets and through the cable channels overhead or in the floor.

Intelligent Physical Infrastructure appliance

The Intelligent Physical Infrastructure (IPI) appliance allows users to collect and monitor environmental data and to monitor and control power and security.

For more information about the IPI appliance, refer to the administration guide for your VCE System and to the VCE Intelligent Physical Infrastructure (IPI) Appliance User Manual.

Power options

VCE Systems support several power distribution unit (PDU) options inside and outside of North America.

Power options for VCE System cabinets

The following table lists the PDUs that are available:

PDU                 Power specifications      Number per cabinet
IEC 60309 3P+PE     3-phase Delta / 60A       2 pairs of PDUs per cabinet
NEMA L15-30P        3-phase Delta / 30A       3 pairs of PDUs per cabinet
NEMA L6-30P         Single phase / 30A        3 pairs of PDUs per cabinet
IEC 60309 3P+N+PE   3-phase WYE / 30 or 32A   2 pairs of PDUs per cabinet
IEC 60309 2P+E      Single phase / 32A        3 pairs of PDUs per cabinet

Balancing cabinet maximum usable power

The VCE System maximum usable power must be balanced across the cabinets based on the number of components in each cabinet. The maximum kilowatt draw for a VCE System PDU that has been derated to 80 percent is listed in the following table:

Power option              Kilowatt draw per PDU
3-Phase Delta 60A@208V    17.3
3-Phase Delta 30A@208V    8.6
3-Phase WYE 32A@230V      17.7
Single Phase 30A@208V     5.0
Single Phase 32A@230V     5.9

Note: The kilowatt draw per PDU is an approximate measurement.
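The derated figures follow from standard AC power arithmetic (for example, √3 × 208 V × 60 A × 0.8 ≈ 17.3 kW). A minimal sketch of that arithmetic follows; it assumes the WYE figure is computed from the 230 V phase-to-neutral voltage, and the helper name is illustrative, not a VCE tool:

# Hedged sketch of the 80 percent PDU derating arithmetic behind the table.
from math import sqrt

DERATE = 0.8  # PDUs derated to 80 percent

def derated_kw(volts: float, amps: float, three_phase: bool,
               wye: bool = False) -> float:
    """Approximate usable kilowatts for a PDU feed after derating."""
    if not three_phase:
        watts = volts * amps
    elif wye:
        watts = 3 * volts * amps        # 230 V is phase-to-neutral on WYE
    else:
        watts = sqrt(3) * volts * amps  # 208 V is line-to-line on Delta
    return round(watts * DERATE / 1000, 1)

print(derated_kw(208, 60, True))            # 17.3 -- 3-Phase Delta 60A@208V
print(derated_kw(230, 32, True, wye=True))  # 17.7 -- 3-Phase WYE 32A@230V
print(derated_kw(208, 30, False))           # 5.0  -- Single Phase 30A@208V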

The following PDU limitations per cabinet apply to a VCE System with one or more Cisco UCS 5108 Blade Server Chassis installed:

Power option                  Number of Cisco UCS 5108 Blade Server Chassis   Maximum PDUs per cabinet
Three-Phase Delta 60A         1-3                                             One pair
Three-Phase Delta 60A         4-6                                             Two pairs
Three-Phase Delta 30A         1-3                                             Two pairs
Three-Phase Delta 30A         4                                               Three pairs
Three-Phase WYE 30A or 32A    1-3                                             One pair
Three-Phase WYE 30A or 32A    4-6                                             Two pairs
Single Phase 30A or 32A       1                                               Two pairs
Single Phase 30A or 32A       2                                               Three pairs

Related information

Accessing VCE documentation (see page 6)


Additional references

Virtualization components

• VMware vCenter Server: Provides a scalable and extensible platform that forms the foundation for virtualization management. http://www.vmware.com/products/vcenter-server/

• VMware vSphere ESXi: Virtualizes all application servers and provides VMware high availability (HA) and dynamic resource scheduling (DRS). http://www.vmware.com/products/vsphere/

Compute components

• Cisco UCS B-Series Blade Servers: Servers that adapt to application demands, intelligently scale energy use, and offer best-in-class virtualization. www.cisco.com/en/US/products/ps10280/index.html

• Cisco UCS Manager: Provides centralized management capabilities for the Cisco Unified Computing System (UCS). www.cisco.com/en/US/products/ps10281/index.html

• Cisco UCS 2200 Series Fabric Extenders: Bring unified fabric into the blade-server chassis, providing up to eight 10 Gbps connections each between blade servers and the fabric interconnect. http://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-2200-series-fabric-extenders/tsd-products-support-series-home.html

• Cisco UCS 5100 Series Blade Server Chassis: Chassis that supports up to eight blade servers and up to two fabric extenders in a six rack unit (RU) enclosure. www.cisco.com/en/US/products/ps10279/index.html

• Cisco UCS 6200 Series Fabric Interconnects: Cisco UCS family of line-rate, low-latency, lossless, 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities. www.cisco.com/en/US/products/ps11544/index.html

Network components

• Cisco Nexus 1000V Series Switches: A software switch on a server that delivers Cisco VN-Link services to virtual machines hosted on that server. www.cisco.com/en/US/products/ps9902/index.html

• VMware vSphere Distributed Switch (VDS): A VMware vCenter-managed software switch that delivers advanced network services to virtual machines hosted on that server. http://www.vmware.com/products/vsphere/features/distributed-switch.html

• Cisco MDS 9148 Multilayer Fabric Switch: Provides 48 line-rate 8 Gbps ports and offers cost-effective scalability through on-demand activation of ports. www.cisco.com/en/US/products/ps10703/index.html

• Cisco MDS 9148S Multilayer Fabric Switch: Provides 48 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports. http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9148s-16g-multilayer-fabric-switch/datasheet-c78-731523.html

• Cisco Nexus 3048 Switch: Provides local switching that connects transparently to upstream Cisco Nexus switches, creating an end-to-end Cisco Nexus fabric in data centers. http://www.cisco.com/c/en/us/products/switches/nexus-3048-switch/index.html

• Cisco Nexus 3172TQ Switch: Provides local switching that connects transparently to upstream Cisco Nexus switches, creating an end-to-end Cisco Nexus fabric in data centers. http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/data_sheet_c78-729483.html

• Cisco Nexus 5000 Series Switches: Simplifies data center transformation by enabling a standards-based, high-performance unified fabric. http://www.cisco.com/c/en/us/products/switches/nexus-5000-series-switches/index.html

• Cisco Nexus 9396PX Switch: Provides high scalability, performance, and exceptional energy efficiency in a compact form factor. Designed to support Cisco Application Centric Infrastructure (ACI). http://www.cisco.com/c/en/us/support/switches/nexus-9396px-switch/model.html

Storage components

This topic provides a description of the storage components.

• EMC VNX8000, EMC VNX7600, EMC VNX5800, EMC VNX5600, and EMC VNX5400 storage arrays: High-performing unified storage with unsurpassed simplicity and efficiency, optimized for virtual applications. www.emc.com/products/series/vnx-series.htm


About VCE

VCE, an EMC Federation Company, is the world market leader in converged infrastructure and converged solutions. VCE accelerates the adoption of converged infrastructure and cloud-based computing models that reduce IT costs while improving time to market. VCE delivers the industry's only fully integrated and virtualized cloud infrastructure systems, allowing customers to focus on business innovation instead of integrating, validating, and managing IT infrastructure. VCE solutions are available through an extensive partner network, and cover horizontal applications, vertical industry offerings, and application development environments.

For more information, go to http://www.vce.com.

Copyright 2013-2016 VCE Company, LLC. All rights reserved. VCE, VCE Vision, VCE Vscale, Vblock, VxBlock, VxRack, and the VCE logo are registered trademarks or trademarks of VCE Company, LLC. All other trademarks used herein are the property of their respective owners.
