
Page 1: HP XP Architecture

8/2/2019 HP XP Architecture

http://slidepdf.com/reader/full/hp-xp-architecture 1/24

  HP StorageWorks External Storage XP white paper

To the business application, everything is virtualized ..... 2
Storage virtualization today ..... 2
Fabric-based virtualization ..... 3
HP StorageWorks External Storage XP ..... 3
    Application transparent data mobility ..... 4
    Simplified volume management and provisioning ..... 4
    Heterogeneous local and remote data replication ..... 4
External Storage XP details ..... 5
    The redundant StorageWorks XP architecture ..... 5
    Fibre Channel CHIP architecture ..... 6
    XP software ..... 8
    XP terms ..... 8
External Storage XP configurations ..... 9
    An XP system using External Storage XP without remote replication ..... 9
    StorageWorks XP using External Storage XP with remote replication ..... 10
    Supported External Storage XP configuration topologies ..... 11
    Basic building block ..... 13
    Specific configurations ..... 15
        Array repurposing ..... 15
        Data migration across heterogeneous external disk array tiers ..... 17
        Remote data replication ..... 19
        Low-utilization data archive ..... 20
        Microsoft Exchange ..... 21
        Oracle ..... 22
    Basic rules of thumb ..... 22
    Sizing External Storage XP configurations ..... 22
    Partitioning ..... 23
Summary ..... 23
For more information ..... 24


To the business application, everything is virtualized

Applications need resources. They do not care about servers or networks or disk arrays per se. To the application, all these are simply resources intended to serve its needs. It does not care which physical servers or network or storage is being consumed (although the business might). It just assumes the appropriate resources will be there when needed. In effect, as far as the application is concerned, all IT infrastructure resources, hardware and software, are already virtualized.

From the organization’s standpoint, however, the resources may not be virtualized. Instead, each resource is painstakingly deployed, configured, provisioned, and managed. Furthermore, although the application may not care which particular storage resource it is using, the organization certainly does. By virtualizing its resources, the organization can improve efficiency and lower costs through better asset utilization and simplified administration. It can standardize management processes and hardware and, most importantly, match the physical resources, particularly storage, to the needs of the data and ultimately the business.

Virtualization allows the organization, for example, to store data in ways that balance the application and data’s needs in terms of availability, performance, and cost at the logical level instead of physically connecting and reconnecting the right resources with the right applications. Data and applications requiring high performance, high availability, or other attributes can be logically connected to the resources that deliver those characteristics, while less important data can be connected to other resources that provide a better match between application needs and resource cost.

Virtualization, which is just another word for abstraction, masks the particulars of the physical resources behind logical objects. It also allows the organization to dynamically change the composition of the logical objects—in effect mixing and matching different physical resources and their attributes—while remaining transparent to the application. In short, virtualization enables the organization to achieve what the application ideally has wanted all along, which is transparent access to the most appropriate resources when and how it needs them.

In truth, virtualization need not stop with the physical storage, fabric, and server resources. All the management capabilities that present the infrastructure components, in fact the entire management path from the application to the physical device, can be virtualized. To the application, everything—physical devices, operating systems, software, management services, components of all sorts—is a transparent resource that exists to serve its needs.

Storage virtualization today

Storage virtualization first appeared in the 1970s in the form of mirroring, or volume shadowing. By 2006 it had evolved to encompass not only storage but servers and even networks. Functionally, virtualization is really an enabler—a host of technologies that allows organizations to reduce or remove the complexity of individual components within the IT infrastructure through abstraction for the purpose of achieving efficiency and flexibility.

Virtualization creates logical views of resources that are distinct from the underlying physical components. This, in turn, allows for resource encapsulation, which lets organizations aggregate and group the logical resources and capabilities in useful and meaningful ways. A mixture of storage devices with different performance and cost characteristics, for example, could be aggregated and grouped logically to create tiered storage. Virtualization provides the technological underpinnings that group and manage resources so that the services required by applications can be dynamically provisioned, much like the so-called on-demand IT utility. Virtualization itself can be viewed as a services architecture intended to transparently deliver infrastructure services (CPU, storage, networking) to the application.


Today, the industry is focusing on three main areas of storage virtualization: storage-based, network- or fabric-based, and server-based.

•  Storage-based virtualization focuses on storage devices (disk, tape, and so on) within a single storage subsystem, and abstracts the physical differences in the storage devices so that the capacity can be aggregated, used, and managed as a single logical storage pool.

•  Fabric-based virtualization performs functions similar to storage-based virtualization but does the abstracting, aggregating, and logical pooling at the fabric level, thus encompassing multiple storage systems.

•  Server-based virtualization isolates the physical differences in storage devices from the applications at the operating-system and lower-level driver code on each server running such software, thus encompassing multiple storage systems, but only one server.

Virtualization in any of the three areas ultimately leads to results that can be measured in terms of efficiency and simplified management. However, each area of virtualization offers specific advantages and drawbacks, mainly in terms of its scope (amount and type of resources that are affected) and the granularity of the resulting logical view. Storage-based virtualization, for instance, is easily implemented and simple to manage, but its span of control is often limited to a single storage subsystem. Fabric-based virtualization, by comparison, gives you huge scalability and heterogeneous support, although implementations differ among manufacturers. Also, depending on the implementation, unique attributes of the subsystems being pooled may be lost in the fabric-based pool.

Fabric-based virtualization

Fabric-based virtualization offers the ability to span multiple heterogeneous storage subsystems. It raises the level of abstraction from the storage subsystem to the storage network. It enables a logical view of multiple storage subsystems connected to a storage area network (SAN). Also, the SAN itself inherently virtualizes the routing of data over multiple paths to reduce points of failure and improve quality of service, expedite zoning and partitioning, and facilitate the movement of data across long distances.

In this way, fabric-based virtualization goes right to the heart of the cost-complexity paradox by removing the complexity that drives up the cost of operating collections of storage subsystems, such as enterprise SANs, which, as previously noted, already are virtualized connections between host servers and multiple storage arrays. Such fabric-based virtualization reduces the complexity and improves service levels by letting administrators manage the SAN and provision its resources more holistically and logically instead of having to wrestle with the specifics of each component system.

HP StorageWorks External Storage XP

HP StorageWorks External Storage XP (ES XP) running on an XP storage array decreases the stress of everyday heterogeneous data management by reducing the amount of effort needed to manage a group of storage systems. Because ES XP data is accessed from a unified pool, and the amount of management needed to maintain the pool is relatively low, total cost of ownership may be reduced. As an added benefit, the pool can encompass a number of tiers of disk storage.

The internally clustered XP storage array delivers storage virtualization based on a proven high-performance/high-availability architecture with no single point of failure and boot-once online scalability. All XP components are redundant and hot-swappable, including processors, I/O interfaces, power supplies, batteries, and fans. Non-disruptive XP online [1] upgrades allow capacity

[1] While the XP firmware (FW) may be updated online, individual external arrays must typically be taken offline (due to their limitations) when their FW is updated. If losing access to data is not an option, consider host-level RAID-1, or use Tiered Storage Manager to migrate data online to a spare array before the FW upgrade.


and features to be added to meet the needs of today and tomorrow—without ever requiring a power down.

The advanced virtualization technology of ES XP simplifies data migration, volume management, and data replication in SAN environments—supporting a theoretical maximum of 16–247 petabytes [2] of heterogeneous storage—all of which can be managed from one single pane of glass [3].

 Application transparent data mobility

ES XP enables transparent data movement. Data being virtualized by ES XP can be transparently migrated anywhere within the virtualized storage pool. This enables application transparent tiered storage data migration capability and data movement for array decommissioning, repurposing, upgrading, maintenance, data center moves or consolidation, implementation of Information Lifecycle Management (ILM) policies, and performance tuning.

Benefits include:

•  Data movement within an array or to a different array

•  Non-disruptive data migration

•  No SAN or server reconfiguration

•  Fewer staff management hours

•  Elimination of application downtime
•  More agile environment

Simplified volume management and provisioning

ES XP simplifies the management of virtual volumes by dynamically reallocating capacity and pooling heterogeneous volumes into a single reservoir, which results in increased capacity utilization. The ES XP “Set and Forget” approach to managing external disk arrays allows the external disk arrays to be completely configured one time. Once configured, all connected storage can then be managed from ES XP as virtual volumes, with the capability of dynamically reallocating capacity and pooling heterogeneous volumes into a single reservoir.

•  Reduced cost and complexity in provisioning heterogeneous storage.
•  Heterogeneous storage appears as a single pool of capacity, managed from a single console, easing administrative tasks.

•  Physical storage resources can be reallocated without a disruptive remounting of volumes on servers.

•  A single data management tool can be used to provision storage capacity from the heterogeneous storage pool.

Heterogeneous local and remote data replication

ES XP enables local and remote heterogeneous data replication for data distribution, data mining, backup and restore, and data validation testing. Data can be replicated between unlike storage systems both synchronously and asynchronously, locally and over wide-area distances.

[2] 16 PB for the HP StorageWorks XP10000 Disk Array, 32 PB for the HP StorageWorks XP12000 Disk Array, and 247 PB for the HP StorageWorks XP24000 Disk Array, although a typical/reasonable maximum configuration for the XP10000 Disk Array would be more in the 200–500-TB range (depending greatly on workload type and intensity). For more details, see your HP storage representative.

[3] While all the data can be centrally managed from the XP, the individual storage arrays still require their own management tools for configuration changes.


External Storage XP details

This document is intended to be an overview of the capabilities, possible configurations, and uses of ES XP. This document is not intended to be a complete listing of all features, operating systems, or external devices supported by ES XP. For a current support matrix, see your HP representative.

ES XP can be connected between Microsoft® Windows®, HP-UX, AIX, Solaris, NetWare, SGI IRIX, Tru64, OpenVMS, and Linux hosts and a wide variety of HP, IBM, EMC, and HDS disk arrays, as shown in Figure 1. Connections to hosts and disk arrays can be either direct Fibre Channel connections or Fibre Channel SAN fabric connections.

Figure 1. StorageWorks XP utilizing External Storage XP to virtualize storage

[Figure: hosts attach through a SAN to the XP, which in turn connects through a SAN to external HP MSA, HP EVA, IBM, HDS, and EMC disk arrays.]

The redundant StorageWorks XP architecture

Note
For the purposes of consistency, the majority of this document describes the attributes of ES XP running on the XP10000 Disk Array (for example, number of ports, MPs, and so on). However, ES XP can also run on the StorageWorks XP24000/12000 Disk Arrays, which have significantly more port/MP/crossbar resources.

Note
Due to performance considerations, OLTP applications and SATA external storage are not recommended uses for ES XP.

Figure 2 illustrates the fully redundant internal architecture of the XP storage array, which is designed with no single point of failure. There are two or more of every component in an XP system for redundancy (including the disk spindles and controller—not shown). One half of each component pair is located in each of the two clusters. Each cluster can be powered by either of the AC power supplies.

The Client Host Interface Processors (CHIPs) are XP components that contain the Fibre Channel ports. The orange lines show the external Fibre Channel connections to hosts and external disk arrays. One CHIP pair (16 Fibre Channel ports) is included in every XP10000 Disk Array; the other CHIP pair (an additional 32 Fibre Channel ports) is optional.


The central, highly available cache in the XP can be used to increase the write speed to external arrays by buffering write data in cache and using an ES XP cache de-staging mechanism to write to the external disk array. Depending on the needs of the application and the relative speed of the host compared to the speed of the external disk array connections, ES XP can be configured to use cache in an asynchronous way (in-order data buffering), or in a synchronous way where data is written to the external disk array before acknowledging completion to the host.
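The two cache modes can be sketched with a toy model (illustrative only; the class names and behavior below are invented for explanation and are not actual XP firmware):

```python
# Toy model of the two ES XP cache modes described above.
# All names and behavior are illustrative, not actual XP internals.

class ExternalArray:
    """Stand-in for a (slower) external disk array."""
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data  # imagine this is the slow path


class XPCache:
    def __init__(self, backend, synchronous):
        self.backend = backend
        self.synchronous = synchronous
        self.pending = []  # in-order buffer for asynchronous de-staging

    def host_write(self, lba, data):
        """Acknowledges the host per the configured mode."""
        if self.synchronous:
            # Synchronous: data reaches the external array before the ack.
            self.backend.write(lba, data)
        else:
            # Asynchronous: buffer in cache, ack immediately, de-stage later.
            self.pending.append((lba, data))

    def destage(self):
        """Background de-staging of buffered writes, in order."""
        while self.pending:
            lba, data = self.pending.pop(0)
            self.backend.write(lba, data)


array = ExternalArray()
async_cache = XPCache(array, synchronous=False)
async_cache.host_write(100, b"hello")   # ack'd before reaching the array
assert 100 not in array.blocks
async_cache.destage()
assert array.blocks[100] == b"hello"
```

In the asynchronous mode the host sees cache-speed writes while the XP drains the buffer to the slower external array in order; in the synchronous mode every acknowledgment waits for the external array.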

Figure 2. Redundant StorageWorks XP Fibre Channel connection architecture

[Figure: two mirrored clusters, each containing cache, a cache switch, shared memory, an included CHIP and an optional CHIP providing Fibre Channel ports, plus its own input power, AC-DC power supplies, and battery boxes.]

Fibre Channel CHIP architecture

Each Fibre Channel CHIP blade contains microprocessor modules, each of which contains two microprocessors and four Fibre Channel ports. Figure 3 illustrates the architecture of an XP Fibre Channel CHIP microprocessor module. Two Fibre Channel ports are typically serviced by a single CHIP microprocessor, except during firmware upgrades, when one CHIP microprocessor services all four Fibre Channel ports of the module while the other CHIP microprocessor in that module is being updated. The recommended best practice for ES XP configurations is to actively use only one port for each CHIP microprocessor. The other port associated with that CHIP microprocessor can be used for failover in case it needs to take over the workload from a port on the other blade of the pair. Given that, the XP10000 Disk Array can be thought of as having up to 12 host-to-storage flow-through data paths. That is, half of the 24 microprocessors face hosts, and half face external storage.


 Figure 3. The XP10000 Disk Array Fibre Channel CHIP architecture

[Figure: the built-in 16-port CHIP pair and the optional 32-port CHIP pair. Each blade holds microprocessor modules of two microprocessors and four Fibre Channel ports each; within a module, each microprocessor normally services two of the ports.]

ES XP performance depends on the workload type and the number of CHIP microprocessors installed in the StorageWorks XP. Each CHIP microprocessor is capable of approximately 3 KIOPS [4].

Random workloads consist of approximately 60% reads/40% writes, with 8 KB of data per IO. Best performance is achieved by matching the number of StorageWorks XP ports connected to a device with the random IOPS performance desired/available from that device.

Sequential workloads usually consist of more data (64 or 128 KB) per IO. Best performance is achieved by matching the required bandwidth of the device to the number of StorageWorks XP ports times 3 KIOPS per port times the IO size.
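These two rules of thumb reduce to simple arithmetic. A back-of-the-envelope sketch (using the approximate 3 KIOPS-per-microprocessor figure quoted in this section; real sizing should be validated with your HP representative):

```python
# Back-of-the-envelope ES XP port sizing, using the rules of thumb above.
# The ~3 KIOPS-per-port figure is the approximation quoted in this section;
# treat all results as rough estimates only.

KIOPS_PER_PORT = 3_000  # ~3 KIOPS per active port (one MP per port)

def random_ports_needed(target_iops):
    """Ports needed to sustain a random (~60/40 R/W, 8 KB) workload."""
    return -(-target_iops // KIOPS_PER_PORT)  # ceiling division

def sequential_bandwidth(ports, io_size_kb):
    """Rule of thumb: ports x ~3 KIOPS x IO size, in MB/s."""
    return ports * KIOPS_PER_PORT * io_size_kb / 1024

# A device delivering ~9,000 random IOPS is matched by 3 active ports.
print(random_ports_needed(9_000))           # 3

# Two active ports moving 128-KB sequential IOs: ~750 MB/s.
print(round(sequential_bandwidth(2, 128)))  # 750
```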

Up to 16 PB of inactive data, like archived data that is seldom accessed, can be virtualized by one StorageWorks XP10000 Disk Array.

Best total performance is achieved with the same number of CHIP microprocessors facing the servers as facing the external disk arrays.

[4] While the microprocessors in the XP12000 Disk Array are capable of approximately 3 KIOPS, the microprocessors in the XP24000 Disk Array are capable of approximately 2x the XP12000 Disk Array value.


XP software

A full complement of management software tools is available with every StorageWorks XP. These software tools are licensed based on the capacity to be virtualized. A sampling of available software titles:

•  HP StorageWorks Command View XP Advanced Edition—Provides centralized web-based management for ES XP and XP. It consists of a user-friendly GUI, SMI-S provider, host agents, and a command line interface (CLI).

•  LUN Configuration and Security Manager XP—Add and delete paths, create custom-sized volumes [5], and configure LUN security to provide controlled, secure access to data stored behind an XP.

•  Auto LUN XP—Enables manual data migration.

•  Tiered Storage Manager XP—Enables data migration to match key user quality-of-service requirements to the storage attributes of ES XP controlled storage.

•  Data Shredder XP—Can optionally be invoked after a successful Tiered Storage Manager data migration to delete the data on the migration source volume.

•  HP StorageWorks Business Copy XP, Snapshot XP—Makes nearly instantaneous local full or space-efficient point-in-time copies or mirrors of data without ever interrupting online production.

•  HP StorageWorks Continuous Access XP Synchronous/Asynchronous/Journal—Enables data mirroring between local and remote XP disk arrays.

•  HP StorageWorks XP Disk/Cache Partition—Allows resources on a single XP to be divided into a number of distinct subsystems that are independently and securely managed.

XP terms

The following terms are important in understanding ES XP:

•  CHIP Pair—A printed circuit assembly pair used for data transfer between the external environment and ES XP cache. A CHIP pair contains multiple Fibre Channel ports, which are used to connect to hosts, external storage devices, and remote sites by way of SANs or by direct connect.

•  Cluster (CL)—An isolated portion of ES XP cache, Fibre Channel CHIP blades, and so on, such that hardware failures within one cluster will not affect the continued operation of the other cluster.

•  Device group—An LU group containing one or more Business Copy XP or Continuous Access XP pairs such that operations applied to the group affect every group LU.

•  External disk array—The disk arrays connected to ES XP.

•  External device group—A grouping of one or more external disk array volumes.

•  External port—An XP port configured to connect to an external disk array—like a host bus adapter (HBA) on a server.

•  LU—Logical Unit or disk volume.

•  LDEV—An XP logical device, typically manifesting in a particular emulation type (for example, an LDEV might be an OPEN-V LDEV in 3D+1P RAID 5). After an LDEV is registered within ES XP, it can either be mapped directly to a host group and host-viewable LU, aggregated with other LDEVs to create a larger LU (Logical Unit Size Expansion [LUSE]), or carved up to create a smaller LDEV and LU.

•  Microprocessor (MP)—The solid-state engine responsible for the functionality and performance of one or more Fibre Channel ports.

[5] For best performance, map external volumes into the XP at their native size. Because the use of many mirroring products (for example, Business Copy XP) requires an exact match between volumes, EVA volumes should ideally be sized in multiples of 15,360 MB (31,457,280 blocks, 16,384 cylinders) to avoid the performance ramifications of using CVS with external storage.
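As a quick arithmetic check on the footnote's figures (assuming the standard 512-byte SCSI/FC block size), the stated block and cylinder counts are self-consistent:

```python
# Check the volume-sizing figures in footnote 5 against each other
# (assumes 512-byte blocks, the usual SCSI/FC block size).

blocks = 31_457_280
cylinders = 16_384
block_size = 512                            # bytes per block (assumption)

blocks_per_cylinder = blocks // cylinders   # 1,920 blocks per cylinder
mb = blocks * block_size // (1024 * 1024)   # size in MB (binary)
print(blocks_per_cylinder, mb)              # 1920 15360
```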


•  Pvol—The primary or “production” side volume of a Business Copy XP or Continuous Access XP pair.

•  Svol—The secondary or “mirror” volume of a Business Copy XP pair or Continuous Access XP pair.

Figure 4. The Set and Forget configuration sequence

(1) Use the external array’s local management station and GUI to provision/expose LUs to the XP (select RAID type, sizes, and so on). Typically a one-time operation.

(2) The external LUs are “discovered” by the XP and mapped in as V/LDEVs, and assigned LUNs, host ports, and host groups. Typically a one-time operation.

(3) Day-to-day XP storage management operations (LUN provisioning, re-sizing, aggregating, and so on) can typically occur without further external storage changes. The XP adds its premium features to the external storage (which is treated as generic storage).
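The sequence in Figure 4 can be modeled abstractly. This is a conceptual sketch only; the class and method names are invented for illustration and are not the Command View XP or RAID Manager API:

```python
# Conceptual model of the "Set and Forget" flow in Figure 4.
# All names are illustrative; this is not an actual XP/Command View API.

class ExternalLU:
    """An LU exposed once by the external array's own tools (step 1)."""
    def __init__(self, name, size_gb):
        self.name, self.size_gb = name, size_gb

class VirtualizedPool:
    """The XP-side view after discovery and mapping (step 2)."""
    def __init__(self):
        self.ldevs = []

    def discover(self, external_lus):
        # One-time: map external LUs in as (V)LDEVs.
        self.ldevs.extend(external_lus)

    # Step 3: day-to-day operations happen here, with no further
    # changes on the external arrays themselves.
    def total_capacity_gb(self):
        return sum(lu.size_gb for lu in self.ldevs)

    def luse_aggregate(self, names):
        """Aggregate several LDEVs into one larger LU (LUSE)."""
        parts = [l for l in self.ldevs if l.name in names]
        return ExternalLU("+".join(l.name for l in parts),
                          sum(l.size_gb for l in parts))

pool = VirtualizedPool()
pool.discover([ExternalLU("eva_lu0", 500), ExternalLU("msa_lu0", 250)])
print(pool.total_capacity_gb())                  # 750
big = pool.luse_aggregate(["eva_lu0", "msa_lu0"])
print(big.size_gb)                               # 750
```

The point of the model is step 3: once the external LUs are registered, capacity reporting, aggregation, and similar operations are XP-level actions against the pool, not reconfigurations of the external arrays.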

 

External Storage XP configurations

An XP system using External Storage XP without remote replication

An XP using ES XP without remote data replication should be configured as shown in Figure 5. In this configuration, data travels “vertically” in the diagram between hosts and virtualized storage.

Maximum XP10000 ES XP performance is achieved by configuring up to 12 [6] active paths between the hosts and the XP10000 Disk Array, with one CHIP microprocessor servicing each path; and up to 12 active paths between the XP10000 Disk Array and the external disk arrays, with one CHIP microprocessor servicing each path.

The best practice for configuring the number of active Fibre Channel paths per external disk array varies by the workload type. A workload containing a significant random-access component (for example, OLTP/dB) can be configured with one active and one inactive Fibre Channel path per external disk array port and LU [7]. For a predominantly large-block sequential workload (where only one LU will typically be accessed at a time), two active and two passive paths [8] to the entire external disk array may allow for maximum storage connectivity.

[6] Or up to 56 storage-facing and 56 host-facing microprocessors (MPs) for the XP12000 Disk Array.
[7] For instance, an HP StorageWorks Enterprise Virtual Array (EVA4000) could have one active and one inactive Fibre Channel path into each of eight EVA ports, to eight EVA LUs.
[8] That is, one active and one passive path per disk array subsystem/frame.


Figure 5. StorageWorks XP10000 Disk Array using ES XP without remote replication

[Diagram: servers and a backup server connect over 1-12 active host paths to the XP, which connects over 1-12 active storage paths to heterogeneous disk arrays.]

StorageWorks XP using External Storage XP with remote replication

The XP can replicate data to another XP disk array located at a remote site, either synchronously or asynchronously. An XP using ES XP with remote data replication should be configured as shown in Figure 6. In this configuration, data not only travels “vertically” in the diagram between hosts and virtualized storage, but may also travel “across” between sites.

Best practice for configuring an XP10000 Disk Array with ES XP for maximum performance with remote data replication is to configure up to 10 active paths between the hosts and ES XP, with one CHIP microprocessor servicing each path, and up to 10 active paths between ES XP and the external disk arrays, with one CHIP microprocessor servicing each path. The remaining four Fibre Channel ports are used for the Continuous Access XP links.


Four Fibre Channel paths are recommended for the Continuous Access links at a minimum (two in each direction). A maximum of eight Fibre Channel ports may be used (four in each direction).

Another best practice for a heavily write-biased workload in this configuration is to ensure that the Continuous Access XP Svol (remote disk array) is as fast as or faster than the Continuous Access Pvol (source disk array), so data does not needlessly accumulate in ES XP cache. The Figure 6 configuration also supports a remote RAID Manager command device on the Continuous Access XP RCU (remote site), which can communicate with the Continuous Access XP MCU (local site) command device for the purpose of remote pair management (for example, Business Copy XP and Continuous Access XP).

Figure 6. StorageWorks XP10000 Disk Array using ES XP with remote replication

[Diagram: at each site, servers and a backup server connect over 1-10 active host paths to an XP, which connects over 1-10 active storage paths to heterogeneous disk arrays; the two XPs are linked by four Continuous Access paths.]

Supported External Storage XP configuration topologies

Diagrams (a) through (f) in Figure 7 describe the various ways in which an XP may be configured in relationship to hosts and external disk arrays. Although only a single host is shown, ES XP is capable of connecting to many hosts. For best performance, configuration (b) should be avoided (or at least restricted to a LUSE configuration composed of four or fewer LDEVs). Configuration (d) is not recommended to contain Snapshot Pvols or Snapshot pools on Modular Storage Array (MSA) arrays. Not shown is the fact that the Nishan FC-IP-FC extender may be used to locate the external array up to 1,000 km from the XP.


Figure 7. Valid ES XP topologies

[Diagram: topologies (a) through (f). Note: the path lines shown are logical paths; most configurations require at least two physical paths.
(a) A single external LU (regular or CVS), XP cache on or off, SAA, A/P, or AAA controllers, with or without a Nishan extender pair.
(b) External LUs aggregated into a LUSE LU by the XP.
(c) External LUs aggregated by host LVM and/or software RAID into an Lvol (can use more than one XP).
(d) External Business Copy/Snapshot Pvol and Svol; part of one or both can be in an XP Cache LUN.
(e) Continuous Access XP between two XPs with external LUs (supported for all arrays but MSA).
(f) Direct host access to an array LU not involved with the XP (supported for all arrays but MSA).]


Basic building block

The ES XP basic building block is shown in Figure 8. This building block consists of a matched set of:

•  One host-facing Fibre Channel CHIP microprocessor using only one active port (the inactive port is reserved for failover)

•  One external disk array-facing Fibre Channel CHIP microprocessor using only one active port (the inactive port is reserved for failover)

•  One Fibre Channel link to one dedicated external disk array port and one LU

Figure 8. Basic building block

[Diagram: an application host connects through redundant FC switches to a host-facing CHIP MP on each XP cluster (CL-1 and CL-2); a matched storage-facing CHIP MP on each cluster connects through redundant FC switches to one LU on an external array (for instance, an EVA) with controllers A and B. Each MP has one active and one inactive failover port. For high availability, use a matched set of one host-facing and one storage-facing CHIP microprocessor, one Fibre Channel path, one external array port, and one external LU. Good for light random or heavy sequential workloads; scale up from this basic configuration based on the type and intensity of the workload.]


For optimum performance, ES XP should be configured with up to 12 (XP10000 Disk Array) host-facing and 12 external disk array-facing CHIP microprocessors, each actively using only one of its two ports. The diagram in Figure 9 illustrates (on the external disk array side) how each CHIP microprocessor should be connected to one active and one inactive (failover) path by way of its two Fibre Channel ports. The external array LU should consist of enough disk spindles (at typically less than 150 IOPS per spindle) to keep up with the potential throughput of the active Fibre Channel link (for example, 20 spindles).
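As a rough illustration of that spindle guidance (my own sketch built only on the rule-of-thumb numbers quoted in this paper, not an HP sizing tool), the example's 20 spindles at under 150 IOPS each imply roughly 3,000 random IOPS per active link:

```python
import math

def spindles_for_link(link_iops: float, iops_per_spindle: float = 150.0) -> int:
    """Estimate the disk spindles an external LU needs to keep one
    active Fibre Channel link busy, assuming random-access IOPS scale
    roughly linearly with spindle count (the paper's rule of thumb is
    fewer than ~150 IOPS per spindle)."""
    return math.ceil(link_iops / iops_per_spindle)

# A link driven at ~3,000 random IOPS calls for about 20 spindles,
# matching the "for example, 20 spindles" figure above.
print(spindles_for_link(3000))  # 20
```

The same arithmetic gives 6 spindles for an 800-IOPS LU, consistent with the Exchange example later in this paper.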

Figure 9. ES XP port configuration (XP10000 Disk Array)

[Diagram: hosts connect through redundant FC switches to the host-facing CHIP MPs on XP clusters CL-1 and CL-2, and the storage-facing CHIP MPs connect through redundant FC switches to the external disk arrays; each MP uses one active port, with its second port as a failover path.]


When Continuous Access XP is used, best practice is to use at least two CHIP microprocessors from the host-facing side and two CHIP microprocessors from the external disk array-facing side for the bi-directional Continuous Access XP links. Doing this leaves a maximum of 10 host-facing CHIP microprocessors and 10 external disk array-facing CHIP microprocessors.

If MSA[9] external disk arrays are being considered:

•  MSA disk arrays with two controllers are strongly recommended, with two paths connected to the XP. MSA disk arrays with only a single controller are not recommended.

•  MSA disk arrays are not supported for use with Continuous Access (Pvols, Svols), and are not recommended for use with Snapshot (Pvols or pool).

Failover of a failed external disk array port is handled automatically by the XP for all supported disk array classes:

•  Symmetrical Active/Active controller disk arrays like the HP XP, EMC DMX, EMC Symmetrix, HDS Lightning, Sun StorEdge 9900, and IBM DS4000 series

•  Asymmetrical Active/Active controller disk arrays like the HP StorageWorks 4000/6000/8000 Enterprise Virtual Arrays (EVA4000/6000/8000), or the EMC CX and HDS Thunder series

•  Active/Standby controller disk arrays like the HP StorageWorks 3000/5000 Enterprise Virtual Arrays (EVA3000/5000), or the HP MSA series

Active load balancing to an external disk array volume by way of multiple paths will only occur for the symmetrical Active/Active controller class of arrays. For more details, see your HP storage representative.

Specific configurations

Array repurposing

The diagram in Figure 10 illustrates how an XP with ES XP may be permanently placed between a working host and array connection, without taking the application offline.

Benefits:

•  Redirect an application’s access path from direct connect to access by way of an XP

•  Preserve data integrity and application uptime during the process

•  Use Mirror/UX and LVM with HP-UX; VxVM (VERITAS Volume Manager) mirroring may be used for other operating systems

[9] Consider a re-marketed EVA as a low-cost, high-functionality alternative to MSA.


Sequence:

•  Time 0—Begin. Application running on a legacy array LU as LVM Lvol A mapped to LVM Pvol 1.

•  Time 1—Create a SW RAID duplicate. LVM Lvol A can now be served by either LVM Pvol 1 or 2.

•  Time 2—Break one side of the SW mirror. The other side carries on without interruption.

•  Time 3—Re-establish the SW mirror by way of a second external LU.

•  Time 4—Break the second SW mirror path and decommission the second external LU.

•  Time 5—Application running on a legacy array LU by way of the XP, still using LVM Lvol A.

Figure 10. Online LU reconfiguring from direct connect to virtualized

[Diagram: the Time-0 through Time-5 states of the sequence above, showing the application's LVM Lvol-A served by Pvol 1 (legacy array LU) and Pvol 2 (external LU by way of the XP) as the software mirror is created, split, and re-established.]


 Array repurposing can also take place offline to the application, as illustrated in Figure 11.

Sequence:

•  Time 0—Begin. Application running.

•  Time 1—Shut down application and remove connection to legacy array.

•  Time 2—Re-establish the LU by way of ES XP as an external LU, now accessed by way of XP LU Y.

Figure 11. Offline reconfiguring from direct connect to virtualized

[Diagram: at Time-0 the host application accesses the array's LU X directly; at Time-1 the application is shut down and the direct connection is removed; at Time-2 the host accesses XP LU Y, which maps to the array's LU X as an external LU.]

Data migration across heterogeneous external disk array tiers

As the two diagrams in Figure 12 show, the Tiered Storage Manager (TSM) plug-in for Command View XP AE (or alternatively, Auto LUN XP by way of the remote Web Console) is a very useful tool for managing the online migration of data, either external to external or internal to external, by way of a Graphical User Interface (GUI). Data migrations can occur while the application is using the data, and can optionally be scheduled to occur automatically during a period of low storage demand.

Low-utilization data can be moved to less expensive (slower) disks while maintaining the same host LU number and XP LDEV number.

Use manual TSM/Auto LUN XP operations to migrate less frequently accessed data online from one external disk array tier or frame to another.

Benefits:

•  Improved $/GB for online access to older data

•  Preservation of prior investments in legacy disk arrays


Figure 12. Migrate or replicate LUs across internal/external tiers or arrays

[Diagram: a TSM client (by way of the Web Console) drives data migration/replication across “tiers” (15K-rpm SCSI to 7.5K-rpm SATA behind one XP) and across “domains” (from the arrays behind one XP to the arrays behind another), transparently to the host.]

[Diagram: seasonal migration across internal/external storage tiers. Example: seasonally migrate application data online to internal/external storage with different performance attributes as application demand fluctuates (a hypothetical example for a U.S. income-tax department). Tier-0: internal cache LUNs; Tier-1: internal XP spindle LUNs; Tier-2: external XP; Tier-3: external EVA; Tier-4: external MSA.]


Microsoft Exchange

The diagram in Figure 15 defines a Microsoft Exchange email “module” using an XP10000 Disk Array connected to EVA3000/5000 disk arrays. At less than one[11] I/O operation per second per user, ES XP would be a suitable match for 4,000, 8,000, or even 12,000 Microsoft Exchange users. The hardware requirement for each group of 1,000 users is a single 350-GB LU, consisting of enough[12] disk spindles to accommodate 800 IOPS of random-access workload.

Note from Figure 15 that the best practice is to dedicate an external disk array port and LU to a single active (and one inactive) Fibre Channel path.
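The module arithmetic in this section can be sketched as a quick check (illustrative only; the per-user and per-LU figures are the ones quoted here, and the function name is my own):

```python
import math

# Figures quoted in this section (assumptions, not an HP-published formula):
IOPS_PER_USER = 0.8     # peak I/Os per second per Exchange user
USERS_PER_LU = 1000     # users per 350-GB LU
LUS_PER_MODULE = 4      # LUs per 4,000-user module

def exchange_sizing(users: int) -> dict:
    """Size 4,000-user Exchange 'modules' as defined in Figure 15."""
    modules = math.ceil(users / (USERS_PER_LU * LUS_PER_MODULE))
    return {
        "modules": modules,
        "luns_350gb": modules * LUS_PER_MODULE,
        "iops_per_lun": IOPS_PER_USER * USERS_PER_LU,  # 800 IOPS per LU
        "peak_iops": users * IOPS_PER_USER,
    }

# 12,000 users -> 3 modules, 12 LUNs, 800 IOPS per LUN, 9,600 peak IOPS.
print(exchange_sizing(12000))
```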

Figure 15. Microsoft Exchange

[Diagram: a basic 4,000-user Exchange module using EVA disk arrays (XP with 8 GB cache), and three modules for 12,000 users (XP with 12 GB cache). Each module consists of a two-node host cluster and four 350-GB LUNs of at least six spindles each; each LUN must handle 1,000 Microsoft Exchange users at up to 0.8 IOPS per user normally (0.4 IOPS per user during backup), that is, 800 IOPS per LU. Not shown: the switch between the XP and the storage (enabling two Fibre Channel connections per EVA port) and the internal EVA LU failover paths to the non-owning controller.]

[11] 0.8 I/Os per second, per user.
[12] For instance, eight (the minimum number of disk drives in an EVA disk group).


Oracle

The performance of ES XP in an Oracle® environment depends on the application and workload to be placed on the external disk arrays. In general, ES XP is suitable for random-access (OLTP/dB) environments with fewer than 3,000 8-KB[13] IOPS per active Fibre Channel link. ES XP can support an application of up to about 2.5 KIOPS per LU on top of an Oracle database.

Figure 16 shows the 12 external disk array-facing CHIP microprocessors fully consumed by three EVA3000/5000 disk arrays for use in a random-access workload environment.
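Those per-link and per-LU figures also explain why Figure 16 dedicates one external port and LU to each active path; a small sketch of the arithmetic (the constants are the ones quoted in this paper, not HP-published limits):

```python
# Figures quoted in this paper (assumptions, not HP-published limits):
LINK_IOPS = 3000      # ~3,000 8-KB IOPS per active Fibre Channel link
LU_IOPS = 2500        # ~2.5 KIOPS supported per external LU
STORAGE_LINKS = 12    # storage-facing CHIP MPs/active paths on an XP10000

# One active link carries one 2.5-KIOPS LU with headroom, but not two,
# so each link gets a dedicated external array port and LU.
assert LU_IOPS <= LINK_IOPS < 2 * LU_IOPS

# Aggregate random-access ceiling of the fully populated configuration:
print(STORAGE_LINKS * LU_IOPS)  # 30000 IOPS across 12 external LUs
```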

Figure 16. Oracle

[Diagram: an XP10000 with 12 GB cache connects an Oracle host, a backup server, and tape to three external disk arrays (shown as EVA3000/5000s and a FastT; an annotation notes the arrays can be either two EVA3000/5000s or one EVA4000/6000/8000). Each array presents four 500-GB LUs at 2.5 KIOPS each; for an EVA LU to provide 2.5 KIOPS, it must consist of at least 17-21 spindles.]

Basic rules of thumb

ES XP performs best under sequential disk-access workloads, compared to random-access workloads. Low-intensity random-access (OLTP/dB) applications should be considered a secondary use for ES XP, with sequential-access (backup/archive) applications being the primary use. Medium- to high-intensity random-access workloads should not be considered appropriate for ES XP.

Sizing External Storage XP configurations

Given that:

•  Each storage-facing CHIP MP is capable of about 3,000 IOPS (8-KB, 60/40 read/write), and

•  Each CHIP MP is strongly recommended to serve only a single active port, and

•  Most single external array ports and LUNs are capable of at least 3,000 IOPS

Performance will peak with one CHIP MP and one XP port dedicated to one external array port and LUN. To the extent that peak performance is not required, customers may choose to deviate from this configuration.

As a conservative example, a customer with a light random-access workload may choose to limit his or her configuration to 12 external LUNs, served by 12 CHIP MPs (each using a single, dedicated XP port and a single, dedicated external array port). Depending on the situation, some customers may choose to have a single CHIP MP and port serve two or more external LUNs.

[13] 60/40 read/write ratio.


As a liberal example, a customer with a seldom-accessed deep archive (sequential workload) may connect each group of two CHIP MPs to one or more entire arrays. For more details, see your HP storage representative.
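The conservative example above can be folded into a first-pass sizing sketch (a rough planning aid built on the stated rules of thumb; the function and its defaults are my own, not an HP sizing tool):

```python
import math

MP_IOPS = 3000  # ~3,000 IOPS per storage-facing CHIP MP (8-KB, 60/40 R/W)

def storage_mps_needed(workload_iops: float, mps_available: int = 12) -> int:
    """First-pass count of storage-facing CHIP MPs (each with one
    dedicated XP port, external array port, and LUN) needed for a
    random-access workload on one XP10000."""
    mps = math.ceil(workload_iops / MP_IOPS)
    if mps > mps_available:
        raise ValueError("workload exceeds this XP's storage-facing MPs")
    return mps

# A random-access workload of ~9,600 IOPS needs 4 of the 12 MPs; the
# rest remain available for other workloads or Continuous Access links.
print(storage_mps_needed(9600))  # 4
```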

Partitioning

HP StorageWorks XP Disk/Cache Partition can be very useful for isolating different workloads and users on the same XP. For instance, partitioning is considered a best practice if ES XP is used for random-access (OLTP/dB) and large-block sequential (backup/archive) workloads simultaneously. Likewise, partitioning is recommended as a best practice if disparate user groups (for example, finance and R&D) share the same XP.

For many external storage applications, partitioning is recommended. A limited license (CLPR0-3 within SLPR0) is provided at no cost. For more details, see your HP storage representative.

Figure 17. Partitioning

[Diagram: Finance's, Marketing's, and Manufacturing's hosts each accessing their own partition of a single XP.]

Summary

External Storage XP delivers fabric-based storage virtualization based on the proven high-performance, high-availability XP disk array architecture. In a world where every business decision seems to trigger an IT event, ES XP helps enable an adaptive enterprise, one that can quickly capitalize on and manage change.


For more information

For more information on HP StorageWorks XP, visit: www.hp.com/go/storage

 

© 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Oracle is a registered U.S. trademark of Oracle Corporation, Redwood City, California.

4AA0-6162ENW, Rev. 1, May 2007