
Page 1: XP_WP

ThP XP Best Practices White paper

Table of contents

Executive summary............................................................................................................................... 3

Target audience .................................................................................................................................. 3

Related documentation ......................................................................................................................... 3

ThP overview....................................................................................................................................... 3

ThP and its value ................................................................................................................................. 4

When configuring ThP on the XP24000/XP20000, you SHOULD: ............................................................ 5

When configuring ThP on the XP24000/XP20000, you SHOULD AVOID: ................................................. 6

Current ThP limitations and guidelines: ................................................................................................... 7

ThP Thresholds and Alarms ........................................................ 7
    The ThP Pool Threshold ........................................................ 7
    The ThP Volume Threshold ...................................................... 8
    Setup Email Notification for ThP Alarms ....................................... 9

How to control the run-away process to prevent unnecessary pool usage ............ 9
    Challenges .................................................................... 10
    Proposed solution ............................................................. 10
    Benefits ...................................................................... 10

Host File System Considerations .......................................................................................................... 10

ThP Pool design recommendations ....................................................................................................... 11

ThP V-VOL design recommendations .................................................................................................... 12

Pool protection ................................................................... 12
    Double disk failure concerns .................................................. 12

Formula for mean time to data loss (MTTDL) ........................................ 13
    Backup and recovery ........................................................... 13
    DMT protection ................................................................ 13

    DMT Backup Guideline .......................................................... 14
    Impact of DMT Backup .......................................................... 14
    Scenario “1” 100% sequential write ............................................ 14
    Scenario “2” 100% random write (One IO/ThP page, worst case) ................. 14

Oracle 11g and ThP ................................................................ 15
    Auto-Extend ................................................................... 15
    Best practice ................................................................. 15

VMware support using ThP ................................................................................................................. 16

Page 2: XP_WP

ThP with External Storage (ES)............................................................................................................. 17

ThP usage with partitioning ................................................................................................................. 18

ThP with Auto LUN............................................................................................................................. 19

ThP with Business Copy ...................................................................................................................... 19

ThP Snapshot Combination ................................................................................................................. 20

ThP Continuous Access Combination.................................................................................................... 21

ThP shredding ................................................................................................................................... 22

ThP Pool space reclaiming ......................................................... 22
    Zero unused disk space ........................................................ 22
    Reclaim unused disk space ..................................................... 22

ThP V-VOL expansion ............................................................... 23
    Best practices ................................................................ 25

Online LUN expansion .............................................................. 25
    Expanding V-VOL using HP-UX ................................................... 26
    Expanding V-VOL using OVMS .................................................... 28

Using CVAE CLI to create ThP Pool ...................................................................................................... 30

Using CVAE CLI to create ThP V-VOLs .................................................................................................. 30

Additional ThP related CLI commands .................................................................................................. 30

Appendix .......................................................................... 31
    ThP Operation sequence ........................................................ 31
    ThP Combinations with other Program Products .................................. 32
    ThP Pool Volume specification ................................................. 33
    Restrictions on ThP Pool Volume ............................................... 33
    ThP Pool specifications ....................................................... 34
    ThP Volume specifications ..................................................... 35
    Service information messages (SIM) ............................................ 36
    Glossary ...................................................................... 37

For more information.......................................................................................................................... 39

Page 3: XP_WP

Executive summary
The XP24000/XP20000 introduced a new storage allocation concept called Thin Provisioning (ThP). Thin-provisioned storage allows the array administrator to plan user capacity needs ahead of time and allocate virtual storage based on the planned future capacity, while physically consuming only the amount of disk space that the user is actually accessing. As a result, administrators no longer have to concern themselves with storage that has been allocated but is not currently in use. This white paper provides an overview of the best practices for setting up XP Thin Provisioning (ThP) on the XP24000/XP20000 array.

The reader should have previous experience or knowledge regarding provisioning of an XP array, and the tools used to provision the array such as Remote Web Console, and XP Command View Advanced Edition software. Additionally, the reader should be familiar with replication using XP Business Copy and other program products.

Target audience The document is intended for customers who are considering the setup of ThP on the XP24000/ XP20000 array and have experience with the general setup and use of the previous XP array generations using the Remote Web Console and XP Command View AE. If the user is interested in using ThP with replication, they should already have experience using XP Business Copy and Raid Manager.

Related documentation This paper is not intended to replace or substitute for the installation, configuration, and troubleshooting guides. Therefore in addition to this white paper, please refer to other documents for this product:

HP StorageWorks XP Thin Provisioning Installation Guide HP StorageWorks XP Thin Provisioning Configuration Guide HP StorageWorks XP Thin Provisioning Release Notes HP StorageWorks XP Thin Provisioning White Papers HP StorageWorks RWC Guide HP StorageWorks XP Command View AE HP StorageWorks XP24000/XP20000 SNMP Agent Reference Guide

These and other HP documents can be found on HP Web site: http://www.hp.com/support

ThP overview
System administrators typically provide themselves with much more storage than is needed for various applications because they plan ahead for growth. For instance, an application may require five volumes with 650 GB of total actual data, but based on some analysis or at the request of the department, the system administrator creates 3 TB of volumes, allowing for data growth. If a volume is created with 500 GB of space, this space is typically dedicated to that application volume and no other application can use it. However, in many cases the full 500 GB is never used, so the remainder is essentially wasted. This is a major problem with managing storage capacity and is often referred to as stranded storage.


Page 4: XP_WP

The inefficiencies of traditional storage provisioning can negatively impact capital costs and storage administration resources. The most obvious issue is the amount of storage that becomes unused and therefore increases the total cost of ownership. Additionally, since this allocated but unused storage capacity cannot typically be reclaimed for other applications, customers have to buy more storage capacity as their environments grow, increasing cost even further. At some point, customers may actually be required to buy a completely new storage system in addition to the one they have in place.

Results from multiple surveys conducted by several analysts and storage companies targeting enterprise storage have uncovered several limitations regarding traditional storage provisioning methods. The highlights of the surveys are:

Over 50% of the customers were aware that they had stranded and unused storage capacity due to inefficient provisioning methods.

Over half of these customers had between 31-50% of stranded and unused storage. For example, if they had 10 TB of storage capacity then 3.1-5 TB was stranded.

Almost half of the total users had to buy an additional storage system (array) because they could not utilize their stranded storage. This means that although these customers had unused storage capacity they had already paid for, they needed to buy a new storage system to meet the needs of their business.

Close to one third of users are planning to buy an additional storage system in the next 12 months because they cannot access their stranded storage.

Over 75% of users felt that storage provisioning was a time and resource drain on their IT organizations.

ThP and its value
Thin Provisioning is a technology that presents large virtual volumes to hosts, backed by a pool of significantly less physical storage (see the figure below). The components of the ThP solution are:

– Pool Volumes: XP LDEVs that make up the pool of actual storage.
– ThP Pool: the aggregate of the pool volumes.
– ThP Volume: the virtual volume presented to the host, which appears to have much more capacity than is actually the case. Pages of actual storage are allocated from the pool as needed to accommodate writes to the volume.

Figure: Traditional provisioning vs. ThP. Traditional: five volumes of 500 GB + 800 GB + 500 GB + 700 GB + 500 GB = 3 TB of allocated space, of which only 175 GB + 200 GB + 75 GB + 50 GB + 150 GB = 650 GB is actually used. ThP: the same five virtual volumes draw their 650 GB of used capacity from a 1 TB ThP pool.


Page 5: XP_WP

These are ways in which ThP can reduce the cost of ownership and significantly accelerate return on investment (ROI):

Advantage: Simplified volume design
Description: Smooth implementation of a logical volume system without physical format; logical volume design independent of the physical configuration; actual capacity design independent of the logical volume configuration.
Notes: The format of the pool volumes is required.

Advantage: Optimized storage implementation cost
Description: Implementation of large-capacity volumes for a reasonable disk capacity cost.
Notes: Dependent on the particular host environment.

Advantage: Simplified design for performance leveling
Description: Design performance leveling of a volume without designing the physical data allocation.
Notes: Required to create a pool with multiple parity groups.

When configuring ThP on the XP24000/XP20000, you SHOULD:

Recommendation: Present LDEVs to the ThP pool from as many RAID Groups (RGs; for example, 4 HDDs in a 3D+1P RG) as possible, preferably from RGs hosted by multiple DKA (ACP) pairs.
Reason: Adding more disks to the pool gives the ThP pool, and consequently the ThP V-VOLs, a chance to take advantage of as many physical disks and as many DKAs (ACPs) as possible.

Recommendation: As much as possible, use equally sized LDEVs.
Reason: Each LDEV in the pool will then have close to the same number of pages allocated.

Recommendation: Divide each pool RG into a number of LDEVs that matches the number of data disks. For example, divide 3D+1P into 3 LDEVs.
Reason: This creates an even amount of stripes across all LDEVs and takes advantage of multi-processor performance more efficiently.

Recommendation: Install pool capacity in increments of entire parity groups to avoid performance interference between resources. All the LDEVs created from a ThP pool parity group should be dedicated to only that pool, and not be used as normal volumes or for other pools.
Reason: Dedicating an entire parity group to the pool simplifies identification, troubleshooting, and administration, and enables better performance.

Recommendation: Try to limit the frequency of pool expansions via LDEV addition.
Reason: This keeps the pool layout optimized from an LDEV space consumption perspective. Automatic load balancing is a slow, low-level background process.

Recommendation: For better performance, expand the pool capacity before it reaches 90% full. The pool will rebalance the pages, but the rebalance takes time to complete. It is recommended to add more storage in increments of entire array groups.
Reason: Assuming that you have equally sized LDEVs, the remaining 10% of each LDEV will be used along with the newly added LDEVs, maintaining a reasonable pool page allocation performance balance.

Recommendation: Create ThP pools based on performance expectations and by application type (production vs. testing), as well as by operating system and file system.
Reason: Better performance predictability, pool life, space manageability, and space allocation dependability.

Recommendation: If you must power off the XP24K array, make sure that you have at least 2 GB of free space on the SVP to store the VFS table information area.
Reason: This provides a full backup of the ThP system area on the SVP.

Recommendation: Place the ThP V-VOLs in known CUs apart from the traditional volumes.
Reason: This simplifies identification and administration. The Snapshot volumes should follow the same principle.

Recommendation: Divide the RAID group into a number of LDEVs equal to the usable data disks, then try to size the ThP pool volumes in integral multiples of 42 MB. You will most likely end up with slightly more or less space on the last LDEV, which is acceptable.
Reason: A ThP pool volume is divided into 42 MB pages as soon as it is added to the pool; fractions that cannot make up a full 42 MB page are wasted. Similarly, each ThP V-VOL is assigned whole 42 MB pages as necessary, even if the ThP V-VOL needs less than a full 42 MB page.

Recommendation: Prepare a host computer with an SNMP agent.
Reason: You need a host to receive the ThP alarms when a threshold is exceeded.


Page 6: XP_WP

Recommendation: For ThP V-VOLs that were previously used but are no longer needed, it is no longer necessary to perform a V-VOL format before releasing them from the pool, because the pages are zeroed automatically upon release.
Reason: This ensures that all of the pages returned to the pool free_page_queue contain “all zero” data. Volume Shredding will write the “shred” data pattern first. You could also create a Business Copy pair from a brand new V-VOL (P-VOL) to the existing V-VOL (S-VOL) to effectively write zeros to the S-VOL's pages.

Note: System option mode (SOM) 703 is used to skip the processing that clears the data area to zero. It is OFF by default, and it is highly recommended to leave it OFF.
Mode 703 = ON: the zero-clear processing is skipped.
Mode 703 = OFF (default): the zero-clear processing is not skipped.

Recommendation: Since the dynamic mapping table (DMT) used by ThP resides in the fifth shared memory (SM) set, make sure that the first four SM sets are installed first.
Reason: ThP is a separate application from other program products, and it uses its own fixed SM space.

Recommendation: If you must restore an image (sector-based) backup onto a ThP V-VOL, make sure there is enough space in the pool for the entire volume before starting the restore. Then run Discard Zero Data to reclaim unused space after each V-VOL is restored.
Reason: Image backups are usually full-volume backups that do not differentiate between data and free space (for example, a 10 GB volume with 1 GB of actual data will produce a 10 GB backup image, while a file backup will only be a 1 GB backup set). If an image backup is restored onto a ThP V-VOL, the full virtual volume space will be allocated in the ThP pool.

Recommendation: If a file backup of a ThP volume takes too long (for example, when the ThP volume contains a large number of small files), there are two ways to mitigate this:
1. When possible, create a smaller number of larger files on the ThP volume (for example, when operating a database management system such as Oracle, increase the size of an existing file using a file-extension function such as Oracle's auto-extend, rather than creating new files to extend the database table space).
2. Use Snapshot on the ThP volume, and then perform the tape backup from that Snapshot.

When configuring ThP on the XP24000/XP20000, you SHOULD AVOID:

Avoid: Mixing disks in the ThP pool:
– Mixing RAID levels (for example, mixing 2D+2D with 3D+1P, 7D+1P, 28D+4P, and so on)
– Mixing disk spindle speeds (for example, 10K and 15K)
– Mixing disk sizes (for example, 36 GB, 72 GB, 146 GB)
– Mixing disk types (different models)
Reason: ThP V-VOL performance will become unpredictable and inconsistent.

Avoid: De-fragmenting a ThP V-VOL.
Reason: This will result in undesirable pool space allocation (OS and tool dependent). Make sure the pool has enough space for the V-VOL before you start the defrag, and run Discard Zero Data after the defrag operation.

Avoid: Using init/erase or any low-level formatting where zero data is written to the volume, unless you plan on running Discard Zero Data to reclaim the unused space.
Reason: This will result in undesirable space allocation. Run Discard Zero Data to reclaim space after low-level formatting.


Page 7: XP_WP

Current ThP limitations and guidelines:
ThP Pool LDEVs become pool property upon joining the pool, so the ThP Pool LDEVs:
– Can't be used with any other program product
– Can't be presented to a host

To delete a ThP volume you must:
– First, stop all host I/O
– Un-present the ThP volume from the host storage group (HSG)
– Release the V-VOL from the ThP Pool using RWC (the pool free space should increase)

You can use Business Copy or Auto LUN to migrate ThP V-VOLs from one ThP pool to another, in order to balance the pool workloads.

The XP does not offer safeguards regarding:
– Preventing improper V-VOL allocation and pool usage
– Run-away applications causing space allocation

ThP Thresholds and Alarms

The ThP Pool Threshold Pool Threshold is the percentage (%) of the used pool capacity compared to the total pool capacity. There are two pool thresholds.

Pool Threshold 1: you can set the pool threshold between 5% and 95% in 5% increments. The default value is 70%.

Pool Threshold 2: This value is always 80% and cannot be changed.

For the user settable Pool Threshold 1, and its corresponding alarm, the setting should be based on:

The pool size: The larger the pool, the higher the threshold can be, because it is likely to take longer to consume the free pool space.

The type of file systems using the V-VOLs derived from the pool: the higher the file system's up-front space consumption (for example, writing an ownership marker every X MB), the more likely it is to lessen the rate of future space consumption.

Based on the business application’s aggressiveness of space consumption and anticipated growth needs: Trending data growth is essential for the correct setting of pool alarms.

The time necessary to order, receive, and install new physical disks varies by customer. Therefore, always give yourself and your customer enough time to react to physical pool space needs. Keep in mind that production cycles are not the same across all customer environments and that process controls differ.

If you don’t have precise answers for the above considerations (knowing that there is another threshold alarm pre-set at 80%):

1. Set the pool alarm threshold initially at 30%.
2. When the alarm triggers at 30%, increase the ThP pool alarm threshold to 40%.
3. Measure the time it takes the ThP volumes to consume the extra 10% of the ThP pool (from 30% to 40%).
4. Repeat the process, setting the ThP pool alarm threshold at 50%.


Page 8: XP_WP

5. Measure the time it takes the ThP volumes to consume the extra 10% of the ThP pool (from 40% to 50%).
6. Repeat the process, setting the ThP pool alarm threshold at 60%.
7. Measure the time it takes the ThP volumes to consume the extra 10% of the ThP pool (from 50% to 60%).
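To turn the measurements from the steps above into an estimate of when the next threshold will be reached, a simple linear projection is enough. The following is a minimal sketch, not an HP tool; it assumes you record the pool used percentage at two points in time (for example, from the RWC pool display or an SNMP query) and pass the values in as arguments.

#!/bin/sh
# estimate_days_to_threshold.sh <used%_at_first_sample> <used%_now> <days_between_samples> <target%>
# Example: ./estimate_days_to_threshold.sh 30 40 45 80
PREV=$1; NOW=$2; DAYS=$3; TARGET=$4
awk -v prev="$PREV" -v now="$NOW" -v days="$DAYS" -v target="$TARGET" 'BEGIN {
  rate = (now - prev) / days              # growth in percentage points per day
  if (rate <= 0) { print "Pool usage is not growing; no projection possible."; exit }
  printf "Growth rate: %.2f %% per day\n", rate
  printf "Estimated days until %s%% used: %.1f\n", target, (target - now) / rate
}'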

The table below represents the expected response of the ThP volume when the ThP pool has insufficient space to allocate to the V-VOLs.

Access area: Page unassigned area
  Read: Illegal request reported
  Write: Write protect reported

Access area: Page assigned area
  Read: Read enabled
  Write: Write enabled

The ThP Volume Threshold
For the ThP Volume Threshold, you have one threshold setting. This threshold can be set anywhere from 5% to 300% in 5% increments (the default is 5%).

This threshold represents the relationship of the unallocated ThP volume capacity to the available pool space.

For example, when the unallocated capacity of a ThP volume is 1 TB and the volume threshold is set at 200%, if the free pool capacity becomes smaller than 2 TB (1 TB x 200%), you will be notified via SIM and SNMP trap.

Since a POOL is likely shared by multiple ThP volumes, it is desirable to have free space larger than the unused capacity of a ThP volume. Therefore, the threshold of a ThP volume is typically set at 100% to 300%.

To best setup the ThP volume threshold for most operating systems:

1. If the ThP volume is used in a non-production/test environment, you may want to set the threshold at 5%.

2. If the ThP volume is used in a production environment with critical data, set the ThP volume threshold at 100%.

3. If the ThP volume is used in a production environment with extremely critical data, set the ThP volume threshold at 200% or higher (you will get warned much earlier than prior cases and have more time to react).

Please keep in mind that some OS and file system combinations allocate more space than others when creating a new file system. For those types of file systems (regardless of whether it’s for production or testing) please set your thresholds at no lower than 50% and apply the above rules.

A ThP pool is likely shared by several V-VOLs, so it’s preferable that the pool capacity is much larger than a single free V-VOL capacity.


Page 9: XP_WP

Setup Email Notification for ThP Alarms
1. Only ThP threshold and Program Product license capacity SIMs are sent via the email notification feature. All other SIMs must be viewed at the RWC.
2. Sample email notification:

DATE : 01/23/2009 TIME : 12:50:41 Machine : Unknown(Seq.# 10038) RefCode : 630032 Detail : The TP VOL threshold was exce

Figure 1: HP StorageWorks XP Remote Web Console

How to control the run-away process to prevent unnecessary pool usage
When using ThP V-VOL LUNs, controlling run-away file system allocations and avoiding file system (FS) fragmentation with large volumes is necessary. The following approach is suggested to help with these issues, but you are not limited to this implementation:

If, for instance, we presented a 2 TB volume (Traditional or ThP) to a file system, the file system will have the entire 2 TB address range to use for creating that file system. Over time, files are added and deleted or purged. The file system will typically attempt to allocate the next contiguous chunk of space for the new files. The file system may eventually run out of large enough contiguous chunks of space.


Page 10: XP_WP

Challenges
– It is not recommended to de-fragment a file system on a ThP V-VOL, as it may result in unnecessary page allocation.
– It can be very difficult to predict whether a user may cause a ThP V-VOL page allocation by mistake, and those mistakes can be quite difficult to correct after they occur.
– There are no safeguards concerning run-away application space allocation, so be careful while using ThP V-VOLs.

Proposed solution
For file systems that can grow their volumes on the fly (for example, Windows, most advanced UNIX file systems, and the advanced Linux file systems), where large volumes are presented via ThP to be used over an application's life cycle (for example, a 2 TB ThP V-VOL to be used over the course of the next 24 months):

1. Configure the ThP V-VOL normally for the full 2 TB size.
2. Present the 2 TB V-VOL to the host.
3. Partition the 2 TB volume into equal 250 GB host partitions.
4. Present the 1st partition to your file system volume group.
5. Let the application use it for the first six months.
6. As needed, the server admin can add more partitions or use scripting to perform the LVM expansion (see the sketch after this list).
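The exact commands depend on the volume manager and file system in use; the following is a minimal Linux LVM2 sketch of step 6, in which the device /dev/sdb2 (the next 250 GB partition of the ThP V-VOL), the volume group vg_app, the logical volume lv_data, and the ext4 file system are all hypothetical names, not taken from this paper. HP-UX would use the corresponding pvcreate/vgextend/lvextend commands plus fsadm for OnlineJFS.

# Add the next 250 GB host partition of the ThP V-VOL to the volume group
pvcreate /dev/sdb2                      # initialize the new partition for LVM use
vgextend vg_app /dev/sdb2               # add it to the application volume group
lvextend -L +250G /dev/vg_app/lv_data   # grow the logical volume by 250 GB
resize2fs /dev/vg_app/lv_data           # grow the mounted ext4 file system online

Because each increment comes from a fixed-size partition, the file system can never consume more pool space than the partitions that have been handed to it, which is exactly the control point the benefits below rely on.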

Benefits
1. The file system can never allocate more than 250 GB from the ThP pool unless permitted.
2. The pool consumption becomes predictable over time, resulting in better ThP management.
3. The storage administrator is not as involved in the volume's growth, but is still notified in time to better manage the physical pool growth.
4. Scripting can be used to automate the volume expansion based on pre-set policies.
5. The file system is less fragmented when you expand the volume on demand rather than over-provisioning the volume ahead of time.

Host File System Considerations The following table shows utilization efficiency by file system type for each operating system (OS).

(FS created using default parameters.) OS and FS should be carefully considered when using Thin Provisioning.

For OSs or FSs other than those listed in the table, notify users in advance that the space utilization efficiency cannot be guaranteed.

Note: Applications noted are not recommended for any thin provisioning product, not just HP’s implementation.


Page 11: XP_WP

OS                          FS           Recommended
HP-UX                       JFS (VxFS)   Yes
HP-UX                       HFS          No (1)
Windows Server 2003/2008    NTFS         Yes
Linux                       XFS          Yes
Linux                       VxFS         Yes
Linux                       Ext2, Ext3   Yes (2)
Solaris                     UFS          No (1)
Solaris                     VxFS         Yes
Solaris                     ZFS          Yes (3)
OpenVMS                     VMS          Yes
NonStop                     NSK          Yes
VMware (ESX Server)         VMFS         Yes (4)(5)(6)
Tru64                       VFS          Yes
Tru64                       AdvFS        Yes
AIX                         JFS          No (1)
AIX                         JFS2         Yes
AIX                         VxFS         Yes

(1) At FS creation time, the capacity of the pool is consumed up to 100% of the ThP V-VOL capacity.
(2) At FS creation time, the capacity of the pool is consumed up to 30% of the ThP V-VOL capacity.
(3) ZFS "zpool scrub" is not recommended because it will force the volume to fully allocate.
(4) If VMware eagerzeroedthick formatting is used, run Discard Zero Data from the RWC so the pool reclaims pages that have been zeroed.
(5) VMware thin formatting can result in less than optimum new page allocation when multiple V-VOLs are used in a single VMFS volume.
(6) VMFS does not support online volume expansion. Online volume expansion will work if the volume is presented as a raw device to the guest OS, and if the guest OS supports it.

ThP Pool design recommendations
1. Generally, pool performance increases as you add more disks (parity groups).
2. Choose the RAID level and HDD type for each pool based on performance and/or protection. For example, a pool composed of RAID 1 SSD would be considered the highest-performing tier, and a pool with RAID 1 FC would be considered the safest tier.
3. Consider the amount of front-end bandwidth you will allocate to the V-VOLs bound to the pool. Then choose the HDD type, RAID level, and number of parity groups per pool to match the front-end bandwidth.
4. Divide the parity group into LDEVs equal in size to the data disks in the group (one LDEV per data disk). For example, RAID 5 3D+1P would be divided into 3 LDEVs. If the size of the data disk does not divide into 42 MB evenly, create as many LDEVs as possible in 42 MB increments and leave the remaining space in the final LDEV. For example, 3D+1P of 144 GB drives would yield 2 LDEVs of 147462 MB, and the last LDEV would be 147444 MB.
5. When adding multiple parity groups to the pool, choose parity groups from different DKA sets to maximize performance.


Page 12: XP_WP

Item                       Specification
Emulation type             OPEN-V
RAID level                 All XP24000/XP20000 supported levels (including parity group concatenation)
HDD type                   All XP24000/XP20000 supported HDD types
Creation                   By LDEV
Capacity of Pool Volume    8 GB to the maximum allowable LDEV size for the array (approximately 3 TB today)

ThP V-VOL design recommendations
1. Create the ThP V-VOL group (ThP V-VOLs) and set up the volume size, CU, and so on. Then attach the ThP V-VOL to a pool.
2. Create one ThP V-VOL per V-VOL group so that the V-VOL is guaranteed room for expansion at a later time. System mode 726 set to ON will force users to create one V-VOL per V-VOL group.
3. Do not over-provision the V-VOL unless the customer absolutely demands a larger volume. You can expand the V-VOL online at a later time if needed.
4. The ThP V-VOL is shown with an "X" notation to differentiate it from other XP volumes. Example:
   • ThP V-VOL (00:FF:00 X)
   • Snapshot volume (00:EE:00 V)
   • External storage volume (00:AA:00 #)
   • Regular volume (00:00:00)

Pool protection

Double disk failure concerns
The larger your pool becomes, the greater the chance of the highly unlikely event of a double disk failure within the same parity group. Keep in mind that this double disk failure risk is equivalent to the risk encountered when using LVM, VxFS, or any other host-based virtualization.

Thin Provisioning does not add any additional double disk failure risk. The XP uses Dynamic Sparing to prevent double disk failure. If you lose a drive in an array group, the spare kicks in and data is restored.

Dynamic sparing is a method of removing a disk drive from service if its read/write errors exceed a certain threshold. On normal read and write operations, the array keeps track of the number of errors that occur. If the error threshold is reached, the system considers that disk drive as likely to cause an unrecoverable error and automatically copies the data from that disk drive to a spare disk drive.

The odds of a complete double drive failure are extremely slim because the rebuild of the spare drive will complete before the second drive fails.

If a double drive failure does occur you will receive a 623XXX (XXX is the pool ID.) alert, and the pool will go into a Blockade state.

To completely avoid data loss from a double drive failure, use RAID 6. Otherwise, you will have to recover the pool from backup. Additionally, for your database, place the data files in one pool and the log backups in another.


Page 13: XP_WP

NOTE Single drive failure in multiple parity groups within the same pool will not affect overall performance of the pool.

Formula for mean time to data loss (MTTDL)

For RAID 5:

MTTDL = MTTF^2 / (N * (G - 1) * MTTR)

For RAID 6:

MTTDL = MTTF^3 / (N * (G - 1) * (G - 2) * MTTR^2)

(MTTF, the drive mean time to failure, may be taken as the drive MTBF.)

Where N is the number of drives in a pool, G is the number of drives in a parity group, and MTTR is the correction copy time.

In the case of a 300 GB 15K disk in 7+1 and 256 drives in a pool MTTDL is 45,553,935.96 hours or 1,898,080.65 days or 271,154.37 weeks or 63,059.16 months.

A drive MTBF of 520,833 hrs is provided by the manufacturer when calculating a single drive failure.

The formula assumes that disk failures are independent and completely uncorrelated, an assertion borne out by years of array operation. Aging of drives may influence these results.
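A worked example of the RAID 5 formula (the correction copy time is not stated in this paper; the roughly 3.3-hour MTTR used here is an assumed value chosen to be consistent with the result quoted above):

MTTDL = 520,833^2 / (256 * 7 * 3.3) ≈ 2.71 x 10^11 / 5,914 ≈ 4.6 x 10^7 hours

which is on the order of the 45.5 million hours (about 5,200 years) quoted above.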

Backup and recovery
It is best to back up your data by file instead of by volume in case you do need to recover your volumes. During recovery, you will not fully allocate your V-VOL if you only backed up the files instead of the raw volume. You can do full volume backups, but make sure your pool contains enough disk space for the volumes, and reclaim the pages after recovery completes.

DMT protection
The dynamic mapping table (DMT) contains all of the pointers from the V-VOLs to the spindles on the disk. If the DMT is lost, all of the data is effectively lost; therefore, a protection mechanism has been added to prevent losing the DMT.

The DMT is regularly saved in the reserved area of the ThP Pool. As a result, the DMT is recoverable even after a worst-case power failure.

During an orderly power down, the DMT is also saved to the System Disk. With an unexpected power loss, when power comes back:
– While the batteries are still good: the DMT is intact and all is well.
– After the batteries have failed: all is still well. The DMT saved on the ThP pool HDDs is automatically restored upon power recovery. The restoration process can add several minutes to the power-up process.


Page 14: XP_WP

DMT Backup Guideline
Backup area:
– Size of storage area for backup: up to 4 GB per pool
– Location of backup area: head of the pool
– 4 GB can back up enough metadata for 10 PB of user data

Impact of DMT Backup
For the following scenarios:
– 420 GB ThP V-VOL, i.e. 10K ThP pages (42 MB each)
– Assume the host sustains ~2K IOPS to the ThP V-VOL with an 8 KB I/O size (42 MB / 8 KB = 5,250 8 KB I/Os per page)
– Assume a new page allocation causes a "slow I/O"
– Assume an I/O to an already allocated page is a normal "fast I/O"

Scenario “1”: 100% sequential write
What should you expect with a 100% sequential write?
1. One I/O out of every 5,250 will take longer to be accepted.
2. The total number of “slow” I/Os is 10K over the lifetime of the V-VOL (equal to the total number of ThP pages).
3. If the host consumes the 420 GB space in 6 months, the total number of “slow” I/Os is only 10,000 out of 55+ million 8 KB I/Os spread over 6 months; all other I/Os run at the “fast I/O” rate.

Scenario “2”: 100% random write (one I/O per ThP page, worst case)
What should you expect with 100% random writes?
1. In the worst case, ThP allocates one page per I/O because of the host's random I/O address pattern.
2. The total number of “slow” I/Os is 10K over the lifetime of the V-VOL (equal to the total number of ThP pages).
3. At a host I/O rate of 2K IOPS, the 420 GB volume of 10K pages can be fully allocated in roughly five seconds at the “slow I/O” rate.
4. After the ThP V-VOL pages are fully allocated, the remaining 55+ million I/Os return to the normal “fast I/O” rate.
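Consolidating the arithmetic behind both scenarios (all figures taken from the assumptions listed above):

420 GB / 42 MB per page   = 10,000 pages, i.e. 10K “slow” allocation I/Os over the V-VOL's life
42 MB / 8 KB per I/O      = 5,250 I/Os per page (so roughly 1 in 5,250 sequential writes is “slow”)
10,000 pages / 2,000 IOPS = 5 seconds to allocate every page in the worst-case random pattern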


Page 15: XP_WP

Oracle 11g and ThP

Auto-Extend
– A feature of the Oracle database that allows tablespaces to extend their size on an as-needed basis.
– Provides great flexibility by enabling tablespaces to initially consume very small amounts of capacity.
– Tablespaces cannot extend past the size of the allocated volume.
– Auto-Extend's small initial capacity footprint integrates nicely with ThP.
– Traditional volumes can be complex to handle over time and when they fill up; ASM and XP Thin Provisioning eliminate that complexity.

Best practice
– When selecting the optimal auto-extend size, consider the ASM disk group AU (Allocation Unit), the data growth rate, and the ThP page size.
– It is not recommended to use a ThP V-VOL for swap space or redo-log space, because such volumes will not take advantage of ThP functionality.

Use the closest possible increment to 42 MB. In the example, only 1 GB is available as the smallest increment unit.


Page 16: XP_WP

VMware support using ThP Investigations were done to examine the VMware impact on ThP pool space allocation.

– VMware can use the storage device as VMFS, raw, or RDM (raw device mapping).
– VMFS is the common and preferred file system; however, letting the ThP V-VOL act as the thin device is preferable to using a thick-formatted device.
– VMware eagerzeroedthick formatting is not recommended (it results in full allocation).
– VMware thin formatting is not recommended (it gives unpredictable allocation results).
– VMware Site Recovery Manager does work with ThP.

Item                                 Range of access              Result
Make VMFS                            Top only (500 MB)            Good
Make FS in Guest OS (Windows 2003)   Top, middle, and end (3 GB)  Good
Make FS in Guest OS (Linux EXT3)     32 GB (total)                Good (the same as when there is no VMware)

Test environment:
Server: HP ML370 G5
OS: ESX 3.0.2
Guest OS: Windows 2003 R2 Standard 32-bit Edition SP2
XP24K: LUN 0-3, ThP VOL size: 100 GB/LUN


Page 17: XP_WP

ThP with External Storage (ES)
The XP24000/XP20000 now supports ThP pools on external storage.

Best practice guidelines:
1. A ThP pool should NOT consist of volumes from multiple external arrays. If you choose to use multiple arrays and the connection to one array is lost, the entire pool will fail.
2. Try to utilize multiple LUNs from a single external storage array to help ThP performance.
3. Follow all of the internal Thin Provisioning guidelines.
4. The performance and availability of a ThP implementation on ES follow the performance and availability of the external array.

Figure: ThP V-VOLs on the XP are backed by a virtual ThP pool made up of external LUNs presented from an ES (external storage) array.


Page 18: XP_WP

ThP usage with partitioning
It is not necessary to define a CLPR when defining a ThP volume, but if you do (for example, with External Storage):
– The ThP volume and the ThP pool it uses must be located in the same CLPR.
– A ThP pool cannot belong to more than one CLPR.
– A ThP volume cannot belong to more than one CLPR.

Figure: An SLPR containing two CLPRs; each ThP pool and the ThP volumes that use it reside entirely within a single CLPR.


Page 19: XP_WP

ThP with Auto LUN ThP-> Normal:

To ensure data integrity, a ThP VOL migration to a normal volume will write “0” data to the normal VOL areas corresponding to the locations associated with unallocated ThP VOL pages.

Normal -> ThP:

Normal volume migration to ThP V-VOL will write the entire normal volume contents to the ThP pool and update the VFS/DMT to point to the correct pages for the V-VOL. Migrating from a LUSE volume to a ThP V-VOL is not possible unless the LUSE is connected to the array as external storage (transforming it into OPEN-V).

ThP V-VOL -> ThP V-VOL:

It will only write allocated pages from one pool to the other.

First, data is exchanged between the pools. After the pool information is exchanged, the host-presented V-VOL is swapped with the Auto LUN V-VOL by simply pointing the original V-VOL to the address in the new pool's DMT.

ThP with Business Copy Not all business copy operations will behave the same with ThP. The following table illustrates the differences of copy behavior based on the application used to create the pair.

Copy Table

P-VOL        S-VOL             RWC                             CVAE Replication Manager        Plug-in RM
ThP-VOL      ThP-VOL (*1)      Supported                       Supported                       Supported
ThP-VOL      Normal-VOL (*2)   Supported                       Supported                       Supported
Normal-VOL   ThP-VOL (*3)      OK, but run Discard Zero Data   OK, but run Discard Zero Data   OK, but run Discard Zero Data

*1 Comment: Assume the pair is in a suspended state and new data has been written to the P-VOL (the primary volume of the replica). A pairresync restore operation will cause the S-VOL data to overwrite the P-VOL as expected. The remaining difference data that was written to the P-VOL during the suspend state will be set to zero, but the unused space will not be released as free space, so the P-VOL usage % will not decrease to match the S-VOL usage %. The user can manually start Discard Zero Data from the Remote Web Console to return the unused space back to the pool after suspending the pair.

*2 Comment: A pairresync restore operation will not increase the ThP P-VOL used capacity to 100% in order to match the Normal S-VOL. Instead, the array will only copy the changes from the S-VOL to the P-VOL.

*3 Comment: The V-VOL will fully allocate before you can run Discard Zero Data; therefore, carefully copy to a select number of V-VOLs at a time to prevent filling up the pool, run Discard Zero Data, and then continue with the remaining V-VOLs. However, it is possible that there will not be any pages filled entirely with zero data; therefore, be careful not to over-provision if you are unsure of the content.


Page 20: XP_WP

ThP Snapshot Combination
A ThP VOL can be configured as a Snapshot P-VOL.

Figure: A ThP V-VOL (Snapshot P-VOL) backed by a ThP pool and its DMT is paired with a Snapshot V-VOL (S-VOL, up to 64 generations) backed by a Snapshot pool; it is acceptable for many ThP pools to copy to a single Snapshot pool.

– A ThP pool and a Snapshot pool are both required.
– Actual data is stored in the ThP pool; metadata and difference data are stored in the Snapshot pool.
– Data copy is performed between the two pools.
– When a Snapshot S-VOL is accessed and the data does not exist in the Snapshot pool, the DKC reads the data from the ThP pool.

You must create both a ThP pool and a Snapshot pool. Take this into consideration as you design your system, because it reduces the total number of pools available for other uses.


Page 21: XP_WP

ThP Continuous Access Combination
A ThP V-VOL can be remotely paired with another ThP V-VOL or with a normal volume. The copy operations behave similarly to Business Copy (refer to the Copy Table).

Figure: A local ThP V-VOL (P-VOL) backed by the local ThP pool and its DMT, paired via a Continuous Access copy with a remote ThP V-VOL (S-VOL) backed by the remote ThP pool and its DMT.

Local and remote ThP Pools are required for ThP to ThP pairing.

Continuous Access link is established through the V-VOLs, but data copy is performed between the pools.

Local and remote pools do not have to be the same size.

All Continuous Access functionality behaves as if the ThP V-VOL were a normal volume. It is supported with Continuous Access Sync and Continuous Access Journal.

The Continuous Access pair status will change to PSUE when the S-VOL pool fills before the P-VOL pool and a write I/O is attempted from the P-VOL.


Page 22: XP_WP

ThP shredding
Shredding capability is applied to a ThP VOL just like a normal VOL:
– The specified data is written to the assigned pages.
– This logic is not applied to non-assigned pages.

ThP Pool space reclaiming

Zero unused disk space
Use a disk scrub utility:
– Simple to use. Scrub utilities are designed to clear out any data so it becomes unrecoverable, which may be overkill in this situation. Scrubs can take a long time because they usually write random data 4-5 times and then write zeros another 4-5 times.

Windows sdelete (Sysinternals utility, supported since Windows 2000):
– sdelete -c will zero free space.

UNIX dd: a powerful command that can clear out an entire disk.
– dd if=/dev/zero of=zerofile creates a file called "zerofile" filled with zero bits until the file system is full. It will not overwrite any other files on the disk. Adjust the block size (bs=) to speed up the write.
– sync flushes the buffers to disk.
– rm zerofile then removes the file, leaving the zeroed blocks as free space.
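Putting the UNIX steps together, a minimal sketch (the mount point /mnt/thpvol and the block size are illustrative assumptions, not values from this paper):

# Fill the free space of the file system on the ThP V-VOL with zeros, flush, then remove the file.
dd if=/dev/zero of=/mnt/thpvol/zerofile bs=1024k   # dd stops with a "no space left" message; that is expected
sync                                               # flush the zeroed blocks out to the array
rm /mnt/thpvol/zerofile                            # free the space again
# Afterwards, run Discard Zero Data from the RWC (next section) so the pool reclaims the zeroed pages.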

Reclaim unused disk space After completing the above tasks to zero out unused space you must reclaim the pool space using Discard Zero Data:

From the RWC:

1. Access the V-VOL window by selecting Go -> LUN Expansion -> V-VOL.
2. Click on the V-VOL group and select the V-VOLs on which you want to discard zero data.
3. Right-click on the V-VOLs and select Discard Zero Data.


Page 23: XP_WP

ThP V-VOL expansion ThP gives the user the ability to expand a virtual volume without using LUSE.

Increase the V-VOL size by using the raidvchkset command from RAID Manager. Currently, only Windows 2008 will automatically detect the volume expansion, provided host mode option 40 is enabled. All other operating systems may need to remount or reinitialize the volume for the change to take effect from the host perspective. Check your operating system's ability to handle a growing LUN before using raidvchkset.

V-VOL expansion depends upon the V-VOL threshold settings. The ratio of the free space capacity of the pool and the free space capacity of the V-VOL must be equal to or more than the V-VOL threshold setting. See the following examples.

The examples show when you can expand the V-VOL based on its threshold settings.

1. The pool is 1000 GB and 400 GB is free. The initial size of the V-VOL is 500 GB with 200 GB free; therefore, the ratio is 200% (400 GB/200 GB). If you expand the V-VOL to 1000 GB, the V-VOL free space becomes 700 GB and the ratio drops to about 57% (400 GB/700 GB), which is still acceptable because the V-VOL threshold is set to 50%.

2. V-VOL has 300 GB free. The user tries to expand it out to 1200 GB, but it will fail because the ratio drops to 40%, below the threshold.

3. If V-VOL threshold is set to 250%, and V-VOL is 500 GB with 200 GB free, ratio is 200%. The user will not be able to expand because it is below 250%. Basically, you can control whether or not the V-VOL can expand by setting the V-VOL threshold higher than free capacity ratio.
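The check illustrated by these examples is simple arithmetic: the expansion is allowed only while (pool free capacity / V-VOL free capacity after expansion) stays at or above the V-VOL threshold. A minimal sketch of that check (the values are supplied by the user; this is not an array command):

# check_expand.sh <pool_free_GB> <vvol_free_after_expand_GB> <vvol_threshold_%>
POOL_FREE=$1; VVOL_FREE=$2; THRESH=$3
awk -v p="$POOL_FREE" -v v="$VVOL_FREE" -v t="$THRESH" 'BEGIN {
  ratio = 100 * p / v
  printf "free-capacity ratio = %.0f%% (threshold %s%%): %s\n", ratio, t,
         ((ratio >= t) ? "expansion allowed" : "expansion rejected")
}'
# Example 1 above: check_expand.sh 400 700 50  -> 57%, expansion allowed
# Example 2 above: check_expand.sh 400 1000 50 -> 40%, expansion rejected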


Page 24: XP_WP

Similarly to CVS with normal LDEVs in a parity group, free space must exist immediately below the V-VOL in the V-VOL group.

In the example below you have V-VOL group X1-1 with 3 V-VOLs each 1 GB in size. The group has free space scattered in between the V-VOLs due to user deleting V-VOLs in order to return space to the pool. The user can expand:

1. 00:10:00 can grow by an additional 1 GB. If you delete 00:10:02 and 00:10:67, it can grow up to 4 TB.
2. 00:10:02 can grow up to 103 GB. If you delete 00:10:67, it can grow up to 4 TB minus 104 GB.
3. 00:10:67 can grow up to the remaining 4 TB minus 105 GB.


Page 25: XP_WP

Best practices
1. Do not set your V-VOL threshold excessively high unless you are positive that the pool will have enough free space when you plan your V-VOL expansion.
2. Only create one V-VOL per V-VOL group so that your V-VOL can expand up to the maximum 4 TB size at any time. The number of V-VOL groups does not affect the maximum number of LDEVs; if you want 65K V-VOLs, you can create 65K V-VOL groups with one V-VOL in each.

Online LUN expansion
ThP online expansion is supported on all operating systems that support online LUN expansion. If the operating system does not support online LUN expansion, simply re-initialize and remount the LUN after you have expanded it using Raid Manager.


Page 26: XP_WP

Expanding V-VOL using HP-UX > raidvchkdsp -g VG01 -v aou Group PairVol Port# TID LU Seq# LDEV# Used(MB) LU_CAP(MB) U(%) T(%) PID VG01 thp01 CL4-B-0 0 0 10009 5120 42 19240 1 70 7

> raidvchkset -vext 18g -g VG01 -d thp01

> raidvchkdsp -g VG01 -v aou Group PairVol Port# TID LU Seq# LDEV# Used(MB) LU_CAP(MB) U(%) T(%) PID VG01 thp01 CL4-B-0 0 0 10009 5120 42 37672 1 70 7

> diskinfo /dev/rdisk/disk69 SCSI describe of /dev/rdisk/disk69: vendor: HP product id: OPEN-V type: direct access size: 38576128 Kbytes bytes per sector: 512

> xpinfo -f /dev/rdisk/disk69

Device File : /dev/rdisk/disk69 Model : XP24000 Port : CL4B Serial # : 00010009 Host Target : 0d Code Rev : 6004 Array LUN : 00 Subsystem : 0018 LDKC:CU:LDev: 0:14:00 CT Group : --- Type : OPEN-V CA Volume : SMPL Size : 37672 MB BC0 (MU#0) : SMPL ALPA : d2 BC1 (MU#1) : SMPL Loop Id : 0d BC2 (MU#2) : SMPL SCSI Id : --- RAID Level : RAID18 RAID Type : --- RAID Group : 0-7 ACP Pair : Disk Mechs : --- --- --- --- FC-LUN : 0000271900001400 Port WWN : 50060e8005271931 HBA Node WWN: 50060b000024d9dd HBA Port WWN: 50060b000024d9dc Vol Group : /dev/vg03 Vol Manager : LVM Mount Points: /dev/vg03/thptest:/thptest DMP Paths : --- CLPR : ---

> vgmodify -r -v -a -E vg03 /dev/rdisk/disk69 Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf /dev/rdisk/disk69 Warning: Max_PE_per_PV for the volume group (4809) too small for this PV (9417). Using only 4809 PEs from this physical volume. An update to the Volume Group is NOT required Review complete. Volume group not modified

> diskinfo /dev/rdisk/disk69
SCSI describe of /dev/rdisk/disk69:
    vendor: HP
    product id: OPEN-V
    type: direct access
    size: 38576128 Kbytes
    bytes per sector: 512


Page 27: XP_WP

> vgmodify -v -r vg03 Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf

Current Volume Group settings: Max LV 255 Max PV 16 Max PE per PV 4809 PE Size (Mbytes) 4 VGRA Size (Kbytes) 656 /dev/rdisk/disk69 Warning: Max_PE_per_PV for the volume group (4809) too small for this PV (9417). Using only 4809 PEs from this physical volume. "/dev/rdisk/disk69" size changed from 19701760 to 38576128kb An update to the Volume Group IS required

New Volume Group settings: Max LV 255 Max PV 16 Max PE per PV 4809 PE Size (Mbytes) 4 VGRA Size (Kbytes) 656

Review complete. Volume group not modified


Page 28: XP_WP

Expanding V-VOL using OVMS
The main things to take into consideration: first, the volume must be mounted shareable; then, after you use Raid Manager to expand the size, you need to run "set volume/size=<the new size, in blocks, that you set with Raid Manager>". There is no need to re-mount the drive.

$ show dev dga

Device                 Device    Error   Volume    Free     Trans  Mnt
 Name                  Status    Count   Label     Blocks   Count  Cnt
$1$DGA314:  (CAALMG)   Online        0
$1$DGA368:  (CAALMG)   Online        8

$ init/limit $1$dga368 thpdev %INIT-I-DEFCLUSTER, value for /CLUSTER defaulted to 16

$ mount/share $1$dga368 _Label: thpdev _Log name: %MOUNT-I-MOUNTED, THPDEV mounted on _$1$DGA368: (CAALMG)

$ sh dev/full $1$dga368

Disk $1$DGA368: (CAALMG), device type HP OPEN-V, is online, mounted, file-oriented device, shareable, available to cluster, error logging is enabled.

    Error count                    8    Operations completed          49257
    Owner process                 ""    Owner UIC                 [RAIDMGR]
    Owner process ID        00000000    Dev Prot        S:RWPL,O:RWPL,G:R,W
    Reference count                1    Default buffer size             512
    Current preferred CPU Id       1    Fastpath                          1
    WWID    01000010:6006-0E80-0561-AC00-0000-61AC-0000-0170
    Total blocks            35603072    Sectors per track                32
    Total cylinders            34769    Tracks per cylinder              32
    Logical Volume Size     35603072    Expansion Size Limit     2147475456
    Allocation class               1

    Volume label            "THPDEV"    Relative volume number            0
    Cluster size                  16    Transaction count                 1
    Free blocks             35388096    Maximum files allowed      16711679
    Extend quantity                5    Mount count                       1
    Mount status             Process    Cache name  "_CAALMG$DKC100:XQPCACHE"
    Extent cache size             64    Maximum blocks in extent cache  3538809
    File ID cache size            64    Blocks in extent cache            0
    Quota cache size               0    Maximum buffers in FCP cache   4114
    Volume owner UIC       [RAIDMGR]    Vol Prot  S:RWCD,O:RWCD,G:RWCD,W:RWCD

Volume Status: ODS-2, subject to mount verification, file high-water marking, write-back caching enabled.

$ HORCMINST == "0"

$ raidvchkdsp -g thpos -v aou
Group   PairVol  Port#    TID  LU  Seq#   LDEV#  Used(MB)  LU_CAP(MB)  U(%)  T(%)  PID
thpos   DGA368   CL7-A-0    0   2  25004    368       756       17384     1    70    1

$ raidvchkset -g thpos -d DGA368 -vext 2G

$ raidvchkdsp -g thpos -v aou
Group   PairVol  Port#    TID  LU  Seq#   LDEV#  Used(MB)  LU_CAP(MB)  U(%)  T(%)  PID
thpos   DGA368   CL7-A-0    0   2  25004    368       756       19432     1    70    1

$ inqraid $1$dga368
$1$DGA368 -> [ST] CL7-A  Ser = 25004  LDEV = 368  [HP  ] [OPEN-V ]
             CA = SMPL  BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
             A-LUN[PoolID 0001]  SSID = 0x0005

$ sh dev $1$dga368/full

Disk $1$DGA368: (CAALMG), device type HP OPEN-V, is online, mounted, file-oriented device, shareable, available to cluster, error logging is enabled.

    Error count                    8    Operations completed          49258
    Owner process                 ""    Owner UIC                 [RAIDMGR]
    Owner process ID        00000000    Dev Prot        S:RWPL,O:RWPL,G:R,W
    Reference count                1    Default buffer size             512
    Current preferred CPU Id       1    Fastpath                          1
    WWID    01000010:6006-0E80-0561-AC00-0000-61AC-0000-0170
    Total blocks            35603072    Sectors per track                32
    Total cylinders            34769    Tracks per cylinder              32
    Logical Volume Size     35603072    Expansion Size Limit     2147475456
    Allocation class               1

    Volume label            "THPDEV"    Relative volume number            0
    Cluster size                  16    Transaction count                 1
    Free blocks             35388096    Maximum files allowed      16711679
    Extend quantity                5    Mount count                       1
    Mount status             Process    Cache name  "_CAALMG$DKC100:XQPCACHE"
    Extent cache size             64    Maximum blocks in extent cache  3538809
    File ID cache size            64    Blocks in extent cache            0
    Quota cache size               0    Maximum buffers in FCP cache   4114
    Volume owner UIC       [RAIDMGR]    Vol Prot  S:RWCD,O:RWCD,G:RWCD,W:RWCD

Volume Status: ODS-2, subject to mount verification, file high-water marking, write-back caching enabled.

$ set vol/size=37932453 $1$dga368:
$ sh dev $1$dga368/full

Disk $1$DGA368: (CAALMG), device type HP OPEN-V, is online, mounted, file-oriented device, shareable, available to cluster, error logging is enabled.

    Error count                    9    Operations completed          49341
    Owner process                 ""    Owner UIC                 [RAIDMGR]
    Owner process ID        00000000    Dev Prot        S:RWPL,O:RWPL,G:R,W
    Reference count                1    Default buffer size             512
    Current preferred CPU Id       1    Fastpath                          1
    WWID    01000010:6006-0E80-0561-AC00-0000-61AC-0000-0170
    Total blocks            39797376    Sectors per track                32
    Total cylinders            38865    Tracks per cylinder              32
    Logical Volume Size     37932453    Expansion Size Limit     2147475456
    Allocation class               1

    Volume label            "THPDEV"    Relative volume number            0
    Cluster size                  16    Transaction count                 1
    Free blocks             37717472    Maximum files allowed      16711679
    Extend quantity                5    Mount count                       1
    Mount status             Process    Cache name  "_CAALMG$DKC100:XQPCACHE"
    Extent cache size             64    Maximum blocks in extent cache  3771747
    File ID cache size            64    Blocks in extent cache      2329376
    Quota cache size               0    Maximum buffers in FCP cache   4114
    Volume owner UIC       [RAIDMGR]    Vol Prot  S:RWCD,O:RWCD,G:RWCD,W:RWCD

Volume Status: ODS-2, subject to mount verification, file high-water marking, write-back caching enabled.

$

Using CVAE CLI to create ThP Pool

Create a Pool with 1 RAID Group:

./hdvmcli.sh AddPool model=XP24000/XP20000 serialnum=10038 poolid=100 threshold=75 devnums=00:70:00,00:70:01,00:70:02,00:70:03,00:70:04,00:70:05,00:70:06,00:70:07,00:70:08,00:70:09,00:70:0A,00:70:0B,00:70:0C,00:70:0D,00:70:0E,00:70:0F

More RAID Groups can be added by appending their LDKC:CU:LDEV numbers to the devnums list (see the sketch after the ModifyPool example below).

Add a Raid Group to an existing ThP pool:

./hdvmcli.sh ModifyPool model=XP24000/XP20000 serialnum=10038 poolid=100 devnums=00:70:10,00:70:11,00:70:12,00:70:13,00:70:14,00:70:15,00:70:16,00:70:17,00:70:18,00:70:19,00:70:1A,00:70:1B,00:70:1C,00:70:1D,00:70:1E,00:70:1F
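The devnums argument is a comma-separated list of LDKC:CU:LDEV identifiers, one entry per LDEV in the parity group. As a convenience only (this helper is not part of the CVAE CLI), a minimal Python sketch such as the following can build that list; the CU number 0x70 and the starting LDEV 0x10 are taken from the ModifyPool example above.

# Illustrative helper (not part of the CVAE CLI): build a devnums string of
# consecutive LDEVs in LDKC:CU:LDEV notation for use with AddPool/ModifyPool.
def devnums(ldkc: int, cu: int, first_ldev: int, count: int) -> str:
    return ",".join(
        f"{ldkc:02X}:{cu:02X}:{first_ldev + i:02X}" for i in range(count)
    )

# Matches the ModifyPool example above: 16 LDEVs 00:70:10 through 00:70:1F.
print(devnums(0x00, 0x70, 0x10, 16))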

Using CVAE CLI to create ThP V-VOLs

Add a Virtual Volume:

./hdvmcli.sh AddVirtualVolume model=XP24000/XP20000 serialnum=10038 capacity=67108864 numoflus=8 devnum=00:80:00 poolid=100 threshold=75

Additional ThP-related CLI commands

De-assign a Virtual Volume:

./hdvmcli.sh ModifyVirtualVolume model=XP24000/XP20000 serialnum=10038 assign=false devnums=00:80:00,00:80:01,00:80:02,00:80:03,00:80:04,00:80:05,00:80:06,00:80:07,00:80:08

Re-assign Virtual Volume:

./hdvmcli.sh ModifyVirtualVolume model=XP24000/XP20000 serialnum=10038 assign=true poolid=100 devnums=00:80:00,00:80:01,00:80:02,00:80:03,00:80:04,00:80:05,00:80:06,00:80:07,00:80:08

Delete a Virtual Volume (two-step process):

./hdvmcli.sh ModifyVirtualVolume model=XP24000/XP20000 serialnum=10038 assign=false devnums=00:80:00,00:80:01,00:80:02,00:80:03,00:80:04,00:80:05,00:80:06,00:80:07,00:80:08

./hdvmcli.sh DeleteVirtualVolume model=XP24000/XP20000 serialnum=10038 devnums=00:80:00,00:80:01,00:80:02,00:80:03,00:80:04,00:80:05,00:80:06,00:80:07,00:80:08


Appendix

ThP Operation sequence

[Flow diagram: ThP lifecycle in three phases]

1. Define ThP config: create ThP pool volumes; create the ThP pool (add pool volumes to the pool, create the SYS area); create ThP volumes; define LU paths to the ThP volumes.

2. Normal operation: monitor the ThP pool free area; expand ThP pool capacity as needed (add pool volumes to the pool, expand the SYS area); prohibit ThP volume writes if the pool becomes full.

3. Release ThP config: release the LU paths for the ThP volumes; delete the ThP volumes; delete the ThP pool (delete the SYS area, delete the pool ID); release the pool volumes.


ThP Combinations with other Program Products

The following table shows the relationship between ThP volumes and other XP Program Products.

HP XP Continuous Access Sync
  CA Primary: Supported. CA Secondary: Supported.
  Pool volumes cannot be copied.

HP XP Continuous Access Journal
  CA Primary: Supported. CA Secondary: Supported. CA Journal: Not supported.
  Pool volumes cannot be copied.

Business Copy
  BC Primary: Supported. Pool volumes cannot be copied.
  BC Secondary: Supported. Not supported when the source volume is a traditional volume.

Snapshot
  Primary: Supported. Secondary: Not supported. Snapshot Pool: Not supported.
  A V-VOL can only be either a ThP volume or a Snapshot volume.

FlashCopyV2
  FC Source: Supported. FC Target: Supported.

Auto LUN
  Auto LUN source: Supported. Auto LUN target: Supported.
  Pool volumes cannot be migrated. You can migrate from a Normal volume to a ThP V-VOL and vice versa.

Extended Remote Copy (XRC)
  XRC Source: Supported. XRC Target: Supported.
  The DKC may not recognize an Extended Remote Copy (XRC) target.

LUSE
  Not supported. A V-VOL may not be used in a LUSE, and a LUSE volume cannot be used in a ThP pool. (Instead, use V-VOL online LUN expansion.)

LUN Security: Supported. Similar to a traditional LDEV.

CVS: Supported. Similar to a traditional LDEV.

Cache Residency Manager: Supported. Similar to a traditional LDEV.

HP XP Performance Control: Supported. Similar to a traditional LDEV.

Parallel Access Volume (PAV): Supported.

Data Exchange: Supported.

HMDE/FAL/FCU: Supported.

DB Validator: Supported. Similar to a traditional LDEV.

Volume Retention Manager for Mainframe: Supported. Similar to a traditional LDEV.

HP XP Disk/Cache Partition: Supported. Similar to a traditional LDEV.

HP XP External Storage: Supported; pool volumes may be External Volumes. Similar to a traditional LDEV.

Business Continuity Manager: Supported. Similar to a traditional LDEV.

Presenting via Remote Web Console: Supported. Similar to a traditional LDEV.

ThP Pool Volume specification

Emulation type: OPEN-V only.

RAID level: All XP24000/XP20000 supported levels (including parity group concatenation).

HDD type: All XP24000/XP20000 supported HDD types.

Creation: By LDEV. The LDEV is associated with the pool and the pool volume attribute is attached.

Individual Pool Volume (LDEV) Capacity: 8 GB to 4 TB.
• Emulation type, RAID level, or HDD type can be selected as for normal volumes.
• The LDEV status is normal.
• Recommended to install pool volumes by increments of entire parity groups.
• Divide the parity group into LDEVs equal in size to the data disks in the group. For example, RAID 5 3D+1P would be divided into 3 LDEVs.
• When adding multiple parity groups to the pool, choose parity groups from different DKA sets to maximize performance.

Restrictions on ThP Pool Volume

LUSE: Pool volumes cannot be LUSE volumes.

CVS: An LDEV created using CVS can be made a pool volume. In that case, the minimum pool volume size should be 8 GB.

Combination with other Program Products:
• Unable to use an LDEV involved in Business Copy, Volume Migration (Auto LUN/Tiered Storage Manager), Continuous Access Sync/Async/Journal, Snapshot, LDEV Guard, or Cache LUN as a ThP pool volume.
• Unable to share ThP Pool volumes with Snapshot volumes.

Sharing among pools: Unable to share a pool volume among multiple ThP pools.

Pool volume deletion:
• Cancel the attribute of a pool volume.
• The attribute cancellation is available only when deleting the pool.

Path definition:
• Unable to use an LDEV with a defined host path as part of a ThP pool.
• Direct host I/O access to a pool volume is not permitted.

LDEV format: Unable to format a pool volume.

LDEV un-installation: Unable to uninstall an LDEV that is a pool volume.


ThP Pool specifications

Mixed RAID levels: Allowed. Recommended to use the same drive type and RAID level within a ThP pool.

Pool capacity: 8 GB minimum. The maximum is given by the following formula (a worked example follows this table):

  Total number of pages = Σ ⌊ ⌊ (pool-VOL number of blocks) ÷ 512 ⌋ ÷ 168 ⌋, summed over each pool-VOL
  Pool capacity (MB) = Total number of pages × 42 − (4116 + 84 × number of pool-VOLs)

  ⌊x⌋ means x truncated after the decimal point. The upper limit of the total capacity of all pools is 2.1 PB.

The number of pool volumes: 1 to 1024 per pool. Requires at least one pool volume.

The number of pools: 1 to 128 per array. Up to 128 pools can be defined in total, combined with the Snapshot pools.

Emulation type: OPEN-V (the emulation type of a pool volume).

Capacity expansion: Available by adding pool volumes (also possible while online). Recommended to expand by increments of whole parity groups, and to expand pool capacity when the host load is low.

Capacity reduction: Not available. To reduce pool capacity, delete the pool and reconfigure it.

Deletion: Available only when no ThP VOLs are in use. A ThP pool can be deleted only when no ThP volume exists in the pool. (After the deletion, the pool ID is managed like an undefined pool.)

Threshold: Used for the monitoring start and capacity shortage warnings. If usage exceeds either threshold, users are notified by SNMP trap. One threshold is a fixed value; the other is user specified.

Page size: 42 MB.

Spanning multiple CLPRs: Not available.
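To make the capacity formula concrete, here is a minimal Python sketch that evaluates it for a list of pool-VOL sizes. The pool-VOL sizes in the example call are illustrative only and are not taken from this white paper.

# Evaluate the ThP pool capacity formula above.
# A 42 MB page corresponds to 86016 blocks of 512 bytes (512 * 168).
def pool_capacity_mb(pool_vol_blocks):
    """pool_vol_blocks: list of pool-VOL sizes in 512-byte blocks."""
    total_pages = sum((blocks // 512) // 168 for blocks in pool_vol_blocks)
    # Subtract the fixed overhead plus the per-pool-VOL overhead.
    return total_pages * 42 - (4116 + 84 * len(pool_vol_blocks))

# Illustrative example: four pool-VOLs of 50,331,648 blocks (24 GiB) each.
vols = [50_331_648] * 4
print(pool_capacity_mb(vols), "MB usable in the pool")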


ThP Volume specifications

ThP VOL definition: Create a ThP VOL from a V-VOL group. The V-VOL group for a ThP volume cannot be used as the V-VOL group for Snapshot. (The V-VOL group is the management table number.)

ThP VOL deletion: Cancel the ThP VOL's association with the ThP pool. Uninstalling a V-VOL increases free space in the ThP pool because an LDEV format is performed for the ThP volume.

Number of volumes that may be defined: 8,192 per pool. Define one V-VOL per V-VOL group so that volume expansion will not be impeded.

ThP VOL capacity: 46 MB to 4 TB (1 KB = 1024 B). Create V-VOL sizes as integral multiples of 42 MB, since pages are allocated in 42 MB increments (see the sizing sketch after this table).

RAID level responding to a host: ThP volume. The RAID level and pool ID can be checked via SCSI Inquiry.

Emulation type: OPEN-V.

Threshold: An alert is reported when N% of the unused capacity of the ThP volume cannot be absorbed by the available free pool capacity. A user-configured alert warning can be provided via SNMP trap for each LUN.
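Because pages are allocated in whole 42 MB units, a V-VOL sized to an exact multiple of 42 MB leaves no partially used page. The short Python sketch below is a minimal illustration of that rounding rule, not part of the ThP software; the 100 GB request is an assumed example.

# Round a requested V-VOL size up to the next multiple of the 42 MB ThP page size,
# so that the last allocated page is fully usable.
PAGE_MB = 42

def vvol_size_mb(requested_mb: int) -> int:
    pages = -(-requested_mb // PAGE_MB)   # ceiling division
    return pages * PAGE_MB

# Assumed example: a 100 GB (102,400 MB) request rounds up to 102,438 MB (2,439 pages).
print(vvol_size_mb(100 * 1024))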


Service information messages (SIM)

The SVP and the C-Track team will receive a SIM message when one of the events identified in the table below is triggered.

Code: 620XXX (XXX is the pool ID)
  Event: Pool usage rate exceeded pool threshold 1.
  Threshold or value: 5% to 95% in 5% increments; the default value is 70%.
  Reports: Report to the host: Yes. Completion report to Remote Web Console: Yes. Information to the operator: No.
  Detail information: Warning.

Code: 621XXX (XXX is the pool ID)
  Event: Pool usage rate exceeded pool threshold 2.
  Threshold or value: Always 80%.
  Reports: Report to the host: Yes. Completion report to Remote Web Console: Yes. Information to the operator: No.
  Detail information: Warning.

Code: 622XXX (XXX is the pool ID)
  Event: Pool is full.
  Threshold or value: 100%.
  Reports: Report to the host: Yes. Completion report to Remote Web Console: Yes. Information to the operator: No.
  Detail information: If Mode 729 is ON and a write request requiring a new page allocation is received, the DRU attribute is set on the V-VOL to indicate that no space is available.

Code: 623XXX (XXX is the pool ID)
  Event: Error occurred in the pool.
  Threshold or value: Not applicable.
  Reports: Report to the host: Yes. Completion report to Remote Web Console: No. Information to the operator: Yes.
  Detail information: Blockade.

Code: 625XXX (XXX is the pool ID)
  Event: Pool usage rate exceeded the highest threshold; the SIM repeats every 8 hours.
  Threshold or value: The higher of threshold 1 and threshold 2.
  Reports: Report to the host: Yes. Completion report to Remote Web Console: Yes. Information to the operator: No.
  Detail information: Warning.

Code: 630XXX (XXX is the pool ID)
  Event: The ratio of free pool capacity to free V-VOL capacity exceeded the V-VOL threshold.
  Threshold or value: V-VOL threshold, 5% to 300% in 5% increments; the default value is 5%.
  Reports: Report to the host: Yes. Completion report to Remote Web Console: Yes. Information to the operator: No.
  Detail information: If the pool IDs are the same, only one SIM is reported even though the V-VOLs (LDEV numbers) are different.

Code: 7FF7XX (XX is the V-VOL LDEV)
  Event: The term of validity is over (BC license).
  Reports: Report to the host: Yes. Completion report to Remote Web Console: Yes. Information to the operator: No.

Code: 7FF8XX (XX is the V-VOL LDEV)
  Event: The capacity of validity is over (BC license).
  Reports: Report to the host: Yes. Completion report to Remote Web Console: Yes. Information to the operator: No.


Glossary

These terms, used throughout the white paper, aid in understanding the innovative solutions provided by HP servers and storage.

Term Definition

Array Group 4 Disk drives in a purchase order

BC Business Copy

CHA Channel Host Adapter

CHP Channel Host Processor

CLI Command Line Interface

CLPR Cache Logical Partition

CM Cache Memory

CSW Cache Switch

CVAE Command View Advanced Edition

CVS Custom Volume Set

DCR Data Cache LUN Residence

Disk Group See RAID Group

DKA Disk Control Adapter

DKC Disk control frame

DKU Disk array frame

DR Disaster recovery

DWL Duplex Write Limit

FC Fibre Channel

FC-AL Fibre Channel Arbitrated Loop

SSID Subsystem ID

GB Gigabyte

Gb Gigabit

GUI Graphical user interface

HA High availability

HBA Host bus adapter

HDU Hard Disk box

IOPS I/Os per second

LDEV Logical device

LDKC Logical Disk Control Frame

LUN SCSI Logical Unit Number

MCU Main Control Unit

MP Micro Processor

MTBF Mean Time Between Failure

MTTDL Mean Time To Data Loss

MTTF Mean Time To Failure


MTTR Mean Time to Repair

nPar HP nPartition (hard partition)

OLTP Online Transaction Processing

Parity group See RAID Group

PCR Partial Cache Residence

PSA Partition Storage Administrator

P-VOL Primary Volume

QoS Quality of Service

RAID Group The set of disks that make up a RAID set

RCU Remote Control Unit

RWC Remote Web Console

SA Storage administrator

SAN Storage area network

SLA Service Level Agreement

SLPR Storage Management Logical Partition

SM Shared Memory

SPOF Single point of failure

SS Snapshot

S-VOL Secondary volume

SVP XP array Service Processor

TB Tera Byte

THP Thin Provisioning

UPS Uninterruptible power supply

V-VOL Virtual Volume

WWN World Wide Name, a unique 64-bit device identifier in a Fibre Channel storage area network


For more information

HP StorageWorks XP24000/XP20000 Thin Provisioning Software User Guide:

http://h20000.www2.hp.com/bizsupport/TechSupport/Home.jsp?lang=en&cc=us&prodTypeId=18964&prodSeriesId=3415988

Technology for better business outcomes

© Copyright 2007, 2008, 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

4AA1-3937ENW, Rev. 3, January 2010