OpenStack HPE 3PAR and HPE Primera Block Storage Driver Configuration Best Practices OpenStack Train Release Technical white paper



Contents

Revision history
Executive summary
Introduction
HPE 3PAR and HPE Primera storage
Configuration
Volume types creation
Report backend state in service list
Setting extra_specs or capabilities
    extra_specs restrictions
Creating and setting qos_specs
    qos_specs restrictions
Backend capabilities
Multiple storage backend support and Block Storage configuration
iSCSI target port selection
iSCSI multipath support
Fibre Channel target port selection
Block Storage scheduler configuration with multi-backend
Block Storage scheduler configuration with driver filter and weigher
Volume types assignment
    Multiple backend requirements
Volume migration
Clone volume
Volume manage and unmanage
Revert to snapshot
Snapshot manage and unmanage
Consistency groups
    Creating a consistency group
    Deleting a consistency group
    Adding volumes
    Removing volumes
    Creating a cgsnapshot
    Deleting a cgsnapshot
    Creating a consistency group from a cgsnapshot
    Cloning a consistency group
Generic volume groups
    Create a group type
    Setting required flag as group specs and extra specs
    Creating a generic volume group
    Deleting a generic volume group
    Create a volume and add it to a group
    Adding and removing volumes
    Creating a snapshot for a group
    Deleting a group snapshot
    Create a group from a group snapshot
    Cloning a group
Group Level Replication (Tiramisu)
Volume compression
Volume retype
Volume replication
    Volume type extra_specs
    Replication status of a host
    Creating a replicated volume
    Listing valid replication targets
    Failing over a host
    Failing back a host
Security improvements
    CHAP support
    Configurable SSH Host Key Policy and known hosts file
    Suppress requests library SSL certificate warnings
Support for Containerized Stateful Services
Additional HPE 3PAR and HPE Primera features
    Adaptive Optimization
    Data-At-Rest Encryption
    Known limitations
Specify NSP for FC Bootable Volume
Peer Persistence support
Summary
Appendix
    Creating a goodness function
    Failing back an HPE 3PAR backend
    Configuring HPE 3PAR for volume replication
Resources and additional links


Revision history

Rev. 1.0, 8-Apr-2016: Initial release to support the OpenStack® Mitaka release

• All driver paths and configuration options have been updated with the Hewlett Packard Enterprise “HPE” rebranding

• Requires the “python-3parclient” version 4.X from PyPI

• Configurable suppression of the Python requests library SSL certificate warnings

• Added support for manage and unmanage of snapshots

• Volume replication support

• How HPE 3PAR’s online physical copy affects the OpenStack Cinder functions

• Behavior changes in clone volume

Rev. 2.0, 24-Oct-2016: Update for the OpenStack Newton release

• LDAP and AD authentication is now supported in the HPE 3PAR driver

• The HPE 3PAR backend must be properly configured for LDAP/AD authentication before configuring the Cinder driver

• Details on setting up LDAP with HPE 3PAR can be found in the HPE 3PAR user guide

• Once configured, hpe3par_username and hpe3par_password can be used with LDAP/AD credentials

Rev. 3.0, 23-Feb-2017: Update for the OpenStack Ocata release

• Image-Volume cache functionality is supported in the HPE 3PAR and HPE Primera block storage driver

Rev. 4.0, 20-Jul-2017: Update for the OpenStack Pike release

• Generic Volume Group support

• Volume Compression support

• A few changes in Volume Replication

• Known limitations

Rev. 5.0, 15-Apr-2018: Update for the OpenStack Queens release

• Backend capabilities

• Group-level replication (Tiramisu)

• Revert to snapshot

• A note has been added in Fibre Channel target port selection

Rev. 6.0, 4-Sep-2018: Update for the OpenStack Rocky release

• Report backend state in service list

Rev. 7.0, 3-Apr-2019: Update for the OpenStack Stein release

• New extra spec hpe3par:convert_to_base

Rev. 8.0, 25-Sep-2019: Update for the OpenStack Train release

• HPE Primera support

• Volume Multi-attach support

• Specify NSP for FC Bootable Volume

• Peer Persistence support

Note: This document should be used for the OpenStack Mitaka and later releases.


Executive summary Hewlett Packard Enterprise’s commitment to the OpenStack community brings the power of OpenStack to the enterprise with new and enhanced offerings that enable enterprises to increase agility, speed up innovation, and lower costs.

Since the Grizzly release, Hewlett Packard Enterprise has been a top contributor to the advancement of the OpenStack project.1 Hewlett Packard Enterprise’s contributions have focused on continuous integration and quality assurance, which has supported the development of a reliable and scalable cloud platform that is equipped to handle production workloads.

To support the requirements that many larger organizations and service providers have for enterprise-class storage, Hewlett Packard Enterprise has developed the HPE 3PAR and HPE Primera Block Storage Drivers, which support the OpenStack technology across both iSCSI and Fibre Channel (FC) protocols. This provides the flexibility and cost-effectiveness of a cloud-based, open source platform to customers with mission-critical environments and high-resiliency requirements.

Figure 1 shows the high-level components of a basic cloud architecture.

Figure 1. OpenStack cloud architecture

Introduction This document provides information and best practices for the new features in the OpenStack Train release. These include configuring and using volume types, extra specs, quality of service (QoS) specs, consistency groups, oversubscription, volume replication, and multiple backend support with the HPE 3PAR and HPE Primera Block Storage Drivers.

The HPE3PARFCDriver and HPE3PARISCSIDriver are based on the Block Storage (Cinder) plug-in architecture, shown in Figure 2. The drivers execute volume operations by communicating with the HPE 3PAR storage system over HTTP or HTTPS and secure shell (SSH) connections, using the python-3parclient library, which is available from the Python Package Index (PyPI).
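As a concrete illustration, a single Fibre Channel backend is described in cinder.conf along the following lines. This is a minimal sketch only: the section name, hostnames, credentials, and CPG name (OpenStackCPG) are placeholder values, not settings taken from this document.

```ini
[3par_fc]
# Hypothetical backend section; all values below are examples only.
volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
volume_backend_name = 3par_fc
# WSAPI endpoint and credentials on the array
hpe3par_api_url = https://3par.example.com:8080/api/v1
hpe3par_username = 3paradm
hpe3par_password = 3parpass
# CPG used for provisioning volumes
hpe3par_cpg = OpenStackCPG
# SSH credentials used for operations not covered by the WSAPI
san_ip = 3par.example.com
san_login = 3paradm
san_password = 3parpass
```

For the iSCSI driver, the sketch is the same except that volume_driver points at cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver and iSCSI target ports are listed.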

1 Stackalytics.com, “OpenStack Queens Analysis,” March 2016. stackalytics.com/?module=cinder-group&metric=commits&release=queens


Figure 2. HPE 3PAR iSCSI and FC drivers for OpenStack Cinder

HPE 3PAR and HPE Primera storage HPE 3PAR and HPE Primera storage use a single architecture, shown in Figures 3 and 4, to deliver primary storage platforms for midrange, enterprise, and all-flash arrays.

HPE 3PAR and HPE Primera Block Storage Drivers can work with all arrays in the HPE 3PAR and HPE Primera product family. HPE 3PAR and HPE Primera storage deliver key advantages for the OpenStack community:

• High performance to meet peak demands

• Nondisruptive scalability to easily support storage growth

• Bulletproof storage to reduce downtime

• Increased efficiency to help ensure no wasted storage

• Effortless storage administration to lower operational costs and reduce time-to-value

In the latest release, the HPE 3PAR Block Storage Drivers have added Volume Replication using Remote Copy functionality and documented support for Adaptive Optimization and Data-At-Rest Encryption.


Figure 3. HPE 3PAR StoreServ product family

Figure 4. HPE Primera storage product family


Configuration The HPE 3PAR and HPE Primera Block Storage Drivers for iSCSI and Fibre Channel were introduced in the OpenStack Grizzly release. Since that release, several configuration improvements have been made, including the following:

Icehouse

• Common Provisioning Groups (CPGs) used by the HPE 3PAR and HPE Primera Block Storage Drivers are no longer required to belong to a domain. The hp3par_domain configuration setting in the cinder.conf file has been removed.

• Added support to the HPE 3PAR iSCSI OpenStack driver for selecting the best-fit target iSCSI port from a list of candidate ports.

• Enhanced quality of service features now use qos_specs instead of extra_specs.

• The Icehouse release requires hp3parclient version 3.0.0 from PyPI.

• The HPE 3PAR FC OpenStack driver can now take advantage of the Fibre Channel Zone Manager feature in OpenStack, which allows FC SAN zone or access control management. For details, see the OpenStack Configuration Reference.

Juno

• The Juno release requires hp3parclient version 3.1.1 from PyPI.

• Added support to the HPE 3PAR Fibre Channel OpenStack driver for Match Set VLUNs (requires the Fibre Channel Zone Manager) instead of Host Sets.

• The admin Horizon UI now supports adding extra_specs and qos_specs settings.

• The HPE 3PAR iSCSI OpenStack driver now supports the Challenge-Handshake Authentication Protocol (CHAP).

• Configurable SSH Host Key Policy and known hosts file.

Kilo • The Kilo release introduces support for pools. With Kilo or later, the hp3par_cpg setting in the cinder.conf file is used to define CPGs/pools. The pool name is the CPG name. The hp3par_cpg setting can now contain a comma-separated list of CPGs. This allows the scheduler to select a backend and a pool in its set of pools.

• The extra spec setting hp3par:cpg is ignored in Kilo. Instead, use the hp3par_cpg setting in the cinder.conf file to list the valid CPGs for a backend. If types referred to different CPGs with different attributes, those should be converted to multiple backends with the CPGs specified in the cinder.conf file.

• Added support for Flash Cache, which can be enabled for a volume with the hp3par:flash_cache extra-spec setting.

• Added support for Thin Deduplication volume provisioning, which can be used for provisioning a volume with the hp3par:provisioning extra-spec setting.

• The Fibre Channel Zone Manager feature in OpenStack allows FC SAN zone or access control management. For the latest configuration details for both Cisco and Brocade, see the OpenStack Configuration Reference Guide.

• The Dynamic Optimization license is required to support any feature that results in a volume changing provisioning type or CPG. This might apply to the volume migrate, retype, and manage commands.

Liberty • Cinder’s cinder.conf setting for the hp3par_username and SAN login requires just the “edit” role and the Domain setting of “all.” This works with prior releases of OpenStack as well. In the past, a “super” role was required.

• Cinder’s cinder.conf setting max_over_subscription_ratio is used for over-subscription of thin-provisioned volumes, and reserved_percentage is used to prevent over-provisioning on the HPE 3PAR storage. Fixed the optional hp3par_snapshot_retention and hp3par_snapshot_expiration parameters when set in the cinder.conf file, which were being sent to the backend as strings instead of integers.

• Added HPE 3PAR multipath iSCSI support. In cinder.conf, add the hpe3par_iscsi_ips property to the HPE 3PAR iSCSI backends that will use multipath.
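The interaction of max_over_subscription_ratio and reserved_percentage can be sketched as follows. This is an illustrative Python model, not Cinder's actual capacity filter; the helper name and exact formula are assumptions made for the sketch.

```python
# Illustrative sketch (NOT Cinder's real capacity math): how over-subscription
# and a reserved slice shape the capacity the scheduler still sees.

def apparent_free_gb(total_gb, provisioned_gb,
                     max_over_subscription_ratio=20.0,
                     reserved_percentage=0):
    """Return the 'virtual' capacity remaining for thin-provisioned volumes.

    Thin volumes may be over-committed up to the ratio times the usable
    space, after holding back the reserved percentage that protects the
    array from over-provisioning.
    """
    reserved_gb = total_gb * reserved_percentage / 100.0
    virtual_total = (total_gb - reserved_gb) * max_over_subscription_ratio
    return max(virtual_total - provisioned_gb, 0.0)

# A 1000 GB CPG with 15% reserved and a 10.0 ratio, 5000 GB already provisioned:
print(apparent_free_gb(1000, 5000,
                       max_over_subscription_ratio=10.0,
                       reserved_percentage=15))  # -> 3500.0
```

Under these assumptions, the backend still advertises 3500 GB of virtual headroom even though its physical capacity is long exceeded; reserved_percentage is what keeps that headroom from growing unboundedly.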


Mitaka • Requires python-3parclient version 4.2.0 or newer from PyPI. The client was rebranded from hpe3parclient.

• HPE 3PAR Cinder driver paths and configuration settings have been updated as part of the Hewlett Packard Enterprise rebranding.

• Added volume replication support to the HPE 3PAR drivers.

• Data-At-Rest Encryption and Adaptive Optimization support.

• Modified the behavior of creating a volume from another volume (clone) when the new volume requested is a larger size.

• The System Reporter license is required to support the gathering of statistics from the storage array’s CPGs. These statistics are used by the Cinder scheduler for volume placement.

Newton • LDAP and AD authentication are now supported in the HPE 3PAR driver.

• The HPE 3PAR backend must be properly configured for LDAP/AD authentication prior to configuring the Cinder driver. Details on setting up LDAP with HPE 3PAR can be found in the HPE 3PAR user guide.

• Once configured, hpe3par_username and hpe3par_password can be used with LDAP/Active Directory (AD) credentials.

Ocata • Image-Volume Cache is supported in the HPE 3PAR driver.

• image_volume_cache_enabled = True can be added in the backend section of cinder.conf.

Pike • Generic volume groups are supported in the HPE 3PAR driver, and support for consistency groups is removed.

• Volume Compression is supported in the HPE 3PAR driver.

• A few changes and corrections have been made to volume replication, such as multipath support for a secondary HPE 3PAR system, multiple pools in backend configuration, volume retype support, and instructions for fail-over and fail-back operations for the HPE 3PAR driver.

• A few known limitations for HPE 3PAR driver have been added.

Queens • Starting with the Pike release, support for HPE 3PAR backend capabilities has been added, which provides a list of capabilities for the HPE 3PAR backend in Cinder. The keys and possible values for the capabilities are documented.

• The Revert to Snapshot feature copies the differences in a snapshot back to its base volume, allowing you to revert the base volume to an earlier point in time.

• Support for Group level replication (Tiramisu) has been built upon generic volume groups.

• A note has been added to Fibre Channel target port selection (see Fibre Channel target port selection).

Rocky • Implemented report backend state in service list.

Stein • Added new extra spec hpe3par:convert_to_base.


Train • Added support for the HPE Primera array (OS 4.0.x). Requires python-3parclient version 4.2.11 or newer from PyPI. On HPE Primera arrays, dedup and compression are combined into a single “data reduction” option. Also, port 443 is used instead of 8080.

• Enabled multi-attach capability; now a volume can be attached to more than one instance.

• For FC bootable volumes, added the config option hpe3par_target_nsp, used when multipath is not enabled and the Fibre Channel Zone Manager is not used.

• Added Peer Persistence support.

Volume types creation

Block Storage volume types are labels that can be selected at volume create time in OpenStack. These types can be created either in the OpenStack Horizon Dashboard (formerly Admin Horizon UI) or on the command line, as shown below:

$cinder --os-username admin --os-tenant-name admin type-create <name>

The <name> is the name of the new volume type. This example illustrates how to create three new volume type names with the names gold, silver, and bronze:

$cinder --os-username admin --os-tenant-name admin type-create gold

$cinder --os-username admin --os-tenant-name admin type-create silver

$cinder --os-username admin --os-tenant-name admin type-create bronze

Report backend state in service list

Before this feature, Cinder could not report the backend state in the service list: operators only knew that the cinder-volume process was up, not whether the backend storage device was healthy, so volume creation could fail over and over again. To make maintenance easier, the operator can now query the storage device state via the service list and fix problems more quickly. If the device state is down, volume creation will fail.

The following query shows the state of the backend device—whether it is up or down.

Figure 5. Query showing state of backend device

Setting extra_specs or capabilities

After the volume type names have been created, you can assign extra_specs, qos_specs, or capabilities to these types. The filter scheduler uses the extra_specs data to determine capabilities and the backend, and it enforces strict checking. Any QoS-related settings, with the exception of the virtual volume set (VVS), must be set in the qos_specs, as described in Creating and setting qos_specs.

The extra_specs or capabilities must be set or unset for a volume type. The extra_specs are set or unset either in the OpenStack Horizon Dashboard or using the command line, as shown below:

$cinder --os-username admin --os-tenant-name admin type-key <vtype> <action> [<key=value> [<key=value> ...]]

The argument <vtype> is the name or ID of the previously created volume type (for example, gold, silver, or bronze). The argument <action> must be one of the actions: set or unset. The optional argument <key=value> is the extra_specs to set. Only the key is necessary for unset.


Any or all of the following capabilities can be set on a volume type. These capabilities override the default values that were specified in the cinder.conf file, or they are additional capabilities that HPE 3PAR offers. For information on providing constraints on when the VVS and QoS settings are set for a single volume type, see extra_specs restrictions.

• volume_backend_name

Assign a volume type to a particular Block Storage Driver and set the volume_backend_name key to match the value specified in the cinder.conf file for that Block Storage Driver.

• Scoping hpe3par

This setting is required for all the HPE 3PAR specific keys. The current list of supported HPE 3PAR keys includes the following:

– hpe3par:flash_cache

Valid values are true and false (defaults to false).

– hpe3par:snap_cpg

Overrides the hpe3par_cpg_snap setting. Defaults to the hpe3par_cpg_snap setting in the cinder.conf file. If hpe3par_cpg_snap is not set, this setting defaults to the hpe3par_cpg setting.

– hpe3par:provisioning

Defaults to thin provisioning. Valid values are the following: thin, dedup, and full. The dedup value is for thin deduplication provisioned volumes.

– hpe3par:persona

Defaults to the “2—Generic-ALUA” persona. The valid values are the following: 1—Generic, 2—Generic-ALUA, 3—Generic-legacy, 4—HPUX-legacy, 5—AIX-legacy, 6—EGENERA, 7—ONTAP-legacy, 8—VMware, 9—OpenVMS, 10—HPUX, and 11—Windows Server.

– hpe3par:convert_to_base

Valid values are True and False; defaults to False. When a volume is created from a snapshot, the value of this extra spec is checked.

True: The volume is created independently of the snapshot. Advantage: new volumes are decoupled from the initial snapshot and original volume. Usage: where instances are launched for a longer, unpredictable time frame.

False: The volume is created as a child of the snapshot. Advantage: new volumes can be created quickly. Usage: where many similar instances are launched for a short time frame.

Note The HPE 3PAR Web Services API requires these personas. The numerical values are different from what displays in the HPE 3PAR Management Console and the HPE 3PAR command line interface (CLI).

• Different CPGs should be controlled by configuring separate backends with pools.

• To use VVS settings, the HPE 3PAR and HPE Primera storage array must have an HPE 3PAR Priority Optimization license installed.

• hpe3par:vvs

The name of a virtual volume set, set up by the administrator, that has predefined QoS rules associated with it. If you specify the extra_spec hpe3par:vvs, the qos_specs minIOPS, maxIOPS, minBWS, and maxBWS settings are ignored.

• hpe3par:compression

Defaults to false. Valid values are true and false.

Set examples: $cinder type-key gold set hpe3par:snap_cpg=SNAPCPG volume_backend_name=3par_FC

$cinder type-key silver set hpe3par:provisioning=full volume_backend_name=3par_ISCSI


$cinder type-key bronze set hpe3par:vvs=myvvs volume_backend_name=iscsi

Unset examples: $cinder type-key gold unset hpe3par:snap_cpg

To list all the volume types and extra_specs currently configured, use the command:

$cinder --os-username admin --os-tenant-name admin extra-specs-list

extra_specs restrictions

Certain constraints apply when using one or more of the extra_specs as documented in Setting extra_specs or capabilities.

• If hpe3par:snap_cpg is set per volume type, it must be in the same Virtual Domain as the backend’s CPGs on the HPE 3PAR and HPE Primera storage array.

• The hpe3par:persona is set on a per-volume basis, but is not actually used until that volume is attached to an instance and an HPE 3PAR host is created. In this case, the first volume’s persona to be attached to the host is used. Additional volumes that have a different persona are still attached, but their persona is ignored. They use the persona of the first attached volume.

• Errors occur if you attempt to use the vvs or the qos setting without the Priority Optimization license installed on the HPE 3PAR and HPE Primera storage array.

• If you specify hpe3par:vvs virtual volume set as an extra_spec and one or more of the qos settings (via qos_specs), the qos settings are ignored and the volumes are created in the VVS specified.

• Volumes that have been cloned only support these extra_spec keys: hpe3par:snap_cpg, hpe3par:provisioning, and hpe3par:vvs. All others are ignored. In addition, the comments section of the cloned volume in the HPE 3PAR array will not be populated.

• If you specify hpe3par:flash_cache, the HPE 3PAR and HPE Primera storage array must meet the following requirements:

– Firmware version HPE 3PAR OS 3.2.1 MU2 and Web Services API version 1.4.2

– Adaptive Flash Cache license installed

– Available Solid State Drives (SSDs)

– The assigned CPG for a Flash Cache volume must be set to device type of “SSD.”

• Flash Cache must be enabled on the HPE 3PAR and HPE Primera storage array. This is done with the CLI command:

createflashcache <size> (Where <size> must be in 16 GB increments.)

For example, the following command creates 128 GB of Flash Cache for each node pair in the array:

createflashcache 128g
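The 16 GB increment rule can be captured in a small helper. This is a hypothetical function, shown only to illustrate the sizing constraint; it is not part of any HPE or OpenStack tooling.

```python
# Hypothetical helper: round a requested Flash Cache size up to the next
# valid 16 GB increment, since createflashcache only accepts such sizes.

def flash_cache_size_gb(requested_gb, increment_gb=16):
    """Round the requested Flash Cache size up to a 16 GB multiple."""
    blocks = -(-requested_gb // increment_gb)   # ceiling division
    return blocks * increment_gb

print(flash_cache_size_gb(100))  # -> 112 (next 16 GB multiple above 100)
print(flash_cache_size_gb(128))  # -> 128 (already a valid multiple)
```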

• If you specify deduplication as the hpe3par:provisioning value, the HPE 3PAR array must meet the following requirements:

– Firmware version HPE 3PAR OS 3.2.1 MU1 and Web Services API version 1.4.1

– Thin Deduplication license installed

– Supported only on SSDs

– The assigned CPG for a Thin Deduplication volume must be set to device type of “SSD.”

• If you specify hpe3par:compression, the HPE 3PAR array must meet the following requirements:

– Firmware version HPE 3PAR OS 3.3.1

– HPE 3PAR system with 8k or 20k series

– Compression license installed

– Supported only on SSDs

– The assigned CPG for a compressed volume must be set to device type of “SSD.”


• Other restrictions and considerations:

– For a compressed volume, the minimum volume size is 16 GB; otherwise, the resulting volume will be created successfully but will not be a compressed volume.

– A fully provisioned volume cannot be compressed. If compression is enabled and provisioning type requested is full, the resulting volume defaults to a thinly provisioned compressed volume.
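These two fallback rules can be sketched as follows. The helper is hypothetical and only models the behavior stated above; the driver's real logic lives in the HPE 3PAR driver code.

```python
# Hypothetical model of the compression fallback rules described above:
#  - volumes under 16 GB are created successfully but uncompressed;
#  - a "full" provisioning request with compression falls back to a
#    thinly provisioned compressed volume.

def effective_volume_attrs(size_gb, provisioning, compression):
    if compression and size_gb < 16:
        compression = False          # created, but not actually compressed
    if compression and provisioning == "full":
        provisioning = "thin"        # full volumes cannot be compressed
    return provisioning, compression

print(effective_volume_attrs(32, "full", True))  # -> ('thin', True)
```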

Creating and setting qos_specs

The qos_specs must be created and associated with a volume type. To use these QoS settings, the HPE 3PAR array must have a Priority Optimization license installed. As of the Icehouse release, the HPE 3PAR qos_specs do not require scoping.

• minIOPS

Sets the QoS I/O issue count minimum goal. If not specified, there is no limit on I/O issue count.

• maxIOPS

Sets the QoS I/O issue count rate limit. If not specified, there is no limit on I/O issue count.

• minBWS

Sets the QoS I/O issue bandwidth minimum goal. If not specified, there is no limit on I/O issue bandwidth rate.

• maxBWS

Sets the QoS I/O issue bandwidth rate limit. If not specified, there is no limit on I/O issue bandwidth rate.

• latency

Sets the latency goal in milliseconds.

• priority

Sets the priority of the QoS rule over other rules. Defaults to normal; valid values are low, normal, and high.

Any or all of these capabilities can be set on a volume type. They override the default values that were specified in the cinder.conf file or are additional capabilities that the HPE 3PAR array offers. For information about providing constraints on when the VVS and QoS settings are set for a single volume type, see extra_specs restrictions.

minIOPS and maxIOPS must be used together to set I/O limits. Similarly, minBWS and maxBWS must be used together. If only one parameter is set, the other is set to the same value. For example, if a qos-create was called with only minIOPS=10000 being set, maxIOPS would also be set to 10000.
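The pairing behavior described above can be sketched as follows (the helper name and dict-based interface are assumptions for illustration):

```python
# Hypothetical sketch of the min/max pairing rule: if only one member of
# a min/max pair is given, the other is set to the same value.

def complete_qos_pair(specs, min_key, max_key):
    """Fill in the missing member of a min/max QoS pair, if any."""
    if min_key in specs and max_key not in specs:
        specs[max_key] = specs[min_key]
    elif max_key in specs and min_key not in specs:
        specs[min_key] = specs[max_key]
    return specs

# qos-create called with only minIOPS=10000:
print(complete_qos_pair({"minIOPS": 10000}, "minIOPS", "maxIOPS"))
# -> {'minIOPS': 10000, 'maxIOPS': 10000}
```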

All qos_specs can be made in the OpenStack Horizon Dashboard or on the command line. To list all the qos_specs currently configured, use this command:

$cinder --os-username admin --os-tenant-name admin qos-list

The qos_specs can be created by using the qos-create command, following this format:

$cinder --os-username admin --os-tenant-name admin qos-create <name> <key=value> [<key=value> [<key=value>...]]

The argument <name> is the name of the new qos_specs. The argument <key=value> is the qos_specs to set the key and value that will be created for this qos_specs. At least one key=value pair is required.

You can set or unset keys and values on the command line only after the qos_specs are created, using this format:

$cinder --os-username admin --os-tenant-name admin qos-key <qos_specs> <action> [<key=value> [<key=value>...]]


The argument <qos_specs> is the ID of the qos_specs. You can retrieve the ID of the qos_specs by running cinder qos-list. The argument <action> must be one of these actions: set or unset. The argument <key=value> is the qos_specs to set or unset the key. Only the key is necessary on unset.

You can connect the qos_specs to a volume type by making an association. You can associate the qos_specs_id to the volume_type_id that is connected to a Block Storage Driver by issuing this command:

$cinder --os-username admin --os-tenant-name admin qos-associate <qos_specs_id> <volume_type_id>

You can undo an association using the qos-disassociate command:

$cinder --os-username admin --os-tenant-name admin qos-disassociate <qos_specs_id> <volume_type_id>

To find the <qos_specs_id>, run the cinder qos-list command. To find the <volume_type_id>, run the cinder extra-specs-list command. The volume type used must also have a volume_backend_name assigned to it.

volume_backend_name=<volume backend name>

Create examples: $cinder qos-create high_iops minIOPS=1000 maxIOPS=100000

$cinder qos-create high_bws maxBWS=5000

Set examples: $cinder qos-key 563055a9-f17f-4553-8595-4a948b5bf010 set priority=high minIOPS=100000

$cinder qos-key d58adb0b-a282-43c5-8c13-550c38df31b8 set maxIOPS=2000 maxBWS=100

Unset examples: $cinder qos-key 563055a9-f17f-4553-8595-4a948b5bf010 unset priority

$cinder qos-key d58adb0b-a282-43c5-8c13-550c38df31b8 unset maxIOPS maxBWS

When you want to unset a key value pair from a volume type, only the key is required.

Associate example: $cinder qos-associate 563055a9-f17f-4553-8595-4a948b5bf010 71ca8337-5cbf-43f5-b634-c0b35808d9c4

Where “563055a9-f17f-4553-8595-4a948b5bf010” is the ID of the qos_specs and “71ca8337-5cbf-43f5-b634-c0b35808d9c4” is the ID of the volume type. This ID can be found by running cinder qos-list and cinder extra-specs-list commands.

Disassociate example: $cinder qos-disassociate 563055a9-f17f-4553-8595-4a948b5bf010 71ca8337-5cbf-43f5-b634-c0b35808d9c4

qos_specs restrictions

Certain constraints apply when using one or more of the qos_specs documented in Creating and setting qos_specs.

• Errors occur if you attempt to use vvs or the QoS setting without the Priority Optimization license installed on the HPE 3PAR array.

• If you specify the hpe3par:vvs virtual volume set as an extra_spec and one or more of the QoS settings, the QoS settings are ignored and the volume is created in the VVS specified.

Backend capabilities

Backend capabilities for HPE 3PAR provide a mechanism that allows users who are managing Cinder installations with multiple HPE 3PAR backends to assign key/value pairs to the HPE 3PAR backends with the help of volume types.

The previous implementation of the volume type and extra_specs management system in the OpenStack Horizon Dashboard and in the Cinder client was error-prone: a user could add or remove extra_specs from a volume type without knowing what capabilities or volume policies are possible for an HPE 3PAR backend. If you do not know the key/value pairs for extra_specs in the HPE 3PAR backend, see Setting extra_specs or capabilities and Creating and setting qos_specs.


The Queens release added support for HPE 3PAR backend capabilities, which provide a list of capabilities for the HPE 3PAR storage backend in Cinder. The keys and possible values for the capabilities can be generated by using the following commands.

Issue the commands below to get the list of capabilities for the HPE 3PAR backend:

1. To get a list of the services, issue the following command:

$openstack volume service list

2. After identifying one of the hosts listed for HPE 3PAR storage, pass that host name to the get-capabilities parameter using the following command:

$cinder get-capabilities <host@backend>

For example:

$cinder get-capabilities host1@3PARISCSI

Multiple storage backend support and Block Storage configuration

Detailed instructions about setting up multiple backends can be found in the OpenStack Administrator Guide.

The multi-backend configuration is done in the cinder.conf file. The enabled_backends flag must be set up. This flag defines the names (separated by a comma) of the configuration groups for the different backends. For each backend, one name is associated to one configuration group (for example, [3parfc-1]). Each group must have a full set of the driver-required configuration options. The following listing shows a sample cinder.conf file for three HPE 3PAR array backends—configuring two Fibre Channel drivers and one iSCSI Cinder driver.

Note Currently, the HPE 3PAR drivers communicate with the HPE 3PAR array over HTTP or HTTPS and SSH. This means that both the hpe3par_username/password and san_login/password entries must be configured in the cinder.conf file.

# List of backends that will be served by this node

enabled_backends=3parfc-1,3parfc-2,3pariscsi-1

#
[3parfc-1]

volume_driver=cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver

volume_backend_name=3par_FC

hpe3par_api_url=https://10.10.22.241:8080/api/v1

hpe3par_username=<username>

hpe3par_password=<password>

hpe3par_cpg=OpenStackCPG_RAID5_NL,cpggold1

san_ip=10.10.22.241

san_login=<san_username>

san_password=<san_password>

max_over_subscription_ratio=10.0

reserved_percentage=15

image_volume_cache_enabled = True


#

[3parfc-2]

volume_driver=cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver

volume_backend_name=3par_FC

hpe3par_api_url=https://10.10.22.242:8080/api/v1

hpe3par_username=<username>

hpe3par_password=<password>

hpe3par_cpg=OpenStackCPG_RAID6_NL,cpggold2

san_ip=10.10.22.242

san_login=<san_username>

san_password=<san_password>

hpe3par_snapshot_retention=48

hpe3par_snapshot_expiration=72

image_volume_cache_enabled = True

#

[3pariscsi-1]

volume_driver=cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver

hpe3par_iscsi_ips=10.10.220.253,10.10.220.254

hpe3par_api_url=https://10.10.22.243:8080/api/v1

volume_backend_name=3par_ISCSI

hpe3par_username=<username>

hpe3par_password=<password>

hpe3par_cpg=OpenStackCPG_RAID6_ISCSI

san_ip=10.10.22.243

san_login=<username>

san_password=<password>

image_volume_cache_enabled = True

In this configuration, both the 3parfc-1 and 3parfc-2 have the same volume_backend_name. When a volume request comes in with the 3par_FC backend name, the scheduler must choose which one is most suitable. This is done with the capacity filter scheduler. For details, see Block Storage scheduler configuration with multi-backend. This example also includes a single iSCSI-based HPE 3PAR Cinder driver with a different volume_backend_name.

In this configuration, both 3parfc-1 and 3parfc-2 also show multiple CPGs in their hpe3par_cpg option. These CPGs are used as “pools.”

iSCSI target port selection

The HPE 3PAR iSCSI OpenStack driver provides the ability to select the best-fit target iSCSI port from a list of candidate ports. The first time a volume is attached to a host, all iSCSI ports configured for driver selection are examined for best fit. The port with the fewest active volumes attached is selected as the communication path to the HPE 3PAR array. Any subsequent volumes attached to the same host will use the established target port.
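A minimal sketch of best-fit selection, assuming the driver can obtain a mapping of candidate IPs to active volume counts (the data shape and function name are hypothetical):

```python
# Hypothetical sketch: choose the candidate iSCSI port with the fewest
# active volumes attached; ties are broken by iteration order.

def best_fit_port(active_volumes_by_port):
    """active_volumes_by_port: dict mapping target IP -> active volume count."""
    return min(active_volumes_by_port, key=active_volumes_by_port.get)

ports = {"10.10.220.253": 12, "10.10.220.254": 7}
print(best_fit_port(ports))  # -> 10.10.220.254
```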


To configure the candidate iSCSI ports used for best-fit selection, set the cinder.conf option hpe3par_iscsi_ips with a comma-separated list of IP addresses. Do not use quotes around the list. For example, the section for the backend config group name [3pariscsi-1] in the previous cinder.conf file example is as follows:

hpe3par_iscsi_ips=10.10.220.253,10.10.220.254

If the single iSCSI cinder.conf option iscsi_ip_address is set, this address is included as a possible candidate for port selection at volume attach time.

At driver startup, target iSCSI ports are verified with the HPE 3PAR array to make sure each is a valid iSCSI port. If an invalid iSCSI port is identified, the following message is logged to the cinder-volume log file:

2013-07-02 08:50:50.934 WARNING cinder.volume.drivers.hpe.hpe_3par_iscsi [req-6c6e6807-5543-46dd-ba66-30149f24758d None None] Found invalid IP address(s) in configuration option(s) hpe3par_iscsi_ips or iscsi_ip_address '10.10.22.230, 10.10.220.25'

If no valid iSCSI port is found, the following exception is logged and the driver fails:

2013-07-02 08:53:57.559 TRACE cinder.service InvalidInput: Invalid input received: At least one valid iSCSI IP address must be set.
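The startup validation just described might be sketched as follows. The function name and messages are simplified assumptions, not the driver's actual code.

```python
# Hypothetical sketch of the driver-startup check: configured iSCSI IPs
# are compared against the array's valid iSCSI ports. Invalid IPs are
# logged and skipped; if none are valid, the driver fails.

def validate_iscsi_ips(configured_ips, array_iscsi_ips):
    valid_set = set(array_iscsi_ips)
    valid = [ip for ip in configured_ips if ip in valid_set]
    invalid = [ip for ip in configured_ips if ip not in valid_set]
    if invalid:
        print("WARNING: invalid iSCSI IP(s): %s" % ", ".join(invalid))
    if not valid:
        raise ValueError("At least one valid iSCSI IP address must be set.")
    return valid
```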

iSCSI multipath support

Support for iSCSI multipath was added to the HPE 3PAR iSCSI driver in the Liberty release and the HPE 3PAR FC driver in the Pike release. The steps to set up multipath with HPE 3PAR iSCSI are described in the following sections.

The first step is to set up the image transfer options in cinder.conf as documented in the OpenStack Configuration Reference Guide for general multipath settings (that is, enforce_multipath_for_image_xfer, iscsi_use_multipath, and so on).

For volume multipath setup, refer to the volume_use_multipath option in nova-cpu.conf as documented in the OpenStack Configuration Reference Guide for volume multipath settings.

After OpenStack is set up to use multipath, the next step is to determine which HPE 3PAR iSCSI IPs will be used as potential paths. After this is decided, the cinder.conf file must be edited. In cinder.conf, add the hpe3par_iscsi_ips property to the HPE 3PAR iSCSI backends that will be utilizing multipath. The property should look similar to the example given in iSCSI target port selection. The following is an example of how a cinder.conf file should look:

[3pariscsi-1]

volume_driver=cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver

hpe3par_iscsi_ips=10.10.120.227,10.10.220.228,10.10.220.229

iscsi_ip_address = 10.10.120.227

hpe3par_api_url=https://10.10.22.243:8080/api/v1

volume_backend_name=3par_ISCSI

hpe3par_username=<username>

hpe3par_password=<password>

hpe3par_cpg=OpenStackCPG_RAID6_ISCSI

san_ip=10.10.22.243

san_login=<username>

san_password=<password>

image_volume_cache_enabled = True


In addition, a change is required in the following section of nova-compute.conf:

[libvirt]

...

...

volume_use_multipath = True

Note An attempt will be made to use all of the IPs that are listed in the hpe3par_iscsi_ips property. For Liberty and Mitaka releases, add the iscsi_use_multipath=true record in the libvirt section of nova.conf. For Newton and later releases, add the volume_use_multipath=true record in the libvirt section of nova.conf.

With multipath enabled and the iSCSI backend entry updated in cinder.conf, the backend is now ready to support multipath. When performing attaches to volumes, note that LUNs are created for each of the IPs defined in cinder.conf. If an IP is unreachable or the port itself is not in a ready state, the port is skipped and unused.

Fibre Channel target port selection

Before the Juno release, the HPE 3PAR FC OpenStack driver always used all available FC ports on the HPE 3PAR host when an instance was attached to a volume, even if only one FC path was available to that host. Now, the HPE 3PAR FC OpenStack driver can detect whether a single FC path is available. When a single FC path is detected, only a single VLUN is created, instead of one for every available NSP (node:slot:port) on the HPE 3PAR host. This prevents an HPE 3PAR host from using extra FC ports that are not needed. If multiple FC paths are available, all the ports are used.

Note When attaching a volume via FC, a VLUN template of host_sees type is created. This creates only one VLUN template per host, rather than one VLUN template per available FC port.

By default, the VLUN template is of host_sees type in Ocata and later releases, whereas pre-Ocata releases default to matched_set. See review.opendev.org/#/c/522947/1 for details.

To configure the HPE 3PAR OpenStack FC driver target port selection (added in Juno), the Fibre Channel Zone Manager must be configured and zone_mode=fabric must be set in cinder.conf to enable target port selection. If zone_mode is not set in cinder.conf (it defaults to None), all available FC ports are used. See the OpenStack Pike Configuration Guides for details.

Block Storage scheduler configuration with multi-backend

Multi-backend must be used with filter_scheduler enabled. The filter scheduler acts in two steps:

1. Filter scheduler filters the available backends. By default, AvailabilityZoneFilter, CapacityFilter and CapabilitiesFilter are enabled.

2. Filter scheduler weighs the previously filtered backends. By default, the CapacityWeigher is enabled. The CapacityWeigher assigns high scores to backends with the most available space.

According to the results of filtering and weighing, the scheduler is able to pick “the best” backend to handle the request. In that way, filter scheduler achieves the goal of explicitly creating volumes on specific backends using volume types.
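The two steps above can be sketched as a toy model. In reality, filters and weighers are classes configured in cinder.conf; the backend records and function below are assumptions for illustration.

```python
# Toy filter-then-weigh sketch (NOT Cinder's scheduler implementation):
# step 1 filters out backends that cannot hold the volume, step 2 weighs
# the survivors; highest free capacity wins, like the CapacityWeigher.

def schedule(backends, needed_gb):
    # Step 1: filter the available backends.
    candidates = [b for b in backends if b["free_gb"] >= needed_gb]
    if not candidates:
        return None
    # Step 2: weigh the filtered backends and pick "the best" one.
    return max(candidates, key=lambda b: b["free_gb"])["name"]

backends = [{"name": "3parfc-1", "free_gb": 500},
            {"name": "3parfc-2", "free_gb": 1200}]
print(schedule(backends, 100))  # -> 3parfc-2
```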

The default scheduler is the FilterScheduler (scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler). Because it is the default, this setting does not need to be added to the cinder.conf file.


Block Storage scheduler configuration with driver filter and weigher

The driver filter and weigher for the Block Storage scheduler is a feature (new in Kilo) that, when enabled, allows a filter function and a goodness function to be defined in the cinder.conf file. The two functions are used during volume creation by the Block Storage scheduler to determine which backend is best for the volume. The filter function is used to filter out backend choices that should not be considered. The goodness function is used to rank the filtered backends from 0 to 100. This feature should be used when the default Block Storage scheduling does not provide enough control over volume placement.

Enable the driver filter by adding DriverFilter to the scheduler_default_filters property in your cinder.conf file. Similarly, enable the driver weigher by adding GoodnessWeigher to the scheduler_default_weighers property. If you want to include other OpenStack filters and weighers in your setup, add those to the scheduler_default_filters and scheduler_default_weighers properties as well.

Note You can choose to have only the DriverFilter or GoodnessWeigher enabled in your cinder.conf file, depending on how much customization you want.

OpenStack supports various math operations that can be used in the filter and goodness functions. The currently supported list of math operations is shown in Table 1.

Table 1. Supported math operations for filter and goodness functions

Operations                        Type
+, -, *, /, ^                     standard math
not, and, or, &, |, !             logic
>, >=, <, <=, ==, <>, !=          equality
+, -                              sign
x ? a : b                         ternary
abs(x), max(x, y), min(x, y)      math helper functions

Several driver-specific properties are available for use in the filter and goodness functions for an HPE 3PAR backend. The currently supported list of HPE 3PAR-specific properties includes the following:

• capacity_utilization

The percent of total space used on the HPE 3PAR CPG.

• total_volumes

The total number of volumes on the HPE 3PAR CPG.

Additional generic volume properties are available from OpenStack for use in the filter and goodness functions. These properties can be found in the OpenStack Administrator Guide.

Note Access HPE 3PAR-specific properties by using the following format in your filter or goodness functions: capabilities.<property>

The following sample cinder.conf file shows an example of how several HPE 3PAR backends can be configured to use the driver filter and weigher from the Block Storage scheduler:


[DEFAULT]

cinder_internal_tenant_project_id = PROJECT_ID

cinder_internal_tenant_user_id = USER_ID

scheduler_default_filters = DriverFilter

scheduler_default_weighers = GoodnessWeigher

enabled_backends = 3parfc-1,3parfc-2,3parfc-3

[3parfc-1]

hpe3par_api_url = <api_url>

hpe3par_username = <username>

hpe3par_password = <password>

san_ip = <san_ip>

san_login = <san_username>

san_password = <san_password>

volume_backend_name = 3parfc

hpe3par_cpg = CPG-1

volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver

filter_function = "capabilities.total_volumes < 10"

goodness_function = "(capabilities.capacity_utilization < 75)? 90 : 50"

[3parfc-2]

hpe3par_api_url = <api_url>

hpe3par_username = <username>

hpe3par_password = <password>

san_ip = <san_ip>

san_login = <san_username>

san_password = <san_password>

volume_backend_name = 3parfc

hpe3par_cpg = CPG-2

volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver

filter_function = "capabilities.total_volumes < 10"

goodness_function = "(capabilities.capacity_utilization < 50)? 95 : 45"

image_volume_cache_enabled = True

[3parfc-3]

hpe3par_api_url = <api_url>

hpe3par_username = <username>

hpe3par_password = <password>

san_ip = <san_ip>

san_login = <san_username>

san_password = <san_password>


volume_backend_name = 3parfc

hpe3par_cpg = CPG-3

volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver

filter_function = "capabilities.total_volumes < 20"

goodness_function = "(capabilities.capacity_utilization < 90)? 75 : 40"

image_volume_cache_enabled = True

In the sample listing, there are three HPE 3PAR backends enabled in the cinder.conf file. The sample shows how you can use HPE 3PAR-specific properties to distribute volumes with more control than the default Block Storage scheduler.

Note Remember that you can combine the HPE 3PAR specific properties with the generic volume properties provided by OpenStack. Also, the values used in the previous sample are only examples. In your own environment you have full control over the filter and goodness functions that you create. For more information, see the OpenStack Administrator Guide.

For more information about creating a useful goodness_function, see the Appendix.
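To make the scheduling flow concrete, here is an illustrative Python sketch (not Cinder source code) that applies the three sample filter and goodness functions from the listing above to invented backend statistics:

```python
# Illustrative sketch only: how DriverFilter and GoodnessWeigher conceptually
# evaluate the three sample backends above. Backend stats are invented.

backends = {
    # name: (total_volumes, capacity_utilization in percent)
    "3parfc-1": (8, 80.0),
    "3parfc-2": (9, 40.0),
    "3parfc-3": (15, 85.0),
}

def filter_fn(name, total_volumes):
    # Mirrors the filter_function strings from the sample cinder.conf
    limit = 20 if name == "3parfc-3" else 10
    return total_volumes < limit

def goodness_fn(name, util):
    # Mirrors the goodness_function strings from the sample cinder.conf
    if name == "3parfc-1":
        return 90 if util < 75 else 50
    if name == "3parfc-2":
        return 95 if util < 50 else 45
    return 75 if util < 90 else 40

# Step 1: filter, step 2: weigh, then pick the highest-scoring backend.
candidates = [(name, goodness_fn(name, util))
              for name, (vols, util) in backends.items()
              if filter_fn(name, vols)]
best = max(candidates, key=lambda pair: pair[1])
print(best)  # ('3parfc-2', 95)
```

With these invented statistics, all three backends pass their filters, and 3parfc-2 wins because it earns the highest goodness score.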

Volume types assignment
To specify a volume_backend_name for each volume type you create, use the following command or the OpenStack Horizon Dashboard (new in Juno) to link the volume type to a backend name.

$ cinder --os-username admin --os-tenant-name admin type-key gold set volume_backend_name=3parfc-1

The second volume type could be for an iSCSI driver volume type named “silver.”

$ cinder --os-username admin --os-tenant-name admin type-key silver set volume_backend_name=3pariscsi-1

Multiple key-value pairs can be specified when running this command. For example, you could run the following command to create a volume type named “gold,” with a CPG of OpenStack_RAID5_FC, and a host persona of VMware® with full provisioning:

$ cinder --os-username admin --os-tenant-name admin type-key gold set volume_backend_name=3parfc-1 hpe3par:persona='11 - VMware' hpe3par:provisioning=full


Multiple backend requirements
• From the Havana release forward, you can name the volume_backend_name whatever you like.

• The driver now looks up the domain based on the CPG specified in either the cinder.conf file or hpe3par:cpg (extra-spec volume type setting prior to Kilo only).

• If you try to attach volumes from different domains to the same HPE 3PAR host, errors occur.

Volume migration
Starting in the Icehouse release, volumes can be migrated between different CPGs in the same HPE 3PAR backend (directly within the backend). Volume migration requires that you have the Dynamic Optimization license installed on the HPE 3PAR storage array. First, configure Cinder to use multiple backends, as explained in Block Storage scheduler configuration with multi-backend. Using the command line, you can see the available driver instances represented as "hosts" within the cinder.conf file. To list these hosts, use the following command:

$cinder-manage host list

mystack

mystack@3parfc-1

mystack@3parfc-2

mystack@3pariscsi-1

To see which HPE 3PAR driver instance is managing a particular volume, use this command:

$cinder show <volume_id>

Where <volume_id> represents the volume ID. The managing driver instance is shown in the os-vol-host-attr:host attribute, for example:

os-vol-host-attr:host    mystack@3parfc-1#cpggold1
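The value of os-vol-host-attr:host follows the pattern host@backend#pool. As a quick illustration (a hypothetical helper, not part of Cinder), such a string can be split as follows:

```python
def parse_cinder_host(host_attr):
    """Split an os-vol-host-attr:host value of the form host@backend#pool."""
    host, _, rest = host_attr.partition("@")      # hypothetical helper
    backend, _, pool = rest.partition("#")
    return host, backend, pool

print(parse_cinder_host("mystack@3parfc-1#cpggold1"))
# ('mystack', '3parfc-1', 'cpggold1')
```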

To migrate a volume to a different driver instance, and therefore to a different CPG, use this command:

$cinder migrate <volume_id> <host>#<pool>

Where <volume_id> represents the volume ID and <host> represents the driver instance. The <pool> is required. In the Kilo release and later, the HPE 3PAR drivers use the CPG as the pool. Here is an example:

$cinder migrate 3e57599e-7327-4596-a45f-d29939c836cf mystack@3parfc-2#cpggold2

Note Cinder migrate requires the source and destination driver instances to have the same volume_backend_name in the cinder.conf file. Changing the previous example so that all three drivers have the same volume_backend_name=3par enables volume migration among all of them.
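As a sketch of the change described in the note (all other backend settings elided and unchanged), giving the driver instances a shared backend name would look like this in cinder.conf:

```ini
[3parfc-1]
volume_backend_name = 3par
# ... remaining backend settings unchanged ...

[3parfc-2]
volume_backend_name = 3par
# ... remaining backend settings unchanged ...

[3pariscsi-1]
volume_backend_name = 3par
# ... remaining backend settings unchanged ...
```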

Clone volume
When cloning a volume, the HPE 3PAR driver can take additional time depending on the size of the new volume requested. When a volume is created with a source volume ID (clone) and the new volume's size differs from the original, a slower operation takes place: the driver asks HPE 3PAR to clone the volume and then waits for that task to complete, which can take some time. When the new volume's size is the same as the original, an online copy is made on the HPE 3PAR array and the cloned volume is available immediately for use.

Volume manage and unmanage
Starting in the Juno release, HPE 3PAR volumes can be managed and unmanaged. This allows importing non-OpenStack volumes already on an HPE 3PAR array into OpenStack/Cinder, or exporting them, which removes volumes from the OpenStack/Cinder perspective while leaving the volume on the HPE 3PAR array intact. Using the command line, you can see the available driver instances represented as "hosts" within the


cinder.conf file. This host is where the HPE 3PAR volume that you want to manage resides. To see the list of hosts, use the following command:

$cinder-manage host list

mystack

mystack@3parfc-1

mystack@3parfc-2

mystack@3pariscsi

To manage the volumes with what exists on the HPE 3PAR array but is not already managed by OpenStack/Cinder, use this command:

$cinder manage --name [<cinder name>] <host>#<pool> <source-name>

Where <source-name> represents the name of the volume to manage, <cinder name> is optional but represents the OpenStack name, and <host> represents the driver instance. The <pool> is required for the Juno release. In the Juno release, for the HPE 3PAR drivers, <pool> is simply a repeat of the driver backend name. In the Kilo release and later, the HPE 3PAR drivers use one of the CPGs configured for the backend as <pool>. The manage volume command also accepts an optional <--volume-type> parameter that performs a retype of the virtual volume after being managed:

$cinder manage --name volgold mystack@3parfc-2#cpggold2 volume123

Note Cinder manage renames the volume on the HPE 3PAR array to a name that starts with osv- followed by a UUID, which is required for OpenStack/Cinder to locate the volume under its management.

To unmanage a volume from the OpenStack/Cinder and leave the volume intact on the HPE 3PAR array, use this command:

$cinder unmanage <volume_id>

Where <volume_id> is the ID of the OpenStack/Cinder volume to unmanage. For example:

$cinder unmanage 16ab6873-eb09-4522-8d0f-91aab83be34d

Note Cinder unmanage removes the OpenStack/Cinder volume from OpenStack, but the volume remains intact on the HPE 3PAR array.

The volume name has a umn- prefix, followed by an encoded UUID, which is required because HPE 3PAR has name length and character limitations.
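For illustration, the following Python sketch approximates the prefix-plus-encoded-UUID naming scheme described above (base64 of the UUID bytes, with characters HPE 3PAR disallows substituted or stripped). It is an approximation for clarity; the driver's actual implementation may differ in detail:

```python
import base64
import uuid

def encode_3par_name(prefix, volume_id):
    # Approximation of the naming scheme: base64-encode the raw UUID bytes,
    # replace '+' with '.', '/' with '-', and strip '=' padding so the result
    # fits HPE 3PAR's name length and character limitations.
    raw = uuid.UUID(volume_id).bytes
    encoded = base64.b64encode(raw).decode()
    encoded = encoded.replace('+', '.').replace('/', '-').rstrip('=')
    return prefix + encoded

name = encode_3par_name('osv-', '16ab6873-eb09-4522-8d0f-91aab83be34d')
print(name, len(name))  # osv-Fqtoc.sJRSKND5GquDvjTQ 26
```

The same scheme with the umn-, oss-, or ums- prefix yields the unmanaged-volume and snapshot names shown in the examples that follow.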

Revert to snapshot
This feature copies the differences of a snapshot back to its base volume, allowing you to revert the base volume to an earlier point in time.

$cinder --os-volume-api-version 3.42 revert-to-snapshot <snap-id>

The revert process overwrites the current state and data of the base volume with the virtual copy (snapshot). If the volume was extended after the snapshot was taken, the request to revert the volume is rejected.

If multiple snapshots exist, Cinder can revert only the most recent volume snapshot known to Cinder.

Note The revert-to-snapshot feature on a replicated volume is supported on HPE 3PAR version 3.3.1 and later.


Snapshot manage and unmanage
Starting in the Mitaka release, volume snapshots can be managed and unmanaged from OpenStack. Unmanaging a volume snapshot removes the snapshot from the OpenStack perspective, but the snapshot remains intact on the HPE 3PAR array. This can be useful when volumes are being unmanaged from OpenStack, because a volume's snapshots are not automatically unmanaged with it. Bringing volume snapshots back into the OpenStack perspective involves using the snapshot-manage command. The process for managing and unmanaging snapshots is similar to that for volumes.

Use the following command to manage a volume snapshot from an HPE 3PAR array:

$cinder snapshot-manage <parent_volume_name> <snapshot_name>

Where <parent_volume_name> represents the name of the parent volume for the snapshot being managed and <snapshot_name> is the name of the snapshot on the HPE 3PAR array. For example:

$cinder snapshot-manage 16fn6873-eb09-49h2-8c1f-91ccb83be95k ums-UDbA9gEOQmifh-MC00-fCg

Note Cinder snapshot-manage renames the snapshot on the HPE 3PAR array to a name that starts with oss- followed by a UUID, which is required for OpenStack/Cinder to locate the snapshot under its management.

Use the following command to unmanage a volume snapshot from OpenStack/Cinder and leave the snapshot intact on the HPE 3PAR array:

$cinder snapshot-unmanage <snapshot_id>

Where <snapshot_id> is the ID of the OpenStack/Cinder snapshot to unmanage. For example:

$cinder snapshot-unmanage 16ab6873-eb09-4522-8d0f-91ccb83be56k

Note Cinder snapshot-unmanage removes the OpenStack/Cinder snapshot from OpenStack, but the snapshot remains intact on the HPE 3PAR array.

The snapshot name has a ums- prefix, followed by an encoded UUID. This is required because HPE 3PAR has name length and character limitations.

Consistency groups
Support for consistency groups is available from the Liberty release through the Ocata release. Prior to consistency groups, every operation in Cinder happened at the volume level. Grouping "like" volumes allows for improved data protection, paves the way to maintaining consistency of data across multiple volumes, and allows operations to be performed on groups of volumes. The fundamental supported operations include creating a consistency group, deleting a consistency group (and all volumes inside it), adding volumes to a consistency group, removing volumes from a consistency group, snapshotting a consistency group, and creating a consistency group from a source cgsnapshot.

Consistency group CLI support is disabled by default. To access the consistency group related CLI commands, /etc/cinder/policy.json needs to be modified by removing group:nobody from the following lines:

"consistencygroup:create": "",
"consistencygroup:delete": "",
"consistencygroup:update": "",
"consistencygroup:get": "",


"consistencygroup:get_all": "",
"consistencygroup:create_cgsnapshot": "",
"consistencygroup:delete_cgsnapshot": "",
"consistencygroup:get_cgsnapshot": "",
"consistencygroup:get_all_cgsnapshots": "",

Creating a consistency group
After the policy file is correctly modified, you can create a consistency group either in the OpenStack Horizon Dashboard or by using the command line, as follows:

$cinder consisgroup-create [--name <name>] [--description <description>] <volume-type>

Where <volume-type> is the OpenStack/Cinder volume type name and --name and --description are optional. For example:

$cinder consisgroup-create --name "MyCG" --description "3parfc cg" 3parfc

To view the newly created consistency group, use the following command:

$cinder consisgroup-list

| 831b2099-d5ba-4b92-a097-8c08f9a8404f | available | MyCG |

Deleting a consistency group
An empty consistency group can be deleted either in the OpenStack Horizon Dashboard or by using the command line, as follows:

$cinder consisgroup-delete <consisgroup-id>

Where <consisgroup-id> represents the consistency group ID. For example:

$cinder consisgroup-delete 831b2099-d5ba-4b92-a097-8c08f9a8404f

If the group contains volumes, the --force flag can be added, as shown below.

Important The --force flag will fully delete all volumes in the group.

$cinder consisgroup-delete <consisgroup-id> --force

Where <consisgroup-id> represents the consistency group ID. For example:

$cinder consisgroup-delete 831b2099-d5ba-4b92-a097-8c08f9a8404f --force

Adding volumes
Add volumes to the consistency group by using the OpenStack Horizon Dashboard or using the command line, as shown below:

$cinder consisgroup-update <consisgroup-id> --add-volumes <uuid1,uuid2,......>

Where <consisgroup-id> represents the consistency group ID and <uuid1,uuid2,......> is a comma-separated list of OpenStack/Cinder volume IDs. For example:

$cinder consisgroup-update 831b2099-d5ba-4b92-a097-8c08f9a8404f --add-volumes 87ac88d4-e360-4bdb-b888-2208fbe282dd,9ce09c09-0a20-4bcd-bd1a-0eaa95dbe0cd,a4563466-61d4-4018-b586-f07f84c4010c


Removing volumes
Remove volumes from the consistency group by using the OpenStack Horizon Dashboard or using the command line, as shown below:

$cinder consisgroup-update <consisgroup-id> --remove-volumes <uuid1,uuid2,......>

Where <consisgroup-id> represents the consistency group ID and <uuid1,uuid2,......> is a comma-separated list of OpenStack/Cinder volume IDs. For example:

$cinder consisgroup-update 831b2099-d5ba-4b92-a097-8c08f9a8404f --remove-volumes 87ac88d4-e360-4bdb-b888-2208fbe282dd

Note The command above does not delete the volume; it only removes the volume from the consistency group.

Creating a cgsnapshot
To create a snapshot of a consistency group, use the following command:

$cinder cgsnapshot-create [--name <name>] [--description <description>] <consisgroup-id>

Where <consisgroup-id> represents the consistency group ID and --name and --description are optional. For example:

$cinder cgsnapshot-create --name "MyCGSnap" --description "Snapshot of MyCG" 831b2099-d5ba-4b92-a097-8c08f9a8404f

To view the newly created cgsnapshot, use the following command:

$cinder cgsnapshot-list

| 70c266bb-8255-4f2b-83cf-87f79d54dfb4 | creating | MyCGSnap |

Deleting a cgsnapshot
To delete a cgsnapshot, use the following command:

$cinder cgsnapshot-delete <cgsnapshot-id>

Where <cgsnapshot-id> is the consistency group snapshot ID. For example:

$cinder cgsnapshot-delete 70c266bb-8255-4f2b-83cf-87f79d54dfb4

Creating a consistency group from a cgsnapshot
To create a consistency group from a cgsnapshot, use the following command:

$cinder consisgroup-create-from-src --cgsnapshot <cgsnapshot-id>

Where <cgsnapshot-id> is the consistency group snapshot ID. For example:

$cinder consisgroup-create-from-src --cgsnapshot 70c266bb-8255-4f2b-83cf-87f79d54dfb4

The newly created consistency group can be treated as a new, completely separate group with no ties to its parent group or to its source cgsnapshot.

Cloning a consistency group
To clone a consistency group, use this command:

$cinder consisgroup-create-from-src --source-cg <consistencygroup-id>

Where <consistencygroup-id> is the ID of the consistency group to be cloned.

The newly created consistency group can be treated as a new, completely separate group with no ties to its parent group. It can be deleted at any time, as can its source group.


Generic volume groups
The consistency group feature of Cinder was replaced by generic volume groups in the Pike release. Consistency groups in Cinder only support consistent group snapshots and cannot easily be extended to serve other purposes. A tenant might want to put the volumes used by the same application together in a group so that it is easier to manage them together, and this group of volumes may or may not support consistent group snapshots. Generic volume groups were introduced to solve this problem. By decoupling the tight relationship between the group construct and the consistency concept, generic volume groups can be extended in the future to support other features. The functionality of a consistency group is achieved using generic volume-group APIs and by restricting some changes in volume types and group types. Similar to extra_specs for volume types, group types have group specifications (group specs) associated with them.

To create a group with consistent group capability through the generic volume group APIs, both the group specs of the group type and the extra specs of the associated volume types must have consistent_group_snapshot_enabled set to "<is> True". If this flag is not enabled, consistency at the storage level is not guaranteed and a Virtual Volume Set is not created on HPE 3PAR. Therefore, use of this flag is strongly recommended.

Create a group type
To create a group type, use the following command:

$cinder --os-volume-api-version 3.11 group-type-create <GroupTypeName>

For example:

$cinder --os-volume-api-version 3.11 group-type-create GrpType

Setting required flags as group specs and extra specs
Given the group type and volume types that are available, use the following commands to set the flags:

$cinder --os-volume-api-version 3.11 group-type-key GrpType set consistent_group_snapshot_enabled="<is> True"

$cinder type-key 3parfc-1 set consistent_group_snapshot_enabled="<is> True"

$cinder type-key 3parfc-2 set consistent_group_snapshot_enabled="<is> True"

Creating a generic volume group
You can create a generic volume group in the OpenStack Horizon Dashboard or using the command line, as shown below:

$cinder --os-volume-api-version 3.13 group-create [--name <name>] [--description <description>] GROUP_TYPE VOLUME_TYPES

Where GROUP_TYPE is the name or UUID of a group type, VOLUME_TYPES is a comma-separated list of OpenStack/Cinder volume type names, and --name and --description are optional. For example:

$cinder --os-volume-api-version 3.13 group-create --name MyGVG --description "3parfc gvg" GrpType 3parfc-1,3parfc-2

To view the newly created group, use the following command:

$cinder --os-volume-api-version 3.13 group-list

| 831b2099-d5ba-4b92-a097-8c08f9a8404f | available | MyGVG |

Note If a group is created to achieve point-in-time consistency at the storage level, the extra specs and group spec associated with this group (VolumeTypes and GroupType) must have the following flag set: consistent_group_snapshot_enabled = "<is> True"


Deleting a generic volume group
An empty generic volume group can be deleted either in the OpenStack Horizon Dashboard or using the command line, as shown below:

$cinder --os-volume-api-version 3.13 group-delete <group>

Where <group> represents the generic volume group UUID/name. For example:

$cinder --os-volume-api-version 3.13 group-delete MyGVG

If the group has volumes, the --delete-volumes flag can be added, as shown below.

Important Setting the --delete-volumes flag fully deletes all volumes in the group.

$cinder --os-volume-api-version 3.13 group-delete --delete-volumes MyGVG

Create a volume and add it to a group
To create a volume and add it to a group, use the following command:

$cinder --os-volume-api-version 3.13 create --volume-type VOLUME_TYPE --group-id GROUP_ID SIZE

Note When creating a volume and adding it to a group, the parameters VOLUME_TYPE and GROUP_ID must be provided, because a group can support more than one volume type.

Adding and removing volumes
Add volumes to the group or remove volumes from the group by using the OpenStack Horizon Dashboard or using the command line, as shown below:

$cinder --os-volume-api-version 3.13 group-update --add-volumes <uuid1,uuid2,......> --remove-volumes <uuid3,uuid4,......> <group>

Where <group> represents the group UUID/name and <uuid1,uuid2,......> is a comma-separated list of OpenStack/Cinder volume IDs. For example:

$cinder --os-volume-api-version 3.13 group-update --add-volumes 87ac88d4-e360-4bdb-b888-2208fbe282dd,9ce09c09-0a20-4bcd-bd1a-0eaa95dbe0cd --remove-volumes a4563466-61d4-4018-b586-f07f84c4010c MyGVG

Note Removing a volume does not delete the volume; it only removes the volume from the generic volume group.


Creating a snapshot for a group
Creating a snapshot for a generic volume group can be accomplished using the following command:

$cinder --os-volume-api-version 3.14 group-snapshot-create [--name <name>] [--description <description>] <group>

Where <group> represents the group UUID/name and --name and --description are optional. For example:

$cinder --os-volume-api-version 3.14 group-snapshot-create --name "MyGVGSnap" --description "Snapshot of MyGVG" MyGVG

To view the newly created group snapshot, use the following command:

$cinder --os-volume-api-version 3.14 group-snapshot-list

| 70c266bb-8255-4f2b-83cf-87f79d54dfb4 | creating | MyGVGSnap |

Deleting a group snapshot
To delete a group snapshot, use the following command:

$cinder --os-volume-api-version 3.14 group-snapshot-delete <group_snapshot>

Where <group_snapshot> is the group snapshot UUID/name. For example:

$cinder --os-volume-api-version 3.14 group-snapshot-delete MyGVGSnap

Create a group from a group snapshot
Use the following command to create a group from a group snapshot:

$cinder --os-volume-api-version 3.14 group-create-from-src --group-snapshot <group_snapshot> --name <name>

Where <group_snapshot> is the group snapshot UUID/name. For example:

$cinder --os-volume-api-version 3.14 group-create-from-src --group-snapshot MyGVGSnap --name MyGVG2

Cloning a group
To clone a group, use the following command:

$cinder --os-volume-api-version 3.14 group-create-from-src --source-group <source_group> --name <name>

Where <source_group> is the group to be cloned.

The newly created generic volume group can be treated as a new, completely separate group with no ties to its parent group or group snapshot. It can be deleted at any time, as can its source group.

For details on the use of generic volume groups, see docs.openstack.org/cinder/latest/admin/blockstorage-groups.html


Group Level Replication (Tiramisu)
Support for replication groups is built upon the generic volume groups.

Group-level replication supports creating, deleting, enabling, disabling, and failing over replication groups, as well as listing replication targets.

Note Replicated groups can be created only when hpe3par:group_replication is enabled for the volume type. If hpe3par:group_replication is enabled in the extra_specs of the volume type, the Cheesecake replication flow becomes invalid and volume-level replication is not supported. The default extra_specs of the replication remain unaltered until the user explicitly changes them.

For Tiramisu (replication group support), the minimum OS_VOLUME_API_VERSION is 3.38, as shown below:

$export OS_VOLUME_API_VERSION=3.38

To create a volume type, use the following command:

$cinder type-create <volume_type>

To set extra specs to the volume type, use the following commands:

$cinder type-key <volume_type> set replication_enabled="<is> True"

$cinder type-key <volume_type> set hpe3par:group_replication="<is> True"

$cinder type-key <volume_type> set consistent_group_snapshot_enabled="<is> True"

To verify the volume type, use the following command:

$cinder type-show <volume_type>

To create a group type, use the following command:

$cinder group-type-create <group_type>

To set extra_specs to the group type, use the following commands:

$cinder group-type-key <group_type> set consistent_group_snapshot_enabled="<is> True"

$cinder group-type-key <group_type> set group_replication_enabled="<is> True"

To verify the group type, use the following command:

$cinder group-type-show <group_type>

To create a group, use the following command:

$cinder group-create --name <group_name> <group_type> <volume_type>

To list a group, use the following command:

$cinder group-list

To verify the group, use the following command:

$cinder group-show <group_name>


To enable replication for a group on the primary storage, use the following command:

$cinder group-enable-replication <group_name>

Expected results: replication_status becomes “enabled”

To disable replication for a group, use the following command:

$cinder group-disable-replication <group_name>

Expected results: replication_status becomes “disabled”

To list replication targets, use the following command:

$cinder group-list-replication-targets <group_id>

To enable failover of a replication group to the secondary storage, use the following commands:

If secondary-backend-id is not specified, the secondary-backend-id configured in cinder.conf will be used:

$cinder group-failover-replication <group_name>

If secondary-backend-id is specified (not “default”), the specified backend_id will be used:

$cinder group-failover-replication <group_name> --secondary-backend-id <backend_id>

Expected results: replication_status becomes “failed-over”

To fail the group back to the primary storage, run the failover command again with the backend ID set to default, as follows:

$cinder group-failover-replication <group_name> --secondary-backend-id default

Expected results: replication_status becomes “enabled”

Volume compression
Support for volume compression was added in the Pike release. Consolidating data in a way that preserves the information while reducing the total amount of storage is the essence of storage compression. To increase space efficiency and consolidate stored data, a compression feature is available on HPE 3PAR Thinly Provisioned Virtual Volumes (TPVV) and Thinly Deduplicated Virtual Volumes (TDVV). Compression is a feature of HPE 3PAR OS 3.3.1. Volumes can be marked for compression during creation or after a retype (with compression enabled on the destination volume type).

For more information on restrictions and considerations when compression is enabled in volume type, see extra_specs restrictions.

Volume retype
Volume retype is available beginning with the Juno release. Retype works only if the volume remains on the same HPE 3PAR array. This allows a volume retype, for example, from a "silver" volume type to a "gold" volume type. The HPE 3PAR OpenStack drivers modify the volume's Snap CPG, provisioning type, persona, and QoS settings, as needed, to make the volume behave appropriately for the new volume type. In the Kilo release and later, separately configured backends with CPGs (as pools) should be used to allow the scheduler to select the appropriate CPG. Volume retype also requires that the Dynamic Optimization license be enabled on the HPE 3PAR array.

Use caution when using the optional --migration-policy on-demand option, because this falls back to copying the entire volume (using dd over the network) to the Cinder node and then to the destination HPE 3PAR array. The Cinder node also has to have enough space available to store the entire volume during the migration. Hewlett Packard Enterprise recommends that you use the default --migration-policy never when retype is used.

Page 32: OpenStack HPE 3PAR Block Storage Driver Configuration …• Cinder’s cinder.conf settings of max_over_subscription_ratio is used for over subscription of thin provisioned volumes

Technical white paper Page 32

Note Volume retype is not allowed if the volume has snapshots and the retype requires a change to the Snap CPG or User CPG. The volume_backend_name in cinder.conf must be the same between the source and destination volume types when --migration-policy is set to never. This is the default and recommended retype method.

Volume replication
Volume replication (Cheesecake) continues to work as before, alongside group-level replication (Tiramisu).

Before using Cinder’s implementation of replication, make sure the HPE 3PAR arrays have the proper Remote Copy licenses and that they have been paired as Remote Copy Targets. For more information, see Configuring HPE 3PAR for volume replication in the Appendix.

Cinder’s replication is host-driven, meaning all volume_type replicated volumes created on a host will be automatically replicated to the configured secondary targets. HPE 3PAR supports both asynchronous periodic and synchronous replication. To begin replicating, some cinder.conf entries must be created, as well as a replicated volume type that points to the newly configured backend.

As shown in the sample cinder.conf file below, the only addition required to enable replication on a backend is at least one replication_device entry. If multiple replication targets are desired, add an additional replication_device entry for each target. The following fields are expected for each replication_device:

• backend_id

This field must be a valid HPE 3PAR Target Name, which needs to be independently configured for each array.

• replication_mode

This field must be periodic or sync. Depending on the value, the target can only be used for asynchronous periodic or synchronous replication.

– Periodic

Volumes resynchronize automatically at a set interval. I/O times are better than those in synchronous mode, but volumes are not always in sync. The minimum sync interval is five minutes.

– Sync

Every write to the primary array is instantly written to the secondary arrays. Data integrity is much greater, but at the cost of I/O times.

• cpg_map

Lists the HPE 3PAR CPGs on the primary system and the CPG each maps to on the secondary system.

Note Regarding formatting, a single CPG map entry consists of the primary CPG, a colon, and the secondary CPG, with no spaces. For example, a primary CPG of “REMOTE_COPY_CPG2” and a secondary CPG of “REMOTE_COPY_DEST2” would be formatted as “REMOTE_COPY_CPG2:REMOTE_COPY_DEST2”. If other CPGs need to be mapped, separate each entry with a single space.

• hpe3par_api_url

The API URL for the secondary system.

• hpe3par_username

The username for the secondary system.

• hpe3par_password

The password for the secondary system.


• san_ip

The IP address used for SSH on the secondary system.

• san_login

The username used for SSH on the secondary system.

• san_password

The password used for SSH on the secondary system.

• hpe3par_iscsi_ips

This field is only required when working with the iSCSI driver and should contain the iSCSI IP addresses for the secondary system. If more than one iSCSI IP address is provided for multipath support, separate them with a single space (unlike the primary system, which uses commas).
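The cpg_map format described above (space-separated primary:secondary pairs) can be illustrated with a small parser. This is a sketch for clarity only; it is not the driver’s actual implementation, and the function name is hypothetical.

```python
# Illustrative only: parse a cpg_map string of space-separated
# "primaryCPG:secondaryCPG" pairs into a dict.
def parse_cpg_map(cpg_map):
    mapping = {}
    for entry in cpg_map.split():
        primary, secondary = entry.split(":")
        mapping[primary] = secondary
    return mapping

pairs = parse_cpg_map("REMOTE_COPY_CPG2:REMOTE_COPY_DEST2 CPG_PRI:CPG_SEC")
```

The resulting dictionary maps each primary CPG to its replication target CPG on the secondary system.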

The following sample of cinder.conf entries shows replication devices.

# For Fibre Channel

[3parfcrep]
hpe3par_api_url = http://10.10.20.241:8008/api/v1
hpe3par_username = <username>
hpe3par_password = <password>
hpe3par_debug = False
san_ip = 10.10.20.241
san_login = <san_username>
san_password = <san_password>
volume_backend_name = 3parfcrep
hpe3par_cpg = REMOTE_COPY_CPG2,CPG_PRI
volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
replication_device = backend_id:eos16,
    replication_mode:periodic,
    cpg_map:REMOTE_COPY_CPG2:REMOTE_COPY_DEST2 CPG_PRI:CPG_SEC,
    hpe3par_api_url:http://10.50.3.37:8008/api/v1,
    hpe3par_username:<username>,
    hpe3par_password:<password>,
    san_ip:10.50.3.37,
    san_login:<san_username>,
    san_password:<san_password>

# For iSCSI

[3pariscsirep]
hpe3par_api_url = http://10.10.20.241:8008/api/v1
hpe3par_username = <username>
hpe3par_password = <password>
hpe3par_debug = False
san_ip = 10.10.20.241
san_login = <san_username>
san_password = <san_password>
volume_backend_name = 3pariscsirep
hpe3par_cpg = REMOTE_COPY_CPG2,CPG_PRI
hpe3par_iscsi_ips = 10.50.3.11,10.50.3.12
volume_driver = cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
replication_device = backend_id:eos16-sync,
    replication_mode:sync,
    cpg_map:REMOTE_COPY_CPG2:REMOTE_COPY_DEST2 CPG_PRI:CPG_SEC,
    hpe3par_api_url:http://10.50.3.37:8008/api/v1,
    hpe3par_username:<username>,
    hpe3par_password:<password>,
    san_ip:10.50.3.37,
    san_login:<san_username>,
    san_password:<san_password>,
    hpe3par_iscsi_ips:10.50.3.42 10.50.3.43

Volume type extra_specs
Required extra_spec values are as follows:

replication_enabled = <is> True

Important Use the syntax above exactly as provided, including case, or volume replication will not work.

Optional extra_spec values:

replication:mode = sync|periodic

replication:sync_period = 900

The replication:sync_period value applies only when replication:mode is periodic. The default replication mode is periodic, and the default sync_period for that mode is 900 seconds.
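As a sketch of the rules above (illustrative only; the helper name is hypothetical and this is not a Cinder API), a replicated volume type’s extra_specs can be checked like this:

```python
# Hypothetical helper illustrating the extra_spec rules above; not Cinder code.
def replication_specs_ok(extra_specs):
    # replication_enabled must be exactly "<is> True", including case.
    if extra_specs.get("replication_enabled") != "<is> True":
        return False
    # replication:mode is optional; periodic is the default.
    mode = extra_specs.get("replication:mode", "periodic")
    return mode in ("sync", "periodic")

specs = {
    "replication_enabled": "<is> True",
    "replication:mode": "periodic",
    "replication:sync_period": "900",  # only meaningful for periodic mode
}
ok = replication_specs_ok(specs)
```

Note that a lowercase value such as "<is> true" would fail the check, matching the case-sensitivity warning above.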

After modifying cinder.conf and creating a volume type that supports HPE 3PAR replication, any volumes created under that type are automatically replicated.

Replication status of a host
Because Cinder’s implementation of replication is host-based, you can view the replication state of the host to determine if it is enabled, disabled, failed-over, or in an error state, as shown in the example below using the --withreplication flag.

$ cinder service-list --withreplication


Figure 6. Sample output from service-list --withreplication command

Creating a replicated volume
The example below assumes a volume type of 3parfcreplicated was created and the proper extra_specs were added to support replication.

$ cinder create --volume-type 3parfcreplicated 10

| id | 808482df-3bae-47d1-8c66-6a41c6cdf796 |

$ cinder show 808482df-3bae-47d1-8c66-6a41c6cdf796

| replication_status | enabled |

Doing a show on the volume indicates whether the volume is being replicated. Depending on the configuration, the volume and its data are synchronized from the primary to the secondary systems either periodically or synchronously.

Note Volume Retype with Replication is supported. Changing the volume-type to one that includes replication_enabled = <is> True (assuming this parameter was not present before) results in adding a secondary copy to a volume. Changing the volume-type to one that no longer includes replication_enabled = <is> True results in removing the secondary copy, while preserving the primary copy.

Listing valid replication targets
Replication targets are reported as host capabilities under the replication_targets field. These targets are helpful when administrators want to see the targets available for replication, or when they need to specify a target for host fail-over.

$ cinder get-capabilities alex-devstack@3parfcrep


Figure 7. Sample output from get-capabilities command

Failing over a host
When a host is failed over through Cinder’s failover-host command, all replicated volumes on the host are individually transitioned to the specified secondary target. If the primary system is entirely offline, the failover works as intended. If there are any non-replicated volumes on the host at the time failover is requested, they will not be available on the secondary system. These volumes are forced into an error state, but they are available upon failing back.

$ cinder failover-host --backend_id <3PAR-target> <primary@host>

Where <3PAR-target> is a value obtained from the replication_targets field of Cinder’s get-capabilities command and <primary@host> is the host that is intended to be failed over.

$ cinder service-list --withreplication

This command shows the new status of the replicated host.

Figure 8 shows an example of failing over a host using the get-capabilities command as a reference.

Figure 8. Sample output showing a failed-over state from service-list --withreplication command

Caution
• When a host is failed over, detach all the volumes from instances and immediately attach them again to the respective instances to prevent any data loss.

• If the primary HPE 3PAR backend is in a failed-over state, new volume creation on a secondary HPE 3PAR system is not allowed. Trying to create a new volume ends in an error state because the Remote Copy link is either broken or not available.

• Freeze (freeze-host) and Thaw (thaw-host) operations are not currently supported.


Failing back a host
Before a host’s replication status can be returned to “normal,” work must be done on the HPE 3PAR backend. All Remote Copy groups on the host must be recovered, resynchronized, and restarted. After this is done, the failback command can be issued.

Important If these steps are not performed on the backend prior to failing back, the command will not be successful. The host will remain in a failed-over state.

Using the --backend_id value of default triggers the failback command as follows:

$ cinder failover-host --backend_id default <primary@host>

Figure 9. Sample output showing a disabled state from service-list --withreplication command

If all the proper backend steps are performed, the host’s replication_status returns to an enabled state, and the host can be failed over again at a later date. The volumes resume replication, and all data is duplicated to the secondary targets.

Note Before failing back a host, detach all the volumes from instances. Volumes can be attached again to respective instances after the fail-back operation is successful.

Security improvements
CHAP support
Challenge-Handshake Authentication Protocol (CHAP) support was added to the HPE 3PAR iSCSI driver in the Juno release as one-way authentication (it sets the CHAP initiator on the HPE 3PAR array). The hpe3par_iscsi_chap_enabled option in cinder.conf must be set to True to enable iSCSI CHAP support. The current HPE 3PAR host will have the CHAP setting automatically added the next time an iSCSI volume is attached.

Configurable SSH Host Key Policy and known hosts file
Both OpenStack Cinder and the HPE 3PAR client were enhanced in the Juno release to allow configuring the SSH Host Key Policy and known hosts file. This enhancement adds configuration options for ssh_hosts_key_file and strict_ssh_host_key_policy in cinder.conf.

The strict_ssh_host_key_policy option defaults to False. When False, Cinder and the HPE 3PAR client use the auto-add policy, like previous versions. Auto-add allows new hosts to be added, but raises an exception if a host that was already known starts sending a different host key. When strict_ssh_host_key_policy=True, Cinder and the HPE 3PAR client use the reject policy. When the reject policy is used, the host must already be recorded in the known hosts file and match the recorded host key.


The ssh_hosts_key_file option defaults to $state_path/ssh_known_hosts (state_path is a configuration option that defaults to /var/lib/cinder). This setting allows you to specify the known hosts file to use for both Cinder and HPE 3PAR client SSH connections. The previous default was to use the system host keys. The client will try to create the configured file if it does not exist.

If strict_ssh_host_key_policy=True, this file needs to be pre-populated with trusted known host keys.

When using strict_ssh_host_key_policy=False (the default), new hosts are appended to the file automatically.

Suppress requests library SSL certificate warnings
The HPE 3PAR client uses the Python requests library to communicate with the array. If the SSL certificate is invalid or out of date, the following warning message is logged to Cinder’s volume log on every client request:

WARNING py.warnings /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:791: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: urllib3.readthedocs.org/en/latest/security.html InsecureRequestWarning)

Cinder has added the suppress_requests_ssl_warnings option to the cinder.conf file to suppress these warning messages. The default value is False, but the option can be set to True to suppress the warning. This is a valid warning and should not be disabled in a production environment.

Support for Containerized Stateful Services
The HPE 3PAR drivers are supported by the ClusterHQ Flocker open source technology. Details regarding the setup and usage of Flocker are available at github.com/ClusterHQ/flocker/tree/master/docs.

Additional HPE 3PAR and HPE Primera features
The HPE 3PAR OpenStack drivers can support additional features by enabling them on any HPE 3PAR array with a valid license; for example: Adaptive Optimization and Data-At-Rest Encryption. To take advantage of these features, configure the feature on the HPE 3PAR array and configure the backend to be used by OpenStack.

Adaptive Optimization
Adaptive Optimization (AO) is configured at the storage pool or CPG level. This means if you have AO configured and the HPE 3PAR Cinder drivers are configured to provision storage on those CPGs, the CPGs will be AO enabled. The CPGs are configured using the hpe3par_cpg entry in the cinder.conf file.

Data-At-Rest Encryption
Data-At-Rest Encryption is an array-level configuration providing encryption on the hard disk. If the HPE 3PAR array has Self-Encrypting Disks (SEDs) and a valid license for the feature, the OpenStack Cinder volumes take advantage of the array-based encryption.

Known limitations
• Adding a physical copy to a Remote Copy group is not an officially supported feature. Therefore, the creation of a cloned volume is expected to fail for a replicated volume (when replication is enabled for the source volume).

• Volume encryption is not supported for the HPE 3PAR FC Block Storage driver.

• If you specify replication_enabled = <is> True as an extra_spec with one or more of the QoS settings, the VVS with QoS settings is created on the primary HPE 3PAR system only. The VVS is not present on the secondary HPE 3PAR system.


Specify NSP for FC Bootable Volume
This capability was added in the Train release. On a system connected to HPE 3PAR via FC where the multipath setting is not used in cinder.conf, creating a bootable volume fails intermittently with the following error: Fibre Channel volume device not found.

This happens when a zone is created using a second or later target from the HPE 3PAR backend. In that case, the HPE 3PAR client code picks up the first target to form the initiator target map, as illustrated in the following example.

Sample output of showport command:

$ showport -sortcol 6

N:S:P Mode      State ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol Partner FailoverState
0:1:1 target    ready 2FF70002AC002DB6 20110002AC002DB6   host FC       -       -
0:1:2 target    ready 2FF70002AC002DB6 20120002AC002DB6   host FC       1:1:2   none
1:1:1 initiator ready 2FF70002AC002DB6 21110002AC002DB6   rcfc FC       -       -
1:1:2 target    ready 2FF70002AC002DB6 21120002AC002DB6   host FC       0:1:2   none
2:1:1 initiator ready 2FF70002AC002DB6 22110002AC002DB6   rcfc FC       -       -
2:1:2 target    ready 2FF70002AC002DB6 22120002AC002DB6   host FC       3:1:2   none
3:1:1 target    ready 2FF70002AC002DB6 23110002AC002DB6   host FC       -       -
3:1:2 target    ready 2FF70002AC002DB6 23120002AC002DB6   host FC       2:1:2   none

Suppose a zone is created using targets "2:1:2" and "3:1:2" from the above output. The initiator target map is created using target "0:1:1" only. In this case, the path is not found, and the bootable volume creation fails.

To avoid this failure, the user can specify the target in cinder.conf (within the HPE 3PAR backend section) as follows:

[3pariscsi]
hpe3par_api_url = http://10.50.3.7:8008/api/v1
hpe3par_username = <user_name>
hpe3par_password = <password>
... <other parameters> ...
hpe3par_target_nsp = 3:1:2

Using the specified NSP, the corresponding WWN information is fetched. The initiator target map is then created using that WWN information, and the bootable volume is created successfully.

Note If the hpe3par_target_nsp option is not specified in cinder.conf, the original flow is executed: the first target is picked, and the bootable volume creation might fail.
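The selection logic can be sketched as follows, using Port_WWN values from the showport output above. This is illustrative only; the function name and dictionary are hypothetical, not the driver’s code.

```python
# Map each host-facing target NSP to its Port_WWN (values taken from the
# showport sample above).
TARGET_PORT_WWNS = {
    "0:1:1": "20110002AC002DB6",
    "3:1:1": "23110002AC002DB6",
    "3:1:2": "23120002AC002DB6",
}

def target_wwn(nsp, port_wwns):
    """Return the WWN for the configured hpe3par_target_nsp, if given;
    otherwise fall back to the first target (the original flow)."""
    if nsp is not None:
        return port_wwns[nsp]
    return next(iter(port_wwns.values()))

wwn = target_wwn("3:1:2", TARGET_PORT_WWNS)
```

With hpe3par_target_nsp = 3:1:2, the initiator target map is built from that port’s WWN instead of the first listed target.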

Peer Persistence support
For an HPE 3PAR backend configured for replication, only Active/Passive replication is currently supported by HPE 3PAR in OpenStack. When a failover happens, Nova does not support force-detaching a volume from the dead primary backend and re-attaching it to the secondary backend; manual intervention by the storage engineer is required.

To overcome this, support for Peer Persistence was added in the Train release. On a system with Peer Persistence configured, when a replicated volume is created and attached to an instance, a VLUN is created automatically on the secondary backend in addition to the primary backend, so that a failover is seamless.


For Peer Persistence support, perform the following steps:

1. Enable multipath by adding the following lines:

a. In /etc/nova/nova.conf, under the [libvirt] section:

iscsi_use_multipath=True

b. In /etc/nova/nova-cpu.conf, under the [libvirt] section:

volume_use_multipath = True

2. Set replication mode as "sync"

3. Configure a quorum witness server.

Specify the IP address of the quorum witness server in cinder.conf (within the HPE 3PAR backend section) as shown below:

[3pariscsirep]
hpe3par_api_url = http://10.50.3.7:8008/api/v1
hpe3par_username = <user_name>
hpe3par_password = <password>
... <other parameters> ...
replication_device = backend_id:CSIM-EOS12_1611702,
    replication_mode:sync,
    quorum_witness_ip:10.50.3.192,
    hpe3par_api_url:http://10.50.3.22:8008/api/v1,
    ... <other parameters> ...

Summary
Hewlett Packard Enterprise is a Platinum member of the OpenStack Foundation. Hewlett Packard Enterprise has integrated OpenStack open source cloud platform technology into its enterprise solutions to enable customers and partners to build enterprise-grade private, public, and hybrid clouds.

The Train release continues Hewlett Packard Enterprise’s contributions to the Cinder project, enhancing core Cinder capabilities as well as extending the HPE 3PAR Block Storage driver. The focus continues to be on adding enterprise functionality, such as volume replication, image caching, and enhanced volume cloning. The HPE 3PAR Block Storage drivers support the OpenStack technology across both iSCSI and Fibre Channel protocols.


Appendix
Creating a goodness function
There are several equations that can be used as a starting point for a useful goodness_function. Each equation distributes goodness values differently, and choosing the correct starting equation depends on an administrator’s requirements. Also, because goodness values must be between 0 and 100, the following equations use minimum and maximum functions to enforce those limits. For a general description, see Block Storage scheduler configuration with driver filter and weigher.

Determining a good value for maxIOPS
The following equations assume you have a good reference value for maximum IOPS. The maximum IOPS a backend supports depends on which HPE 3PAR backend is being used. Some sample values for maximum IOPS are as follows.

From the HPE 3PAR 8000 StoreServ Storage data sheet:

“Remove bottlenecks with a flash-optimized, scale-out architecture delivering over 1 million IOPS and over 20 GB/s”—for 8540 AFA.

From HPE 3PAR StoreServ 20000 Storage on hpe.com:

“HPE 3PAR StoreServ 20000 Storage is an enterprise flash array with more than 3.8M IOPS”—for 20450/20850 AFA.

The values of 1 million and 3.8 million IOPS can be good starting points for setups utilizing those backends. For backends not listed above, their published values can likewise serve as a starting point. Another approach to determining the best maximum IOPS value is to test the performance of your setup directly.

Another consideration when deciding on a maximum IOPS value is the number of CPGs that will be in use. The maximum IOPS should be set at the CPG level, not the backend level. For example, if you know a backend has a maximum of 250,000 IOPS, make sure that the sum of all assigned CPG maximum IOPS does not exceed 250,000 IOPS.
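The budgeting rule above can be expressed as a quick check (illustrative only; the function and CPG names are hypothetical):

```python
# Check that the per-CPG maxIOPS assignments stay within the backend's total.
def cpg_iops_within_budget(backend_max_iops, cpg_max_iops):
    return sum(cpg_max_iops.values()) <= backend_max_iops

# Two CPGs sharing a 250,000 IOPS backend.
ok = cpg_iops_within_budget(250000, {"CPG_GOLD": 150000, "CPG_SILVER": 100000})
```

Here the two CPG budgets sum to exactly the backend maximum, so the assignment is valid; raising either CPG value would fail the check.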

Polynomial
This equation produces goodness values for a backend that decrease in a polynomial fashion as the current IOPS on that backend increase. The equation requires values for maxIOPS and a smoothing factor (smooth). The maxIOPS value can be specified by an administrator in QoS_specs. It can also be hard-coded, if necessary. The smoothing factor is a hard-coded value that can be modified by an administrator to adjust the steepness of the polynomial decline.

Expressed in cinder.conf syntax, the equation is:

goodness_function = max(min(-(smooth/qos.maxIOPS) * capabilities.throughput ^ 2 + 100, 100), 0)

A slight modification can be made to the polynomial equation to change the point where the goodness value of a backend begins to decrease. This version is recommended because it gives more control over the inflection point.

The new vertical value is used to change where the goodness value begins to drop off:

goodness_function = max(min(-(smooth/qos.maxIOPS) * capabilities.throughput ^ 2 + 100 + vertical, 100), 0)

Note


The previous cinder.conf examples include capping the minimum and maximum goodness values at 0 and 100, respectively. The smoothing and vertical values should be hard-coded values that are decided upon beforehand by the administrator.

Figure 10 is a graphical representation of the goodness values that the recommended polynomial equation produces. A maxIOPS of 25,000 and a smoothing value of 0.001 were used. A vertical shift of 250 was also used. The dashed red line represents the point at which IOPS is 80% of the maximum possible IOPS.

Figure 10. Goodness value in relation to IOPS using the recommended polynomial equation

Figure 11 is a graphical representation of the goodness values that the polynomial equation produces. A maxIOPS of 25,000 and a smoothing value of 0.004 were used.

Figure 11. Goodness value in relation to IOPS using the polynomial equation
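The polynomial form can be sketched in Python to see how the clamping behaves. This is illustrative only: Cinder evaluates the goodness_function string itself, not Python code, and the function name here is hypothetical.

```python
# Polynomial goodness, clamped to [0, 100], mirroring the cinder.conf formula.
def polynomial_goodness(throughput, max_iops, smooth, vertical=0):
    raw = -(smooth / max_iops) * throughput ** 2 + 100 + vertical
    return max(min(raw, 100), 0)

# With Figure 11's parameters (maxIOPS 25,000, smooth 0.004, no vertical shift):
polynomial_goodness(0, 25000, 0.004)      # idle backend: goodness 100
polynomial_goodness(12500, 25000, 0.004)  # half of maxIOPS: goodness ~75
polynomial_goodness(25000, 25000, 0.004)  # at maxIOPS: goodness ~0
```

The optional vertical argument shifts the curve as in the recommended variant, delaying the point where goodness starts to drop.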

Linear
This equation produces goodness values for a backend that decrease in a linear fashion as the current IOPS on that backend increase. This equation requires values for maxIOPS and minIOPS, both of which can be specified by an administrator in QoS_specs. The values can also be hard-coded, if necessary.


Expressed in cinder.conf syntax, the general equation is:

goodness_function = max(min(100 * ((qos.maxIOPS - capabilities.throughput) / (qos.maxIOPS - qos.minIOPS)), 100), 0)

Note The cinder.conf example above includes capping the minimum and maximum goodness values at 0 and 100, respectively.
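The linear form can be sketched the same way (illustrative only; Cinder evaluates the goodness_function string, not Python, and the function name is hypothetical):

```python
# Linear goodness, clamped to [0, 100], mirroring the cinder.conf formula.
def linear_goodness(throughput, max_iops, min_iops):
    raw = 100 * (max_iops - throughput) / (max_iops - min_iops)
    return max(min(raw, 100), 0)

# With maxIOPS 25,000 and minIOPS 20,000, goodness stays at 100 until the
# current IOPS pass minIOPS, then falls linearly to 0 at maxIOPS.
linear_goodness(10000, 25000, 20000)  # below minIOPS: clamped to 100
linear_goodness(22500, 25000, 20000)  # halfway between min and max: 50
linear_goodness(25000, 25000, 20000)  # at maxIOPS: 0
```

Choosing minIOPS therefore controls the inflection point where a backend starts losing goodness, as the figures below illustrate.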


Figure 12 is a graphical representation of the goodness values the linear equation produces. A maxIOPS of 25,000 and a minIOPS of 0 were used in this example.

Figure 12. Goodness value in relation to IOPS when using linear equation

Figure 13 is a graphical representation of the goodness values the linear equation produces. A maxIOPS of 25,000 and a minIOPS of 20,000 were used for this example. Altering minIOPS determines the inflection point for when goodness values begin decreasing from 100. The dashed red line represents the point at which IOPS is 80% of the maximum possible IOPS.

Figure 13. Goodness value in relation to IOPS when using a linear equation with minIOPS set to 20000


Exponential
This equation produces goodness values for a backend that decrease exponentially as the current IOPS on that backend increase. The equation requires values for maxIOPS and a smoothing factor (smooth). The maxIOPS value can be specified by an administrator in QoS specs. It can also be hard-coded, if desired. The smoothing factor is a hard-coded value that can be modified by an administrator to adjust the steepness of the exponential decline.

Expressed in cinder.conf syntax, the equation is:

goodness_function = max(min(100 * (1 + smooth/qos.maxIOPS) ^ -capabilities.throughput, 100), 0)

Note The cinder.conf example includes capping the minimum and maximum goodness values at 0 and 100, respectively. The smoothing value should be a hard-coded value that is decided upon beforehand by the administrator.

Figure 14 is a graphical representation of the goodness values that the exponential equation produces. A maxIOPS of 25,000 and a smoothing value of 1.4 were used.

Figure 14. Goodness value in relation to IOPS using the exponential equation
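The exponential form can be sketched in Python as well (illustrative only; the function name is hypothetical and Cinder evaluates the goodness_function string itself):

```python
# Exponential goodness, clamped to [0, 100], mirroring the cinder.conf formula.
def exponential_goodness(throughput, max_iops, smooth):
    raw = 100 * (1 + smooth / max_iops) ** (-throughput)
    return max(min(raw, 100), 0)

# With Figure 14's parameters (maxIOPS 25,000, smooth 1.4):
exponential_goodness(0, 25000, 1.4)      # idle backend: goodness 100
exponential_goodness(25000, 25000, 1.4)  # at maxIOPS: roughly 100 * e**-1.4
```

Larger smooth values steepen the decay, penalizing loaded backends more aggressively.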

Failing back an HPE 3PAR backend
To properly resynchronize an HPE 3PAR volume, the data written to the secondary array while the primary was down must be written back to the primary, and the Remote Copy group must be resumed to allow primary-to-secondary replication.

Complete the following steps for each volume that needs resynchronization:

1. Recover the Remote Copy group from the backup system.

This will put the Remote Copy group’s disaster recovery (DR) state in Recover.

2. Restore Remote Copy group to normal operation.


The group will have a DR state of Started.

3. If the above steps did not automatically start the Remote Copy group, start it now.

At this point the group should be replicating normally.

After every volume on the Cinder host has been resynchronized and the Remote Copy groups have been resumed to their natural direction, the failback command can be successfully issued.

Configuring HPE 3PAR for volume replication
The HPE 3PAR arrays used for replication must be properly licensed with Remote Copy and must be paired beforehand. The following example details how to configure two arrays in the simplest configuration. For more details and advanced configurations, see the official HPE 3PAR Remote Copy Software User Guide.

In this example, two arrays are used: eos01 (10.10.20.241) and eos16 (10.50.3.37). eos01 is the primary array; eos16 is the secondary array. Before attempting to configure the HPE 3PAR arrays for Remote Copy, make sure both the primary and secondary arrays have at least one RCIP or RCFC port correctly configured. An example configuration is shown below:

Primary: Port 0:3:1 - RCIP - 10.10.220.190

Secondary: Port 0:3:1 - RCIP - 10.10.120.209

After the ports are configured, the following effectively pairs the two arrays for Remote Copy:

• Primary system (eos01)

– startrcopy

– creatercopytarget eos16 IP 0:3:1:10.10.120.209

– createdomain DOMAIN

• Secondary system (eos16)

– startrcopy

– creatercopytarget eos01 IP 0:3:1:10.10.220.190

– createdomain DOMAIN

• Primary system

– createcpg -domain DOMAIN -t r1 -ha mag SRC_CPG

– createcpg -domain DOMAIN -t r1 -ha mag SRC_SNAP_CPG

• Secondary system

– createcpg -domain DOMAIN -t r1 -ha mag DEST_CPG

– createcpg -domain DOMAIN -t r1 -ha mag DEST_SNAP_CPG

Note that each target can only be used for a single mode at a time. Asynchronous periodic and synchronous replication groups cannot run together; run only one or the other.

You are not required to specify a replication mode at group create time; rather, the group that is created first on the target determines the type going forward. If a synchronous replication group is the first group created, only other synchronous groups are allowed. If you want to switch the target to asynchronous periodic, all current groups must be removed and the first group created must be of the new type.

Multiple modes on the same HPE 3PAR array are supported, but they require different Remote Copy Targets for each mode. A target should be created with a replication mode in mind, and only be used for that mode.
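The mode-exclusivity rule above can be captured in a small check (illustrative only; the function name is hypothetical):

```python
# A Remote Copy target runs a single replication mode; the first group
# created on the target fixes the mode for every subsequent group.
def can_add_group(existing_group_modes, new_mode):
    return all(mode == new_mode for mode in existing_group_modes)

can_add_group([], "sync")                # first group: any mode allowed
can_add_group(["sync", "sync"], "sync")  # same mode: allowed
can_add_group(["sync"], "periodic")      # mixed modes: not allowed
```

Switching a target to the other mode therefore requires removing all existing groups first, as described above.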



© Copyright 2016-2019 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other third-party marks are property of their respective owners.

4AA6-5209ENW, November 2019, Rev. 9

Resources and additional links HPE Resources HPE 3PAR Storage hpe.com/us/en/storage/3par.html

HPE Primera Storage

OpenStack resources OpenStack openstack.org/

Horizon: The OpenStack Dashboard Project docs.openstack.org/horizon/latest/

OpenStack: HPE 3PAR and HPE Primera Fibre Channel and iSCSI drivers docs.openstack.org/cinder/train/configuration/block-storage/drivers/hpe-3par-driver.html

OpenStack StackAlytics (Train) stackalytics.com/?module=cinder-group&metric=commits&release=train

OpenStack documentation OpenStack documentation (Rocky) docs.openstack.org/rocky/

OpenStack documentation (Stein) docs.openstack.org/stein/

OpenStack documentation (Train) docs.openstack.org/train

OpenStack Train Administrator Guides docs.openstack.org/train/admin/

Cinder, the OpenStack Block Storage Service docs.openstack.org/cinder/latest/

OpenStack Fibre Channel Zone Manager docs.openstack.org/cinder/train/configuration/block-storage/fc-zoning.html

OpenStack Configuration References/Guides Rocky release docs.openstack.org/rocky/configuration/

Stein release docs.openstack.org/stein/configuration/

Train release docs.openstack.org/train/configuration/

Learn more at hpe.com/storage