
EMC Symmetrix with Microsoft Windows Server 2003 and 2008

Best Practices Planning

Abstract

This white paper outlines the concepts, procedures, and best practices associated with deploying Microsoft Windows Server 2003 and 2008 with EMC® Symmetrix® DMX-3 and DMX-4, and Symmetrix V-Max™ storage.

October 2009

Copyright © 2009 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication require an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Part Number h6665


Table of Contents

Executive summary
Introduction
    Audience
Windows storage connectivity
    Symmetrix front-end director flags
        Additional director flag information
    SCSI-3 persistent group reservations
    LUN mapping and masking
Connectivity recommendations
    Multipathing
Symmetrix storage
    Understanding hypervolumes
    Understanding metavolumes
    Metavolume configurations
    Gatekeepers
    RAID options
    Disk types
    Virtual Provisioning
Discovering storage
    Windows Server 2008 SAN Policy
    Offline Shared
    Automount
Initializing and formatting storage
    Disk types
        Master Boot Record (MBR)
        GUID partition table (GPT)
        Basic disks
        Dynamic disks
        Veritas Storage Foundation for Windows
        Disk type recommendations
        Large volume considerations
    Partition alignment
        Partition alignment prior to Windows Server 2003 SP1
        Partition alignment with Windows Server 2003 SP1 or later versions
        Partition alignment with Windows Server 2008
        Querying alignment
    Formatting
        Allocation unit size
        Quick format vs. regular format
        Windows Server 2003 format
        Windows Server 2008 format
Volume expansion
    Striped metavolume expansion example
Symmetrix replication technologies and management tools
    EMC TimeFinder family
    EMC SRDF family
    Open Replicator overview
    Symmetrix Integration Utilities
    EMC Replication Manager
Managing storage replicas
    Symmetrix device states
        Read write (RW)
        Write disabled (WD)
        Not ready (NR)
    Managing the mount state of storage replicas
Conclusion
References

Executive summary

The success of deploying and managing storage in Windows environments is heavily dependent on utilizing vendor-qualified and vendor-supported configurations while ensuring the proper processes and procedures are used during implementation. Supported configurations and defined best practices are continually changing, which requires a high level of due diligence to ensure new, as well as existing, environments are properly deployed.

EMC Symmetrix V-Max and Symmetrix DMX storage systems undergo rigorous qualifications to ensure supported topologies throughout the storage stack (operating system, driver, host bus adapter, firmware, switch, and so on) provide the highest levels of stability and performance available in the industry. Additionally, best practices and recommendations are continually tested and re-evaluated to ensure deployments are optimized as new operating system versions, patches, and features are made available. EMC provides a myriad of delivery mechanisms for relaying the information found during qualification and testing, including documentation and white papers, support forums, technical advisory notifications, and extensive support matrices as qualified by EMC's quality assurance organizations, including EMC E-Lab.

By combining best-of-breed software and hardware technologies like the Symmetrix DMX and Symmetrix V-Max with thorough qualification, support, and documentation facilities, EMC provides the most comprehensive set of tools to ensure five 9s availability in the most demanding environments.

Introduction

Critical information for deploying Windows-based servers on Symmetrix storage is available today but can be spread across various white papers, technical documentation, and knowledgebase articles. The goal of this paper is to define and consolidate key concepts and frequently asked questions for implementing Windows Server 2003 and 2008-based operating systems with Symmetrix storage. Some topics are addressed directly in this paper, while others reference more in-depth material available from other resources where detailed step-by-step guidance is required.

The general topics covered include settings and best practices in the context of storage connectivity, device presentation, multipathing, Windows and Symmetrix disk configurations, and LUN management including growth and replication. Additional documentation is referenced where appropriate, and a list of related resources is included in the References section at the end of this paper.

Audience

This white paper is intended for storage architects and administrators responsible for deploying Microsoft Windows Server 2003 and 2008 operating systems on Symmetrix V-Max, Symmetrix DMX-4, and Symmetrix DMX-3 storage systems.

Windows storage connectivity

Symmetrix storage systems support several modes of connectivity for Windows hosts, including Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and Internet Small Computer System Interface (iSCSI). Additionally, the Symmetrix can support direct connections from host bus adapters (HBAs) utilizing Fibre Channel Arbitrated Loop, or connections via switched architectures (FC-SW). FCoE environments currently require an FCoE switch to convert the native Fibre Channel traffic from the Symmetrix array. For each of these connectivity options, specific host and operating system functionality can be supported, including boot from SAN and clustering configurations. For detailed information on supported hardware and software configurations with these technologies, please see the EMC Host Connectivity Guide for Windows and the EMC Support Matrix (ESM), both available at http://elabnavigator.emc.com (access required). Beyond the supported configurations listed within the ESM, specific configurations are qualified as part of the Microsoft Windows Server Catalog (WSC), also referred to as the hardware compatibility list (HCL).


For clustering with Windows Server 2003, referred to as Microsoft Cluster Service (MSCS), Microsoft Customer Support Services (CSS) only supports clusters where the hardware and software, in their entirety, are listed on the WSC. Microsoft Knowledge Base (KB) article 309395, which can be found at http://support.microsoft.com/kb/309395/en-us, has additional details.

For failover clustering with Windows Server 2008, officially supported solutions require software and hardware components to receive a "Certified for Windows Server 2008" logo. In contrast to the Windows Server 2003 requirements, however, Windows Server 2008 failover clusters do not need to be listed in the WSC. Instead, the fully configured cluster must pass a validation test, provided as part of the Validate a Configuration wizard included with the Windows Server 2008 operating system. Cluster validation runs a set of tests against the defined cluster nodes in the environment, including tests for processor architecture, drivers, networking configuration, storage, and Active Directory, among other components. By allowing specific configurations to be tested by an end user, the validation process allows for a much simpler and more streamlined procedure for qualifying a specific clustered environment. Because of this change in support policy, specific Windows Server 2008 failover clustering configurations will not necessarily be listed in the ESM or WSC.

Geographically dispersed clusters, where nodes and storage arrays are separated across data centers for the purposes of disaster recovery, are unique in the way they are validated with Windows Server 2008. The Symmetrix Remote Data Facility/Cluster Enabler, or SRDF/CE, is an EMC-developed extension to Windows Server configurations that implements support for a geographically dispersed cluster. With SRDF/CE, nodes within a cluster access different storage arrays, depending on their geographic locations, and subsequently different LUNs where data is replicated consistently with SRDF. With nodes potentially accessing separate LUNs, some of the storage-specific tests performed by the validation wizard, including SCSI-3 persistent reservation tests, will not be successful. These storage test failures are expected, and due to the nature of geographical clusters such as SRDF/CE, Microsoft does not require them to pass the storage tests within the validation process. For more information regarding cluster validation with Windows Server 2008, including Microsoft policy around geographically dispersed clusters, please see Microsoft Knowledge Base article 943984 (http://support.microsoft.com/kb/943984).
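For scripted deployments, Windows Server 2008 R2 also exposes cluster validation through the FailoverClusters PowerShell module; earlier Windows Server 2008 releases provide only the wizard. A minimal sketch, assuming two hypothetical node names:

Import-Module FailoverClusters
Test-Cluster -Node "node1","node2"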

Symmetrix front-end director flags

The EMC Support Matrix is the definitive guide for information regarding Symmetrix director flags and should be consulted prior to server deployments or operating system upgrades. The ESM can be viewed at http://elabnavigator.emc.com, also known as the E-Lab Interoperability Navigator. One method for using the Navigator to determine the appropriate director flags is to utilize the Advanced Query option. From within the Navigator, as depicted in Figure 1, under the Advanced Query tab, select the appropriate host operating system and storage array. Once selected and queried via Get Results, support statements become available for the selected components. Within the support statements, under Networked Storage, a link called Director Bit/Flag Information appears. This link contains the most up-to-date information regarding the appropriate director flags for the selected operating system and Symmetrix storage array.


Figure 1. E-Lab Interoperability Navigator

Table 1 outlines the director flags required for Windows Server 2003 and 2008 standalone or clustered hosts on Symmetrix V-Max and Symmetrix DMX-3/DMX-4 arrays at the time of this paper's publication. Please note that for Windows Server 2008 failover clustering an additional device-level flag is required to enable SCSI-3 persistent reservations. Please see the section SCSI-3 persistent group reservations for additional details.

Table 1. Windows Server 2003 and 2008 required Symmetrix port flags

Common_Serial_Number (C): This flag should be enabled for multipath configurations or hosts that need a unique serial number to determine which paths lead to the same device.

SCSI_3 (SC3): When enabled, the Inquiry data returned by any device on the port is altered to report that the Symmetrix supports the SCSI-3 protocol.

SPC-2 Compliance (SPC-2): Provides compliance with newer (SCSI Primary Commands - 2) protocol specifications. For more information, see the SPC-2 section.

Host SCSI Compliance 2007 (OS2007): When enabled, this flag provides stricter compliance with SCSI standards for managing device identifiers, multi-port targets, unit attention reports, and the absence of a device at LUN 0. For more information, please see the OS2007 section.

Additional director flag information

For Symmetrix V-Max, volume masking is enabled via the ACLX director flag. For Symmetrix DMX-3/DMX-4, volume masking is enabled via the VCM director flag. In most switched Fibre Channel environments it is recommended to enable masking. For iSCSI environments, masking must be enabled in order to allow initiators to log in to the Symmetrix. The section LUN mapping and masking has additional information.

For FC loop-based topologies, logically enable the following base settings in addition to the required Windows settings in Table 1: EAN (Enable Auto Negotiation) and UWN (Unique WWN). For FC switched-based topologies, logically enable the following base settings with the required Windows settings in Table 1: EAN (Enable Auto Negotiation), PP (Point-to-Point), and UWN (Unique WWN).

SPC-2

With Windows Server 2003 versions prior to SP1, SPC-2 was not a required director flag. With Windows Server 2003 SP1 and later, specific Microsoft applications began checking for SPC-2 storage compliance, including the Microsoft Hardware Compatibility Test (HCT) 12.1, as well as the Volume Shadow Copy Service (VSS) when used in conjunction with Microsoft clusters. Because specific applications require SPC-2 compliance, it was recommended to enable SPC-2 in legacy Windows Server 2003 SP1 environments. Current Windows Server 2003-based qualifications for the Windows Server Catalog are executed with the SPC-2 flag enabled; therefore it is a requirement to have SPC-2 enabled in environments for compliance. For Windows Server 2008 environments, the SPC-2 flag has always been required.

Should any software modifications, including service packs, hotfixes, or driver updates, be made to a legacy Windows Server 2003 environment where SPC-2 is not enabled, the SPC-2 director flag should be enabled at that time. Specific Windows Server 2003 hotfixes (including Microsoft hotfix 950903) may require SPC-2 compliance and could otherwise cause an outage in the environment if this flag is not set.

OS2007

Windows Server 2008 configurations require the OS2007 director flag to be enabled. For Windows Server 2003 environments it is recommended, but not required, to have this setting enabled. Having the OS2007 flag enabled in Windows Server 2003 environments does not affect the OS, and enabling it is recommended in case there is a future upgrade to Windows Server 2008. As with the SPC-2 flag, future Windows Server 2003 Windows Server Catalog qualifications will be executed with the OS2007 flag enabled, which will impact Windows Server 2003 compliance where OS2007 is not enabled in new or upgraded environments.

Methods for setting director flags

Director flags can be configured at the director port level or at the HBA level. When director flags are set at the director port level, all hosts connected to those ports are presented with the same settings. In a heterogeneous environment where ports are shared, different host operating systems may require different flags. In such cases it is possible to enable specific settings based on an HBA-to-director-port relationship. Director-level flags can be set via configuration changes commonly done with the Solutions Enabler (SE) command line interface (CLI) symconfigure. HBA-level director flags are enabled via masking operations, such as with the symmask or symaccess hba_flag functionality. Director- or HBA-level settings can also be managed via the Symmetrix Management Console (SMC) graphical user interface (GUI), or with EMC Ionix ControlCenter (ECC).

The following is an example of using the symconfigure CLI command to enable the OS2007 flag at the director port level (port 0 on director 7f):

symconfigure -cmd "set port 7f:0 SCSI_Support1=enable;" -sid 94 commit

The following is an example of using the symaccess CLI command to enable the OS2007 flag for a specific WWN:

symaccess -sid 94 set hba_flags on OS2007 -enable -wwn 10000000c96d0a50

Conflicts regarding director flags can occur in existing environments where requirements change based on the introduction of new or updated operating systems. Most flags can be modified while the director port remains online; however, the hosts connected to those ports may need to be restarted for the operating system to properly detect and otherwise manage the change in settings. The requirement for restarting is especially true for director-level changes that cause modification to SCSI inquiry data, such as the SPC-2 or OS2007 director flags.
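After a change, the current flag settings for a front-end port can be inspected from Solutions Enabler. The following is a sketch only; the exact option spelling varies across Solutions Enabler releases, so confirm the syntax against the Solutions Enabler documentation for the installed version:

symcfg -sid 94 list -fa 7f -p 0 -v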


For configurations where changes to flags are required for some, but not all, hosts connected to a common set of director ports, modifying the flags at the HBA level will ensure the smallest impact to the existing environment. The tradeoff for setting director flags at the HBA level is the additional overhead of managing the settings at a more granular level, which can be problematic in large environments. It is also important to ensure in multipathed or clustered environments that all paths for all cluster nodes have the same director flag settings. Configuring director flags inconsistently across ports and HBAs, or in a piecemeal fashion in an effort to avoid system reboots, is not supported and could lead to instability in the environment. Recommendations regarding the ability to modify director flags without impact to Windows or other operating systems are outside the scope of this paper. For the most up-to-date and detailed resources regarding director configuration changes and their impact on specific operating systems, please see the ESM or query the EMC support knowledgebase available on Powerlink (http://powerlink.emc.com).

SCSI-3 persistent group reservations

Functionality new to Windows Server 2008 failover clustering is the use of SCSI-3 persistent group reservations. Persistent reservations allow multiple hosts to register unique keys with a storage array, through which a persistent reservation can be taken against a specified LUN. Persistent reservations introduce several improvements over the SCSI-2 reserve/release commands previously utilized by MSCS with Windows Server 2003, including the ability to maintain reservations such that a shared LUN is never left in an unprotected state.

For a Symmetrix to support SCSI-3 persistent reservations, and subsequently support Windows Server 2008 clustering, a logical device-level setting must be enabled on each LUN requiring persistent reservation support. This setting is commonly referred to as the PER bit, or the SCSI3_persist_reserv attribute from a Solutions Enabler perspective. The SCSI3_persist_reserv attribute can be enabled via configuration changes commonly done with the Solutions Enabler CLI symconfigure. The setting can also be managed via SMC or ControlCenter.

Metavolumes require that all member devices have the same attributes prior to forming the metadevice. With this in mind, it is necessary to set the SCSI3_persist_reserv attribute on any hypervolumes intended to form metavolumes in the future. For existing metavolumes, this attribute needs only to be set on the metavolume head device when making configuration changes using Solutions Enabler.

The following is an example of using the symconfigure CLI command to set SCSI-3 persistent reservation support for a contiguous range of devices:

symconfigure -sid 94 -cmd "set dev 42D:430 attribute = SCSI3_persist_reserv;" commit

When the persistent reservation attribute is enabled, the Symmetrix is required to store and otherwise query the reservation status of the device. Because of this, it is generally recommended to enable persistent reservation support only for the devices that require this functionality. If the environment is dynamic enough that enabling the persistent reservation attribute on demand creates significant administrative overhead, it is possible to set the attribute on all devices.

LUN mapping and masking

Symmetrix arrays manage the presentation of devices to operating systems through front-end director ports via a combination of mapping and masking functionality. Mapping Symmetrix devices is the process by which a LUN address is assigned to a specific device on a given front-end director. Should masking be disabled on a director port (VCM or ACLX director flag set to disabled), any hosts zoned to or directly attached to that director will have access to all mapped devices. The LUN address assigned to the device is the LUN number by which the host will discover and access the storage. For example, if the LUN address is defined on the director as F0 in hex (240 decimal), the host will discover the device as LUN 240.


In switched environments, where multiple hosts commonly access the same front-end directors, an additional level of device presentation granularity can be accommodated with the Symmetrix masking functionality. Masking operations allow for the restriction of access for a given WWN (defined on an HBA) to mapped devices, regardless of the physical or zoned connectivity in the environment. Masking records define which WWN is allowed to access which Symmetrix devices on which director ports. Masking operations also allow for the modification of LUN addresses as seen by the host, providing a more predictable, uniform approach. In iSCSI environments, masking must be enabled on the Symmetrix front-end directors; iSCSI connectivity to a Symmetrix requires the iSCSI Qualified Name (IQN) to have masking entries that subsequently allow an HBA or NIC to log in to a front-end director.

One exception to the rule that masking prevents access to all mapped devices involves the VCM or ACLX device. The VCM or ACLX flag is a special device attribute that allows a LUN, when mapped, to be viewed by hosts regardless of masking entries. In older versions of the Symmetrix operating environment Enginuity, the VCM device was the repository where masking records were maintained. With newer versions of Enginuity, the VCM or ACLX device is simply a gatekeeper that can be used for the initial configuration of the Symmetrix from a host.

The VCM or ACLX device need not be mapped to Symmetrix front-end adapters or otherwise presented to hosts in order to perform masking operations. Masking can be performed through regular gatekeeper devices. Additionally, the VCM or ACLX device, when presented to potential cluster nodes undergoing cluster validation with Windows Server 2008, may cause validation warnings. These warnings can be avoided by unmapping the VCM or ACLX device from the front-end directors.

When mapping and masking Symmetrix devices to a host, it is important to note the Windows maximum of 255 usable LUNs per HBA target. While this number applies to the total number of addressable LUNs per target, it also limits the LUN numbers through which Windows allows access to devices. The LUN address range for Windows is 0 to 254. Should a LUN have an address higher than 254, even if the operating system is not accessing more than 255 total LUNs on that target, the device will not be detected for use by the operating system. To some degree this limitation can be managed by the HBA driver. For instance, the Emulex SCSIPort driver with Windows Server 2003 allows higher LUN addresses to be managed (up to 512) via an adjusted LUN mapping. With Windows Storport and HBA miniport drivers, however, the 254 LUN address limit is enforced as part of the operating system.

A Symmetrix can support a much higher number of mapped devices per director, well beyond 255; therefore the ability to modify LUN addresses via masking can be an important feature in large environments. With older versions of Solutions Enabler and Enginuity, a LUN offset feature was used to adjust the starting LUN address for a given HBA and director combination. The LUN offset functionality, however, has become obsolete with newer code revisions and is replaced by Dynamic LUN Addressing (DLA). DLA allows Symmetrix devices, regardless of their LUN address on the front-end director, to start at address 0 for a given HBA and director port pairing. In addition, DLA can be used to directly specify a LUN address for a given device. The Symmetrix V-Max, with the use of Auto-provisioning Groups, not only automates director LUN mapping but also utilizes DLA to simplify LUN addressing. For more information regarding dynamic LUN addressing, please see the Symmetrix Dynamic LUN Addressing Technical Note available on Powerlink.
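As an illustrative sketch (the device number and WWN are hypothetical, and option support varies by Solutions Enabler and Enginuity release), a DMX masking operation that uses dynamic LUN addressing to present device 0485 at host LUN 0 might look like the following; on a Symmetrix V-Max, the equivalent presentation would typically be performed with symaccess and Auto-provisioning Groups:

symmask -sid 94 -wwn 10000000c96d0a50 -dir 7a -p 0 add devs 0485 -lun 0
symmask -sid 94 refresh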

    The LUN address value is very different from the Symmetrix device number. The Symmetrix device number is assigned to a Symmetrix addressable volume upon its creation and will remain the same independent of the LUN address used across directors.

When using multiple paths to a Symmetrix device, or when presenting shared storage to a cluster, it is recommended to ensure the LUN address is the same across all given directors. This guideline is more for ease of troubleshooting than a hard requirement, as it is possible for LUNs to be multipathed to a Windows host or presented to multiple clustered hosts with different LUN addresses.

Connectivity recommendations

It is recommended to configure at least two HBAs per Windows server, with the goal of presenting multiple unique paths to the Symmetrix system. The benefits of multiple paths include high availability from a host, switch, and Symmetrix front-end director perspective, as well as enhanced performance.

    From a high-availability perspective, given the possibility for director maintenance, each Windows server should have redundant paths to multiple front-end directors. For a Symmetrix V-Max, this can be accomplished by connecting to opposite even and odd directors within a V-Max Engine, or across directors within multiple V-Max Engines (recommended when multiple engines are available). In the case of a Symmetrix DMX array this can be accomplished by ensuring a given host is connected to different numbered directors (director 4a and director 13a for example).

    For each HBA port at least one Symmetrix front-end port should be configured. For I/O intensive hosts in the environment, it could prove beneficial to connect each HBA port to multiple Symmetrix front-end ports. Connectivity to the Symmetrix front-end ports should consist of first connecting unique hosts to port 0 of the front-end directors before connecting additional hosts to port 1 of the same director and processor. This methodology for connectivity ensures all front-end directors and processors are utilized, providing maximum potential performance and load balancing for I/O intensive operations.

As port 0 and port 1 of a given director number and letter (or slice) share a given processor complex, it is not recommended to connect the same HBAs for a given host to both port 0 and port 1 of the same director. Ideally, individual hosts should be connected to port 0 or port 1 from different directors. For Windows Server 2008 failover clustering environments it is currently required to ensure a given HBA is not presented to both port 0 and port 1 from the same front-end director processor. For example, zoning, mapping, and masking devices from director 7A port 0 and director 7A port 1 to the same HBA is not supported in a Windows Server 2008 failover cluster. At the time this paper was published, the SCSI-3 persistent reservations of a given initiator are maintained at the front-end processor level. Because port 0 and port 1 of a given director slice share the same processors, it is not supported to have an application that utilizes SCSI-3 persistent reservations access a LUN through an HBA sharing both ports.

Figure 2 uses a physical view of a Symmetrix V-Max Engine to provide a depiction of the aforementioned recommendations.

Figure 2. Connectivity recommendations for a Symmetrix V-Max Engine

Multipathing

Configurations with multiple paths to storage LUNs require a path management software solution on the Windows host. The recommended solution for multipathing software is EMC PowerPath, the industry-leading path management software, with benefits including:

• Enhanced path failover and failure recovery logic
• Improved I/O throughput based on advanced algorithms such as the Symmetrix Optimization load-balancing and failover policy
• Ease of management, including a Microsoft Management Console (MMC) GUI snap-in and CLI utilities to control all PowerPath features
• Value-added functionality, including Migration Enabler to aid with online data migration, and LUN encryption utilizing RSA technology
• Product maturity, with proven reliability over years of development and use in the most demanding enterprise environments
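As a brief illustration, the PowerPath powermt CLI can verify path state and apply the Symmetrix Optimization policy across all managed devices (a representative sketch; the defaults may already be appropriate in many environments):

powermt display dev=all
powermt set policy=so dev=all
powermt save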

While PowerPath is recommended, an alternative is the use of the native Multipath I/O (MPIO) capabilities of the Windows operating system. The MPIO framework has been available for the Windows operating system for many years; however, it was not until the release of Windows Server 2008 that a generic device-specific module (DSM) was provided by Microsoft to manage Fibre Channel devices. For more information regarding the Windows MPIO DSM implementation, please see the Multipath I/O Overview article at http://technet.microsoft.com/en-us/library/cc725907.aspx.

    Should native MPIO be chosen as the method for path management, the default failover policy with the RTM release of Windows Server 2008, for devices that do not report ALUA support such as the Symmetrix, is Fail Over Only. For performance reasons, especially in I/O intensive environments, it will be beneficial to modify this default behavior to one of the other options, including but not limited to, Least Queue Depth.

    The load-balance policy can be found under the MPIO tab within the Properties of each physical disk resource in the Windows Device Manager, as depicted in Figure 3.


Figure 3. MPIO load-balance policy for Windows Server 2008 RTM

With Windows Server 2008 R2, the default load-balance policy for non-ALUA reporting devices, including Symmetrix, has changed from Fail Over Only to Round Robin. MPIO also has an additional load-balance policy with Windows Server 2008 R2 called Least Blocks. To help with managing MPIO more efficiently, Windows Server 2008 R2 has an enhanced mpclaim CLI with the ability to modify the default load-balance policy at a device, target hardware ID (such as Symmetrix), or global DSM level. The following gives an example of how to set the default load-balancing policy at the target hardware ID level using the mpclaim CLI.

To view the target hardware identifier:

mpclaim /e

"Target H/W Identifier   "   Bus Type   MPIO-ed   ALUA Support
--------------------------------------------------------------------
"EMC     SYMMETRIX       "   Fibre      NO        ALUA Not Supported

To claim all devices for the Microsoft MPIO DSM based on target hardware ID (if not already done), do the following. Note that the spaces are required within the EMC Symmetrix hardware ID string.

mpclaim -n -i -d "EMC SYMMETRIX "

Success, reboot required.

To set the load-balance policy to least queue depth (4 in this example) based on target hardware ID:

mpclaim -l -t "EMC SYMMETRIX " 4

To view target-wide load-balance policies after being set:

mpclaim -s -t

"Target H/W Identifier   "   LB Policy
--------------------------------------------------------------------
"EMC     SYMMETRIX       "   LQD

With the preceding commands completed, all existing and any future Symmetrix devices discovered by MPIO will have a load-balance policy of least queue depth. Additional information regarding connectivity and multipathing can be found in the EMC Host Connectivity Guide for Windows.

    Symmetrix storage

Understanding hypervolumes

To provide data storage, a Symmetrix system's physical devices must be configured into logical volumes called hypervolumes. Hypervolumes are the unit of storage at which RAID protection is defined. A given open systems, Fixed Block Architecture (FBA) hypervolume can have a RAID 1, RAID 5, or RAID 6 configuration. Cache-only hypervolumes, such as thin devices or virtual (TimeFinder/Snap) devices, are unique in that they do not have direct RAID protection; RAID protection for the physical storage used by cache-only devices is defined within the pools that provide the storage area for cache-based hypervolumes. Symmetrix systems allow a maximum of 512 logical volumes on each physical drive, depending on the hardware configuration and the type of RAID protection used.

Prior to Enginuity 5874 on the Symmetrix V-Max, the largest single hypervolume that could be created on a Symmetrix was 65,520 cylinders, approximately 59.99 GB. With Enginuity version 5874, a hypervolume can be configured up to a maximum capacity of 262,668 cylinders, or approximately 240.48 GB, about four times as large as with Enginuity version 577x. Figure 4 shows four disks with hypervolumes configured in a logical-to-physical ratio of 8 to 1.

    Figure 4. Symmetrix physical disks with hypervolumes

In general, fewer, larger hypervolumes are recommended where applicable in a Symmetrix environment; however, to ensure the best possible performance, large hypervolumes should be carefully considered in a traditional, fully provisioned environment. For example, assigning a single large hypervolume that is RAID 1 protected would allow only two physical spindles to support the workload intended for that LUN. Should the RAID protection for a single large hypervolume be RAID 5 7+1, however, this concern is lessened, as eight disks would be available to service the workload. Additionally, striped metavolumes, outlined in the next section, provide the ability to spread a given workload across a larger number of physical spindles.

Large hypervolumes provide additional value in Virtual Provisioning environments. In these environments, administrators may strive to overprovision the thin pool as a means to improve storage utilization. Furthermore, Virtual Provisioning addresses performance needs by utilizing a striping mechanism across all data devices allocated to the thin pool, so performance limits can be mitigated by the total number of spindles allocated to the thin pool. Additional information about Virtual Provisioning is provided later in this paper.
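For reference, hypervolumes are created through symconfigure configuration changes. The following is a hedged sketch (the count and size are illustrative, and parameter names such as data_member_count should be confirmed against the Solutions Enabler documentation for the installed version) that creates eight RAID 5 (7+1) protected FBA hypervolumes of 65,520 cylinders:

symconfigure -sid 94 -cmd "create dev count=8, size=65520, emulation=FBA, config=RAID-5, data_member_count=7;" commit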

Understanding metavolumes

A metavolume is an aggregation of two or more Symmetrix hypervolumes presented to a host as a single addressable device. Creating metavolumes provides the ability to define host volumes larger than the maximum size of a single hypervolume. A single Symmetrix metavolume can contain a maximum of 255 hypervolumes. When combining the maximum hypervolume size with the maximum number of metavolume members, the largest addressable single LUN is 61.32 TB (240.48 GB * 255 members) for a Symmetrix V-Max and 15.29 TB (59.99 GB * 255 members) for a DMX-3/DMX-4.

Configuring metavolumes helps to reduce the number of host-visible devices, as each metavolume is counted as a single logical volume. Devices that are members of the metavolume, however, are counted toward the maximum number of host-supported logical volumes for a given Symmetrix director. Metavolumes contain a head device, which provides control information, member devices, and a tail device. All devices defined for the metavolume are used to store data.

Metavolumes also provide the mechanism by which a host-addressable LUN can be expanded: additional members can be added for the purposes of presenting additional storage within an existing LUN. The section Volume expansion provides additional details. Figure 5 shows a metavolume comprised of four hypervolumes on different physical devices.

    Figure 5. Symmetrix metavolume
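A striped metavolume can be formed, and members added, through symconfigure. The following is a minimal sketch using hypothetical device numbers; confirm the exact syntax for the installed Solutions Enabler version:

symconfigure -sid 94 -cmd "form meta from dev 0100, config=striped; add dev 0101:0103 to meta 0100;" commit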

Metavolume configurations

Metavolumes provide two ways to access data: concatenated and striped. Concatenated metavolumes organize addresses for the first byte of data at the beginning of the first volume and continue sequentially to the end of that volume. Once the first hypervolume is full, data is then written to the next member device, again sequentially, beginning with the first byte until the end of the volume. Figure 6 shows a concatenated metavolume.


Figure 6. Concatenated metavolume

Striped metavolumes organize addresses across all members by interleaving addresses between hypervolumes. The interleave, or striping, of data across the metavolume is done at a default stripe depth of 960 KB (one or two cylinders, depending on the Enginuity version). Data striping benefits configurations with random operations by avoiding stacking I/O on a single hypervolume, spindle, and director. In this fashion, data striping helps to balance the I/O activity between the drives and the Symmetrix directors. Figure 7 shows a striped metavolume.

    Figure 7. Striped metavolume

Gatekeepers

Low-level I/O commands executed using Solutions Enabler SYMCLI are routed to the Symmetrix array by way of a Symmetrix storage device that is specified as a gatekeeper. The gatekeeper device allows SYMCLI commands to retrieve configuration and status information from the Symmetrix array without interfering with normal Symmetrix operations. A gatekeeper is not intended to store data and is usually configured as a small device (typically six cylinders, or 2.8 MB). The gatekeeper must be accessible from the host where the commands are being executed.

Gatekeepers should be dedicated to the specific host that will be issuing commands to control or otherwise query a Symmetrix. In Microsoft failover clustering environments it is recommended not to cluster gatekeeper devices and to present unique gatekeepers to each cluster node as required. When presented to a Windows host, there is no requirement to signature or otherwise format a gatekeeper device; it automatically becomes available for use by the host to communicate with the Symmetrix.


Detailed information regarding gatekeeper devices can be found in the EMC Solutions Enabler Symmetrix Array Management CLI Product Guide available on Powerlink.

RAID options

Symmetrix systems support varying levels of RAID protection. RAID protection options are configured at the physical drive level based on hypervolumes. Multiple types of RAID protection can be configured for different datasets in a Symmetrix system. Table 2 shows the levels of RAID protection available for open systems hosts like Microsoft Windows.

Table 2. RAID protection options

RAID 1
Provides: The highest level of performance and availability for all mission-critical and business-critical applications. Maintains a duplicate copy of a volume on two drives. If a drive in the mirrored pair fails, the Symmetrix system automatically uses the mirrored partner without interruption of data availability. When the drive is (nondisruptively) replaced, the Symmetrix system re-establishes the mirrored pair and automatically resynchronizes the data with the drive.
Configuration considerations: Withstands failure of a single drive. RAID 1 provides 50% data storage capacity. For a single write operation from a host, RAID 1 devices perform two disk I/O operations (a write to each mirror member).

RAID 5
Provides: Distributed parity and striped data across all drives in the RAID group. Options include RAID 5 (3+1), consisting of four drives with parity and data striped across each device, and RAID 5 (7+1), consisting of eight drives with data and parity striped across each device.
Configuration considerations: RAID 5 (3+1) provides 75% data storage capacity; RAID 5 (7+1) provides 87.5%. Withstands failure of a single drive. For a single random write operation from a host, RAID 5 devices perform four disk I/O operations (two reads and two writes).

RAID 6
Provides: Striped drives with double distributed parity (horizontal and diagonal). Options include RAID 6 (6+2), consisting of eight drives with dual parity and data striped across each device, and RAID 6 (14+2), consisting of 16 drives with dual parity and data striped across each device.
Configuration considerations: RAID 6 (6+2) provides 75% data storage capacity; RAID 6 (14+2) provides 87.5%. Withstands failure of two drives. For a single random write operation from a host, RAID 6 devices perform six disk I/O operations (three reads and three writes).

Disk types

Along with the aforementioned RAID technologies, Symmetrix storage can be configured across a wide range of disk technologies. Symmetrix storage systems support high-capacity, low-cost SATA II drives, high-performing 10k rpm and 15k rpm Fibre Channel drives, as well as ultra-high-performance solid state Enterprise Flash Drives. Supported drive types, capacities, and speeds are continually changing as new technology becomes available. Please see Powerlink for the most up-to-date lists of supported drive types and capacities for Symmetrix systems.

Virtual Provisioning

Virtual Provisioning, generally known in the industry as thin provisioning, enables organizations to enhance performance and increase capacity utilization in their Symmetrix storage environments. Virtual Provisioning features provide:

• Simplified storage management: Allows storage to be provisioned independent of physical constraints and reduces the steps required to accommodate growth.
• Improved capacity utilization: Reduces the storage that is allocated but unused.
• Simplified data layout: Includes automated wide striping that can provide similar, and potentially better, performance than standard provisioning.

Symmetrix thin devices are host-accessible devices that can be used in many of the same ways that Symmetrix devices have traditionally been used. Unlike regular host-accessible Symmetrix devices, thin devices do not need to have physical storage completely allocated at the time the device is created and presented to a host. The physical storage that is used to supply disk space to thin devices comes from a shared storage pool called a thin pool. The thin pool is comprised of devices called data devices that provide the actual physical storage to support the thin device allocations.

When a write is performed to a part of the thin device for which physical storage has not yet been allocated, the Symmetrix allocates physical storage from the thin pool covering that portion of the thin device. Enginuity satisfies the requirement by providing a block of storage from the thin pool called a thin device extent. This approach allows for on-demand allocation from the thin pool and reduces the amount of storage that is consumed or otherwise dedicated to a particular device. When more storage is required to service existing or future thin devices, data devices can be added to the thin storage pools. Virtual Provisioning data devices are supported on all RAID types; however, a given thin pool cannot be protected by a mixture of RAID types.

The architecture of Virtual Provisioning creates a naturally striped environment where the thin extents are allocated across all volumes in the assigned storage pool. By striping the data across all devices within a thin storage pool, a widely striped environment is created. The larger the storage pool for the allocations, the greater the number of devices that can be leveraged for a thin device. It is this wide and evenly balanced striping across a large number of devices in a pool that allows for optimized performance in the environment.

If metavolumes are required for the thin devices in a particular environment, it is recommended that the metavolumes be concatenated rather than striped, since the thin pool is already striped using thin extents. Concatenated metavolumes also support fast expansion capabilities, as new metavolume members can easily be appended to the existing concatenated metavolume. This functionality may be applicable when the provisioned thin device has become fully allocated at the host level and it is required to further increase the thin device to gain additional space. Striped metavolumes are supported with Virtual Provisioning, and there may be workloads that will benefit from multiple levels of striping.

For additional information on the use of Virtual Provisioning with Windows operating systems, please see the white papers Implementing Virtual Provisioning on EMC Symmetrix DMX with Microsoft Exchange 2007 and Implementing Virtual Provisioning on EMC Symmetrix DMX with Microsoft SQL Server 2005, both available on Powerlink.
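For reference, an illustrative Solutions Enabler sequence for building out a thin pool follows (the pool name and device numbers are hypothetical, and the exact symconfigure syntax varies by Solutions Enabler release, so treat this as a sketch): data devices are added to a thin pool, thin devices are bound to the pool, and pool utilization is then monitored.

symconfigure -sid 94 -cmd "add dev 1A0:1A7 to pool SalesPool type=thin, member_state=ENABLE;" commit
symconfigure -sid 94 -cmd "bind tdev 200:203 to pool SalesPool;" commit
symcfg -sid 94 show pool SalesPool -thin -detail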


Discovering storage

Once the appropriate steps have been taken from the connectivity, zoning, volume creation, mapping, and masking perspectives, devices can be discovered by the operating system. In most cases the discovery of new devices can be done by performing a rescan operation from the disk management console or from the diskpart command line interface, as depicted in Figure 8.

    Figure 8. Diskpart rescan

In some instances the discovery of the initial target requires either a host reboot or an HBA reset. Once the target and first device(s) are discovered, the host should not need to be rebooted and the HBA should not need to be reset in order to discover additional storage. A reset should not be issued on a host already accessing in-use storage devices from the HBA to be refreshed, as this may interrupt access to those devices.

Should a disk management console or diskpart rescan not prove successful in discovering new devices, a plug and play rescan can also be issued. Plug and play rescans can be executed from Windows Device Manager using the Scan for hardware changes option. With Windows Server 2003, the devcon CLI, a free download from Microsoft, can also be used to perform these kinds of rescans. EMC also offers ways to perform this operation with the Symmetrix Integration Utilities (SIU); among its functions, the SIU CLI symntctl has a rescan option to assist in discovering storage. SIU is available as a free download from Powerlink and is included with Solutions Enabler 7.0 or later.

Rescan operations for storage are generally not synchronous with regard to the completion of the rescan command that initiated the discovery. A rescan may return as complete while the actual discovery and surfacing of the LUNs to the operating system happens several seconds after the command finishes. This behavior is important to note when scripting operations that surface LUNs and then perform a subsequent action against those LUNs. In this case it may be necessary to sleep, loop, or provide additional checks in scripts to allow all LUNs to be discovered and otherwise become available to the operating system.

Once LUNs are discovered, they are given a physical drive (disk) number, generally based on the order of discovery by the operating system according to LUN address. There are several methods to ensure the correct Symmetrix devices are being seen as disks by the host. One method is to use the EMC inq utility available at ftp://ftp.emc.com/pub/symm3000/inquiry. The inq CLI uses SCSI inquiry information to list Symmetrix-specific information, including the Symmetrix serial number and device numbers associated with a given physical drive. Figure 9 gives an example of using the inq utility with the sym_wwn option.
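In scripts, this asynchronous behavior can be handled with a short pause after the rescan returns. A minimal batch sketch (the 10-second delay is an arbitrary example; adjust or replace it with a polling loop as needed):

rem rescan.txt contains the single line: rescan
diskpart /s rescan.txt
rem give Plug and Play time to surface the new LUNs
ping -n 11 127.0.0.1 > nul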


Figure 9. Inq utility

In addition to inq, Solutions Enabler can be installed on the host for the purposes of querying physical drive-specific information. Similar to the inq utility, Solutions Enabler includes a syminq CLI that performs a SCSI inquiry collection and returns the current disk information. Along with syminq, Solutions Enabler provides a sympd CLI that can return additional Symmetrix-specific information associated with a physical drive. It should be noted that the drive associations used by the sympd command are cached within the SE (symapi) database. To update this cached information, a symcfg discover command should be run whenever changes are made to the drives presented to the host. At the time of publication, a symcfg sync command does not update the physical drive-specific information in the symapi database.

In environments where masking is enabled, it is possible for the VCM or ACLX device to be mapped to director ports. As previously discussed, this means the VCM or ACLX device will be available to all hosts connected to those directors. In multipathed environments where EMC PowerPath is used, the VCM or ACLX device is the only Symmetrix LUN for which PowerPath will not automatically manage multipathing. With this in mind, it should be expected that the VCM or ACLX device will be seen multiple times by the operating system.
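Returning to the inquiry utilities above, a typical sequence after presenting new devices to a host with Solutions Enabler installed refreshes the symapi database and then lists the disk-to-device associations:

symcfg discover
syminq
sympd list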

Windows Server 2008 SAN Policy

Functionality new to Windows Server 2008, referred to as the SAN Policy, allows administrators to control how newly discovered storage devices are managed by the operating system. With Windows Server 2003, new disks discovered by Windows were automatically brought online for potential use by the operating system. With Windows Server 2008, the SAN Policy allows administrators to control the way disks are brought online; specifically, it determines whether new disks are brought online or remain offline, and whether they are marked read-only or read/write.


The specific options offered by the SAN Policy are shown in Table 3.

Table 3. SAN Policy options

Offline Shared: The default policy for Windows Server 2008 Enterprise and Datacenter editions. Any storage discovered on a shared bus (FC, SCSI, iSCSI, SAS, and so on) is brought offline and read-only, while storage discovered on a non-shared bus, as well as the boot disk, is brought online read/write. Symmetrix devices presented to a Windows Server 2008 host with the Offline Shared policy will be placed offline and read-only; the only exception is the boot device in a boot from SAN configuration.

Online All: Brings all discovered storage devices online and read/write automatically.

Offline All: All disks, except for the boot disk, are marked offline and read-only.

To modify the policy, the diskpart CLI can be used; specifically, the san option within diskpart can be used to view and change the policy. The full syntax of the san command can be obtained by typing help san at a diskpart command prompt.

The state of the disks can be managed from either the disk management console or the diskpart CLI. Changing online or offline status for disks from the disk management console also affects the read/write state of the device. For example, bringing a disk online from disk management also read/write enables the disk, and conversely, taking a disk offline subsequently marks the device as read-only automatically. The diskpart CLI offers more granular control, as an offline or online (using the online disk syntax) does not modify the read/write state of the device. To modify the read/write state of the disk, the disk-specific setting must also be modified via the attributes disk diskpart command. Figure 10 provides an example of how to online and read/write enable a specific disk using diskpart.

Figure 10. Diskpart command to online and read/write a disk
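The sequence in Figure 10 likely resembles the following minimal sketch (the disk number is illustrative); san, online disk, and attributes disk are standard diskpart syntax on Windows Server 2008:

    DISKPART> san
    DISKPART> san policy=OnlineAll
    DISKPART> select disk 2
    DISKPART> online disk
    DISKPART> attributes disk clear readonly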

Automount

Windows Server 2003 and 2008 include the ability to automatically mount newly discovered basic disk storage to the next available drive letter upon discovery. For Windows Server 2003, this setting is disabled by default, while for Windows Server 2008 it is enabled by default. To view or modify this setting, the diskpart CLI can be used, specifically the automount command. The mountvol CLI can also be used to disable or enable automounting of new devices. In most SAN environments it is not necessary to have Windows automatically mount storage, as applications or scripts are used to manage the device state. With this said, it may not be necessary to change the automount setting unless recommended by the application vendor or required due to unwanted behavior in a specific environment.
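A minimal sketch of viewing and disabling automount, using either diskpart or mountvol (mountvol /N disables and /E re-enables automatic mounting):

    DISKPART> automount
    DISKPART> automount disable

    C:\> mountvol /N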

Initializing and formatting storage

Newly presented and previously unused storage will display as not initialized when marked online to Windows. The act of initializing a disk performs several functions, including writing a disk signature, boot record, and partition table to the disk. Prior to initializing the storage, the disk type, be it Master Boot Record (MBR) or GUID partition table (GPT), needs to be determined. Additionally, whether the disks are basic or dynamic needs to be considered and defined based on storage requirements. The following sections outline the definitions and capabilities of MBR and GPT style basic or dynamic disk storage with Windows Server 2003 and 2008.

    Disk types

Master Boot Record (MBR)
MBR partitioning has historically been the most commonly used disk type on the Windows platform. MBR disks create a 512-byte record in the first sector of the disk containing boot information, the disk signature, and a table of primary partitions. The following list highlights the main features and limitations of MBR disks on Windows operating systems:

• Support up to four primary partitions. Support for more than four partitions requires an extended partition in which logical drives are created.
• Support 32-bit entries for partition length and partition starting address, which limits the maximum size of the disk to 2^32 blocks (512 bytes each) or 2 TB.
• Contain a 32-bit, eight-character hexadecimal signature.
• Partition GUIDs for MBR basic disk volumes are not stored on disk; they are assigned by the operating system and maintained in the registry.
• Supported with Windows Server 2003 and 2008 standalone or clustered hosts.

GUID partition table (GPT)
The GPT disk format was designed to overcome the limitations of the MBR style of partitioning. GPT disks start with a protective MBR in the first sector of the disk. The protective MBR is designed to prevent operating systems that do not recognize the GPT format from assuming the disk is not partitioned. After the protective MBR, GPT information is maintained in the next 32 sectors of the disk. This information includes the primary GPT header and self-identifying partition entries. GPT disks also maintain a redundant copy of this information at the end of the disk and have CRC32 checksums for added integrity. The following list highlights additional features and limitations of GPT disks on Windows operating systems:

• Support up to 128 partitions.
• Support 64-bit partition table entries, which in theory can produce disks or partitions that are zettabytes (2^64 blocks) in size. Windows limits supportable disk sizes to 18 exabytes where raw partitions are used and 256 terabytes for NTFS-formatted partitions.
• Maintain a 128-bit Globally Unique ID (GUID) for each disk.
• Maintain a 128-bit GUID for each partition on a disk.
• Supported with Windows Server 2003 SP1 or later; Windows Server 2003 clustering support requires a hotfix (http://support.microsoft.com/kb/919117).
• Fully supported with Windows Server 2008.

Basic disks
Basic disks utilize the native partitioning capabilities of the MBR and GPT formats. MBR basic disks will support primary partitions, extended partitions, and logical drives. GPT basic disks will support the partition table entries native to this format. Volumes on MBR or GPT basic disks cannot span across multiple disks, but can be expanded in place assuming there is space available on the disk where the partition resides. Basic disks are also natively supported in Microsoft clusters.

Dynamic disks
The native Microsoft logical disk manager (LDM) also offers the ability to create so-called dynamic disks. Dynamic disks maintain a 1 MB private region on each disk to store the LDM database. The LDM database stores the relevant information regarding dynamic disks in the system, including volume types, offsets, memberships, and drive letters for each volume. Dynamic disks can be either MBR- or GPT-based and include the capability to distribute filesystems across multiple disks as presented to the OS. Dynamic disks, while providing enhanced functionality, are not supported in Microsoft clusters when using the base LDM. Dynamic disks can be used to create several types of volumes in non-clustered environments, including simple, spanned, striped, mirrored, and RAID 5 (a sketch of creating such volumes follows the descriptions below).

Simple
A simple dynamic volume is a volume that resides on a dynamic disk but does not span multiple disks. Simple volumes can be created from free space on a dynamic disk, or by converting a basic disk with existing partitions. The value of a simple volume is the ability to subsequently create a spanned volume (assuming it is not a system or boot partition) or a mirrored volume. A simple volume cannot be used to create a striped or RAID 5 volume.

Spanned
A spanned dynamic volume is a concatenation of multiple volumes across one or more dynamic disks. Spanned volumes write data sequentially to each volume, filling one before moving on to the next volume in the spanned set. The value of a spanned volume is the ability to grow a filesystem across multiple dynamic disks non-disruptively. A spanned volume can be created or expanded across two to 32 dynamic disks, but is not fault-tolerant. Should any one member of the spanned volume become unavailable, the entire volume will go into a failed state.

Striped
A striped dynamic volume, as it sounds, is a dynamic volume that stripes a filesystem across multiple disks. The stripe depth (the amount of data written to one disk before moving on to the next in the stripe) is 64 KB. A striped dynamic volume can be formed with anywhere between two and 32 dynamic disks. Once created, a striped volume cannot be expanded with the base Windows LDM. A striped volume is not fault-tolerant and is considered a RAID 0 device. Should any one member of the striped volume become unavailable, the entire volume will go into a failed state.

Mirrored
Mirrored dynamic volumes are volumes synchronized across two physical disks. Mirrored dynamic volumes are considered RAID 1 protected and provide fault tolerance should one of the disks fail. A mirrored volume will require twice the amount of storage for the same amount of usable space. Mirrored dynamic disks can be created or broken online without disruption to the availability of the volume. Once created, a mirrored volume cannot be extended.

RAID 5
RAID 5 dynamic volumes are fault-tolerant volumes that contain data and parity striped across a set of at least three and up to 32 dynamic disks. The parity space required will consume an amount of storage equal to one full member of the RAID 5 set. Should any one disk fail, the RAID 5 volume will remain online. Data and parity can be rebuilt from the remaining members upon recovery of the failed disk. Once created, a RAID 5 volume cannot be extended.
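As a hedged sketch of the volume types described above, the following diskpart sequence converts disks to dynamic and creates striped and RAID 5 volumes; disk numbers and sizes (in MB) are illustrative, and each disk referenced by create volume must already be dynamic:

    DISKPART> select disk 1
    DISKPART> convert dynamic
    DISKPART> select disk 2
    DISKPART> convert dynamic
    DISKPART> create volume stripe disk=1,2 size=10240
    DISKPART> create volume raid disk=3,4,5 size=10240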

Veritas Storage Foundation for Windows
The dynamic disk functionality and restrictions listed above apply to the base LDM included with the Windows Server 2003 and 2008 operating systems. With Veritas Storage Foundation for Windows (SFW), dynamic disk support and capabilities are expanded to include additional functionality. The following list details some, but not all, of the additional functionality provided by SFW with dynamic disks over and above the base Windows LDM:

• Simple volumes can be dynamically converted to striped volumes.
• Spanned volumes can support up to 256 dynamic disks.
• Mirrored volumes can be extended and striped to create RAID 0 + 1 devices. Mirrored volumes can also be assigned a preferred mirrored disk or plex.
• Striped volumes can be mirrored and extended to create RAID 0 + 1 devices. Striped volumes can also be dynamically modified to change stripe characteristics, including conversion to a concatenated volume. Stripe depth can also be controlled.
• RAID 5 volumes can be extended.
• Multiple dynamic disk groups are supported.
• Microsoft clustering is supported with dynamic disks.

Additional functionality provided by Veritas Storage Foundation for Windows can be found on the Symantec website.

Disk type recommendations
In most environments, MBR basic disks with a single partition fulfill the majority of storage requirements. MBR basic disks offer a disk type supported by all Microsoft and third-party applications. The value of the functionality offered by dynamic disks, including striped volumes, RAID protection, and volume growth, is somewhat diminished because these functions can be performed more efficiently within the Symmetrix array. Additionally, the restriction that dynamic disks are not supported with Microsoft clustering when using the base LDM prohibits their use in many environments.

The GPT disk type is generally reserved for environments that require volumes larger than 2 TB in size. While the GPT disk type had some support limitations upon its first release on Windows platforms, most of those limits have since been removed by both Microsoft and third-party applications. In the future, GPT-based disks should become the standard partitioning format. Before utilizing GPT disks, ensure the disk type is supported by the required Microsoft or third-party applications.

Large volume considerations
While GPT disks allow for larger disk sizes, volumes that are multiple terabytes in size should be created and used with some degree of caution. The main concerns regarding large volumes are generally performance-related or tied to the ability to perform administrative tasks in a timely manner. Common administrative tasks where very large volumes become a concern include backup and restore activities, defragmentation, and filesystem verification tasks like chkdsk. The amount of time required to perform administrative tasks like chkdsk has as much to do with the number of files in the filesystem as with the size of the volume itself; a small number of large files will chkdsk much faster than a large number of small files in a comparably sized filesystem.

Performance concerns also stem from the fact that a single large volume could contain enough data that, when accessed with enough user concurrency, could saturate the performance capabilities of the underlying disks. This concern can be mitigated on the Symmetrix by creating metavolumes with enough meta members to spread the workload across a larger number of physical spindles. The use of Virtual Provisioning can also provide a mechanism to spread large LUNs across a greater number of drives.

Partition alignment

Historically, Windows operating systems have calculated disk geometry based on generic SCSI information, including the Cylinder-Head-Sector (CHS) values reported by SCSI controllers. The perceived or assumed geometry of the disk based on CHS values led Windows to create partitions based on 63 sectors per track. Generally speaking, this meant Windows would create the first partition at sector 63, an offset of 32,256 bytes into the physical drive, assuming 512-byte sectors. The creation of partitions based on the assumption of 63 sectors per track caused the partition, and subsequently the data within the partition, to be misaligned with storage boundaries in the Symmetrix. Misalignment with these storage boundaries can lead to performance problems. The logical geometry of Symmetrix host-addressable logical volumes is listed in Table 4.

Table 4. Symmetrix device geometry

Symmetrix DMX-2 and prior:
• Cylinder = 15 tracks (480K)
• Track = 8 sectors (32K)
• Sector = 8 blocks (4K)
• RAID 5/6 stripe boundary = 4 tracks (128K)
• Metavolume default stripe boundary = 2 cylinders (960K)

Symmetrix DMX-3 and later, including V-Max:
• Cylinder = 15 tracks (960K)
• Track = 8 sectors (64K)
• Sector = 16 blocks (8K)
• RAID 5/6 stripe boundary = 2 tracks (128K)
• Metavolume default stripe boundary = 1 cylinder (960K)

Based on these values, misaligned I/O could cause partial-sector write activity and additional, unwanted I/O within the Symmetrix from crossing track and/or stripe boundaries. Depending on the version of Windows, there are several ways to correct alignment and ensure optimal performance. In all cases it is recommended that the partition offset be a multiple of 64 KB. This could mean the partition starts 128 sectors (65,536 bytes) into the disk, or at any larger offset evenly divisible by 128 sectors (64 KB). In either case, the partition is considered aligned.

Partition alignment prior to Windows Server 2003 SP1
Prior to Windows Server 2003 SP1, the diskpar utility could be used to manually create partitions on a specific offset or boundary within a physical drive. The recommended offset value when creating a partition using the diskpar command is 128 sectors. For dynamic disks, a filler partition must first be created with diskpar prior to converting the disk to dynamic and subsequently creating volumes for user data. For detailed information regarding partition alignment with the diskpar command, please see Using diskpar and diskpart to Align Partition on Windows Basic and Dynamic Disks, available on Powerlink.

Partition alignment with Windows Server 2003 SP1 or later
With Windows Server 2003 SP1 or later, Microsoft introduced a version of the diskpart CLI that includes an option to align a partition upon creation. The recommended value when creating a partition using the diskpart command is 64 KB. Figure 11 gives an example of how to align an MBR basic partition using the diskpart command with the align option.

For dynamic disks, the diskpart command cannot be used to create aligned volumes. The first reason is that the align option is not available in diskpart when creating dynamic volumes. Second, the diskpart command cannot be used to create a filler partition to force alignment for subsequent volumes (the filler partition created by diskpart starts aligned, but does not end aligned). The diskpar command must be used to create a filler partition prior to converting the disk to dynamic and creating subsequent volumes for user data. For detailed information regarding partition alignment with the diskpart command, please see the white paper Using diskpar and diskpart to Align Partition on Windows Basic and Dynamic Disks and the Aligning GPT Basic and Dynamic Disks For Microsoft Windows 2003 Technical Note, available on Powerlink.

    Figure 11. Using Diskpart to create an aligned partition on an MBR basic disk
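Along the lines of Figure 11, a minimal sketch of creating an aligned MBR basic partition with diskpart (the disk number and drive letter are illustrative; align is specified in KB):

    DISKPART> select disk 2
    DISKPART> create partition primary align=64
    DISKPART> assign letter=E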

Partition alignment with Windows Server 2008
With Windows Server 2008, the issues around partition alignment when using default tools, such as the disk management MMC, have been corrected. By default, Windows Server 2008 creates partitions on a 1 MB boundary or offset. Specifically, for disks larger than 4 GB, Windows creates partitions at an offset in 1 MB increments. For disks smaller than 4 GB, Windows defaults to an offset of 64 KB. In both cases, the partition will be aligned with the recommended Symmetrix best practice of 64 KB increments.

Querying alignment
One method to query and otherwise ensure alignment is to use the WMI interfaces native to Windows Server 2003 and 2008. These versions of Windows include a WMI CLI called wmic that can be used to determine whether a partition is properly aligned. The example in Figure 12 uses the wmic CLI to return specific partition information, including the starting offset, from an MBR basic disk and a GPT basic disk created specifying a 64 KB alignment with diskpart.


Figure 12. Using the wmic CLI to query partition alignment

The starting offset provided by the wmic command is in bytes. To ensure proper alignment, this number should be evenly divisible by 65536. Alternatively, the provided offset in bytes can be divided by the block size (512 bytes) to get the number of blocks or sectors for the offset; the sector offset should then be evenly divisible by 128. For example, a starting offset of 1,048,576 bytes (1 MB) is aligned, since 1,048,576 / 65,536 = 16.
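A minimal sketch of the query shown in Figure 12; the wmic partition alias maps to the Win32_DiskPartition class, and additional properties can be requested as needed:

    wmic partition get Name, Index, StartingOffset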

Formatting

Once a partition is created, it will generally be formatted with an NTFS filesystem. The process of formatting a partition performs several functions, such as creating NTFS metadata, including the Master File Table (MFT), and applying the defined allocation unit size. The administrator also determines whether a quick or regular format is performed.

Allocation unit size
The allocation unit size, or cluster size, is the smallest amount of storage that can be allocated to an object, or fragment of an object, in a filesystem. Generally speaking, the ideal allocation unit size represents the average file size for the filesystem in question. An allocation unit size that is too large can lead to wasted space in the filesystem, while an allocation unit size that is too small can lead to excessive fragmentation.

In the context of alignment, the allocation unit size also determines where an object resides in the filesystem. A properly aligned partition is the first step in ensuring aligned operations in the environment. However, files do not live or otherwise start in the first sector of an aligned and formatted partition (which is reserved for the NTFS header). Files start in the filesystem at some offset based on the allocation unit size. For example, an aligned partition at 64 KB with an allocation unit size of 4 KB causes files to be created at 64 KB plus some multiple of 4 KB into the filesystem. This may not be an issue for general-purpose filesystems, but for database applications such as Microsoft Exchange and Microsoft SQL Server, it could cause the internal structures of the data file to be misaligned with some of the critical storage boundaries mentioned in the Partition alignment section.

Because of the impact to alignment caused by the allocation unit size, it is recommended, especially for database applications, to format a volume with a cluster size of 64 KB. The 64 KB allocation unit size ensures that the file(s) created in the filesystem maintain a 64 KB offset from the beginning of the partition. Assuming the partition is also aligned with a 64 KB offset, this ensures I/O operations are as aligned as possible with the critical boundaries in the Symmetrix.

Querying allocation unit size
The allocation unit size can be determined by using the wmic CLI. Specifically, the WMI volume object can be queried to determine the block size (in bytes) of the filesystem. Figure 13 gives an example of using wmic to determine the allocation unit size.


Figure 13. Using the wmic CLI to query allocation unit size
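Along the lines of Figure 13, a hedged sketch of formatting a volume with the recommended 64 KB allocation unit and then verifying it (the drive letter is illustrative):

    rem Quick format with a 64 KB allocation unit; note that NTFS
    rem compression is not supported with cluster sizes above 4 KB
    format E: /FS:NTFS /A:64K /Q

    rem BlockSize reports the allocation unit size in bytes (65536 = 64 KB)
    wmic volume get DriveLetter, Label, BlockSize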

Quick format vs. regular format
Depending on the version of the Windows operating system, the behavior of a non-quick, or regular, format differs. In either case, references to files in an existing filesystem will be removed. The difference in behavior is most interesting in the context of Virtual Provisioning, specifically the potential impact to thin pool allocations.

Windows Server 2003 format
With Windows Server 2003, the difference between a regular and a quick format is that a regular format scans the entire disk for bad sectors. The scan for bad sectors (a SCSI verify command) is a read operation. In virtually provisioned environments, this read operation will not cause space to be allocated in the thin pool; when a read is requested from a thin device on an area of the LUN that has not been allocated, the device simply returns zeroes to the application. Since a full format is an unnecessary operation, considering there is no actual allocation or disk to verify in virtually provisioned environments, a quick format should be used. However, no harm will be done should a regular format accidentally be selected; there will simply be unnecessary I/O to the array. Whether a regular format or a quick format is selected, only a small number of writes will occur against the thin device, causing minimal allocation within the thin pool.

Windows Server 2008 format
With Windows Server 2008, the difference between a regular and a quick format is that a regular format writes zeroes to every block in the filesystem. From a Virtual Provisioning perspective, this causes a thin device to become fully allocated within its respective thin pool. With this behavior in mind, it is important to select the quick format option (/Q from the command line) when formatting any thin device on Windows Server 2008. A quick format performs similarly to Windows Server 2003, where only a small number of tracks become allocated within a thin pool.

Volume expansion

Storage administrators are continually looking for flexibility in the way storage is provisioned and may be altered in place and online. Administrators in Microsoft environments may find the need to increase storage for a given filesystem due to an increase in storage requirements. One method to account for growth in storage needs is to expand the LUN on which a given partition or filesystem resides.

Previous versions of Enginuity have provided methods to grow Symmetrix volumes in place and online by adding members to an existing metavolume. If the metavolume was concatenated, only the additional volumes to be added to the meta were required to expand the volume online with no disruption to the application. Striped metavolume expansion, however, required not only the additional volumes but also a mirrored BCV in order to perform the expansion with data protection. The requirement for a mirrored BCV excluded other, more cost-effective protection types, such as RAID 5, which may be more desirable for BCV volumes.


With Enginuity 5874 and Symmetrix V-Max arrays, users may now use other protection types for the BCV used in conjunction with striped metavolume expansion, including RAID 5 or RAID 6. The following section provides an example of online striped metavolume expansion using a RAID 5 BCV.

Striped metavolume expansion example
This example focuses on Symmetrix metavolume 41F, which holds a Microsoft Exchange database. We will expand metavolume 41F with four new devices (42D, 42E, 42F, and 430) that reside in the same 15k rpm Fibre Channel disk group that holds the existing metavolume. The RAID 5 BCV metavolume used to protect data during the expansion, device 431, exists in a separate disk group.

In preparation for expanding a striped metavolume with data protection, it is necessary to ensure there are no existing Symmetrix-based replication sessions occurring against the device. This includes ensuring TimeFinder, SRDF, and Open Replicator sessions have been removed, terminated, or canceled as appropriate to the respective technology. The requirement to remove all replication sessions also applies to the TimeFinder BCV to be used for protecting data during the expansion. The BCV cannot be synchronized or otherwise have a relationship with the metavolume prior to running the expansion procedure using Solutions Enabler 7.0.

It is also important to ensure that the devices being added to the existing metavolume have the same attributes. In this example the metavolume is a clustered resource within a Windows Server 2008 failover cluster, and a Symmetrix device within a Windows Server 2008 failover cluster requires that the SCSI-3 persistent reservation attribute be set. Since at the beginning of this example the SCSI-3 persistent reservation attribute is not set on the volumes being used for the expansion, the following command needs to be issued:

    symconfigure -sid 94 -cmd "set dev 42D:430 attribute = SCSI3_persist_reserv;" commit
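As a hedged sketch, the attribute can then be confirmed by examining one of the devices with the symdev CLI; the exact output fields vary by Solutions Enabler version:

    rem Display device 42D; the attribute listing should reflect SCSI3_persist_reserv
    symdev -sid 94 show 42D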

Once the environment is prepared, the expansion can be executed using the symconfigure CLI from a host with gatekeeper access to the required Symmetrix. Figure 14 shows the partition for the LUN in this example, as seen from the disk administrator, prior to the expansion.

Figure 14. Striped metavolume prior to expansion

To expand the metavolume, the following command was executed:

    symconfigure -sid 94 -cmd "add dev 42d:430 to meta 41f protect_data=true bcv_meta_head=431;" commit

Once the expansion process has begun, the following high-level steps are taken:

1. The BCV metadevice specified for data protection begins a clone copy operation from the source metavolume.
2. During the clone copy operation, writes from an application, like Exchange, are mirrored between the source metavolume and the BCV.
3. When the BCV copy is complete, the BCV is split from the source and all read and write I/O is redirected to the BCV device.
4. While the I/O is redirected, the source metavolume is expanded with the specified volumes.
5. After the metavolume is expanded, the data from the BCV is copied back and restriped across all members of the newly expanded metavolume.
6. During the copy back from the BCV, I/O is redirected back to the expanded metavolume.
7. Once the copy back is complete, the BCV clone relationship is terminated and the expansion completes.

Due to the nature of the volume expansion, there will be a performance impact for reads and writes to the LUN. With this in mind, it is recommended to perform any expansion operations during maintenance windows or times of low I/O rates to the LUN. The symconfigure command monitors the expansion throughout the process, as seen in Figure 15. Once the expansion is complete, symconfigure will exit.

    Figure 15. symconfigure command during the expansion process

After the symconfigure command completes, the administrator must extend the partition that resides on the now larger LUN. The first step is to perform a rescan from the host via the disk management console or the diskpart CLI. Since this is a clustered environment, the rescan must be performed from all nodes in order to discover the new LUN size. Once the rescan is executed, the new size of the LUN should be seen from all hosts, as depicted in Figure 16.


Figure 16. Metavolume after the expansion

At the completion of the metavolume expansion, the diskpart command can be used to grow the partition into the newly discovered free space. From diskpart, either the volume or the partition needs to be selected prior to issuing the extend command. Figure 17 gives an example of using diskpart to select the target disk, select the appropriate partition on the disk, and then issue the extend command.

    Figure 17. diskpart commands to expand the NTFS partition
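The diskpart sequence in Figure 17 likely resembles the following minimal sketch (disk and partition numbers are illustrative):

    DISKPART> rescan
    DISKPART> select disk 4
    DISKPART> select partition 1
    DISKPART> extend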

    The extend command will grow the partition into the free space on the disk, as shown in Figure 18. Another rescan will need to be issued on all cluster nodes in order to discover the now larger partition. This completes the expansion process and the Exchange database can grow into the now larger volume on which it resides.

Figure 18. Metavolume following the diskpart extend command

This particular example was tested on a Windows Server 2008 failover cluster while running a light loadgen workload against the database LUN being expanded (~200 IOPS). The ~88 GB LUN was expanded in roughly 35 minutes.


Symmetrix replication technologies and management tools

In many environments, a key aspect of managing Symmetrix storage involves storage replication. The Symmetrix offers several native forms of replication, including TimeFinder, SRDF, and Open Replicator. Each of these technologies offers LUN-based replication either within a Symmetrix array (TimeFinder), between multiple Symmetrix arrays (SRDF or Open Replicator), or between the Symmetrix and other qualified storage arrays (Open Replicator). The following sections offer an introductory description of these technologies.

EMC TimeFinder family
The TimeFinder family of software provides a local copy or image of data, independent of the host, operating system, application, and database. TimeFinder local replication software helps to manage backup windows while minimizing or eliminating any impact on application and host performance. It allows for immediate application and host access during restores, also referred to as instant restore. TimeFinder also allows for fast data refreshes for activities such as data warehousing and decision support, as well as test and development.