Host Integration Basics



TRANSCRIPT

Upon completion of this module, you should be able to:
- Identify storage network topologies and requirements
- Describe PowerPath features and functions
- Describe Unisphere Agent and Unisphere Server Utility considerations

This module focuses on the various storage network topologies and the requirements to implement them. It also discusses PowerPath and other host utilities for integration with VNX.

This lesson covers the following topics:
- Identifying network technologies
- Identifying Fibre Channel components, addressing, and connectivity rules
- Identifying iSCSI components, addressing, and connectivity rules
- Explaining host connectivity requirements

Lesson 1: Storage Network Topologies and Requirements

During this lesson we will discuss the various storage network topologies and the requirements to implement them. We will identify the different network topologies, taking a closer look at the Fibre Channel and iSCSI implementations. We will delve into the Fibre Channel and iSCSI components and addressing, as well as the various rules associated with implementing those technologies. Finally, we will look at host connectivity requirements for the various storage network topologies.

Network Technologies

Network technologies are more flexible than channel technologies and provide greater distance capabilities. Most networks provide connectivity between client or host systems and carry a variety of data between the devices. A simple example is a network of desktop PCs within a company. This type of setup can provide each host with connectivity to file and print services, server-based applications, and corporate intranets. The networks these hosts are connected to provide shared bandwidth and the ability to communicate with many different systems. This flexibility results in greater protocol overhead and reduced performance.

Some characteristics of network technologies are:
- Lower performance
- Higher protocol overhead
- Dynamic configurations
- Longer distances
- Connectivity among heterogeneous types of systems

Storage Area Network Management
- SANs are networks of host and storage devices, often connected over Fibre Channel Fabrics
- A common method of managing the variety of devices on a SAN is SNMP, run out of band
- The FibreAlliance is defining the SNMP MIB to facilitate SAN management
- The Fibre Channel Management Integration (FCMGMT-INT) MIB provides a heterogeneous method of managing multiple devices across a SAN

Networks of host and storage devices (called Storage Area Networks, or SANs) are often connected over Fibre Channel Fabrics. A common method of managing the variety of devices on a SAN is SNMP (Simple Network Management Protocol), popular because it is widely supported and can be run out of band (which is advantageous because it does not rely on the Fibre Channel network). An open industry consortium called the FibreAlliance is defining an SNMP MIB (Management Information Base) to facilitate SAN management. The MIB is a group of parameters (variables) whose values define, and describe the status of, a network and its components. The Fibre Channel Management Integration (FCMGMT-INT) MIB provides a heterogeneous method of managing multiple devices across a SAN.
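Because the MIB is exposed over standard SNMP, any SNMP management tool or library can poll a switch out of band over IP. As a rough illustration only (not part of the original course material), the following Python sketch uses the third-party pysnmp library to read a single standard object from a hypothetical switch management address; FCMGMT-INT (FibreAlliance) objects would be read the same way once that MIB is loaded into the tool. The IP address and community string are assumptions for the example.

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    SWITCH_IP = "192.168.1.10"   # hypothetical switch management address
    COMMUNITY = "public"         # hypothetical read community

    # Query sysDescr as a basic out-of-band management check over SNMP v2c.
    error_indication, error_status, error_index, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData(COMMUNITY, mpModel=1),
               UdpTransportTarget((SWITCH_IP, 161)),
               ContextData(),
               ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0))))

    if error_indication:
        print("SNMP query failed:", error_indication)
    elif error_status:
        print("SNMP error:", error_status.prettyPrint())
    else:
        for var_bind in var_binds:
            print(" = ".join(x.prettyPrint() for x in var_bind))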


Fibre Channel
- Fibre Channel is a serial data transfer interface
  - Copper wire connection
  - Optical fiber connection
- High speed is obtained through:
  - Mapping networking and I/O protocols to Fibre Channel constructs
  - Encapsulating them and transporting them within Fibre Channel frames

(Slide diagram: Windows and Linux hosts with Host Bus Adapters connected through a Fibre Channel switch to storage.)

Fibre Channel is a serial data transfer interface that operates over copper wire and/or optical fiber at data rates up to 3200 MB/s (16 Gb/s connection). Networking and I/O protocols (such as SCSI commands) are mapped to Fibre Channel constructs, and then encapsulated and transported within Fibre Channel frames. This process allows high-speed transfer of multiple protocols over the same physical interface.

Fibre Channel systems are assembled from familiar types of components: adapters, hubs, switches, and storage devices. Host bus adapters are installed in computers and servers in the same manner as a SCSI host bus adapter or a network interface card (NIC). Hubs link individual elements together to form a shared bandwidth loop. Fibre Channel switches provide full bandwidth connections for highly scalable systems without a practical limit to the number of connections supported (16 million addresses are possible).

Note: The word fiber indicates the physical media. The word fibre indicates the Fibre Channel protocol and standards.

Host Bus Adapter (HBA)

An HBA is an I/O adapter that sits between the host computer's bus and the Fibre Channel loop, and manages the transfer of information between the two channels. In order to minimize the impact on host processor performance, the host bus adapter performs many low-level interface functions automatically or with minimal processor involvement.

In simple terms, a host bus adapter (HBA) provides I/O processing and physical connectivity between a server and storage. The storage may be attached using a variety of direct attached or storage networking technologies, including Fibre Channel, iSCSI, FICON, or SCSI. Host bus adapters provide critical server CPU off-load, freeing servers to perform application processing. As the only part of a storage area network that resides in a server, HBAs also provide a critical link between the SAN and the operating system and application software. In this role, the HBA enables a range of high-availability and storage management capabilities, including load balancing, failover, SAN administration, and storage management.

Fibre Channel Addressing
- Fibre Channel addresses are required to route frames from source to target
- 24-bit (3-byte) physical addresses are assigned when a Fibre Channel node is connected to the switch (or loop, in the case of FC-AL)

(Slide diagram: the FC initiator (HBA) as the source and the FC responder (SP ports) as the target, connected through an FC switch.)

Fibre Channel addresses are used to designate the source and destination of frames in the Fibre Channel network. The Fibre Channel address field is 24 bits (3 bytes) in length. Unlike Ethernet, these addresses are not burned in, but are assigned when the node either enters the loop or is connected to the switch. There are reserved addresses, which are used for services rather than interface addresses.
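In a switched fabric the 24-bit address is commonly broken into three one-byte fields: Domain ID (the switch that assigned the address), Area ID (a port group on that switch), and Port ID (the individual device, or AL_PA on a loop). A small illustrative Python sketch (not from the course) that splits a fabric address into those fields:

    def decode_fc_address(fcid: int) -> dict:
        """Split a 24-bit Fibre Channel address (FCID) into its byte fields."""
        if not 0 <= fcid <= 0xFFFFFF:
            raise ValueError("FCID must fit in 24 bits")
        return {
            "domain_id": (fcid >> 16) & 0xFF,  # switch that assigned the address
            "area_id": (fcid >> 8) & 0xFF,     # port group within the switch
            "port_id": fcid & 0xFF,            # individual port (AL_PA on a loop)
        }

    # Example: FCID 0x010400 -> domain 1, area 4, port 0
    print(decode_fc_address(0x010400))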

Viewing SP Fibre Channel Port Properties

VNX Fibre Channel ports can be viewed from the Unisphere GUI as well as with Navisphere CLI commands. Navigate to the System > Storage Hardware menu, then expand the tree for I/O Modules to view the physical locations and properties of a given port. The example shows SPA expanded to display the I/O modules and ports. Highlight the port number in the right-hand window (Port 1 in the example). To display port properties, highlight the port and select Properties. The WWN can be determined for the port, as well as other parameters such as speed and initiator information. The VNX can contain FC, FCoE, and iSCSI ports depending on the I/O module installed.

Switched Fabric Topology
- Switched Fabric is a Fibre Channel topology in which many devices connect with each other via Fibre Channel switches
- This topology allows the greatest number of connections, with a theoretical 16 million devices per Fabric
- Frames are routed between source and destination by the Fabric

A Switched Fabric is one or more Fibre Channel switches connected to multiple devices. The architecture involves a switching device, such as a Fibre Channel switch, interconnecting two or more nodes. Rather than traveling around an entire loop, frames are routed between source and destination by the Fabric.


Single Initiator Zoning
- Always put ONLY one HBA in a zone with storage ports
- Each HBA port can only talk to storage ports in the same zone
- HBAs and storage ports may be members of more than one zone
- HBA ports are isolated from each other to avoid potential problems associated with the SCSI discovery process

(Slide diagram: a single Emulex HBA zoned to two VNX ports.)

Under single-HBA zoning, each HBA is configured with its own zone. The members of the zone consist of the HBA and one or more storage ports with the volumes that the HBA will use. In the example, an Emulex HBA is zoned to two VNX ports. This zoning practice provides a fast, efficient, and reliable means of controlling the HBA discovery/login process. Without zoning, the HBA will attempt to log in to all ports on the Fabric during discovery and during the HBA's response to a state change notification. With single-HBA zoning, the time and Fibre Channel bandwidth required to process discovery and the state change notification are minimized.

Two very good reasons for single-HBA zoning:
- It cuts down on the reset time for any change made in the state of the Fabric. Only the nodes within the same zone will be forced to log back into the Fabric after an RSCN (Registered State Change Notification).
- When a node's state has changed in a Fabric (i.e., a cable moved to another port), it will have to perform the Fabric login process again before resuming normal communication with the other nodes it is zoned with. If there is only one SCSI initiator (HBA) in the zone, the amount of disrupted communication is reduced. If you had a zone with two HBAs and one of them had a state change, then BOTH would be forced to log in again, causing disruption to the other HBA that did not have any change in its Fabric state. Performance can be severely impacted by this.
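As a rough illustration (not part of the original material), the following Python sketch builds single-initiator zone definitions from hypothetical WWPN lists: each zone contains exactly one HBA port plus the storage ports it needs.

    # Hypothetical WWPNs for two host HBA ports and two VNX SP ports.
    hba_wwpns = {
        "host1_hba0": "10:00:00:00:c9:12:34:56",
        "host1_hba1": "10:00:00:00:c9:12:34:57",
    }
    storage_wwpns = {
        "vnx_spa_p0": "50:06:01:60:41:e0:16:3a",
        "vnx_spb_p0": "50:06:01:68:41:e0:16:3a",
    }

    def single_initiator_zones(hbas: dict, targets: dict) -> dict:
        """Return one zone per HBA port; each zone holds that HBA plus all target ports."""
        zones = {}
        for hba_name, hba_wwpn in hbas.items():
            zones[f"z_{hba_name}"] = [hba_wwpn] + list(targets.values())
        return zones

    for name, members in single_initiator_zones(hba_wwpns, storage_wwpns).items():
        print(name, "->", ", ".join(members))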

iSCSI Overview

(Slide diagram: hosts and storage attached to IP networks, with an iSCSI/FC gateway bridging to a Fibre Channel SAN.)

iSCSI is a native IP-based protocol for establishing and managing connections between IP-based storage devices, hosts, and clients. It provides a means of transporting SCSI packets over TCP/IP. iSCSI works by wrapping SCSI commands into TCP and transporting them over an IP network. Since iSCSI is IP-based traffic, it can be routed or switched on standard Ethernet equipment.

Traditional Ethernet adapters or NICs are designed to transfer file-level data packets among PCs, servers, and storage devices. NICs, however, do not usually transfer block-level data, which has traditionally been handled by a Fibre Channel host bus adapter. Through the use of iSCSI drivers on the host or server, a NIC can transmit packets of block-level data over an IP network. The block-level data is placed into a TCP/IP packet so the NIC can process and send it over the IP network.

Today, there are three block-storage-over-IP approaches: iSCSI, FCIP, and iFCP. Unlike the other two, iSCSI carries no Fibre Channel content. If required, bridging devices can be used between an IP network and a SAN.

iSCSI Device Options

NICs do not traditionally transfer block-level data. To do so, the data needs to be placed into a TCP/IP packet. Through the use of iSCSI drivers on the host or server, a NIC can transmit packets of block-level data over an IP network. When using a NIC, the server handles the packet creation of block-level data and performs all of the TCP/IP processing. This is extremely CPU intensive and lowers the overall server performance.

The TCP/IP processing performance bottleneck has been the driving force behind hardware implementations of iSCSI within specialized NICs (1 Gb/s and 10 Gb/s) that offload TCP and iSCSI processing into hardware:
- Partial offload - TCP offload only (TOE)
- Full offload - iSCSI and TCP offload (iSCSI HBA)

This relieves the host CPU from iSCSI and TCP processing, but only increases performance if the applications are CPU bound. With the TCP/IP stack implemented in hardware, vendors have been able to demonstrate wire-speed data transfers. Note: This is not required for building iSCSI solutions.

iSCSI Names
- An iSCSI address uniquely identifies nodes
- There are two variations:
  - iqn. (iSCSI Qualified Name), e.g. iqn.1992-04.com.emc:cx.fcntr073900083.a4
  - eui. (Extended Unique Identifier), e.g. eui.5006016141e0163a

All iSCSI nodes are identified by an iSCSI name. An iSCSI name is neither the IP address nor the DNS name of an IP host. Names enable iSCSI storage resources to be managed regardless of address. An iSCSI node name is also the SCSI device name, which is the principal object used in authentication of targets to initiators and initiators to targets. iSCSI addresses can be one of two types: iSCSI Qualified Name (IQN) or IEEE naming convention (EUI).

IQN format - iqn.yyyy-mm.com.xyz:aabbccddeeffgghh, where:
- iqn - naming convention identifier
- yyyy-mm - point in time when the .com domain was registered
- com.xyz - domain of the node, backwards
- aabbccddeeffgghh - device identifier (can be a WWN, the system name, or any other vendor-implemented standard)

EUI format - eui.64-bit WWN, where:
- eui - naming prefix
- 64-bit WWN - FC WWN of the host

Within iSCSI, a node is defined as a single initiator or target. These definitions map to the traditional SCSI target/initiator model. iSCSI names are assigned to all nodes and are independent of the associated address.
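A small illustrative Python sketch (not from the course) that classifies an iSCSI name and, for IQNs, pulls out the date and reversed-domain fields:

    import re

    # IQN: iqn.yyyy-mm.<reversed-domain>[:<device-identifier>]
    IQN_RE = re.compile(r"^iqn\.(\d{4}-\d{2})\.([^:]+)(?::(.+))?$")
    # EUI: eui. followed by 16 hex digits (a 64-bit identifier)
    EUI_RE = re.compile(r"^eui\.[0-9a-fA-F]{16}$")

    def classify_iscsi_name(name: str) -> dict:
        """Return the naming type and, for IQNs, its component fields."""
        m = IQN_RE.match(name)
        if m:
            date, reversed_domain, device = m.groups()
            return {"type": "iqn", "date": date,
                    "reversed_domain": reversed_domain, "device": device}
        if EUI_RE.match(name):
            return {"type": "eui", "wwn": name[4:]}
        return {"type": "invalid"}

    print(classify_iscsi_name("iqn.1992-04.com.emc:cx.fcntr073900083.a4"))
    print(classify_iscsi_name("eui.5006016141e0163a"))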

iSCSI Front-end Port Properties


Front-end connections in an iSCSI environment consist of iSCSI NICs and TOEs. Right-clicking on a selected port displays the Port Properties. This example shows an iSCSI port, Port 0, which represents the physical location of the port in the chassis and matches the label (0 in this example) on the I/O module hardware in the chassis. A-4 in this example means:
- A represents the SP (A or B) on which the port resides
- 4 represents the software-assigned logical ID for this port

The logical ID and the physical location may not always match.

iSCSI CHAP Security
- Challenge Handshake Authentication Protocol
  - CHAP target sends a challenge to the CHAP initiator
  - Initiator responds with a calculated value to the target
  - Target checks the calculated value, and if it matches, login continues
  - If mutual CHAP is enabled, the initiator will authenticate the target using the same process
- One-way and mutual CHAP
  - Target and initiator must be configured the same
- Configuration
  - Unisphere on the array
  - Vendor-specific tools for host HBAs
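For illustration only (not from the course material), this Python sketch shows the CHAP calculation defined in RFC 1994 that both sides perform: the response is an MD5 hash over the challenge identifier, the shared secret, and the challenge bytes, so the secret itself never crosses the wire. The secret and identifier values are hypothetical.

    import hashlib
    import os

    def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
        """RFC 1994 CHAP: response = MD5(identifier || secret || challenge)."""
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    # Hypothetical values: the target picks the identifier and a random challenge.
    secret = b"examplechapsecret"        # shared secret configured on both sides
    identifier = 0x01
    challenge = os.urandom(16)

    # Initiator computes its response; the target repeats the same calculation
    # with its copy of the secret and compares the two values.
    initiator_resp = chap_response(identifier, secret, challenge)
    target_expect = chap_response(identifier, secret, challenge)
    print("login continues" if initiator_resp == target_expect else "login rejected")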

CHAP is an authentication scheme used by Point-to-Point servers to validate the identity of remote clients. The connection is based upon the peers sharing a password or secret. iSCSI-capable storage systems support both one-way and mutual CHAP. For one-way CHAP, each target can have its own unique CHAP secret. For mutual CHAP, the initiator itself has a single secret with all targets. CHAP security can be set up either as one-way CHAP or mutual CHAP. You must set up the target (storage array) and the initiator (host) to use the same type of CHAP to establish a successful login. Unisphere is used to configure CHAP on the storage array. To configure CHAP on the host, use the vendor tools for either the iSCSI HBA or network interface card (NIC) installed on each initiator host. For a QLogic iSCSI HBA, use SANsurfer software. For a standard NIC on a Windows host, use Microsoft iSCSI Initiator software. On a Linux host, CHAP is configured by entering the appropriate information in the /etc/iscsi.conf file.

iSCSI Network Requirements
- LAN configuration allows Layer 2 (switched) and Layer 3 (routed) networks; Layer 2 networks are recommended over Layer 3 networks
- The network should be dedicated solely to the iSCSI configuration; for performance reasons EMC recommends that no traffic apart from iSCSI traffic be carried over it
- If using MDS switches, EMC recommends creating a dedicated VSAN for all iSCSI traffic
- The network must be a well-engineered network with no packet loss or packet duplication
- The vLAN tagging protocol is supported

LAN configuration allows Layer 2 (switched) and Layer 3 (routed) networks. Layer 2 networks are recommended over Layer 3 networks. The network should be dedicated solely to the iSCSI configuration. For performance reasons EMC recommends that no traffic apart from iSCSI traffic be carried over it. If using MDS switches, EMC recommends creating a dedicated VSAN for all iSCSI traffic. CAT5 network cables are supported for distances up to 100 meters. If cabling is to exceed 100 meters, you must use CAT6 network cables. The network must be a well-engineered network with no packet loss or packet duplication. When planning the network, care must be taken to make certain that the utilized throughput will never exceed the available bandwidth. The vLAN tagging protocol is supported. Link Aggregation, also known as NIC teaming, is not supported.

Ping
- Checks basic connectivity

Trace Route
- Provides information on the number of hops required for the packet to reach its destination

iSCSI Basic Connectivity Verification

iSCSI basic connectivity verification includes Ping and Trace Route. They are available on the Network Settings menu in Unisphere. Ping provides a basic connectivity check to ensure the host can see the array and vice versa. This command can be run from the host, Unisphere, and the storage system's SP. Trace Route provides the user with information on how many hops are required for the packet to reach its final destination. This command can also be run from the host, Unisphere, and the storage system's SP. The first entry should be the gateway defined in the iSCSI port configuration.
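Beyond ICMP ping, a quick way to confirm that an iSCSI target portal is reachable from a host is to open a TCP connection to the standard iSCSI port (3260). This Python sketch is illustrative only and assumes a hypothetical SP portal address:

    import socket

    ISCSI_PORT = 3260                 # standard iSCSI target portal port
    SP_PORTAL = "192.168.10.20"       # hypothetical iSCSI front-end port IP

    def portal_reachable(address: str, port: int = ISCSI_PORT, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to the portal succeeds within the timeout."""
        try:
            with socket.create_connection((address, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(f"{SP_PORTAL}:{ISCSI_PORT} reachable:", portal_reachable(SP_PORTAL))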

iSCSI and FC Host Connectivity Rules
- All connections from a host to an array must use the same protocol; connections must be all FC or all iSCSI
- NIC and HBA iSCSI connections cannot be mixed in the same server; a server must have all NIC iSCSI connections or all HBA iSCSI connections
- Do not connect a single server to both an FC storage system and an iSCSI storage system
- Servers with iSCSI HBAs and servers with NICs can connect to the same iSCSI storage system
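As a rough, hypothetical illustration (not from the course), the following Python sketch checks one host's connection inventory against the first two rules above; the data structure and field names are assumptions for the example.

    # Each connection is described by its protocol and, for iSCSI, the initiator type.
    host_connections = [
        {"protocol": "iscsi", "initiator": "nic"},
        {"protocol": "iscsi", "initiator": "nic"},
    ]

    def check_connectivity_rules(connections: list) -> list:
        """Return a list of rule violations for one host's connections."""
        violations = []
        protocols = {c["protocol"] for c in connections}
        if len(protocols) > 1:
            violations.append("Host mixes FC and iSCSI connections to storage.")
        iscsi_initiators = {c["initiator"] for c in connections if c["protocol"] == "iscsi"}
        if len(iscsi_initiators) > 1:
            violations.append("Host mixes NIC and HBA iSCSI initiators.")
        return violations

    print(check_connectivity_rules(host_connections) or "All connectivity rules satisfied.")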

The rules concerning iSCSI and FC host connectivity are detailed below:
- All connections from a host to an array must use the same protocol; connections from a host must be all FC or all iSCSI
- NIC and HBA iSCSI connections cannot be mixed in the same server; a server must have all NIC iSCSI connections or all HBA iSCSI connections
- Do not connect a single server to both an FC storage system and an iSCSI storage system
- Servers with iSCSI HBAs and servers with NICs can connect to the same iSCSI storage system

During this lesson the following topics were covered:
- Identifying network technologies
- Identifying Fibre Channel and iSCSI components and addressing
- Explaining FC and iSCSI connectivity rules
- Explaining host connectivity requirements

Lesson 1: Summary

During this lesson we covered the various storage network topologies and the requirements to implement them. We looked at the Fibre Channel and iSCSI implementations and their components, addressing, and the rules associated with implementing those technologies. We also investigated the host connectivity requirements for the various storage network technologies.

This lesson covers the following topics:
- Describe PowerPath features and functions
- Describe Unisphere Agent and Unisphere Server Utility considerations
- Implement host utilities

Lesson 2: PowerPath and Other Host Utilities

This lesson covers the PowerPath features and functionality, including discussions on the various failover mechanisms and LUN access technologies used on the various VNX arrays. Also covered are the other host utilities and their installation considerations.

PowerPath
- Provides path management
- Host-based software
- Multiple storage system support
- Multiple OS support
- Supports Fibre Channel and iSCSI

PowerPath is an EMC-supplied, host-based layered software product that provides path management and may be installed on any supported host platform. It provides all the essential multipathing features to meet High Availability requirements at the application level. PowerPath operates with several storage systems, on several operating systems, and supports both Fibre Channel and iSCSI data channels (with Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 non-clustered hosts only, parallel SCSI channels are supported).

Features and Functionality

The core features of PowerPath are automated path failover, which includes an automated path restore function, and dynamic load balancing.

Regardless of cause, the direct result of a path failure is the failure of I/O requests on the corresponding native device from the host. These failures can include HBA/NIC failures, interconnect failures (cable, patch panel, etc.), switch failures, and interface and interface port failures. PowerPath auto-detects such I/O failures, confirms the path failure via subsequent retries on the same path, and then reroutes the I/O request to alternative paths to the same device. The application is completely unaware of the I/O rerouting; thus, failover is fully transparent to the application. After the failure, PowerPath continues testing the failed path. If the path passes the test, PowerPath resumes using it.

PowerPath has built-in algorithms that attempt to balance I/O load over all available, active paths to a LUN. This is done on a host-by-host basis. It maintains statistics on all I/O for all paths. For each I/O request, PowerPath intelligently chooses the least-burdened available path, depending on the load-balancing and failover policy in effect. If an appropriate policy is specified, all paths in a PowerPath system have approximately the same load.

By design, PowerPath is a configure-it-and-forget-it product for most typical deployments. Any subsequent manual reconfiguration is required only in highly specific situations, e.g. when new LUNs or new paths are provisioned to a host on the fly, and uptime requirements prohibit a subsequent host reboot. The fundamental strength of PowerPath is the default, out-of-the-box functionality that it provides for automatic path failover, automatic path restore, and load balancing. This greatly simplifies host-side administration for multipathed storage. It reduces essential administrative tasks to routine monitoring, manual spot checks on path availability, and examining PowerPath logs when path faults are suspected.

Array and OS Support

PowerPath supports all EMC-branded storage arrays for both Fibre Channel and iSCSI implementations. PowerPath also supports several third-party arrays including IBM ESS (Shark), Hitachi, HP, EMA, and ESG platforms. PowerPath also supports most major OS versions including Windows, Solaris, HP-UX, and Linux (Red Hat and SUSE). For a complete list of supported arrays and OSs, see E-Lab Advisor.

Integration with Volume Managers

Since PowerPath sits above host native volume managers, PowerPath devices can be managed just as any other device. The following host LVMs have been qualified for PowerPath:
- Solstice DiskSuite, Veritas, VCS on Solaris. With Veritas, multipathed devices should be excluded from DMP control; the recommendation is to use native devices within Veritas (not emcpower pseudos)
- Veritas, native LVM on HP-UX
- Veritas, native LVM on AIX. Add hdiskpower devices to AIX volume groups; this may be done via smitty
- Sistina LVM on Linux

The PowerPath Installation and Administration guide for each supported operating system provides information on integration with specific third-party volume managers. In Invista implementations, host-based PowerPath provides the load balancing on the front end, i.e., from the HBAs to the virtual array ports.

Device States

PowerPath provides the facility to continuously monitor the state of all configured paths to a LUN. PowerPath manages the state of each path to each logical device independently. From PowerPath's perspective, a path is either alive or dead:
- A path is alive if it is usable; PowerPath can direct I/O to this path.
- A path is dead if it is not usable; PowerPath does not direct user I/O to this path. PowerPath marks a path dead when it fails a path test; it marks the path alive again when it passes a path test.

To determine whether a path is operational, PowerPath uses a path test. A path test is a sequence of I/Os PowerPath issues specifically to ascertain the viability of a path. If a path test fails, PowerPath disables the path and stops sending I/O to it. After a path fails, PowerPath continues testing it periodically to determine if it is fixed. If the path passes a test, PowerPath restores it to service and resumes sending I/O to it. The storage system, host, and application remain available while the path is restored. The time it takes to do a path test varies: testing a working path takes milliseconds; testing a failed path can take several seconds, depending on the type of failure.

Device Modes

The PowerPath mode setting can be configured to either Active or Standby for each native path to a LUN. Since this can be tweaked on a per-LUN basis, it becomes possible to reserve the bandwidth of a specific set of paths for a set of applications on the host. I/O is usually routed to Standby paths only when all Active paths to the LUN are dead. When multiple Active paths are available, PowerPath attempts to balance load over all available Active paths.

Load-balancing behavior is influenced by the mode setting:
- PowerPath will route I/O requests only to the Active paths
- Standby paths will be used only if all Active paths fail
- Mode settings can also be used for dedicating specific paths to specific LUNs (and thus to specific applications)
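The exact policies are internal to PowerPath, but the general idea of path states, Active/Standby modes, and least-burdened path selection can be sketched in a few lines of Python. This is purely an illustrative model, not PowerPath's actual algorithm:

    from dataclasses import dataclass

    @dataclass
    class Path:
        name: str
        mode: str = "active"       # "active" or "standby"
        alive: bool = True         # set False when a path test fails
        inflight_ios: int = 0      # crude measure of how busy the path is

    def choose_path(paths):
        """Pick the least-busy alive Active path; fall back to Standby paths if none."""
        alive_active = [p for p in paths if p.alive and p.mode == "active"]
        candidates = alive_active or [p for p in paths if p.alive and p.mode == "standby"]
        if not candidates:
            raise RuntimeError("No usable paths to the LUN")
        return min(candidates, key=lambda p: p.inflight_ios)

    paths = [Path("spa_port0", inflight_ios=4),
             Path("spa_port1", inflight_ios=2),
             Path("spb_port0", mode="standby")]
    paths[1].alive = False          # simulate a failed path test on spa_port1
    print("I/O routed via:", choose_path(paths).name)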

Active/Passive Arrays: Failover Mechanism

Two types of path failover:
- Array-initiated LUN trespass
  - Typical cause: an SP fails or needs to reboot
  - PowerPath logs a follow-over
- Host-initiated LUN trespass
  - PowerPath detects a path failure, e.g. due to a cable break, port failure, etc.
  - PowerPath initiates a trespass and logs the event

(Slide diagram: a host with paths through Fabric A and Fabric B to SP-A (passive) and SP-B (active), with the LUN trespassing between SPs.)

With Active/Passive arrays such as a VNX, there is a concept of LUN ownership. On a VNX array, every LUN is owned by one of the two Storage Processors. Host paths to the currently active SP are active paths, and can service I/O. Paths to the same LUN via the other SP are passive; PowerPath is aware of them, but does not route I/O requests to them. When LUN ownership changes to the other SP, the active paths for that LUN become passive, and vice versa.

A LUN trespass can occur in one of two ways. The trespass can be initiated by the array itself, when it detects total failure of an SP, or when the SP needs to reboot, e.g., during a non-disruptive update of array code. When this happens, PowerPath becomes aware of the change in LUN ownership, and follows over the LUN to the other SP. This follow-over is reported by PowerPath's logging mechanism. A LUN trespass can also occur when an I/O fails due to path failure from the HBA to the SP, e.g. due to a cable break, or one of various fabric-related causes. When this happens, PowerPath initiates the LUN trespass and logs the trespass. When there are multiple available paths to each SP, every path to the currently active SP must fail before PowerPath initiates a trespass. The PowerPath mechanisms described above, follow-over and host-initiated trespass, apply to other supported Active/Passive arrays as well.

Active/Active Mode (ALUA)
- Asymmetric Logical Unit Access (ALUA)
  - Asymmetric accessibility to logical units through various ports
- Request forwarding implementation
  - Communication method to pass I/Os between SPs
  - Software on the controller forwards requests to the other controller
- Not an Active-Active array model!
  - I/Os are not serviced by both SPs for a given LUN
  - I/Os are redirected to the SP owning the LUN

Front-End Fault Masking / Back-End Fault Masking

ALUA (Asymmetric Logical Unit Access) is a request forwarding implementation. In other words, the LUN is still owned by a single SP; however, if I/O is received by an SP that does not own a LUN, that I/O is redirected to the owning SP. It is redirected using a communication method that passes I/O to the other SP.

ALUA terminology: the optimized path is a path to the SP that owns the LUN; a non-optimized path is a path to an SP that doesn't own the LUN. This implementation should not be confused with an active-active model, because I/O is not serviced by both SPs for a given LUN (like it is in a Symmetrix array). You still have LUN ownership in place; I/O is redirected to the SP owning the LUN. One port may provide full performance access to a logical unit, while another port, possibly on a different physical controller, provides either lower performance access or supports a subset of the available SCSI functionality. It uses failover mode 4.

In the event of a front-end path failure there is no need to trespass LUNs immediately. The Upper Redirector driver routes the I/O to the SP owning the LUNs through the CMI channel. In the event of a back-end path failure there is no need to trespass LUNs immediately. The Lower Redirector routes the I/O to the SP owning the LUNs through the CMI channel. The host is unaware of the failure and the LUNs do not have to be trespassed. An additional benefit of the Lower Redirector is internal, in that the replication software drivers (including metaLUN components) are also unaware of the redirect.
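Hosts learn which paths are optimized through the SCSI REPORT TARGET PORT GROUPS data that the SPs return. As a rough, simplified illustration (not from the course), this Python sketch maps the standard ALUA asymmetric access state codes to names and picks optimized paths first; the path list itself is hypothetical.

    # Standard ALUA asymmetric access states (low nibble of the state byte).
    ALUA_STATES = {
        0x0: "active/optimized",
        0x1: "active/non-optimized",
        0x2: "standby",
        0x3: "unavailable",
    }

    # Hypothetical view of one LUN's paths as reported by the two SPs.
    paths = [
        {"name": "spa_port4", "alua_state": 0x0},   # owning SP
        {"name": "spb_port4", "alua_state": 0x1},   # non-owning SP
    ]

    def preferred_paths(path_list):
        """Prefer active/optimized paths; fall back to active/non-optimized."""
        optimized = [p for p in path_list if p["alua_state"] == 0x0]
        return optimized or [p for p in path_list if p["alua_state"] == 0x1]

    for p in paths:
        print(p["name"], "->", ALUA_STATES.get(p["alua_state"], "other"))
    print("I/O will use:", [p["name"] for p in preferred_paths(paths)])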

Symmetrical Active-Active: Overview

CX: Active-Passive
- Only one SP serves I/Os for a given LUN
- The remaining SP is acting as standby
- The SP trespasses the LUN when paths fail, and host software adjusts to the new path

VNX: Active-Active (ALUA)
- The LUN is presented across both SP paths via internal links (cache coherency links)
- Only one SP is actively processing I/O to the back end
- The host initiates a trespass when paths fail

Active-Active (Symmetrical)
- Both SPs serve I/Os to and from a given LUN
- If a path fails, there is no disruption to the LUN
- Performance is improved up to 2X
- Classic LUNs only!

Starting with the VNX with MCx, the array now supports Symmetrical Active-Active access to the LUNs. With the CX family of CLARiiON there was Active-Passive connectivity to the LUNs. This meant that there was only one set of active paths to the LUN, through the owning SP. When all the paths failed, the LUN was trespassed and the host had to adjust to the new path, which could cause a significant performance delay. Then came the VNX, which had Asymmetrical Active-Active connectivity through the use of ALUA. This allowed the LUN to be seen on paths through both SPs, but only the owning SP processed I/O. If the active path through the owning SP failed, the host was able to initiate the trespass of the LUN to the other SP and continue on. This caused only a minor delay while the trespass took place. With the introduction of VNX with MCx, we now have Symmetrical Active-Active connectivity to backend LUNs, currently supported only for Classic LUNs. With this configuration a LUN can be seen and accessed through either SP equally. If a path or SP should fail, there is no delay in I/O to the LUN. This dual-SP access also results in up to a 2X boost in performance. Note: All data services which require pool LUNs are not available with the Classic LUNs needed for Active/Active access.

Asymmetric LUN Access: VNX
- The SP reports the SCSI descriptor TARGET_PORT_GROUPS
- States: Active/Optimized, Active/Non-Optimized

(Slide diagram: before and after an SPA failure, showing the optimized path through the owning SP and the non-optimized path through the peer SP.)

Shown here is a closer look at how Asymmetrical Active-Active works. On the left is a host accessing a LUN on a VNX. The LUN is owned by SPA and so has an active/optimized path to send all I/O down. The path through SPB is active/non-optimized, meaning the host can see the LUN but no I/Os will be sent through it. If a path or SPA should happen to fail, ALUA will see the failure and, through the alternate non-optimized path, cause the LUN to trespass over to the other SP. On the right side we can now see that I/O has resumed to the LUN through the alternate SP after a short delay, and ownership of the LUN has been transferred to SPB. It will remain this way until it is manually trespassed back, or the other paths come back online and the current paths to SPB fail.

Symmetric LUN Access: VNX with MCx

(Slide diagram: the LUN is owned by SPA, but both SPs send and receive I/O over Active/Optimized paths; Classic LUNs only (OE R5.33). If SPA fails, I/O continues through the remaining SP and paths with NO delay.)

Now for a look at the Symmetric Active-Active of VNX with MCx. On the left is a host accessing a LUN on a VNX with MCx. Notice that all the paths through both SPA and SPB show a status of Active/Optimized. This means that I/O can go through both paths equally and LUN ownership does not make a difference, even though the LUN is owned by SPA. If all paths to SPA, or SPA itself, fail, I/O will continue to the LUN through SPB as though there were no failure at all. Ownership of the LUN does not trespass to the remaining SP. Once the SP or path failure has been repaired, I/O will resume going through both SPs automatically. It should be noted that for a host to take advantage of using both SPs for I/O, the host operating system must support Symmetric Active-Active, or be running software such as PowerPath 5.7 which can take advantage of this feature.

LUN Parallel Access Locking Service
- Required for Active-Active access
- A write I/O operation acquires a lock on an LBA address on both SPs
- Lock requests are sent over the CMI
- Lock requests are smaller/quicker than the entire I/O

(Slide diagram: SPA and SPB each holding a lock on the LUN, with lock requests exchanged over the CMI.)
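Purely as an illustrative model (not EMC's implementation), the following Python sketch shows the idea of per-LBA-range locks that let two controllers write to disjoint regions of a LUN in parallel while serializing overlapping writes:

    import threading

    class LbaRangeLocks:
        """Toy lock manager: exclusive locks on LBA ranges of a single LUN."""
        def __init__(self):
            self._held = []                     # list of (start_lba, end_lba)
            self._cond = threading.Condition()

        def acquire(self, start_lba: int, length: int):
            end_lba = start_lba + length - 1
            with self._cond:
                # Wait until no held range overlaps the requested one.
                while any(s <= end_lba and start_lba <= e for s, e in self._held):
                    self._cond.wait()
                self._held.append((start_lba, end_lba))

        def release(self, start_lba: int, length: int):
            with self._cond:
                self._held.remove((start_lba, start_lba + length - 1))
                self._cond.notify_all()

    locks = LbaRangeLocks()
    locks.acquire(0, 128)       # "SPA" writes LBAs 0-127
    locks.acquire(128, 128)     # "SPB" writes LBAs 128-255 in parallel, no overlap
    locks.release(0, 128)
    locks.release(128, 128)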

In order for a host to be able to write I/O through both SPs at the same time, a new feature called the LUN Parallel Access Locking Service has been created. This service allows each SP to reserve Logical Block Addresses (LBAs) on a LUN at which it will write its information. In the example above, a host sends information down paths to both SPs. The SPs communicate to each other that they will be writing to a specific LUN and where. The information sent over the CMI is much smaller and happens more quickly than the actual writing to the LUN, and so has no impact on the writing process. The SPs then use the locks to write to their sections of the LUN in parallel with each other. Using the same process, when a host is going to read from a LUN on both SPs, shared locks are given out for the read I/Os. This way both SPs can access the same area on the LUN (Symmetric Active/Active) for increased performance.

VNX with Symmetric Active-Active: Benefits
- Lower risk with increased availability within data centers
- Improved availability
  - All paths are active
  - No trespass during path failure
  - No trespass during NDU
  - No setup on the VNX or host side
- Improved performance
  - All paths serving I/O
  - Up to 2X improvement
- Eliminate application timeouts
- Improve application throughput
- Multi-path load balancing

When talking about the benefits of Symmetrical Active-Active access on the Next-Generation VNX, the main points to come away with are the added reliability and availability of the system. By this we mean that now all paths can be active at the same time, although only for Classic LUNs in this release. During path or SP failures, or NDUs, there is no longer a need for trespassing LUNs to the alternate SP with the delay involved, improving the reliability of the system. The Active-Active feature is easy to implement: there are no settings to configure on the VNX, and any host OS that is capable of using this feature does so automatically. And finally, with all paths able to serve I/O to hosts, there is up to a 2X performance boost possible.

Requirements for Unisphere Host Agent, Unisphere Server Utility

To run the host agent, CLI, or server utility, your server must meet the following requirements:
- Run a supported version of the operating system
- Have EMC VNX supported HBA hardware and driver installed
- Be connected to each SP in each storage system, either directly or through a switch or hub; each SP must have an IP connection
- Have a configured TCP/IP network connection to any remote hosts that you will use to manage the server's storage systems, including any host whose browser you will use to access Unisphere, any Windows Server 2008 or 2003 host running Storage Management Server software, and any AIX, HP-UX, IRIX, Linux, NetWare, Solaris, Windows Server 2008 or 2003 host running the CLI

If you want to use the CLI on the server to manage storage systems on a remote server, the server must be on a TCP/IP network connected to both the remote server and each SP in the remote server's storage system. The remote server can be running AIX, HP-UX, Linux, Solaris, or the Windows operating system.

Installing Unisphere Host Agent or Server Utility

Depending on your application needs, you can install the host agent, server utility, or both on an attached server. If you want to install both applications, you must install revision 1.0.0.0474 or later of the Unisphere Server Utility. The registration feature of the server utility will be disabled, and the host agent will be used to register the server's NICs or HBAs to the storage system. Note: if the server utility is used while the host agent is running, a scan of the new devices will fail.

Unisphere Server Utility: Install Rules for NIC Initiators
- Must use the Unisphere Server Utility
- For a Microsoft iSCSI initiator, you must install the Microsoft iSCSI Software Initiator
- Do not install the server utility on a VMware Virtual Machine
- Do not disable the Registration Service option
- Reboot the server when the installation prompts you to

If you have a Microsoft iSCSI initiator, you must install the Microsoft iSCSI Software Initiator because the Unisphere Server Utility uses it to configure iSCSI connections. Note: Do not install the server utility on a VMware Virtual Machine; you can install the utility on a VMware ESX Server.

Do not disable the Registration Service option (it is enabled by default). The Registration Service option automatically registers the server's NICs or HBAs with the storage system after the installation, and updates server information to the storage system whenever the server configuration changes (for example, when you mount new volumes or create new partitions). If you have the host agent installed and you are installing revision 6.22.20 or later of the server utility, the server utility's Registration Service feature will not be installed. Prior to revision 6.22.20 of the server utility, you could not install both applications on the same server.

You must reboot the server when the installation dialog prompts you to reboot. If the server is connected to the storage system with NICs and you do not reboot before you run the Microsoft iSCSI Software Initiator or server utility, the NIC initiators will not log in to the storage system.

During this lesson the following topics were covered:
- Describe PowerPath features and functions
- Describe Unisphere Agent and Unisphere Server Utility considerations
- Implement host utilities

Lesson 2: Summary

This lesson covered the PowerPath features and functionality, including discussions on the various failover mechanisms and LUN access technologies used on the various VNX arrays. Also covered were the other host utilities available and their installation considerations.

Summary

Key points covered in this module:
- Each network technology has key components, addressing, and connectivity requirements that must be followed to enable host connectivity.
- PowerPath provides path management essential for multipathing and high availability.

This module covered the various storage network topologies and the requirements to implement them. It also covered the PowerPath features and functionality, and other host utilities and their installation considerations.
