Page 1: TechBook: iSCSI SAN Topologies

iSCSI SAN Topologies

Version 2.0

• iSCSI SAN Topology Overview

• TCP/IP and iSCSI Overview

• Use Case Scenarios

Ron Dharma, Mugdha Kulkarni, Vinay Jonnakuti, Jonghoon (Jason) Jeong, Steven Chung


Copyright © 2011 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Part number H8080.2


Contents

Preface............................................................................................................................ 11

Chapter 1  TCP/IP Technology

TCP/IP overview .......... 18
    Transmission Control Protocol .......... 18
    Internet Protocol .......... 20
TCP terminology .......... 21
TCP error recovery .......... 25
TCP network congestion .......... 28
IPv6 .......... 29
    Features of IPv6 .......... 29
    Deployment status .......... 31
    Addressing .......... 32
    IPv6 packet .......... 37
    Transition mechanisms .......... 38
Internet Protocol security (IPsec) .......... 40
    Tunneling and IPsec .......... 40
    IPsec terminology .......... 41

Chapter 2  iSCSI Technology

iSCSI technology overview .......... 44
iSCSI discovery .......... 46
    Static .......... 46
    Send target .......... 46
    iSNS .......... 46
iSCSI error recovery .......... 47
iSCSI security .......... 48
    Security mechanisms .......... 48
    Authentication methods .......... 49

Chapter 3  iSCSI Solutions

Best practices .......... 52
    Network design .......... 52
    Header and data digest .......... 52
EMC native iSCSI targets .......... 53
    Symmetrix .......... 53
    VNX for Block and CLARiiON .......... 54
    Celerra Network Server .......... 55
    VNX series for File .......... 56
Configuring iSCSI targets .......... 58
Bridged solutions .......... 60
    Brocade .......... 60
    Cisco .......... 63
    Brocade M Series .......... 69
Summary .......... 73

Chapter 4  Use Case Scenarios

Connecting an iSCSI Windows host to a VMAX array .......... 76
    Configuring storage port flags and an IP address on a VMAX array .......... 76
    Configuring LUN Masking on a VMAX array .......... 81
    Configuring an IP address on a Windows host .......... 83
    Configuring iSCSI on a Windows host .......... 85
    Configuring Jumbo frames .......... 101
    Setting MTU on a Windows host .......... 101
Connecting an iSCSI Linux host to a VMAX array .......... 103
    Configuring storage port flags and an IP address on a VMAX array .......... 104
    Configuring LUN Masking on a VMAX array .......... 111
    Configuring an IP address on a Linux host .......... 114
    Configuring CHAP on the Linux host .......... 117
    Configuring iSCSI on a Linux host using Linux iSCSI Initiator CLI .......... 117
    Configuring Jumbo frames .......... 119
    Setting MTU on a Linux host .......... 119
Configuring the VNX for block 1 Gb/10 Gb iSCSI port .......... 121
    Prerequisites .......... 121
    Configuring storage system iSCSI front-end ports .......... 122
    Assigning an IP address to each NIC or iSCSI HBA in a Windows Server 2008 .......... 127
    Configuring iSCSI initiators for a configuration without iSNS .......... 130
    Registering the server with the storage system .......... 146
    Setting storage system failover values for the server initiators with Unisphere .......... 148
    Configuring the storage group .......... 162
    iSCSI CHAP authentication .......... 175


Figures

1 TCP header example .......... 19
2 TCP header fields, size, and functions .......... 19
3 Slow start and congestion avoidance .......... 26
4 Fast retransmit .......... 27
5 IPv6 packet header structure .......... 37
6 iSCSI example .......... 44
7 iSCSI header example .......... 45
8 iSCSI header fields, size, and functions .......... 45
9 Celerra iSCSI configurations .......... 55
10 VNX 5000 series iSCSI configuration .......... 56
11 VNX VG2 iSCSI configuration .......... 57
12 iSCSI gateway service basic implementation .......... 60
13 Supportable configuration example .......... 64
14 Brocade M Series multiprotocol switch .......... 69
15 Windows host connected to a VMAX array with 1 G connectivity .......... 76
16 EMC Symmetrix Manager Console, Directors .......... 77
17 Set Port Attributes dialog box .......... 78
18 Config Session tab .......... 79
19 My Active Tasks, Commit All .......... 79
20 EMC Symmetrix Management Console, Storage Provisioning .......... 81
21 Internet Protocol Version 6 (TCP/IPv6) Properties dialog box .......... 84
22 Test connectivity .......... 84
23 iSCSI Initiator Properties window .......... 86
24 Discovery tab, Discover Portal .......... 87
25 Discover Portal dialog box .......... 88
26 Advanced Settings window .......... 89
27 Target portals .......... 90
28 Targets tab .......... 90
29 Connect to Target dialog box .......... 91
30 Discovered targets .......... 91


31 Volume and Devices tab .......... 92
32 Devices .......... 93
33 iSNS Server Properties window, storage ports .......... 94
34 Discovery tab .......... 95
35 iSNS Server added .......... 96
36 iSNS Server .......... 97
37 Linux hosts connected to a VMAX array with 10 G connectivity .......... 103
38 Set port attributes .......... 105
39 Set Port Attributes dialog box .......... 106
40 Config Session tab .......... 107
41 My Active Tasks, Commit All .......... 108
42 CHAP authentication .......... 109
43 Director Port CHAP Authentication Enable/Disable dialog box .......... 109
44 Director Port CHAP Authentication Set dialog box .......... 110
45 EMC Symmetrix Management Console, Storage Provisioning .......... 112
46 Verify IP addresses .......... 115
47 Test connectivity .......... 117
48 Windows host connected to a VNX array with 1 G/10 G connectivity .......... 121
49 Unisphere, System tab .......... 123
50 Message box .......... 124
51 iSCSI Port Properties window .......... 125
52 iSCSI Virtual Port Properties window .......... 126
53 Warning message .......... 127
54 Successful message .......... 127
55 Control Panel, Network Connections window .......... 128
56 Local Area Connection Properties dialog box .......... 129
57 Internet Protocol Version 4 (TCP/IPv4) Properties dialog box .......... 130
58 EMC Unisphere Server Utility welcome window .......... 132
59 EMC Unisphere Server Utility window, Configure iSCSI Connections .......... 133
60 iSCSI Targets and Connections window .......... 134
61 Discover iSCSI targets on this subnet .......... 135
62 Discover iSCSI targets for this target portal .......... 136
63 iSCSI Targets window .......... 137
64 Successful logon message .......... 138
65 Server registration window .......... 139
66 Successfully updated message .......... 140
67 Microsoft iSCSI Initiator Properties dialog box .......... 141
68 Discovery tab .......... 141
69 Add Target Portal dialog box .......... 142
70 Advanced Settings dialog box, General tab .......... 142
71 iSCSI Initiator Properties dialog box, Discovery tab .......... 143


72 iSCSI Initiator Properties dialog box, Targets tab .......... 144
73 Log on to Target dialog box .......... 144
74 Target, Connected .......... 145
75 EMC Unisphere Server Utility, welcome window .......... 146
76 Connected Storage Systems .......... 147
77 Successfully updated message .......... 148
78 EMC Unisphere, Hosts tab .......... 149
79 Start Wizard dialog box .......... 150
80 Select Host dialog box .......... 151
81 Select Storage System dialog box .......... 152
82 Specify Settings dialog box .......... 153
83 Review and Commit Settings .......... 154
84 Failover Setup Wizard Confirmation dialog box .......... 155
85 Details from Operation dialog box .......... 156
86 EMC Unisphere, Hosts tab .......... 157
87 Connectivity Status Window, Host Initiators tab .......... 157
88 Expanded hosts .......... 158
89 Edit Initiators window .......... 158
90 Confirmation dialog box .......... 160
91 Success confirmation message .......... 160
92 Connectivity Status window, Host Initiators tab .......... 161
93 Initiator Information window .......... 161
94 Select system .......... 162
95 Select Storage Groups .......... 163
96 Storage Groups window .......... 164
97 Create Storage dialog box .......... 164
98 Confirmation dialog box .......... 165
99 Storage Group, Properties .......... 166
100 Hosts tab .......... 166
101 Hosts to be Connected column .......... 167
102 Connect LUNs .......... 168
103 LUNs tab .......... 169
104 Selected LUNs .......... 170
105 Confirmation dialog box .......... 170
106 Success message box .......... 171
107 Added LUNs .......... 171
108 Computer Management window .......... 172
109 Rescanned disks .......... 173
110 PowerPath icon .......... 173
111 EMC PowerPath Console screen .......... 174
112 Disks .......... 174


Preface

This EMC Engineering TechBook provides a high-level overview of iSCSI SAN topologies and includes basic information about TCP/IP technologies and iSCSI solutions.

E-Lab would like to thank all the contributors to this document, including EMC engineers, EMC field personnel, and partners. Your contributions are invaluable.

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes. If a product does not function properly or does not function as described in this document, please contact your EMC representative.

Audience

This TechBook is intended for EMC field personnel, including technology consultants, and for the storage architect, administrator, and operator involved in acquiring, managing, operating, or designing a networked storage environment that contains EMC and host devices.

EMC Support Matrix and E-Lab Interoperability Navigator

For the most up-to-date information, always consult the EMC Support Matrix (ESM), available through E-Lab Interoperability Navigator (ELN) at http://elabnavigator.EMC.com, under the PDFs and Guides tab.

The EMC Support Matrix links within this document will take you to Powerlink, where you are asked to log in to the E-Lab Interoperability Navigator. Instructions on how to best use the ELN (tutorial, queries, wizards) are provided below this login window. If you are unfamiliar with finding information on this site, please read these instructions before proceeding any further.

Under the PDFs and Guides tab resides a collection of printable resources for reference or download. All of the matrices, including the ESM (which does not include most software), are subsets of the E-Lab Interoperability Navigator database. Included under this tab are:

◆ The EMC Support Matrix, a complete guide to interoperable, and supportable, configurations.

◆ Subset matrices for specific storage families, server families, operating systems or software products.

◆ Host connectivity guides for complete, authoritative information on how to configure hosts effectively for various storage environments.

Under the PDFs and Guides tab, consult the Internet Protocol pdf under the "Miscellaneous" heading for EMC's policies and requirements for the EMC Support Matrix.

Related documentation

Related documents include:

◆ The former EMC Networked Storage Topology Guide has been divided into several TechBooks and reference manuals. The following documents, including this one, are available through the E-Lab Interoperability Navigator, Topology Resource Center tab, at http://elabnavigator.EMC.com.

These documents are also available at the following location:

http://www.emc.com/products/interoperability/topology-resource-center.htm

• Backup and Recovery in a SAN TechBook

• Building Secure SANs TechBook

• Extended Distance Technologies TechBook

• Fibre Channel over Ethernet (FCoE): Data Center Bridging (DCB) Concepts and Protocols TechBook

• Fibre Channel SAN Topologies TechBook

• Networked Storage Concepts and Protocols TechBook

• Networking for Storage Virtualization and RecoverPoint TechBook

• WAN Optimization Controller Technologies TechBook

• EMC Connectrix SAN Products Data Reference Manual


• Legacy SAN Technologies Reference Manual

• Non-EMC SAN Products Data Reference Manual

◆ EMC Support Matrix, available through E-Lab Interoperability Navigator at http://elabnavigator.EMC.com > PDFs and Guides

◆ RSA security solutions documentation, which can be found at http://RSA.com > Content Library

All of the following documentation and release notes can be found at http://Powerlink.EMC.com. From the toolbar, select Support > Technical Documentation and Advisories, then choose the appropriate Hardware/Platforms, Software, or Host Connectivity/HBAs documentation links.

Hardware documents and release notes include those on:

◆ Connectrix B series
◆ Connectrix M series
◆ Connectrix MDS (release notes only)
◆ VNX series
◆ CLARiiON
◆ Celerra
◆ Symmetrix

Software documents include those on:

◆ EMC Ionix ControlCenter
◆ RecoverPoint
◆ Invista
◆ TimeFinder
◆ PowerPath

The following E-Lab documentation is also available:

◆ Host Connectivity Guides
◆ HBA Guides

For Cisco and Brocade documentation, refer to the vendors' websites:

◆ http://cisco.com

◆ http://brocade.com

Authors of this TechBook

This TechBook was authored by Ron Dharma, Mugdha Kulkarni, and Vinay Jonnakuti, with contributions from EMC engineers, EMC field personnel, and partners.

Ron Dharma is a Principal Integration Engineer and team-lead for Advance Product Solution group in E-Lab. Prior to joining EMC, Ron was a SCSI software engineer, spending almost 11 years resolving integration issues in multiple SAN components. He dabbled in almost every aspect of the SAN including storage virtualization, backup and recovery, point-in-time recovery, and distance extension. Ron provided the original information in this document, and works with other contributors to update and expand the content.

Mugdha Kulkarni is a Senior Systems Integration Engineer and has been with EMC for over 6 years. For the past 6 years, Mugdha has worked in the E-Lab qualifying new Symmetrix and CLARiiON releases. Mugdha is also involved in the technical evaluation of Fibre Channel over Ethernet (FCoE) products, including the CNA and FCoE switches.

Vinay Jonnakuti is a Systems Integration Engineer and has been with EMC's E-Lab for over 3 years in the storage environment. Vinay qualifies WAN-optimization appliances with SRDF (GigE/FCIP), SAN Copy, MirrorView, and RecoverPoint. Vinay also qualifies Brocade and Cisco FCIP, Fibre Channel, and iSCSI with the Symmetrix storage platform.

Jonghoon (Jason) Jeong is a Systems Integration Engineer and has been with EMC for over 3 years. For the past 2 years, Jonghoon has worked in E-Lab qualifying new CLARiiON/VNX, Invista, and PowerPath Migration Enabler releases.

Steven Chung is a Senior Systems Integration Engineer and has been with EMC E-Lab for almost 2 years. Steven qualifies PowerPath Migration Enabler, RecoverPoint, Replication Manager, iSCSI technologies, and Brocade/Cisco Encryption.


Conventions used in this document

EMC uses the following conventions for special notices:

CAUTION! A caution, used with the safety alert symbol, indicates a hazardous situation which, if not avoided, could result in minor or moderate injury.

IMPORTANT! An important notice contains information essential to software or hardware operation.

Note: A note presents information that is important, but not hazard-related.

Typographical conventions

EMC uses the following type style conventions in this document:

Normal        Used in running (nonprocedural) text for:
              • Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
              • Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, utilities
              • URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, notifications

Bold          Used in running (nonprocedural) text for:
              • Names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, man pages
              Used in procedures for:
              • Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
              • What the user specifically selects, clicks, presses, or types

Italic        Used in all text (including procedures) for:
              • Full titles of publications referenced in text
              • Emphasis (for example, a new term)
              • Variables

Courier       Used for:
              • System output, such as an error message or script
              • URLs, complete paths, filenames, prompts, and syntax when shown outside of running text


Courier bold  Used for:
              • Specific user input (such as commands)

Courier italic  Used in procedures for:
              • Variables on the command line
              • User input variables

< >           Angle brackets enclose parameter or variable values supplied by the user

[ ]           Square brackets enclose optional values

|             A vertical bar indicates alternate selections; the bar means "or"

{ }           Braces indicate content that you must specify (that is, x or y or z)

...           Ellipses indicate nonessential information omitted from the example

Where to get help

EMC support, product, and licensing information can be obtained as follows.

Product information — For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Powerlink website (registration required) at:

http://Powerlink.EMC.com

Technical support — For technical support, go to Powerlink and choose Support. On the Support page, you will see several options, including one for making a service request. Note that to open a service request, you must have a valid support agreement. Please contact your EMC sales representative for details about obtaining a valid support agreement or with questions about your account.

We'd like to hear from you!

Your feedback on our TechBooks is important to us! We want our books to be as helpful and relevant as possible, so please feel free to send us your comments, opinions, and thoughts on this or any other TechBook:

[email protected]


Chapter 1  TCP/IP Technology

This chapter provides a brief overview of TCP/IP technology.

◆ TCP/IP overview .......... 18
◆ TCP terminology .......... 21
◆ TCP error recovery .......... 25
◆ TCP network congestion .......... 28
◆ IPv6 .......... 29
◆ Internet Protocol security (IPsec) .......... 40


TCP/IP overview

The Internet Protocol Suite takes its name from the first two networking protocols defined in the standard, each briefly described in this section:

◆ “Transmission Control Protocol” on page 18

◆ “Internet Protocol” on page 20

Transmission Control Protocol

The Transmission Control Protocol (TCP) provides a communication service between an application program and the Internet Protocol (IP). The entire suite is commonly referred to as TCP/IP. When an application program wants to send a large chunk of data across the Internet using IP, the software can issue a single request to TCP and let TCP handle the IP details.

TCP is a connection-oriented transport protocol that guarantees reliable, in-order delivery of a stream of bytes between the endpoints of a connection. TCP achieves this by assigning each byte of data a sequence number, maintaining timers, acknowledging received data with acknowledgements (ACKs), and retransmitting data when necessary.

Data can be transferred after a connection is established between the endpoints. The data stream that passes across the connection is considered a single sequence of eight-bit bytes, each of which is given a sequence number.

TCP accepts data from a data stream, segments it into chunks, and adds a TCP header to each. The TCP header follows the IP header, supplying information specific to the TCP protocol. This division allows for the existence of host-level protocols other than TCP. Figure 1 on page 19 shows an example of a TCP header.
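As a rough illustration of the fixed header layout that Figure 1 depicts, the 20-byte TCP header (without options) can be packed with Python's struct module. This is only a sketch of the field layout; the checksum computation over the IP pseudo-header is omitted, and the field values below are arbitrary examples:

```python
import struct

def build_tcp_header(src_port, dst_port, seq, ack, flags, window):
    """Pack a minimal 20-byte TCP header (no options).

    Field order per the classic layout: source port and destination port
    (16 bits each), sequence and acknowledgement numbers (32 bits each),
    data offset + flags (16 bits), window, checksum, urgent pointer
    (16 bits each).
    """
    data_offset = 5  # header length in 32-bit words: 20 bytes, no options
    offset_flags = (data_offset << 12) | flags
    return struct.pack(
        "!HHIIHHHH",
        src_port, dst_port, seq, ack,
        offset_flags, window,
        0,  # checksum: left zero here; real stacks compute it over a pseudo-header
        0,  # urgent pointer
    )

# Example: a SYN segment toward TCP port 3260, the well-known iSCSI port.
hdr = build_tcp_header(49152, 3260, seq=1000, ack=0, flags=0x02, window=65535)
print(len(hdr))  # 20
```

The `!` prefix selects network (big-endian) byte order, which is what actually goes on the wire.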


Figure 1 TCP header example

Figure 2 on page 19 defines the fields, size, and functions of the TCP header.

Figure 2 TCP header fields, size, and functions


Internet Protocol

The Internet Protocol (IP) is the main communications protocol used for relaying datagrams (packets) across an internetwork using the Internet Protocol Suite. It is responsible for routing packets across network boundaries.


TCP terminology

This section defines key TCP terminology.

Acknowledgements (ACKs)

The TCP acknowledgement scheme is cumulative: an ACK acknowledges all the data received up to the time the ACK was generated. Because TCP segments are not of uniform size, and a TCP sender may retransmit more data than was in a missing segment, ACKs do not acknowledge individual segments; rather, they mark the position of the acknowledged data in the stream. Cumulative acknowledgement makes ACKs easy to generate, and the loss of an ACK does not force the sender to retransmit data. The disadvantage is that the sender receives no detailed information about the data received except the position in the stream of the last in-order byte.

Delayed ACKs

Delayed ACKs allow a TCP receiver to refrain from sending an ACK for each incoming segment. However, a receiver should send an ACK for at least every second full-sized segment that arrives, and the standard mandates that a receiver must not withhold an ACK for more than 500 ms. Receivers should not delay ACKs that acknowledge out-of-order segments.

Maximum segment size (MSS)

The maximum segment size (MSS) is the maximum amount of data, specified in bytes, that can be transmitted in a segment between the two TCP endpoints. The MSS is decided by the endpoints, as they need to agree on the maximum segment they can handle. Deciding on a good MSS is important in a general internetworking environment because this decision greatly affects performance. It is difficult to choose a good MSS value: a very small MSS means an underutilized network, whereas a very large MSS means large IP datagrams that may lead to IP fragmentation, greatly hampering performance. An ideal MSS is one for which the IP datagrams are as large as possible without any fragmentation anywhere along the path from the source to the destination. When TCP sends a segment with the SYN bit set during connection establishment, it can send an optional MSS value up to the outgoing interface's MTU minus the size of the fixed TCP and IP headers. For example, if the MTU is 1500 (Ethernet standard), the sender can advertise an MSS of 1460 (1500 minus 40).
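The MSS arithmetic above can be sketched as follows (an illustrative helper, not part of any TCP stack; the function name is ours):

```python
# Hypothetical illustration: deriving the MSS a sender can advertise from
# the outgoing interface's MTU, as described above.

IP_HEADER = 20   # fixed IPv4 header, bytes
TCP_HEADER = 20  # fixed TCP header, bytes

def advertised_mss(mtu: int) -> int:
    """MSS = MTU minus the fixed IP and TCP headers (40 bytes total)."""
    return mtu - IP_HEADER - TCP_HEADER

print(advertised_mss(1500))  # standard Ethernet -> 1460
print(advertised_mss(9000))  # jumbo-frame Ethernet -> 8960
```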


Maximum transmission unit (MTU)

Each network interface has its own MTU, which defines the largest packet it can transmit. The MTU of the media determines the maximum size of the packets that can be transmitted without IP fragmentation.

Retransmission

A TCP sender starts a timer when it sends a segment and expects an acknowledgement for the data it sent. If the sender does not receive an acknowledgement before the timer expires, it assumes that the data was lost or corrupted and retransmits the segment. Because the time required for the data to reach the receiver and for the acknowledgement to return is not constant (owing to varying Internet delays), an adaptive retransmission algorithm monitors the performance of each connection and derives a reasonable timeout value from the measured round-trip time.
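The adaptive timeout described above can be sketched with the smoothing scheme later standardized in RFC 6298 (alpha = 1/8, beta = 1/4, RTO floored at 1 second); the class below is an illustrative sketch, not any particular stack's implementation:

```python
# Sketch of an adaptive retransmission timeout estimator in the style of
# RFC 6298: smooth the round-trip time and its variance, then set
# RTO = SRTT + 4 * RTTVAR (floored at 1 second).

class RtoEstimator:
    def __init__(self):
        self.srtt = None    # smoothed round-trip time, seconds
        self.rttvar = None  # round-trip time variance, seconds

    def on_rtt_sample(self, r: float) -> float:
        if self.srtt is None:                 # first measurement
            self.srtt, self.rttvar = r, r / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - r)
            self.srtt = 0.875 * self.srtt + 0.125 * r
        return max(1.0, self.srtt + 4 * self.rttvar)  # the new RTO

est = RtoEstimator()
print(est.on_rtt_sample(0.5))  # first sample of 500 ms -> RTO of 1.5 s
```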

Selective Acknowledgement (SACK)

TCP may experience poor performance when multiple packets are lost from one window of data. With the limited information available from cumulative acknowledgements, a TCP sender can learn about only a single lost packet per round-trip time. An aggressive sender could choose to retransmit packets early, but such retransmitted segments may have already been successfully received. The Selective Acknowledgement (SACK) mechanism, combined with a selective repeat retransmission policy, helps to overcome these limitations. The receiving TCP sends back SACK packets to the sender confirming receipt of data and specifying the holes in the data that has been received. The sender can then retransmit only the missing data segments. The selective acknowledgement extension uses two TCP options. The first is an enabling option, SACK-permitted, which may be sent in a SYN segment to indicate that the SACK option can be used once the connection is established. The other is the SACK option itself, which may be sent over an established connection once permission has been given by SACK-permitted.
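The hole computation a SACK-capable sender performs can be sketched as follows (an illustrative function of our own, working on byte ranges rather than real TCP option encoding):

```python
# Illustrative sketch: given the cumulative ACK point and the SACK blocks
# reported by the receiver, compute the holes the sender must retransmit.

def sack_holes(cum_ack: int, sack_blocks: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Each SACK block is a (start, end) byte range received out of order."""
    holes = []
    expected = cum_ack
    for start, end in sorted(sack_blocks):
        if start > expected:
            holes.append((expected, start))  # bytes [expected, start) are missing
        expected = max(expected, end)
    return holes

# Receiver has everything up to byte 1000, plus bytes 1500-2000 and 2500-3000:
print(sack_holes(1000, [(1500, 2000), (2500, 3000)]))
# -> [(1000, 1500), (2000, 2500)]
```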

TCP segment

TCP segments are the units of transfer for TCP, used to establish a connection, transfer data, send ACKs, advertise window size, and close a connection. Each segment is divided into three parts:

◆ Fixed header of 20 bytes

◆ Optional variable length header, padded out to a multiple of 4 bytes

◆ Data

The maximum possible header size is 60 bytes. The TCP header carries the control information. SOURCE PORT and DESTINATION PORT contain TCP port numbers that identify the application programs at the endpoints. The SEQUENCE NUMBER field identifies the position in the sender’s byte stream of the first byte of attached data, if any, and the ACKNOWLEDGEMENT NUMBER field identifies the number of the byte the source expects to receive next. The ACKNOWLEDGEMENT NUMBER field is valid only if the ACK bit in the CODE BITS field is set. The 6-bit CODE BITS field determines the purpose and contents of the segment. The HLEN field specifies the total length of the fixed plus variable headers of the segment as a number of 32-bit words. TCP software advertises how much data it is willing to receive by specifying its buffer size in the WINDOW field. The CHECKSUM field contains a 16-bit integer checksum used to verify the integrity of the data as well as the TCP header and the header options. The TCP header padding ensures that the TCP header ends and data begins on a 32-bit boundary; the padding is composed of zeros.

TCP window

A TCP window is the amount of data a sender can send without waiting for an ACK from the receiver. The TCP window is a flow control mechanism that helps ensure that no congestion occurs in the network. For example, if a pair of hosts are talking over a TCP connection that has a TCP window size of 64 KB, the sender can send only 64 KB of data before it must stop and wait for an acknowledgement from the receiver that some or all of the data has been received. If the receiver acknowledges that all the data has been received, the sender is free to send another 64 KB. If the sender gets back an acknowledgement that the receiver received only the first 32 KB (likely if the second 32 KB is still in transit or has been lost), the sender can send only another 32 KB, since it cannot have more than 64 KB of unacknowledged data outstanding (the second 32 KB of data plus the third).

The primary reason for the window is congestion control. The whole network connection, which consists of the hosts at both ends, the routers in between, and the actual connections themselves, might have a bottleneck somewhere that can only handle so much data so fast. The TCP window throttles the transmission speed down to a level where congestion and data loss do not occur.
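The 64 KB example above amounts to simple bookkeeping, sketched here with the conventional variable names snd_una (oldest unacknowledged byte) and snd_nxt (next byte to send); this is an illustration, not a real stack:

```python
# Sketch of sliding-window bookkeeping: the sender may have at most one
# window of unacknowledged data outstanding at any time.

WINDOW = 64 * 1024  # 64 KB window

snd_una = 0         # oldest unacknowledged byte
snd_nxt = 0         # next byte to send

def can_send(nbytes: int) -> bool:
    """True if sending nbytes more keeps outstanding data within the window."""
    return (snd_nxt + nbytes) - snd_una <= WINDOW

# Send 64 KB: the window is now full, so nothing more may go out.
snd_nxt += 64 * 1024
assert not can_send(1)

# The receiver acknowledges the first 32 KB: another 32 KB may be sent.
snd_una = 32 * 1024
assert can_send(32 * 1024) and not can_send(32 * 1024 + 1)
```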

The factors affecting the window size are as follows:

Receiver’s advertised window

The time taken by the receiver to process the received data and send ACKs may be greater than the sender’s processing time, so it is necessary to control the sender’s transmission rate to prevent it from sending more data than the receiver can handle, which would cause packet loss. TCP introduces flow control by declaring a receive window in each segment header.

Sender’s congestion window

The congestion window controls the number of packets a TCP flow has in the network at any time. The congestion window is set using an Additive-Increase, Multiplicative-Decrease (AIMD) mechanism that probes for available bandwidth, dynamically adapting to changing network conditions.

Usable window

This is the minimum of the receiver’s advertised window and the sender’s congestion window; it is the actual amount of data the sender is able to transmit. The TCP header uses a 16-bit field to report the receive window size to the sender, so the largest window that can be advertised is 2^16 - 1 = 65,535 bytes (approximately 64 KB).

Window scaling

The ordinary TCP header allocates only 16 bits for window advertisement. This limits the maximum window that can be advertised to 64 KB, limiting throughput. RFC 1323 provides the window scaling option so that windows greater than 64 KB can be advertised. Both endpoints must agree to use window scaling during connection establishment.

The window scale extension expands the definition of the TCP window to 32 bits and then uses a scale factor to carry this 32-bit value in the 16-bit Window field of the TCP header (SEG.WND in RFC-793). The scale factor is carried in a new TCP option, Window Scale. This option is sent only in a SYN segment (a segment with the SYN bit on), hence the window scale is fixed in each direction when a connection is opened.
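The scaling arithmetic is a simple left shift of the 16-bit Window field by the agreed scale factor (RFC 1323 caps the shift at 14); a minimal sketch:

```python
# Sketch of the RFC 1323 window scale arithmetic: the 16-bit Window field
# is left-shifted by the scale factor agreed during the SYN exchange.

def effective_window(window_field: int, scale: int) -> int:
    assert 0 <= window_field <= 0xFFFF   # the on-the-wire field is 16 bits
    assert 0 <= scale <= 14              # RFC 1323 caps the shift at 14
    return window_field << scale

print(effective_window(0xFFFF, 0))  # classic maximum: 65535 bytes
print(effective_window(0xFFFF, 7))  # with scale factor 7: 8388480 bytes (~8 MB)
```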


TCP error recovery

In TCP, each source determines how much capacity is available in the network so it knows how many packets it can safely have in transit. Once a given source has this many packets in transit, it uses the arrival of an ACK as a signal that some of its packets have left the network and it is therefore safe to insert new packets into the network without adding to the level of congestion. TCP uses congestion control algorithms to determine the network capacity. From the congestion control point of view, a TCP connection is in one of the following states:

◆ Slow start: After a connection is established and after a loss is detected by a timeout or by duplicate ACKs.

◆ Fast recovery: After a loss is detected by fast retransmit.

◆ Congestion avoidance: In all other cases.

Congestion avoidance and slow start work hand-in-hand. The congestion avoidance algorithm assumes that the chance of a packet being lost due to damage is very small; therefore, the loss of a packet means there is congestion somewhere in the network between the source and destination. The occurrence of a timeout and the receipt of duplicate ACKs indicate packet loss.

When congestion is detected in the network it is necessary to slow things down, so the slow start algorithm is invoked. Two parameters, the congestion window (cwnd) and a slow start threshold (ssthresh), are maintained for each connection. When a connection is established, both of these parameters are initialized. The cwnd is initialized to one MSS. The ssthresh is used to determine whether the slow start or congestion avoidance algorithm is to be used to control data transmission. The initial value of ssthresh may be arbitrarily high (usually ssthresh is initialized to 65535 bytes), but it may be reduced in response to congestion.

The slow start algorithm is used when cwnd is less than ssthresh, while the congestion avoidance algorithm is used when cwnd is greater than ssthresh. When cwnd and ssthresh are equal, the sender may use either slow start or congestion avoidance.

TCP never transmits more than the minimum of cwnd and the receiver’s advertised window. When a connection is established, or if congestion is detected in the network, TCP is in slow start and the congestion window is initialized to one MSS. Each time an ACK is received, the congestion window is increased by one MSS. The sender starts by transmitting one segment and waiting for its ACK. When that ACK is received, the congestion window is incremented from one to two, and two segments can be sent. When each of those two segments is acknowledged, the congestion window is increased to four, and so on. The window size increases exponentially during slow start, as shown in Figure 3. When a timeout occurs or a duplicate ACK is received, ssthresh is reset to one half of the current window (that is, the minimum of cwnd and the receiver's advertised window). If the congestion was detected by a timeout, cwnd is set to one MSS.

When an ACK is received for data transmitted, the cwnd is increased. However, the way it is increased depends on whether TCP is performing slow start or congestion avoidance. If the cwnd is less than or equal to the ssthresh, TCP is in slow start and slow start continues until TCP is halfway to where it was when congestion occurred, then congestion avoidance takes over. Congestion avoidance increments the cwnd by MSS squared divided by cwnd (in bytes) each time an ACK is received, increasing the cwnd linearly as shown in Figure 3. This provides a close approximation to increasing cwnd by, at most, one MSS per RTT.
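The two growth rules above can be condensed into a per-ACK update; the toy simulation below (our own sketch, with cwnd in bytes and an assumed MSS of 1460) shows exponential growth up to ssthresh and roughly one MSS per RTT afterwards:

```python
# Toy simulation of cwnd growth: slow start while cwnd <= ssthresh,
# congestion avoidance (+ MSS^2 / cwnd per ACK) afterwards.

MSS = 1460  # assumed segment size, bytes

def on_ack(cwnd: float, ssthresh: float) -> float:
    if cwnd <= ssthresh:                # slow start: +1 MSS per ACK
        return cwnd + MSS
    return cwnd + MSS * MSS / cwnd      # congestion avoidance: ~+1 MSS per RTT

cwnd, ssthresh = MSS, 8 * MSS
for rtt in range(6):
    acks = int(cwnd // MSS)             # one ACK per full segment in flight
    for _ in range(acks):
        cwnd = on_ack(cwnd, ssthresh)
    print(f"after RTT {rtt + 1}: cwnd = {cwnd / MSS:.1f} MSS")
```

The printed trace doubles cwnd each RTT (1, 2, 4, 8 MSS) and then flattens to near-linear growth once cwnd exceeds ssthresh.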

Figure 3 Slow start and congestion avoidance

[Figure: cwnd plotted against RTT. cwnd grows exponentially during slow start, then linearly during congestion avoidance once it crosses ssthresh.]


A TCP receiver generates ACKs on receipt of data segments. The ACK contains the highest contiguous sequence number the receiver expects to receive next. This informs the sender of the in-order data that was received by the receiver. When the receiver receives a segment with a sequence number greater than the sequence number it expected to receive, it detects the out-of-order segment and generates an immediate ACK with the last sequence number it has received in-order (that is, a duplicate ACK). This duplicate ACK is not delayed. Since the sender does not know if this duplicate ACK is a result of a lost packet or an out-of-order delivery, it waits for a small number of duplicate ACKs, assuming that if the packets are only reordered there will be only one or two duplicate ACKs before the reordered segment is received and processed and a new ACK is generated. If three or more duplicate ACKs are received in a row, it implies there has been a packet loss. At that point, the TCP sender retransmits this segment without waiting for the retransmission timer to expire. This is known as fast retransmit (Figure 4).

After fast retransmit has sent the supposedly missing segment, the congestion avoidance algorithm is invoked instead of the slow start; this is called fast recovery. Receipt of a duplicate ACK implies that not only is a packet lost, but that there is data still flowing between the two ends of TCP, as the receiver will only generate a duplicate ACK on receipt of another segment. Hence, fast recovery allows high throughput under moderate congestion.
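The duplicate-ACK counting that triggers fast retransmit can be sketched as follows (an illustrative model using segment numbers as in Figure 4, not a real TCP implementation):

```python
# Sketch of fast retransmit: three duplicate ACKs in a row are treated as
# a loss signal, and the missing segment is resent immediately.

DUP_ACK_THRESHOLD = 3

class Sender:
    def __init__(self):
        self.last_ack = None
        self.dup_count = 0
        self.retransmitted = []

    def on_ack(self, ack: int) -> None:
        if ack == self.last_ack:
            self.dup_count += 1
            if self.dup_count == DUP_ACK_THRESHOLD:
                self.retransmitted.append(ack)  # resend the segment the receiver expects
        else:
            self.last_ack, self.dup_count = ack, 0

s = Sender()
# Segment 23 is lost: the receiver keeps ACKing "expecting 23" for 24, 25, 26.
for ack in [21, 22, 23, 23, 23, 23]:
    s.on_ack(ack)
print(s.retransmitted)  # -> [23]
```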

Figure 4 Fast retransmit

[Figure: timeline of a fast retransmit. The sender sends segments 21 through 26; segment 23 is lost in the network. The receiver ACKs 21 and 22, then sends a duplicate ACK (still expecting 23) for each of segments 24, 25, and 26. After three duplicate ACKs, the sender retransmits segment 23 and subsequently receives an ACK for 26, expecting 27.]


TCP network congestion

A network link is said to be congested if contention for it causes queues to build up and packets start getting dropped. TCP detects these dropped packets and retransmits them, but aggressive retransmission to compensate for packet loss tends to keep systems in a state of network congestion even after the initial load has been reduced to a level that would not normally have induced congestion. In this situation, demand for link bandwidth (and, eventually, queue space) outstrips what is available. When congestion occurs, all the flows that detect it must reduce their transmission rate; if they do not, the network will remain in an unstable state with queues continuing to build up.


IPv6

Internet Protocol version 6 (IPv6) is a network-layer protocol for packet-switched internetworks. It is designated as the successor of IPv4.

Note: For the most up-to-date support information, always refer to the EMC Support Matrix > PDF and Guides > Miscellaneous > Internet Protocol.

Note: The information in this section was acquired from Wikipedia.org, August 2007, which provides further details on many of these topics.

The main improvement of IPv6 is the increase in the number of addresses available for networked devices. IPv4 supports 2^32 (about 4.3 billion) addresses. In comparison, IPv6 supports 2^128 (about 3.4×10^38) addresses, or approximately 5×10^28 addresses for each of roughly 6.5 billion people. Allocating addresses to individuals on that scale, however, was not the designers’ intention.

The extended address length simplifies operational considerations, including dynamic address assignment and router decision-making. It also avoids many complex workarounds that were necessary in IPv4, such as Classless Inter-Domain Routing (CIDR). Its simplified packet header format improves the efficiency of forwarding in routers. More information on this topic is provided in “Larger address space” on page 30 and “Addressing” on page 32.

This section contains the following information:

◆ “Features of IPv6” on page 29

◆ “Deployment status” on page 31

◆ “Addressing” on page 32

◆ “IPv6 packet” on page 37

◆ “Transition mechanisms” on page 38

Features of IPv6

To a great extent, IPv6 is a conservative extension of IPv4. Most transport- and application-layer protocols need little or no change to work over IPv6. The few exceptions are application protocols that embed network-layer addresses (such as FTP or NTPv3). Applications, however, usually need small changes and a recompile in order to run over IPv6.


The following features of IPv6 will be further discussed in this section:

◆ “Larger address space” on page 30

◆ “Stateless autoconfiguration of hosts” on page 30

◆ “Multicast” on page 31

◆ “Jumbograms” on page 31

◆ “Network-layer security” on page 31

◆ “Mobility” on page 31

Larger address space

The main feature of IPv6 is the larger address space: addresses are 128 bits long, versus 32 bits in IPv4. The larger address space avoids the potential exhaustion of the IPv4 address space without the need for network address translation (NAT) and other devices that break the end-to-end nature of Internet traffic.

Note: In rare cases, NAT may still be necessary, but it is difficult in IPv6 and should be avoided whenever possible.

It also makes administration of medium and large networks simpler, by avoiding the need for complex subnetting schemes. Ideally, subnetting will revert to its original purpose of logical segmentation of an IP network for optimal routing and access.

There are a few drawbacks to larger addresses. For instance, in regions where bandwidth is limited, IPv6 carries some bandwidth overhead over IPv4. However, header compression can sometimes be used to alleviate this problem. IPv6 addresses are also harder to memorize than IPv4 addresses, which are, in turn, harder to memorize than Domain Name System (DNS) names. DNS protocols have been modified to support IPv6 as well as IPv4.

For more information, refer to “Addressing” on page 32.

Stateless autoconfiguration of hosts

IPv6 hosts can be configured automatically when connected to a routed IPv6 network. When first connected to a network, a host sends a link-local multicast request for its configuration parameters. If configured suitably, routers respond to such a request with a router advertisement packet that contains network-layer configuration parameters.

If IPv6 autoconfiguration is not suitable, a host can use stateful autoconfiguration (DHCPv6) or be configured manually.


Note: Stateless autoconfiguration is suitable only for hosts. Routers must be configured manually or by other means.

Multicast

Network infrastructures, in most environments, are not configured to route multicast. The link-scoped aspect of multicast (that is, on a single subnet) will work, but site-scope, organization-scope, and global-scope multicast will not be routed.

IPv6 does not have a link-local broadcast facility. The same effect can be achieved by multicasting to the all-hosts group (FF02::1).

The m6bone caters to the deployment of a global IPv6 multicast network.

Jumbograms

IPv6 has optional support for packets over the IPv4 limit of 64 KB when used between capable communication partners and on communication links with a maximum transmission unit (MTU) larger than 65,576 octets. These packets, referred to as jumbograms, can be as large as 4 GB, and their use may improve performance over high-MTU networks.

An optional feature of IPv6, the jumbo payload option, allows the exchange of packets larger than this size between cooperating hosts.

Network-layer security

IP security (IPsec), the protocol for IP network-layer encryption and authentication, is an integral part of the base protocol suite in IPv6; in IPv4 it is optional (although usually implemented). IPsec is not widely deployed except for securing traffic between IPv6 Border Gateway Protocol (BGP) routers, BGP being the core routing protocol of the Internet.

Mobility

Mobile IPv6 (MIPv6) avoids triangular routing and is as efficient as normal IPv6. This advantage is mostly hypothetical, since neither MIP nor MIPv6 is widely deployed.

Deployment status

As of December 2005, IPv6 accounts for only a small percentage of the live addresses in the Internet, which is still dominated by IPv4. Many of the features of IPv6 have been ported to IPv4, with the exception of stateless autoconfiguration, more flexible addressing, and Secure Neighbor Discovery (SEND).


IPv6 deployment is primarily driven by IPv4 address space exhaustion, which has been slowed by the introduction of classless inter-domain routing (CIDR) and the extensive use of network address translation (NAT).

Estimates as to when the pool of available IPv4 addresses will be exhausted vary widely, ranging from around 2011 (2005 report by Cisco Systems) to Paul Wilson’s (director of APNIC) prediction of 2023.

To prepare for the inevitable, a number of governments are starting to require support for IPv6 in new equipment. The U.S. Government, for example, has specified that the network backbones of all federal agencies must deploy IPv6 by 2008 and bought 247 billion IPv6 addresses to begin the deployment. The People’s Republic of China has a 5-year plan for deployment of IPv6, called the “China Next Generation Internet.”

Addressing

The following subjects are briefly discussed in this section:

◆ “128-bit length” on page 32

◆ “Notation” on page 33

◆ “Literal IPv6 addresses in URLs” on page 33

◆ “Network notation” on page 34

◆ “Types of IPv6 addresses” on page 34

◆ “Special addresses” on page 35

◆ “Zone indices” on page 36

128-bit length

The primary change from IPv4 to IPv6, as discussed in “Larger address space” on page 30, is the length of network addresses. IPv6 addresses are 128 bits long (as defined by RFC 4291), compared to IPv4 addresses, which are 32 bits. IPv6 has enough room for 3.4×10^38 unique addresses, while the IPv4 address space contains about 4 billion addresses.

IPv6 addresses are typically composed of two logical parts: a 64-bit (sub-)network prefix and a 64-bit host part, which is either automatically generated from the interface's Media Access Control (MAC) address or assigned sequentially. Globally unique MAC addresses offer an opportunity to track user equipment (and thus users) across time and IPv6 address changes. In order to restore some of the anonymity that existed in IPv4, RFC 3041 was developed to reduce the prospect of a user’s identity being permanently tied to an IPv6 address. RFC 3041 specifies a mechanism by which time-varying random bit strings can be used as interface identifiers, replacing unchanging and traceable MAC addresses.

Notation

IPv6 addresses are normally written as eight groups of four hexadecimal digits. For example, the following is a valid IPv6 address:

2001:0db8:85a3:08d3:1319:8a2e:0370:7334

If one or more consecutive four-digit groups are 0000, they may be omitted and replaced with two colons (::). For example, 2001:0db8:0000:0000:0000:0000:1428:57ab can be shortened to 2001:0db8::1428:57ab. Following this rule, any number of consecutive 0000 groups may be reduced to two colons, as long as only one double colon is used in an address. Leading zeros in a group can also be omitted (as in ::1 for localhost). For example, the following addresses are all valid and equivalent:

2001:0db8:0000:0000:0000:0000:1428:57ab
2001:0db8:0000:0000:0000::1428:57ab
2001:0db8:0:0:0:0:1428:57ab
2001:0db8:0:0::1428:57ab
2001:0db8::1428:57ab
2001:db8::1428:57ab

Note: Having more than one double-colon abbreviation in an address is invalid, as it would make the notation ambiguous.

The final 4 bytes of an IPv6 address can also be written in decimal, using dots as separators. This notation is often used with compatibility addresses. For example, the following addresses are all the same:

::ffff:1.2.3.4
::ffff:0102:0304
0:0:0:0:0:ffff:0102:0304

Additional information can be found in RFC 4291 — IP Version 6 Addressing Architecture.
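The equivalences above can be checked with Python's standard ipaddress module, which always prints the canonical compressed form (shown here purely as an illustration):

```python
# Verify the notation rules above with the standard ipaddress module.
import ipaddress

a = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:1428:57ab")
b = ipaddress.IPv6Address("2001:db8::1428:57ab")
print(a == b)   # True: both spellings name the same address
print(str(a))   # "2001:db8::1428:57ab" (compressed form)

# The dotted-decimal suffix notation for mapped addresses also parses:
print(ipaddress.IPv6Address("::ffff:1.2.3.4") ==
      ipaddress.IPv6Address("::ffff:0102:0304"))  # True
```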

Literal IPv6 addresses in URLs

In a URL, the IPv6 address is enclosed in brackets. For example:

http://[2001:0db8:85a3:08d3:1319:8a2e:0370:7344]/


This notation allows parsing a URL without confusing the IPv6 address and port number:

https://[2001:0db8:85a3:08d3:1319:8a2e:0370:7344]:443/

Additional information can be found in RFC 2732 — Format for Literal IPv6 Addresses in URLs and RFC 3986 — Uniform Resource Identifier (URI): Generic Syntax.

Network notation

IPv6 networks are written using Classless Inter-Domain Routing (CIDR) notation.

An IPv6 network (or subnet) is a contiguous group of IPv6 addresses, the size of which must be a power of two. The initial bits of addresses, identical for all hosts in the network, are called the network's prefix.

A network is denoted by the first address in the network and the size in bits of the prefix (in decimal), separated with a slash. For example:

2001:0db8:1234::/48

stands for the network with addresses:

2001:0db8:1234:0000:0000:0000:0000:0000 through 2001:0db8:1234:FFFF:FFFF:FFFF:FFFF:FFFF

Because a single host can be seen as a network with a 128-bit prefix, host addresses are sometimes written followed by /128.
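The /48 example above can be checked with Python's standard ipaddress module (a hedged illustration, not part of the TechBook's tooling):

```python
# Verify the /48 network example above with the standard ipaddress module.
import ipaddress

net = ipaddress.IPv6Network("2001:db8:1234::/48")
print(net.network_address)                   # 2001:db8:1234::
print(net[-1])                               # 2001:db8:1234:ffff:ffff:ffff:ffff:ffff
print(net.num_addresses == 2 ** (128 - 48))  # True: 2^80 addresses in a /48

# A single host expressed as a /128 network:
host = ipaddress.IPv6Network("2001:db8:1234::1/128")
print(host.num_addresses)                    # 1
```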

Types of IPv6 addresses

IPv6 addresses are divided into the following three categories:

◆ Unicast Addresses — Identifies a single network interface. A packet sent to a unicast address is delivered to that specific computer.

◆ Multicast Addresses — Used to define a set of interfaces that typically belong to different nodes instead of just one. When a packet is sent to a multicast address, the protocol delivers the packet to all interfaces identified by that address. Multicast addresses begin with the prefix FF00::/8. Their second octet identifies the address’s scope, that is, the range over which the multicast address is propagated. Commonly used scopes include link-local (2), site-local (5), and global (E).

◆ Anycast Addresses — Also assigned to more than one interface, belonging to different nodes. However, a packet sent to an anycast address is delivered to just one of the member interfaces, typically the “nearest” according to the routing protocol’s idea of distance. Anycast addresses cannot be easily identified: they have the structure of normal unicast addresses, and differ only by being injected into the routing protocol at multiple points in the network.

Special addresses

There are a number of addresses with special meaning in IPv6:

◆ ::/128 — The address with all zeros is an unspecified address, and is to be used only in software.

◆ ::1/128 — The loopback address is a localhost address. If an application in a host sends packets to this address, the IPv6 stack will loop these packets back to the same host (corresponding to 127.0.0.1 in IPv4).

◆ ::/96 — The zero prefix was used for IPv4-compatible addresses. It is now obsolete.

◆ ::ffff:0:0/96 — This prefix is used for IPv4 mapped addresses (see “Transition mechanisms” on page 38).

◆ 2001:db8::/32 — This prefix is used in documentation (RFC 3849). Addresses from this prefix should be used anywhere an example IPv6 address is given.

◆ 2002::/16 — This prefix is used for 6to4 addressing.

◆ fc00::/7 — Unique Local Addresses (ULA) are routable only within a set of cooperating sites. They were defined in RFC 4193 as a replacement for site-local addresses. The addresses include a 40-bit pseudorandom number that minimizes the risk of conflicts if sites merge or packets somehow leak out. This address space is split into two parts:

• fc00::/8 — ULA Central; currently not used, as the defining draft has expired.

• fd00::/8 — ULA, as per RFC 4193; a generator and an unofficial registry exist.

◆ fe80::/64 — The link-local prefix specifies that the address is valid only in the local physical link. This is analogous to the Autoconfiguration IP address 169.254.0.0/16 in IPv4.

◆ fec0::/10 — The site-local prefix specifies that the address is valid only inside the local organization.

Note: Its use has been deprecated in September 2004 by RFC 3879 and systems must not support this special type of address.


◆ ff00::/8 — The multicast prefix is used for multicast addresses, as defined in “IP Version 6 Addressing Architecture” (RFC 4291).

There are no address ranges reserved for broadcast in IPv6. Instead, applications use multicast to the all-hosts group. IANA maintains the official list of the IPv6 address space. Global unicast assignments can be found at the various RIRs or at the Ghost Route Hunter (GRH) DFP pages.

Zone indices

Link-local addresses present a particular problem for systems with multiple interfaces. Because each interface may be connected to a different network while the addresses all appear to be on the same subnet, an ambiguity arises that cannot be solved by routing tables.

For example, host A has two interfaces that automatically receive link-local addresses when activated (per RFC 2462): fe80::1/64 and fe80::2/64. Only one of them is connected to the same physical network as host B, which has the address fe80::3/64. If host A attempts to contact fe80::3, how does it know which interface (fe80::1 or fe80::2) to use?

The solution, defined by RFC 4007, is the addition of a unique zone index for the local interface, represented textually in the form <address>%<zone_id>. For example:

http://[fe80::1122:33ff:fe11:2233%eth0]:80/

However, this syntax clashes with the percent-encoding used in URIs, and the zone ID itself varies by operating system:

◆ Microsoft Windows IPv6 stack uses numeric zone IDs: fe80::3%1

◆ BSD applications typically use the interface name as a zone ID: fe80::3%pcn0

◆ Linux applications also typically use the interface name as a zone ID: fe80::3%eth0, although Linux ifconfig as of version 1.42 (part of net-tools 1.60) does not display zone IDs.

Relatively few IPv6-capable applications understand zone ID syntax (with the notable exception of OpenSSH), rendering link-local addresses unusable within them if multiple interfaces use link-local addresses.
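The `<address>%<zone_id>` notation can be illustrated with a small parser. This is a sketch of ours, not part of any standard API; real applications typically hand the whole literal to getaddrinfo(), which resolves the zone to a numeric scope ID:

```python
def split_scoped_address(literal: str):
    """Split an RFC 4007 scoped-address literal into (address, zone_id).

    The zone ID may be an interface name (BSD/Linux style, e.g. %eth0)
    or a numeric index (Windows style, e.g. %1); None when absent.
    """
    address, sep, zone = literal.partition("%")
    return address, (zone if sep else None)
```

For example, `split_scoped_address("fe80::3%eth0")` yields `("fe80::3", "eth0")`, after which the zone can be mapped to a kernel scope ID (on Linux, via `socket.if_nametoindex`).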


IPv6 packet

A packet is a formatted block of data carried by a computer network. Figure 5 shows the structure of an IPv6 packet header.

Figure 5 IPv6 packet header structure

The IPv6 packet is composed of two main parts:

◆ Header

The header is in the first 40 octets (320 bits) of the packet and contains:

• Both source and destination addresses (128 bits each)

• Version (4-bit IP version)

• Traffic class (8 bits, Packet Priority)

• Flow label (20 bits, QoS management)

• Payload length in bytes (16 bits)

• Next header (8 bits)

• Hop limit (8 bits, time to live)

◆ Payload

The payload can be up to 64 KB in size in standard mode, or larger with a jumbo payload option (refer to “Jumbograms” on page 31).

Fragmentation is handled only in the sending host in IPv6. Routers never fragment a packet, and hosts are expected to use Path MTU (PMTU) discovery.
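The fixed-header layout above can be expressed as a short pack/parse sketch. The function names are ours; the field layout follows the 40-octet header just described (version, traffic class, and flow label share the first 32-bit word):

```python
import struct

def build_ipv6_header(src: bytes, dst: bytes, payload_len: int,
                      next_header: int, hop_limit: int,
                      traffic_class: int = 0, flow_label: int = 0) -> bytes:
    """Pack the 40-octet IPv6 fixed header (version is always 6)."""
    word0 = (6 << 28) | (traffic_class << 20) | flow_label
    return struct.pack("!IHBB16s16s", word0, payload_len,
                       next_header, hop_limit, src, dst)

def parse_ipv6_header(hdr: bytes) -> dict:
    """Unpack the fixed header back into its named fields."""
    word0, plen, nxt, hlim, src, dst = struct.unpack("!IHBB16s16s", hdr)
    return {
        "version": word0 >> 28,
        "traffic_class": (word0 >> 20) & 0xFF,
        "flow_label": word0 & 0xFFFFF,
        "payload_length": plen,   # bytes of payload after the header
        "next_header": nxt,       # e.g. 6 = TCP
        "hop_limit": hlim,
        "src": src,
        "dst": dst,
    }
```

Note that `struct.calcsize("!IHBB16s16s")` is exactly 40, matching the header size stated above.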


The protocol field of IPv4 is replaced with a Next Header field. This field usually specifies the transport layer protocol used by a packet's payload. In the presence of options, however, the Next Header field specifies the presence of an Extra Options header, which then follows the IPv6 header. The payload's protocol itself is specified in a field of the Options header. This insertion of an extra header to carry options is analogous to the handling of AH and Encapsulating Security Payload (ESP) in IPsec for both IPv4 and IPv6.

Transition mechanisms

Until IPv6 completely supplants IPv4, which is not likely to happen in the near future, a number of so-called transition mechanisms are needed to enable IPv6-only hosts to reach IPv4 services and to allow isolated IPv6 hosts and networks to reach the IPv6 Internet over the IPv4 infrastructure. The following transition mechanisms are briefly discussed in this section.

◆ “Dual stack” on page 38

◆ “Tunneling” on page 38

◆ “Automatic tunneling” on page 39

◆ “Configured tunneling” on page 39

◆ “Proxying and translation” on page 39

Dual stack

Since IPv6 is a conservative extension of IPv4, it is relatively easy to write a network stack that supports both IPv4 and IPv6 while sharing most of the code. Such an implementation is called a dual stack. A host implementing a dual stack is called a dual-stack host. This approach is described in RFC 4213.

Most current implementations of IPv6 use a dual stack. Some early experimental implementations used independent IPv4 and IPv6 stacks. There are no known implementations that implement IPv6 only.
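On many dual-stack hosts, a single IPv6 socket can serve both protocols by clearing the IPV6_V6ONLY option, in which case IPv4 peers appear as IPv4-mapped IPv6 addresses. A minimal sketch, assuming the operating system supports the option (defaults vary by platform):

```python
import socket

def dual_stack_listener(port: int) -> socket.socket:
    """Create one listening socket that accepts IPv6 and IPv4 connections.

    Clearing IPV6_V6ONLY lets IPv4 clients appear as IPv4-mapped
    addresses (::ffff:a.b.c.d). OS support and defaults vary, so the
    option is set explicitly rather than relied upon.
    """
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    s.bind(("::", port))   # wildcard: all IPv6 and (mapped) IPv4 addresses
    s.listen(5)
    return s
```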

Tunneling

In order to reach the IPv6 Internet, an isolated host or network must be able to use the existing IPv4 infrastructure to carry IPv6 packets. This is done using a technique somewhat misleadingly known as tunneling, which consists of encapsulating IPv6 packets within IPv4, in effect using IPv4 as a link layer for IPv6.

IPv6 packets can be directly encapsulated within IPv4 packets using protocol number 41. They can also be encapsulated within UDP packets, for example, in order to cross a router or NAT device that blocks protocol 41 traffic. They can also use generic encapsulation schemes, such as Anything In Anything (AYIYA) or Generic Routing Encapsulation (GRE).

Automatic tunneling

Automatic tunneling refers to a technique where the tunnel endpoints are automatically determined by the routing infrastructure. The recommended technique for automatic tunneling is 6to4 tunneling, which uses protocol 41 encapsulation. Tunnel endpoints are determined by using a well-known IPv4 anycast address on the remote side, and by embedding IPv4 address information within IPv6 addresses on the local side. 6to4 tunneling is widely deployed today.
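The local-side address embedding that 6to4 uses can be sketched as follows. The function name is ours; the derivation places the site's 32-bit public IPv4 address immediately after the 2002::/16 prefix, yielding a /48 for the site:

```python
import ipaddress

def sixto4_prefix(public_ipv4: str) -> ipaddress.IPv6Network:
    """Derive a site's 6to4 /48 prefix from its public IPv4 address.

    6to4 embeds the 32-bit IPv4 address in bits 16..47 of the IPv6
    address, under the well-known 2002::/16 prefix.
    """
    v4 = int(ipaddress.IPv4Address(public_ipv4))
    prefix_int = (0x2002 << 112) | (v4 << 80)
    return ipaddress.IPv6Network((prefix_int, 48))
```

For example, a site behind 192.0.2.4 receives the prefix 2002:c000:204::/48 (0xc0000204 is 192.0.2.4 in hex).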

Another automatic tunneling mechanism is Intra-Site Automatic Tunnel Addressing Protocol (ISATAP). This protocol treats the IPv4 network as a virtual IPv6 local link, with mappings from each IPv4 address to a link-local IPv6 address.

Teredo is an automatic tunneling technique that uses UDP encapsulation and is claimed to be able to cross multiple NAT boxes. Teredo is not widely deployed today, but an experimental version of Teredo is installed with the Windows XP SP2 IPv6 stack.

Note: IPv6, 6to4, and Teredo are enabled by default in Windows Vista.

Configured tunneling

Configured tunneling is a technique where the tunnel endpoints are configured explicitly, either by a human operator or by an automatic service known as a Tunnel Broker. Configured tunneling is usually more deterministic and easier to debug than automatic tunneling, and is therefore recommended for large, well-administered networks.

Configured tunneling typically uses either protocol 41 (recommended) or raw UDP encapsulation.

Proxying and translation

When an IPv6-only host needs to access an IPv4-only service (for example, a web server), some form of translation is necessary. The one form of translation that actually works is the use of a dual-stack application-layer proxy (for example, a web proxy).

Techniques for application-agnostic translation at the lower layers have also been proposed, but they have been found to be too unreliable due to the wide range of functionality required by common application-layer protocols. As such, they are commonly considered to be obsolete.


Internet Protocol security (IPsec)

Internet Protocol security (IPsec) is a set of protocols developed by the IETF to support secure exchange of packets at the IP layer. IPsec has been widely deployed to implement Virtual Private Networks (VPNs).

IPsec supports two encryption modes:

◆ Transport

◆ Tunnel

Transport mode encrypts only the payload of each packet, but leaves the header untouched. The more secure Tunnel mode encrypts both the header and the payload.

On the receiving side, an IPsec-compliant device decrypts each packet. For IPsec to work, the sending and receiving devices must share a key. This is accomplished through a protocol known as Internet Security Association and Key Management Protocol/Oakley (ISAKMP/Oakley), which allows the receiver to obtain a public key and authenticate the sender using digital certificates.

Tunneling and IPsec

Internet Protocol security (IPsec) uses cryptographic security to ensure private, secure communications over Internet Protocol networks. IPsec supports network-level data integrity, data confidentiality, data origin authentication, and replay protection. It helps secure your SAN against network-based attacks from untrusted computers, attacks that can result in the denial-of-service of applications, services, or the network, data corruption, and data and user credential theft.

By default, when creating an FCIP tunnel, IPsec is disabled.

FCIP tunneling with IPsec enabled will support maximum throughput as follows:

◆ Unidirectional: approximately 104 MB/sec

◆ Bidirectional: approximately 90 MB/sec

Used to provide greater security in tunneling on an FR4-18i blade or a Brocade SilkWorm 7500 switch, the IPsec feature does not require you to configure separate security for each application that uses TCP/IP. When configuring IPsec, however, you must ensure that there is an FR4-18i blade or a Brocade SilkWorm 7500 switch at each end of the FCIP tunnel. IPsec works on FCIP tunnels with or without IP compression (IPComp).

IPsec requires an IPsec license in addition to the FCIP license.

IPsec terminology

AES Advanced Encryption Standard. FIPS 197 endorses the Rijndael encryption algorithm as the approved AES for use by US government organizations and others to protect sensitive information. It replaces DES as the encryption standard.

AES-XCBC Cipher Block Chaining. A key-dependent one-way hash function (MAC) used with AES in conjunction with the Cipher-Block-Chaining mode of operation, suitable for securing messages of varying lengths, such as IP datagrams.

AH Authentication Header. Like ESP, AH provides data integrity, data source authentication, and protection against replay attacks but does not provide confidentiality.

DES Data Encryption Standard is the older encryption algorithm that uses a 56-bit key to encrypt blocks of 64-bit plain text. Because of its relatively short key length, it is no longer considered secure and is no longer approved for Federal use.

3DES Triple DES is a more secure variant of DES. It uses three different 56-bit keys to encrypt blocks of 64-bit plain text. The algorithm is FIPS-approved for use by Federal agencies.

ESP Encapsulating Security Payload is the IPsec protocol that provides confidentiality, data integrity, and data source authentication of IP packets, as well as protection against replay attacks.

MD5 Message Digest 5, like SHA-1, is a popular one-way hash function used for authentication and data integrity.

SHA Secure Hash Algorithm, like MD5, is a popular one-way hash function used for authentication and data integrity.


MAC Message Authentication Code is a key-dependent, one-way hash function used for generating and verifying authentication data.

HMAC A stronger MAC because it is a keyed hash inside a keyed hash.

SA Security Association. The collection of security parameters and authenticated keys that are negotiated between IPsec peers.

Chapter 2 iSCSI Technology

This chapter provides a brief overview of iSCSI technology.

◆ iSCSI technology overview............................................................... 44
◆ iSCSI discovery................................................................................... 46
◆ iSCSI error recovery........................................................................... 47
◆ iSCSI security...................................................................................... 48


iSCSI technology overview

Internet Small Computer System Interface (iSCSI) is an IP-based storage networking standard for linking data storage facilities, developed by the Internet Engineering Task Force (IETF). By transmitting SCSI commands over IP networks, iSCSI can facilitate block-level transfers over intranets and the Internet.

The iSCSI architecture is similar to a client/server architecture. In this case, the client is an initiator that issues an I/O request and the server is a target (such as a device in a storage system). This architecture can be used over IP networks to provide distance extension. This can be implemented between routers, host-to-switch, and storage array-to-storage array to provide asynchronous/synchronous data transfer.

Figure 6 shows an example of where iSCSI sits in the network.

Figure 6 iSCSI example


Figure 7 shows an example of an iSCSI header.

Figure 7 iSCSI header example

Figure 8 defines the fields, size, and functions of the iSCSI header.

Figure 8 iSCSI header fields, size, and functions


iSCSI discovery

In order for an iSCSI initiator to establish an iSCSI session with an iSCSI target, the initiator needs the IP address, TCP port number, and iSCSI target name information. The goals of iSCSI discovery mechanisms are to provide low overhead support for small iSCSI setups and scalable discovery solutions for large enterprise setups.

The following methods are briefly discussed in this section:

◆ “Static” on page 46

◆ “Send target” on page 46

◆ “iSNS” on page 46

Static

With static discovery, the initiator is configured directly with the known target IP address, TCP port, and iSCSI name.

Send target

An initiator may log in to an iSCSI target with a session type of discovery and request a list of target WWUIs through a separate SendTargets command. All iSCSI targets are required to support the SendTargets command.

iSNS

The iSNS protocol is designed to facilitate the automated discovery, management, and configuration of iSCSI and Fibre Channel devices on a TCP/IP network. iSNS provides intelligent storage discovery and management services comparable to those found in Fibre Channel networks, allowing a commodity IP network to function in a similar capacity as a storage area network. iSNS also facilitates a seamless integration of IP and Fibre Channel networks, due to its ability to emulate Fibre Channel fabric services, and manage both iSCSI and Fibre Channel devices. iSNS thereby provides value in any storage network comprised of iSCSI devices, Fibre Channel devices, or any combination of the two.
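The discovery inputs and the SendTargets request can be sketched as follows. The record type and the example target name are illustrative, not from this document; the NUL-terminated key=value text form follows the iSCSI specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiscoveredTarget:
    """What an initiator must learn before it can open a normal session."""
    ip: str      # target portal IP address
    port: int    # target portal TCP port (3260 is the default iSCSI port)
    name: str    # iSCSI target name, e.g. an iqn.* string

def sendtargets_request(scope: str = "All") -> bytes:
    # In a discovery session the initiator sends a text request whose
    # key=value pairs are NUL-terminated; SendTargets=All asks the
    # portal to list every target it serves.
    return f"SendTargets={scope}\x00".encode("ascii")
```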


iSCSI error recovery

iSCSI supports three levels of error recovery: 0, 1, and 2:

◆ Error recovery level 0 implies session level recovery.

◆ Error recovery level 1 implies level 0 capabilities as well as digest failure recovery.

◆ Error recovery level 2 implies level 1 capabilities as well as connection recovery.

The most basic kind of recovery is called session recovery. In session recovery, whenever any kind of error is detected, the entire iSCSI session is terminated. All TCP connections connecting the initiator to the target are closed, and all pending SCSI commands are completed with an appropriate error status. A new iSCSI session is then established between the initiator and target, with new TCP connections.

Digest failure recovery starts if the iSCSI driver detects that data arrived with an invalid data digest and that data packet must be rejected. The command corresponding to the corrupted data can then be completed with an appropriate error indication.

Connection recovery can be used when a TCP connection is broken. Upon detection of a broken TCP connection, the iSCSI driver can either immediately complete the pending command with an appropriate error indication, or can attempt to transfer the SCSI command to another TCP connection. If necessary, the iSCSI initiator driver can establish another TCP connection to the target and inform the target of the change in allegiance of the SCSI command to another TCP connection.
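The cumulative relationship between the three levels can be captured in a small table. This is an illustration of the hierarchy, not an API:

```python
# Each error recovery level (ERL) includes every capability of the
# levels below it, as described above.
RECOVERY_CAPABILITIES = {
    0: frozenset({"session recovery"}),
    1: frozenset({"session recovery", "digest failure recovery"}),
    2: frozenset({"session recovery", "digest failure recovery",
                  "connection recovery"}),
}

def capabilities(level: int) -> frozenset:
    """Return the recovery capabilities implied by a negotiated ERL."""
    return RECOVERY_CAPABILITIES[level]
```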


iSCSI security

Historically, native storage systems have not had to consider security because their environments offered minimal security risks. These environments consisted of storage devices either directly attached to hosts or connected through a Storage Area Network (SAN) distinctly separate from the communications network.

The use of storage protocols such as SCSI over IP networks requires that security concerns be addressed. iSCSI implementations must provide means of protection against active attacks (such as pretending to be another identity, or message insertion, deletion, modification, and replay) and passive attacks (such as eavesdropping, or gaining advantage by analyzing the data sent over the line).

Although technically possible, iSCSI should not be configured without security. iSCSI configured without security should be confined to closed environments that present no security risk.

This section provides basic information on:

◆ “Security mechanisms” on page 48

◆ “Authentication methods” on page 49

Security mechanisms

The entities involved in iSCSI security are the initiator, target, and IP communication end points. iSCSI scenarios in which multiple initiators or targets share a single communication end point are expected. To accommodate such scenarios, iSCSI uses two separate security mechanisms:

◆ In-band authentication between the initiator and the target at the iSCSI connection level (carried out by exchange of iSCSI Login PDUs).

◆ Packet protection (integrity, authentication, and confidentiality) by IPsec at the IP level.

The two security mechanisms complement each other. The in-band authentication provides end-to-end trust (at login time) between the iSCSI initiator and the target while IPsec provides a secure channel between the IP communication end points.


Authentication methods

The authentication methods that can be used are:

CHAP (Challenge Handshake Authentication Protocol)

The Challenge-Handshake Authentication Protocol (CHAP) is used to periodically verify the identity of the peer using a three-way handshake. This is done upon establishing the initial link and may be repeated anytime after the link has been established. CHAP provides protection against playback attack by the peer through the use of an incrementally changing identifier and a variable challenge value. The use of repeated challenges is intended to limit the time of exposure to any single attack. The authenticator is in control of the frequency and timing of the challenges. This authentication method depends upon a "secret" known only to the authenticator and that peer. The secret is not sent over the link.
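The challenge/response computation itself is defined by RFC 1994: the response is an MD5 hash over the identifier, the shared secret, and the challenge, so the secret never crosses the link. A sketch with an authenticator-side check (the helper names are ours):

```python
import hashlib
import hmac
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 response: MD5 over identifier || secret || challenge."""
    return hashlib.md5(bytes([identifier & 0xFF]) + secret + challenge).digest()

def authenticate(secret: bytes, peer_response: bytes,
                 identifier: int, challenge: bytes) -> bool:
    # The authenticator recomputes the hash with its own copy of the
    # secret and compares. Because the challenge changes on each round,
    # a captured response from an earlier exchange cannot be replayed.
    expected = chap_response(identifier, secret, challenge)
    return hmac.compare_digest(expected, peer_response)
```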

SRP (Secure Remote Password)

This mechanism is suitable for negotiating secure connections using a user-supplied password, while eliminating the security problems traditionally associated with reusable passwords. This system also performs a secure key exchange in the process of authentication, allowing security layers (privacy and/or integrity protection) to be enabled during the session. Trusted key servers and certificate infrastructures are not required, and clients are not required to store or manage any long-term keys.

KRB5 (Kerberos V5)

Kerberos provides a means of verifying the identities of principals (such as a workstation user or a network server) on an open (unprotected) network. This is accomplished without relying on authentication by the host operating system, without basing trust on host addresses, and without requiring physical security of all the hosts on the network, under the assumption that packets traveling along the network can be read, modified, and inserted at will. Kerberos performs authentication under these conditions as a trusted third-party authentication service by using conventional cryptography, such as a shared secret key.


SPKM1 & 2 (Simple Public Key GSS-API Mechanism)

This mechanism provides authentication, key establishment, data integrity, and data confidentiality in an on-line distributed application environment using a public-key infrastructure. SPKM can be used as a drop-in replacement by any application which makes use of security services through GSS-API calls (for example, any application which already uses the Kerberos GSS-API for security).

Digests

Digests enable end-to-end, non-cryptographic data integrity checking beyond the integrity checks provided by the link layers, and they cover the entire communication path, including all elements that may change the network-level PDUs, such as routers, switches, and proxies.

Optional header and data digests protect the integrity of the header and data, respectively. The digests, if present, are located after the header and PDU-specific data and cover the header and the PDU data, each including the padding bytes, if any. The existence and type of digests are negotiated during the Login phase. The separation of the header and data digests is useful in iSCSI routing applications, where only the header changes when a message is forwarded. In this case, only the header digest should be recalculated.
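iSCSI defines CRC32C as the digest algorithm negotiated at login; the 32-bit result is appended after the header or data segment it protects. A dependency-free bitwise sketch (real initiators and targets use table-driven or hardware CRC instructions instead):

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC32C (Castagnoli), the polynomial used by iSCSI digests.

    Reflected polynomial 0x82F63B78, initial value 0xFFFFFFFF, final
    XOR 0xFFFFFFFF. Slow, but illustrates the computation exactly.
    """
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF
```

The standard check value for this CRC is `crc32c(b"123456789") == 0xE3069283`, which makes the sketch easy to validate against other implementations.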

IPSec

IPSec is used for encryption and IP-level protection. It uses:

◆ Authentication Header (AH)

◆ Encapsulating Security Payload (ESP)

◆ Internet Key Exchange (IKE)

IPSec is supported on the 1 Gb/s ports for iSCSI.

For more information on IPSec, refer to “Internet Protocol security (IPsec)” on page 40.


Chapter 3 iSCSI Solutions

This chapter provides the following information on iSCSI solutions.

◆ Best practices....................................................................................... 52
◆ EMC native iSCSI targets .................................................................. 53
◆ Configuring iSCSI targets ................................................................. 58
◆ Bridged solutions ............................................................................... 60
◆ Summary ............................................................................................. 73


Best practices

This section lists general best practices concerning:

◆ “Network design” on page 52

◆ “Header and data digest” on page 52

Network design

The network should be dedicated solely to the IP technology being used; no other traffic should be carried over it.

The network must be a well-engineered network with no packet loss or packet duplication, since either condition leads to retransmission, which is undesirable.

While planning the network, care must be taken to ascertain that the utilized throughput will never exceed the available bandwidth. Oversubscribing available bandwidth will lead to network congestion, which causes dropped packets and leads to TCP slow start. Network congestion must be considered between switches as well as between the switch and the end device.

The MTU must be configured based on the maximum available MTU supported by each component on the network.
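A back-of-the-envelope check of utilized throughput against available bandwidth can be sketched as below. The 70% utilization ceiling is a common rule of thumb we assume here, not a figure from this document:

```python
def link_is_oversubscribed(link_gbps: float, flow_mbps: list,
                           utilization_ceiling: float = 0.70) -> bool:
    """Return True if planned flows exceed a safe fraction of capacity.

    Exceeding the ceiling invites congestion, dropped packets, and TCP
    slow start, as described above. The 0.70 default is an assumption,
    not a value from this TechBook.
    """
    offered_gbps = sum(flow_mbps) / 1000.0
    return offered_gbps > utilization_ceiling * link_gbps
```

For example, two 400 Mb/s flows oversubscribe a 1 Gb/s link under this rule, but comfortably fit a 10 Gb/s link. The same check should be applied between switches as well as between switch and end device.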

Header and data digest

Header and data digests are mandatory when using a routed (Layer 3) network or when using a Layer 2 network with VLAN tagging.

In a plain LAN (other than the cases mentioned above), digests are not mandatory.


EMC native iSCSI targets

This section discusses the following EMC® native iSCSI targets:

◆ “Symmetrix” on page 53

◆ “VNX for Block and CLARiiON” on page 54

◆ “Celerra Network Server” on page 55

◆ “VNX series for File” on page 56

Symmetrix

This section describes the EMC Symmetrix® VMAX™, DMX-4, and DMX-3.

VMAX, DMX-4, DMX-3

The iSCSI channel director supports iSCSI channel connectivity to IP networks and to iSCSI-capable open systems servers for block storage transfer between hosts and storage. The primary applications are storage consolidation and host extension for stranded servers and departmental workgroups.

◆ The Symmetrix DMX iSCSI provides 1 Gb/s Ethernet ports and connects through LC connectors.

◆ The Symmetrix VMAX iSCSI provides 1 Gb/s Ethernet ports and also connects through LC connectors. With EMC Enginuity™ 5875 code, both 1 Gb/s and 10 Gb/s are supported.

The iSCSI directors support the iSNS protocol. CHAP (Challenge Handshake Authentication Protocol) is the supported authentication mechanism. LUNs are configured in the same manner as for Fibre Channel directors and are assigned to the iSCSI ports. LUN masking is available. Both the 10 Gb/s (VMAX) and 1 Gb/s (DMX/VMAX) ports support IPv4 and IPv6.

References

For configuration of the Symmetrix iSCSI target, please check the Symmetrix configuration guide.

For up-to-date iSCSI host support please refer to EMC Support Matrix, available through E-Lab Interoperability Navigator at: http://elabnavigator.EMC.com.

For configuration of iSCSI server, please check the respective host connectivity guide.


VNX for Block and CLARiiON

EMC VNX™ for Block and CLARiiON® native iSCSI targets include:

VNX5300/5500/5700/7500

This can be configured as a combination of a 10/1 Gb iSCSI and 8 Gb Fibre Channel array. iSNS protocol is supported. Authentication mechanism is Challenge Handshake Authentication Protocol (CHAP). LUNs are configured in the same manner as for Fibre Channel arrays and are assigned to a storage group.

CX4 120/240/480/960

This can be configured as a combination of a 10/1 Gb iSCSI and 8 Gb Fibre Channel array. iSNS protocol is supported. Authentication mechanism is CHAP. LUNs are configured in the same manner as for Fibre Channel arrays and are assigned to a storage group.

CX3-20/CX3-40

This can be configured as an iSCSI array or Fibre Channel array. All iSCSI ports on the array are 1 Gb/s Ethernet ports. iSNS protocol is supported. Authentication mechanism is CHAP.

LUNs are configured in the same manner as for Fibre Channel array and are assigned to a storage group.

CX300i/500i

These are dedicated iSCSI arrays. All iSCSI ports on the array are 1 Gb/s Ethernet ports. iSNS protocol is supported. Authentication mechanism is CHAP.

LUNs are configured in the same manner as for Fibre Channel array and are assigned to a storage group.

AX150/100i

These are dedicated iSCSI arrays. All iSCSI ports on the array are 1 Gb/s Ethernet ports. iSNS protocol is supported. Authentication mechanism is CHAP.

LUNs are configured in the same manner as for Fibre Channel array and are assigned to a storage group.

References

For configuration of the CLARiiON iSCSI target, please check the CLARiiON configuration guide.

For up-to-date iSCSI host support please refer to EMC Support Matrix, available through E-Lab Interoperability Navigator at: http://elabnavigator.EMC.com.

For configuration of iSCSI server, please check the respective host connectivity guide.


Celerra Network Server

Note: This configuration is available on pre-VNX series systems.

The EMC Celerra® Network Server provides iSCSI target capabilities combined with NAS capabilities, as shown in Figure 9 on page 55. The Celerra iSCSI system is defined by creating a file system. The file system is built on Fibre Channel LUNs accessible on EMC Symmetrix or CLARiiON arrays. The file system is then mounted on the Celerra server Data Movers. iSCSI LUNs are then defined out of the file system and allocated to iSCSI targets. The targets are then associated with one of the Celerra TCP/IP interfaces.

Figure 9 Celerra iSCSI configurations

All Celerra Network Servers can be configured to provide iSCSI services. The following are some of the characteristics of the Celerra Network Server:

◆ iSCSI error recovery level 0 (session-level recovery).

◆ Supports CHAP with unlimited entries for one-way authentication and one entry for reverse authentication.

◆ Uses iSNS protocol for discovery.

◆ Provides 10 Gb/s and 1 Gb/s interfaces

◆ Supports EMC storage Symmetrix and CLARiiON on the back end.


Implementation best practices

The following information helps you estimate size requirements for iSCSI LUNs and provides guidelines for configuring iSCSI on the Celerra Network Server.

◆ Estimate size requirements for the file system.

When using regular iSCSI LUNs, the file system should be large enough to hold the LUNs and the planned snapshots of those LUNs. Each iSCSI snapshot may require the same amount of space on the file system as the LUN.

◆ Create and mount file systems for iSCSI LUNs.

The next step in configuring iSCSI targets on a Celerra Network Server is to create and mount one or more file systems to provide a dedicated storage resource for the iSCSI LUNs. Create and mount a file system through Celerra Manager or the CLI. The Celerra Manager Online Help and the technical module Managing Celerra Volumes and File Systems Manually provide instructions.

VNX series for File

IMPORTANT! iSCSI functionality is available for the VNX unified storage platforms and Gateway file systems, but must first be enabled by EMC Customer Service.

VNX 5000 series unified storage system

The VNX 5000 series unified storage system implements a modular architecture that integrates hardware components for block, file, and object storage, with concurrent support for native NAS, iSCSI, Fibre Channel, and FCoE protocols. Figure 10 shows an example of a VNX 5000 series unified storage system configuration.

Figure 10 VNX 5000 series iSCSI configuration


VNX series Gateway VG2

The EMC VNX series Gateway VG2 platform delivers a comprehensive, consolidated solution that adds NAS storage in a centrally managed information storage system. Figure 11 shows an example of a VNX series Gateway VG2 configuration.

Figure 11 VNX VG2 iSCSI configuration


Configuring iSCSI targets

This section lists the tasks you must perform to configure iSCSI targets and LUNs on the Celerra Network Server.

The online Celerra man pages and the Celerra Network Server Command Reference Manual provide detailed descriptions of the commands used in these procedures.

1. Create iSCSI targets:

You need to create one or more iSCSI targets on the Data Mover so an iSCSI initiator can establish a session and exchange data with the Celerra Network Server.

2. Create iSCSI LUNs:

After creating an iSCSI target, you must create iSCSI LUNs on the target. The LUNs provide access to the storage space on the Celerra Network Server. From the point of view of a client system, a Celerra iSCSI LUN appears as any other disk device.

3. Create iSCSI LUN masks:

On the Celerra Network Server, a LUN mask on a target controls incoming iSCSI access by granting or denying an iSCSI initiator access to specific iSCSI LUNs on that target. When created, an iSCSI target has no LUN masks, which means no initiator can access LUNs on that target. To enable an initiator to access LUNs on a target, you need to create a LUN mask to specify the initiator and the LUNs it can access.

4. Configure iSNS on the Data Mover (optional):

If you want iSCSI initiators to automatically discover the iSCSI targets on a Data Mover, you can configure an iSNS client on the Data Mover. Configuring an iSNS client on the Data Mover causes the Data Mover to register all of its iSCSI targets with an external iSNS server. iSCSI initiators can then query the iSNS server to discover the available targets on the Data Movers.

5. Create CHAP entries (optional):

If you want a Data Mover to authenticate the identity of each iSCSI initiator, configure CHAP authentication on the Data Mover. To configure CHAP, you must:

a. Set the appropriate parameters so targets on the Data Mover require CHAP authentication.


b. Create a CHAP entry for each initiator that contacts the Data Mover. CHAP entries are configured on each Data Mover. Each initiator has a unique CHAP secret for the Data Mover.

c. In some cases, initiators authenticate the identity of the targets as well. In this case, you must configure a CHAP entry for reverse authentication. Reverse authentication entries differ from regular CHAP entries because each Data Mover can have only one CHAP secret. The Data Mover uses the same CHAP secret when any iSCSI initiator authenticates a target on the Data Mover.
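The CHAP exchange behind these steps can be sketched with the standard RFC 1994 calculation, where the response is an MD5 digest over the one-byte identifier, the shared secret, and the challenge. This is an illustration of the protocol math, not EMC code; the secret value is a placeholder:

```python
# CHAP response calculation per RFC 1994: MD5 over the one-byte
# identifier, the shared secret, and the challenge. Illustrative
# sketch of how a target verifies an initiator; not Celerra code.
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Target side: issue a challenge, then verify the initiator's answer.
secret = b"per-initiator-secret"   # unique per initiator (step b above)
challenge = os.urandom(16)
ident = 1

answer = chap_response(ident, secret, challenge)           # computed by initiator
assert chap_response(ident, secret, challenge) == answer   # target's check passes

# A wrong secret fails verification.
assert chap_response(ident, b"wrong", challenge) != answer
```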

6. Start the iSCSI service:

Before using iSCSI targets on the Celerra Network Server, you must start the iSCSI service on the Data Mover.

References

For more information, refer to Configuring iSCSI Targets on Celerra, available on Powerlink.


Bridged solutions

The following switches are discussed in this section:

◆ “Brocade”, next

◆ “Cisco” on page 63

◆ “Brocade M Series” on page 69

Brocade

The FC4-16IP iSCSI gateway service is an intermediate device in the network, allowing iSCSI initiators in an IP SAN to access and utilize storage in a Fibre Channel (FC) SAN.

Supported configurations

The iSCSI gateway enables applications on an IP network to use an iSCSI initiator to connect to FC targets. The iSCSI gateway translates iSCSI protocol to Fibre Channel Protocol (FCP), bridging the IP network and FC SAN.

Note: The FC4-16IP iSCSI gateway service is not compatible with other iSCSI gateway platforms, including Brocade iSCSI Gateway or the SilkWorm Multiprotocol Router.

Figure 12 shows a basic iSCSI gateway service implementation.

Figure 12 iSCSI gateway service basic implementation

[Figure 12 labels: iSCSI initiator, IP network, FC4-16IP iSCSI gateway, SAN, FC target 1 (LUNs), FC target 2 (LUNs)]

The Brocade FC4-16IP blade acts as an iSCSI gateway between FC-attached targets and iSCSI initiators. On the iSCSI initiator, iSCSI is mapped between the SCSI driver and the TCP/IP stack. At the iSCSI gateway port, the incoming iSCSI data is converted to FCP (SCSI on FC) by the iSCSI virtual initiator and then forwarded to the FC target. This allows low-cost servers to leverage an existing FC infrastructure.

To represent all iSCSI initiators and sessions, each iSCSI portal has one iSCSI virtual initiator (VI) to the FC fabric that appears as an N_Port device with a special WWN format. Regardless of the number of iSCSI initiators or iSCSI sessions sharing the portal, Fabric OS uses one iSCSI VI per iSCSI portal.

Fabric OS provides a mechanism that maps LUNs to iSCSI VTs, a one-to-one mapping with unique iSCSI Qualified Names (IQNs) for each target. It presents an iSCSI VT for each native FC target to the IP network and an iSCSI VI for each iSCSI port to the FC fabric.
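The one-to-one mapping with unique IQNs can be sketched as a deterministic function from target WWPN to IQN. The naming scheme below is hypothetical, purely to illustrate the uniqueness requirement; Fabric OS uses its own format:

```python
# One-to-one mapping of FC targets to iSCSI virtual targets (VTs),
# each with a unique IQN. The IQN naming scheme is hypothetical,
# illustrating only that distinct WWPNs must yield distinct IQNs.
def virtual_target_iqn(wwpn: str) -> str:
    # Derive a deterministic, unique IQN from the target WWPN.
    return "iqn.2002-12.com.example:fc-vt-" + wwpn.replace(":", "").lower()

wwpns = ["50:06:04:8A:D5:F0:8B:9D", "50:06:04:8A:D5:F0:8B:9E"]
iqns = [virtual_target_iqn(w) for w in wwpns]
assert len(set(iqns)) == len(wwpns)   # distinct WWPNs -> distinct IQNs
```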

Fabric OS also supports more complicated configurations, allowing each iSCSI VT to be mapped to one or more physical FC targets. Each FC target can have one or more LUNs. Physical LUNs can be mapped to different virtual LUNs.

Implementation best practices

Table 1 lists scalability guidelines, restrictions, and limitations:

Table 1  Scalability guidelines (page 1 of 2)

# of iSCSI sessions per port: 64
# of iSCSI ports per FC4-16IP blade: 8
# of iSCSI blades in a switch: 4
# of iSCSI sessions per FC4-16IP blade: 512
# of iSCSI sessions per switch: 1024
# of TCP sessions per switch: 1024
# of TCP connections per iSCSI session: 2
# of iSCSI sessions per fabric: 4096
# of TCP connections per fabric: 4096
# of iSCSI targets per fabric: 4096
# of CHAP entries per fabric: 4096
# of LUNs per iSCSI target: 256


The following are installation tips and recommendations:

◆ All iSCSI Virtual Initiators should be included in the zone with the specified target.

◆ All iSCSI VIs must be registered on the CLARiiON array and added to the appropriate storage groups.

◆ All iSCSI VIs must be added to the Symmetrix VCM database, if utilizing the device masking functionality.

◆ If the FC targets use access control lists/database, you must add the FC NWWN/WWPN of the Ironman blade to the ACL/database (fclunquery -s to determine Ironman FC NWWN/WWPN).

◆ It is recommended to mask all LUNs to all VIs and to perform LUN masking from the Ironman blade by creating individual iSCSI Virtual Targets and assigning the LUNs to the appropriate iSCSI Virtual Target.

◆ Firmware upgrades are not online events for the Ironman GigE ports, so plan accordingly.

◆ The fcLunQuery command only gets addresses from targets that support the ReportLuns command.

References All Brocade documentation can be located at http://www.brocade.com. Click Brocade Connect to register, at no cost, for a user ID and password.

The following documentation is available for Fabric OS:

◆ Fabric OS Administrator’s Guide

◆ Fabric OS Command Reference

◆ Fabric OS MIB Reference

◆ Fabric OS Message Reference

◆ Brocade Glossary

Table 1  Scalability guidelines (page 2 of 2)

# of members per discovery domain: 64
# of discovery domains per discovery domain set: 4096
# of discovery domain sets: 4


The following documentation is available for SilkWorm 48000 director and iSCSI blade:

◆ SilkWorm 48000 Hardware Reference Manual

◆ iSCSI Gateway Service Administrator’s Guide

◆ FC4-16IP Hardware Reference Manual

Cisco

Cisco MDS 9000 storage switches are multiprotocol switches that support the Fibre Channel and Gigabit Ethernet (FCIP and iSCSI) protocols. Each switch model can be used as a Fibre Channel-iSCSI gateway to support iSCSI solutions with Fibre Channel targets (Symmetrix, VNX series, and CLARiiON).

Cisco MDS 9000 family IP storage (IPS) services extend the reach of Fibre Channel SANs by using open-standard, IP-based technology. The switch allows IP hosts to access Fibre Channel storage using the iSCSI protocol. The iSCSI feature is specific to the IPS module and is available in Cisco MDS 9200 Switches or Cisco MDS 9500 Directors. The Cisco MDS 9216i switch and the 14/2 Multiprotocol Services (MPS-14/2) module also allow you to use Fibre Channel, FCIP, and iSCSI features. The MPS-14/2 module is available for use in any switch in the Cisco MDS 9200 Series or Cisco MDS 9500 Series.

Supported configurations

Initiator presentation modes (transparent and proxy)

The two modes available to present iSCSI hosts in the Fibre Channel fabric are transparent initiator mode and proxy initiator mode.

◆ In transparent initiator mode, each iSCSI host is presented as one virtual Fibre Channel host. The benefit of transparent mode is that it allows a finer level of Fibre Channel access control configuration (similar to managing a "real" Fibre Channel host). Because of the one-to-one mapping from iSCSI to Fibre Channel, each host can have different zoning or LUN access control on the Fibre Channel storage device.

◆ In proxy-initiator mode, there is only one virtual Fibre Channel host per IPS port, which all iSCSI hosts use to access Fibre Channel targets. In a scenario where the Fibre Channel storage device requires explicit LUN access control for every host, the static configuration for each iSCSI initiator can be overwhelming. In such a case, using proxy-initiator mode simplifies the configuration.


Figure 13 shows an example of a supportable configuration.

Figure 13 Supportable configuration example

The following iSCSI configurations are supported:

◆ The Cisco MDS switches can be used as Fibre Channel-iSCSI gateway to run applications using an iSCSI initiator to Symmetrix, VNX series, and CLARiiON storage devices.

◆ Host-based redundancy is supported through the use of EMC PowerPath®.

iSCSI configuration has the following limits:

◆ The maximum number of iSCSI initiators supported in a fabric is 1800.

◆ The maximum number of iSCSI sessions supported by an IPS port in either transparent or proxy initiator mode is 300.

◆ The maximum number of iSCSI sessions supported per switch is 5000.

[Figure 13 labels: iSCSI hosts, dedicated well-engineered Layer 2 network, IPS port configured for iSCSI on VSAN A, Cisco MDS 9000, VSAN A FC fabric, Symmetrix, CLARiiON, VNX]

◆ The maximum number of iSCSI targets supported in a fabric is 6000.

Configuration overview

To use the iSCSI feature, you must explicitly enable iSCSI on the required switches in the fabric. By default, this feature is disabled in all switches in the Cisco MDS 9000 family. Each physical Gigabit Ethernet interface on an IPS module or MPS-14/2 module can be used to translate and route iSCSI requests to Fibre Channel targets and responses in the opposite direction. To enable this capability, the corresponding iSCSI interface must be in an enabled state.

Presenting Fibre Channel targets as iSCSI targets

The IPS module or MPS-14/2 module presents physical Fibre Channel targets as iSCSI virtual targets, allowing them to be accessed by iSCSI hosts. It does this in one of two ways:

◆ Dynamic mapping — Automatically maps all the Fibre Channel target devices/ports as iSCSI devices. Use this mapping to create automatic iSCSI target names.

◆ Static mapping — Manually creates iSCSI target devices and maps them to the whole Fibre Channel target port or a subset of Fibre Channel LUNs. With this mapping, you must specify unique iSCSI target names.

Presenting iSCSI hosts as virtual Fibre Channel hosts

The IPS module or MPS-14/2 module connects to the Fibre Channel storage devices on behalf of the iSCSI host to send commands and transfer data to and from the storage devices. These modules use a virtual Fibre Channel N_Port to access the Fibre Channel storage devices on behalf of the iSCSI host. iSCSI hosts are identified by either iSCSI qualified name (IQN) or IP address.

Initiator identification

iSCSI hosts can be identified by the IPS module or MPS-14/2 module using the following:

◆ iSCSI qualified name (IQN)

An iSCSI initiator is identified based on the iSCSI node name it provides in the iSCSI login. This mode can be useful if an iSCSI host has multiple IP addresses and you want to provide the same service independent of the IP address used by the host. An initiator with multiple IP addresses (multiple network interface cards, NICs) has one virtual N_Port on each IPS port it logs in to.


◆ IP address

An iSCSI initiator is identified based on the IP address of the iSCSI host. This mode is useful if an iSCSI host has multiple IP addresses and you want to provide a different service based on the IP address used by the host. It is also easier to obtain the IP address of a host than its iSCSI node name. A virtual N_Port is created for each IP address the host uses to log in to iSCSI targets. If a host using one IP address logs in to multiple IPS ports, each IPS port creates one virtual N_Port for that IP address.

You can configure the iSCSI initiator identification mode on each IPS port and all the iSCSI hosts terminating on the IPS port will be identified according to that configuration. The default mode is to identify the initiator by name.
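The two identification modes can be sketched as a choice of lookup key per IPS port. This is a simplified model, not switch firmware; the IQN and addresses are illustrative:

```python
# Per-IPS-port initiator identification: the port is configured to
# identify hosts either by iSCSI node name (IQN) or by IP address.
# Illustrative sketch only; the default mode identifies by name.
def initiator_key(login: dict, mode: str = "name") -> str:
    if mode == "name":
        return login["iqn"]        # same key for all of a host's IPs
    elif mode == "ip":
        return login["src_ip"]     # different key per IP address
    raise ValueError("mode must be 'name' or 'ip'")

login_a = {"iqn": "iqn.1991-05.com.microsoft:host1", "src_ip": "10.0.0.5"}
login_b = {"iqn": "iqn.1991-05.com.microsoft:host1", "src_ip": "10.0.1.5"}

# By name, both NICs of the host map to one identity...
assert initiator_key(login_a) == initiator_key(login_b)
# ...by IP, each address is treated as a separate initiator.
assert initiator_key(login_a, "ip") != initiator_key(login_b, "ip")
```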

iSCSI access control

Two mechanisms of access control are available for iSCSI devices:

◆ Fibre Channel zoning-based access control

◆ iSCSI ACL-based access control

Depending on the initiator mode used to present the iSCSI hosts in the Fibre Channel fabric, either or both access control mechanisms can be used.

Fibre Channel zoning-based access control

Cisco SAN-OS VSAN and zoning concepts have been extended to cover both Fibre Channel devices and iSCSI devices. Zoning is the standard access control mechanism for Fibre Channel devices, which is applied within the context of a VSAN. Fibre Channel zoning has been extended to support iSCSI devices, and this extension has the advantage of having a uniform, flexible access control mechanism across the whole SAN. For Fibre Channel devices, zone membership can be based on the following:

◆ Fibre Channel device WWPN.

◆ Interface and switch WWN. A device connecting through that interface is within the zone.

In the case of iSCSI, multiple iSCSI devices may be connected behind an iSCSI interface. Interface-based zoning may not be useful because all the iSCSI devices behind the interface will automatically be within the same zone.

In transparent initiator mode (where one Fibre Channel virtual N_Port is created for each iSCSI host), the standard Fibre Channel device WWPN-based zoning membership mechanism can be used if an iSCSI host has static WWN mapping.

The zoning membership mechanism has been enhanced to add iSCSI devices to zones based on the following:

◆ IPv4 address/subnet mask

◆ IPv6 address/prefix length (currently, EMC does not support IP version 6)

◆ iSCSI qualified name (IQN)

◆ Symbolic-node-name (IQN)

For iSCSI hosts that do not have a static WWN mapping, the feature allows the IP address or iSCSI node name to be specified as zone members. Note that iSCSI hosts that have static WWN mapping can also use these features. IP address-based zone membership allows multiple devices to be specified in one command by providing the subnet mask.
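The subnet-based membership rule can be sketched with the standard library's ipaddress module. This illustrates only the matching logic; the zone member specification and host addresses are made up:

```python
# IP-based zone membership: one address/mask entry covers multiple
# iSCSI hosts. Sketch of the matching rule using the standard
# ipaddress module; addresses here are purely illustrative.
import ipaddress

def in_zone(member_spec: str, host_ip: str) -> bool:
    # member_spec is an address with subnet mask, e.g. "10.2.0.0/16"
    return ipaddress.ip_address(host_ip) in ipaddress.ip_network(member_spec)

assert in_zone("10.2.0.0/16", "10.2.4.21")      # inside the subnet: member
assert not in_zone("10.2.0.0/16", "10.3.4.21")  # outside it: not a member
```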

iSCSI-based access control

iSCSI-based access control is applicable only if static iSCSI virtual targets are created. For a static iSCSI target, you can configure a list of iSCSI initiators that are allowed to access the targets.

By default, static iSCSI virtual targets are not accessible to any iSCSI host. You must explicitly configure accessibility to allow an iSCSI virtual target to be accessed by all hosts. The initiator access list can contain one or more initiators. The iSCSI initiator can be identified by one of the following mechanisms:

◆ iSCSI node name

◆ IPv4 address and subnet

◆ IPv6 address (currently EMC does not support IP version 6)

Note: For a transparent mode iSCSI initiator, if both Fibre Channel zoning and iSCSI ACLs are used, for every static iSCSI target that is accessible to the iSCSI host, the initiator's virtual N_Port should be in the same Fibre Channel zone as the Fibre Channel target.

iSCSI session authentication

The IPS module or MPS-14/2 module supports the iSCSI authentication mechanism to authenticate the iSCSI hosts that request access to the storage devices. By default, the IPS modules or MPS-14/2 modules allow CHAP or None authentication of iSCSI initiators. If authentication should always be used, you must configure the switch to allow only CHAP authentication.

For CHAP user name or secret validation, you can use any method supported and allowed by the Cisco MDS AAA infrastructure. AAA authentication supports a RADIUS, TACACS+, or local authentication device.

iSCSI immediate data and unsolicited data features

Cisco MDS switches support the iSCSI immediate data and unsolicited data features if requested by the initiator during the login negotiation phase. Immediate data is iSCSI write data contained in the data segment of an iSCSI command protocol data unit (PDU), such as combining the write command and write data together in one PDU. Unsolicited data is iSCSI write data that an initiator sends to the iSCSI target, such as an MDS switch, in an iSCSI data-out PDU without having to receive an explicit ready to transfer (R2T) PDU from the target.

These two features help reduce I/O time for small write commands because they remove one round trip between the initiator and the target for the R2T PDU. As an iSCSI target, the MDS switch allows up to 64 KB of unsolicited data per command. This is controlled by the FirstBurstLength parameter during the iSCSI login negotiation phase.

If an iSCSI initiator supports immediate data and unsolicited data features, these features are automatically enabled on the MDS switch with no configuration required.
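The FirstBurstLength cap described above can be sketched as a min-style login negotiation. This is a simplified model of the RFC 3720 rule, not switch firmware; the offered values are illustrative:

```python
# Login negotiation for unsolicited data: the negotiated
# FirstBurstLength cannot exceed what the target (the MDS switch
# here) allows, which is 64 KB per command. Simplified sketch of
# min-style negotiation; not actual switch code.
TARGET_MAX_FIRST_BURST = 64 * 1024   # 64 KB cap per command

def negotiate_first_burst(initiator_offer: int) -> int:
    return min(initiator_offer, TARGET_MAX_FIRST_BURST)

assert negotiate_first_burst(256 * 1024) == 64 * 1024  # capped by the target
assert negotiate_first_burst(16 * 1024) == 16 * 1024   # smaller offer wins
```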

Implementation best practices

Symmetrix setup

Symmetrix SRDF ports should be configured as standard Fibre Channel SRDF ports. In a Fibre Channel environment, the Cisco MDS switch provides all the services of a Fibre Channel switch, similar to those provided by any other Fibre Channel switch.

VNX series setup

VNX ports should be configured as standard Fibre Channel target ports for iSCSI configurations.

CLARiiON setup

CLARiiON ports should be configured as standard Fibre Channel target ports for iSCSI configurations.

References

All documentation can be found at www.cisco.com. Search for the Cisco MDS Configuration Guide and choose the guide relevant to the code running in your environment.


Brocade M Series

The Brocade M Series Eclipse switches are multiprotocol switches (see Figure 14). They support Fibre Channel and Gigabit Ethernet. In an FC environment, the switch provides all the services similar to those provided by any other FC switch.

The Brocade M Series multiprotocol switches are used in distance-extension configurations running SRDF or MirrorView over IP, for SAN Copy deployment over IP, or as a gateway device to perform FC-iSCSI translation.

On these switches:

◆ Ports 1 through 12 should be configured as FC ports (F_Ports or R_Ports).

◆ Ports 13 to 16 can be configured as either iFCP (TCP) ports (to cover long distances) or as iSCSI ports.

Figure 14 Brocade M Series multiprotocol switch

[Figure 14 labels: iSCSI hosts, dedicated well-engineered Layer 2 network (no packet loss, no duplication), McDATA multiprotocol switch, interoperable switch, Symmetrix, CLARiiON]


Supported configurations

The following iSCSI configurations are supported:

◆ The Eclipse switches can be used as Fibre Channel-iSCSI gateway to run applications using an iSCSI initiator to Symmetrix and CLARiiON storage devices.

◆ The iSCSI host must run the MS iSCSI driver on a Windows 2000 or Windows 2003 system.

◆ Host-based redundancy is supported through the use of EMC PowerPath.

◆ The configuration supports a fan-out ratio of up to 50 iSCSI hosts connected to each Eclipse iSCSI port.

Implementation best practices

Certain limitations and restrictions need to be enforced for iSCSI configurations:

◆ The use of a single Brocade M Series Eclipse switch port for both iSCSI and iFCP is not supported.

◆ The network must be a local Layer 2 network. Currently, EMC does not support deployment over Layer 3 networks or over extended distances.

◆ The network must be dedicated solely to the iSCSI configuration and no traffic apart from iSCSI traffic should be carried over it.

◆ The network must be a well-engineered network with no packet loss or packet duplication. While planning the network, care must be taken to ensure that the utilized throughput never exceeds the available bandwidth. The MTU must be configured based on the maximum MTU supported by each component on the network; the effective end-to-end MTU is limited by the smallest of these.

◆ iSCSI sessions may need to be manually reestablished. If the iSCSI session breaks due to an FC-side cable pull, it will need to be enabled manually from the host.
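The MTU rule in the list above reduces to taking the minimum across the path. A small sketch with illustrative component values (the component names and MTUs are made up):

```python
# The usable end-to-end MTU is limited by the smallest maximum MTU
# of any component on the path (host NICs, switch ports, gateway
# ports). Component names and values are illustrative only.
def path_mtu(component_mtus):
    return min(component_mtus)

mtus = {"host NIC": 9000, "switch port": 9216, "gateway iSCSI port": 1500}
assert path_mtu(mtus.values()) == 1500   # one standard-frame hop caps the path
```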

Symmetrix setup

For iSCSI configurations, the Symmetrix ports should be configured as standard FA ports for access to any Windows system.

CLARiiON setup

For iSCSI configurations, the CLARiiON ports should be configured as standard FC ports. If a CX200 is used in the iSCSI configuration, connections to only one port per SP are supported.


Settings for Brocade/Brocade M Series/Cisco/QLogic switches

While setting up the Brocade, Brocade M Series, Cisco, and QLogic switches to work with the Brocade M Series Eclipse switch, the following settings are essential:

◆ The Brocade/Brocade M Series/Cisco/QLogic switches must use the latest code supported by EMC for Interop Solutions. The EMC Support Matrix, available through E-Lab Interoperability Navigator at: http://elabnavigator.EMC.com, provides more information.

◆ The E_Port to which the Brocade M Series Eclipse switch is connected (on a Brocade switch in Brocade native mode, or on a Brocade M Series switch in Brocade M Series mode) must be hard set to the actual port speed, and autonegotiation must be disabled.

◆ The Cisco and QLogic switches must be set up to work in the Interop mode.

◆ Once the zoning is defined on the Brocade M Series Eclipse switches, it appears in the Brocade/Brocade M Series/Cisco/QLogic active configurations as zones beginning with SoIP_xxx. When changes are made through the SANvergence Manager, these zones will be updated accurately. The Brocade, Brocade M Series, Cisco, and QLogic zoning management utilities must not be used to alter these zones.

◆ When zones are created through Brocade M Series Eclipse switches, they are appended to the active zone configurations on the other switches. These zones are not automatically saved to the zoning library on all switches in the SAN. There may be cases where updates can be made using more than one application and this could result in errors. To avoid this, after every zoning change made using Brocade M Series Eclipse switch applications, the updated zone sets should be saved to the zoning libraries. Refer to documentation on Connectrix Manager, EMC Ionix™ ControlCenter®, or VisualSAN®, as applicable.

Zoning and device importation

Fibre Channel targets need to be imported from their respective fabrics. This is done using the SANVergence manager. By importing the devices, only the devices that you want to participate in the Brocade M Series Router fabric are exposed and the other devices in the attached fabric are hidden.

To have an iSCSI initiator communicate with a Fibre Channel target, it must first perform a discovery login. This discovery login is rejected, but the initiator information is added to the SNS.


Create a zone with the iSCSI initiator and the Fibre Channel target using the SANvergence Manager. At this point, the iSCSI initiator needs to perform another discovery session and can then continue with a normal session.
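The reject-register-zone-retry sequence above can be sketched as a tiny state model. This is a hypothetical model to clarify the ordering of steps, not SANvergence Manager or SNS behavior:

```python
# Sketch of the import/zoning workflow: the first discovery login
# is rejected but registers the initiator in the SNS; after a zone
# is created, a second discovery succeeds. Hypothetical model only.
class SNS:
    def __init__(self):
        self.registered = set()
        self.zoned = set()

    def discovery_login(self, iqn):
        self.registered.add(iqn)   # side effect of the rejected login
        return iqn in self.zoned   # succeeds only once zoned

    def create_zone(self, iqn):
        if iqn in self.registered:
            self.zoned.add(iqn)

sns = SNS()
host = "iqn.1991-05.com.microsoft:host1"
assert sns.discovery_login(host) is False   # first attempt rejected
sns.create_zone(host)                       # zone initiator with FC target
assert sns.discovery_login(host) is True    # second discovery proceeds
```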

References

All Brocade M Series documentation can be found at http://www.Brocade.com.

Refer to the Brocade M Series Eclipse User Guide (1620/2640) for additional information regarding:

◆ Command Line Interface reference

◆ SANVergence Manager

◆ Recommended settings


Summary

Table 2 compares the iSCSI solution features available.

Table 2  iSCSI solution features comparison table (page 1 of 2)

Columns: Celerra | Brocade | MDS | Brocade M Series | Symmetrix (N) | VNX series | CLARiiON (N)

Jumbo frames: yes for all seven solutions.

O/S support: Celerra: everything but AIX. All others: refer to the EMC Support Matrix.

Number of initiators per port/box: Celerra: 256, but check the EMC Support Matrix. Brocade: 64/512 (per port/blade). MDS: 300/2000; refer to the EMC Support Matrix. Brocade M Series: 50/200; refer to the EMC Support Matrix. Symmetrix DMX: 512; Symmetrix VMAX: 1024. VNX series and CLARiiON: refer to Table 3 on page 74 and to the EMC Support Matrix.

Proxy initiator: Celerra: yes. Brocade: yes. MDS: 500/2000; refer to the EMC Support Matrix. Brocade M Series: no. Symmetrix, VNX series, CLARiiON: n/a.

Header/data digest: yes for all seven solutions.

Immediate data: Celerra: yes. Brocade: yes. MDS: yes. Brocade M Series: yes. Symmetrix DMX: no; Symmetrix VMAX: yes. VNX series: no. CLARiiON: no.

Initial R2T: Celerra: yes. Brocade: yes. MDS: yes. Brocade M Series: yes. Symmetrix DMX: yes; Symmetrix VMAX: yes. VNX series: no. CLARiiON: no.

Authentication: Celerra: yes, CHAP. Brocade: yes, CHAP. MDS: yes. Brocade M Series: yes. Symmetrix DMX and VMAX: yes, CHAP. VNX series: yes, CHAP. CLARiiON: yes, CHAP.

Encryption: no for all seven solutions.


The EMC Support Matrix is available through E-Lab Interoperability Navigator at http://elabnavigator.EMC.com.

Table 2  iSCSI solution features comparison table (page 2 of 2)

PP (PowerPath) support: Celerra: yes, for all supported environments. Brocade, MDS, Brocade M Series, Symmetrix (N), VNX series, CLARiiON (N): yes; refer to the EMC Support Matrix for all supported environments.

Table 3 lists information on VNX series and CX4 front-end port support.

Table 3  VNX series and CLARiiON CX4 front-end port support

Front-end ports                        CX4-120  CX4-240  CX4-480  CX4-960  VNX5300  VNX5500  VNX5700  VNX7500
Max 1 Gb/s iSCSI ports per SP/system   4/8      8/16     8/16     8/16     4/8      8/16     12/24    12/24
Max 10 Gb/s iSCSI ports per SP/system  2/4      2/4      4/8      4/8      4/8      4/8      6/8      6/8
Max initiators per 1 Gb/s iSCSI port   256      256      256      256      256      512      1,024    1,024
Max initiators per 10 Gb/s iSCSI port  256      512      1,024    1,024    256      512      1,024    1,024
Max VLANs per 10 Gb/s iSCSI port       8        8        8        8        8        8        8        8
Max VLANs per 1 Gb/s iSCSI port        2        2        2        2        8        8        8        8

Chapter 4  Use Case Scenarios

This chapter provides the following use case scenarios:

◆ "Connecting an iSCSI Windows host to a VMAX array" on page 76

◆ "Connecting an iSCSI Linux host to a VMAX array" on page 103

◆ "Configuring the VNX for block 1 Gb/10 Gb iSCSI port" on page 121


Connecting an iSCSI Windows host to a VMAX array

Figure 15 shows a Windows host connected to a VMAX array. This scenario is used in this use case study.

This section includes the following information:

◆ “Configuring storage port flags and an IP address on a VMAX array” on page 76

◆ "Configuring LUN Masking on a VMAX array" on page 81

◆ "Configuring an IP address on a Windows host" on page 83

◆ "Configuring iSCSI on a Windows host" on page 85

◆ "Configuring Jumbo frames" on page 101

◆ "Setting MTU on a Windows host" on page 101

Figure 15 Windows host connected to a VMAX array with 1 G connectivity

This setup consists of a Windows host connected to a VMAX array as follows:

1. The Windows host is connected via two paths with 1 G iSCSI and IPv6.

2. The VMAX array is connected via two paths for 1 G and 10 G iSCSI each.

3. PowerPath is installed on the host.

Configuring storage port flags and an IP address on a VMAX array

Either of the following two methods, discussed in this section, can be used to configure storage port flags and an IP address on a VMAX array:

◆ "Symmetrix Management Console" on page 77

◆ "Solutions Enabler" on page 80

[Figure 15 labels: Windows Server (PowerPath) with IPv6 addresses 2001:db8:0:f108::2 and 2001:db8:0:f109::2, router, IPv6 subnets, VMAX ports SE 9G:0 (2001:db8:0:f108::1) and SE 10G:0 (2001:db8:0:f109::1)]


Symmetrix Management Console

Note: For more details, refer to the EMC Symmetrix Management Console online help, available on Powerlink. Follow instructions to download the help.

To configure storage and port flags and an IP address on a VMAX array using the Symmetrix Management Console, complete the following steps:

1. Open the Symmetrix Management Console by using the IP address of the array.

2. In the Properties tab, left-hand pane, select Symmetrix Arrays > Directors > Gig-E, to navigate to the VMAX Gig-E storage port, as shown in Figure 16.

3. Right-click the storage port you want to configure, check Online, and select Port and Director Configuration > Set Port Attributes from the drop-down menus, as shown in Figure 16.

Figure 16 EMC Symmetrix Manager Console, Directors


The Set Port Attributes dialog box displays, as shown in Figure 17.

Figure 17 Set Port Attributes dialog box

4. In the Set Port Attributes dialog box, select the following, as shown in Figure 17:

• Common_Serial_Number (C)
• SCSI_3 (SC3)
• SPC2_Protocol_Version (SPC2)
• SCSI_Support1 (OS2007)

Note: Refer to the appropriate host connectivity guide, available on Powerlink, for your operating system for the correct port attributes to set.

5. In the Set Port Attributes dialog box, enter the following, as shown in Figure 17:

• For IPv4, enter the IPv4 Address, IPv4 Default Gateway, and IPv4 Netmask.

• For IPv6, enter the IPv6 Addresses and IPv6 Net Prefix.


6. Click Add to Config Session List.

7. In the Symmetrix Manager Console window, select the Config Session tab, as shown in Figure 18.

Figure 18 Config Session tab

8. In the My Active Tasks tab, click Commit All, as shown in Figure 19.

Figure 19 My Active Tasks, Commit All


Solutions Enabler

To configure storage and port flags and an IP address on a VMAX array using Solutions Enabler, complete the following steps:

◆ “Setting storage port flags and IP address” on page 80

◆ “Setting flags per initiator group” on page 80

◆ “Viewing flags setting for initiator group” on page 81

Setting storage port flags and IP address

Issue the following command:

symconfigure -sid <SymmID> -file <command file> preview|commit

where command file contains:

set port DirectorNum:PortNum
    [FlagName=enable|disable[, ...]] gige
    primary_ip_address=IPAddress
    primary_netmask=IPAddress
    default_gateway=IPAddress
    isns_ip_address=IPAddress
    primary_ipv6_address=IPAddress
    primary_ipv6_prefix=<0-128>
    [fa_loop_id=integer]
    [hostname=HostName];

For example:

Command file for enabling Common_Serial_Number (C), SCSI_3 (SC3), SPC2_Protocol_Version (SPC2) and SCSI_Support1 (OS2007) flags and setting IPv6 address and prefix on port 9g:0:

set port 9g:0
    C=enable, SC3=enable, SPC2=enable, OS2007=enable gige
    primary_ipv6_address=2001:db8:0:f108::1
    primary_ipv6_prefix=64;

Setting flags per initiator group

Issue the following command:

symaccess -sid <SymmID> -name <GroupName> -type initiator set ig_flags <on <flag> <-enable | -disable> | off [flag]>

For example:

Enabling Common_Serial_Number (C), SCSI_3 (SC3), SPC2_Protocol_Version (SPC2) and SCSI_Support1 (OS2007) flags for initiator group SGELI2-83:

symaccess -sid 316 -name SGELI2-83_IG -type initiator set ig_flags on C,SC3,SPC2,OS2007 -enable


Viewing flag settings for initiator group

Issue the following command:

symaccess -sid <SymmID> -type initiator show <GroupName> -detail

For example:

symaccess -sid 316 -type initiator show SGELI2-83_IG -detail

Configuring LUN Masking on a VMAX array

Either of the following two methods, discussed in this section, can be used to configure LUN masking on a VMAX array:

◆ "Using Symmetrix Management Console" on page 81

◆ "Using Solutions Enabler" on page 82

Using Symmetrix Management Console

To create an initiator group, port group, storage group, and masking view using the Symmetrix Management Console, refer to the EMC Symmetrix Management Console online help, available on Powerlink. Follow instructions to download the help, then refer to the Storage Provisioning section, as shown in Figure 20.

Figure 20 EMC Symmetrix Management Console, Storage Provisioning

Connecting an iSCSI Windows host to a VMAX array 81


Using Solutions Enabler

To create an initiator group, port group, storage group, and masking view using Solutions Enabler, refer to the following sections:

◆ "Creating an initiator group" on page 82
◆ "Creating a port group" on page 82
◆ "Creating a storage group" on page 82
◆ "Creating masking view" on page 82

Creating an initiator group

Issue the following command:

symaccess -sid <SymmID> -type initiator -name <GroupName> create
symaccess -sid <SymmID> -type initiator -name <GroupName> -iscsi <iqn> add

For example:

symaccess -sid 316 -type initiator -name SGELI2-83_IG create
symaccess -sid 316 -type initiator -name SGELI2-83_IG -iscsi iqn.1991-05.com.microsoft:sgeli2-83 add

Creating a port group

Issue the following command:

symaccess -sid <SymmID> -type port -name <GroupName> create
symaccess -sid <SymmID> -type port -name <GroupName> -dirport <DirectorNum>:<PortNum> add

For example:

symaccess -sid 316 -type port -name SGELI2-83_PG create
symaccess -sid 316 -type port -name SGELI2-83_PG -dirport 9g:0 add

Creating a storage group

Issue the following command:

symaccess -sid <SymmID> -type storage -name <GroupName> create
symaccess -sid <SymmID> -type storage -name <GroupName> add devs <SymDevStart>:<SymDevEnd>

For example:

symaccess -sid 316 -type storage -name SGELI2-83_SG create
symaccess -sid 316 -type storage -name SGELI2-83_SG add devs 0047:110

Creating masking view

Issue the following command:

symaccess -sid <SymmID> create view -name <MaskingView> -ig <InitiatorGroup> -pg <PortGroup> -sg <StorageGroup>


For example:

symaccess -sid 316 create view -name SGELI2-83_MV -ig SGELI2-83_IG -pg SGELI2-83_PG -sg SGELI2-83_SG

Listing masking view

Issue the following command:

symaccess -sid <SymmID> list view -name <MaskingView>

For example:

symaccess -sid 316 list view -name SGELI2-83_MV

For more details, refer to the EMC Solutions Enabler Symmetrix Array Controls CLI Product Guide, available on Powerlink.
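The group and view creation steps above lend themselves to scripting. The sketch below strings the example's commands together as a dry run that only prints each command; changing run() to execute "$@" would invoke symaccess for real (this assumes a Solutions Enabler host with symaccess on the PATH).

```shell
#!/bin/sh
# Dry-run sketch of the masking workflow above, using the example's values.
SID=316
NAME=SGELI2-83
IQN=iqn.1991-05.com.microsoft:sgeli2-83

run() { echo "$@"; }   # change the body to "$@" to actually run symaccess

run symaccess -sid "$SID" -type initiator -name "${NAME}_IG" create
run symaccess -sid "$SID" -type initiator -name "${NAME}_IG" -iscsi "$IQN" add
run symaccess -sid "$SID" -type port -name "${NAME}_PG" create
run symaccess -sid "$SID" -type port -name "${NAME}_PG" -dirport 9g:0 add
run symaccess -sid "$SID" -type storage -name "${NAME}_SG" create
run symaccess -sid "$SID" -type storage -name "${NAME}_SG" add devs 0047:110
run symaccess -sid "$SID" create view -name "${NAME}_MV" -ig "${NAME}_IG" -pg "${NAME}_PG" -sg "${NAME}_SG"
```

Printing the commands first makes it easy to review the exact group names and device range before touching the array.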

Configuring an IP address on a Windows host

To configure an IP address on a Windows host, complete the following steps:

Note: Step 1 through Step 5 are applicable to Windows 2008 Server. Other versions of Windows may be different.

1. Click Start > Control Panel.

2. Click View network status and tasks.

3. Click Change adapter settings.

4. Right-click on the adapter and select Properties.

5. Double-click the Internet Protocol version:

• For IPv4, double-click Internet Protocol Version 4 (TCP/IPv4).

• For IPv6, double-click Internet Protocol Version 6 (TCP/IPv6).


6. Go to Network Connections and open the IPv6 Properties window. The Internet Protocol Version 6 (TCP/IPv6) Properties dialog box opens, as shown in Figure 21.

Figure 21 Internet Protocol Version 6 (TCP/IPv6) Properties dialog box

7. Enter the IPv6 address and the Subnet prefix length.

8. Click OK.

9. Ping the storage port to test connectivity, as shown in Figure 22.

Figure 22 Test connectivity


Configuring iSCSI on a Windows host

You can configure iSCSI on a Windows host using the steps provided in the following sections:

◆ “Using Microsoft iSCSI Initiator GUI” on page 85

◆ “Using Microsoft iSCSI Initiator CLI” on page 97

Using Microsoft iSCSI Initiator GUI

Note: The screenshots used in this section are taken from the built-in MS iSCSI Initiator application in Windows 2008 Server. Other versions of Windows might have a different GUI.

This section provides the steps needed for:

◆ “Configuring via Target Portal Discovery” on page 85

◆ “Configuring via iSNS Server” on page 93

Configuring via Target Portal Discovery

To configure iSCSI on Windows via Target Portal Discovery, complete the following steps:

1. Launch the Microsoft iSCSI Initiator GUI.

The iSCSI Initiator Properties window displays, as shown in Figure 23 on page 86.


Figure 23 iSCSI Initiator Properties window


2. Select the Discovery tab, click Discover Portal, and click OK, as shown in Figure 24.

Figure 24 Discovery tab, Discover Portal


The Discover Target Portal dialog box displays, as shown in Figure 25.

Figure 25 Discover Portal dialog box

3. Enter the IPv6 address of the target and click Advanced.

4. The Advanced Settings window displays, as shown in Figure 26 on page 89.


Figure 26 Advanced Settings window

5. In the General tab, choose the Local adapter and Initiator IP from the pull-down menu. Select Data digest and Header digest, if required.

6. Click OK to close the Advanced Settings window.

7. Click OK to close the Discover Target Portal window.


The targets behind the discovered portal now display, as shown in Figure 27.

Figure 27 Target portals

8. Select the Targets tab, as shown in Figure 28.

Figure 28 Targets tab


9. Select one Target and click Connect. Repeat for each Target.

The Connect to Target dialog box displays, as shown in Figure 29.

Figure 29 Connect to Target dialog box

10. Select the Add this connection to the list of Favorite Targets checkbox.

11. Click OK.

The host is connected to the targets, as shown in Figure 30.

Figure 30 Discovered targets

12. Select the Volumes and Devices tab.


13. Click Auto Configure to bind the volumes, as shown in Figure 31.

Figure 31 Volume and Devices tab


14. Open PowerPath. The devices appear, as shown in Figure 32.

Figure 32 Devices

Configuring via iSNS Server

To configure via iSNS Server, complete the following steps:

1. Set the iSNS Server IP address for both storage ports using Solutions Enabler.

symconfigure -sid 2316 -file isns.txt commit

Execute a symconfigure operation for symmetrix '000192602316' (y/[n]) ? y

A Configuration Change operation is in progress. Please wait...

Establishing a configuration change session...............Established.
Processing symmetrix 000192602316
Performing Access checks..................................Allowed.
Checking Device Reservations..............................Allowed.
Initiating COMMIT of configuration changes................Queued.
COMMIT requesting required resources......................Obtained.
Step 004 of 050 steps.....................................Executing.
Step 017 of 050 steps.....................................Executing.
Step 026 of 050 steps.....................................Executing.
Step 042 of 085 steps.....................................Executing.


Step 060 of 085 steps.....................................Executing.
Step 064 of 085 steps.....................................Executing.
Step 082 of 085 steps.....................................Executing.
Local: COMMIT.............................................Done.
Terminating the configuration change session..............Done.

The configuration change session has successfully completed.

Where isns.txt contains:

set port 10G:0 isns_ip_address=12.10.10.206;

Note: iSNS Server IP Address supports only IPv4.

2. Launch the iSNS Server.

The storage ports appear as shown in Figure 33.

Figure 33 iSNS Server Properties window, storage ports

3. Launch the Microsoft iSCSI Initiator GUI.


4. Select the Discovery tab, as shown in Figure 34.

Figure 34 Discovery tab

5. Click Add Server. The Add iSNS Server window displays.

6. Enter the IP address for each iSNS Server interface and click OK.


The iSNS Server is successfully added, as shown in Figure 35.

Figure 35 iSNS Server added


7. Return to the iSNS Server. The Initiator has been successfully added, as shown in Figure 36.

Figure 36 iSNS Server

8. Follow Step 8 on page 100 through Step 10 on page 100 in “Configuring via Target Portal Discovery,” discussed next.

Using Microsoft iSCSI Initiator CLI

Steps for configuring iSCSI on a Windows host using Microsoft iSCSI Initiator CLI are provided in the following sections:

◆ "Configuring via Target Portal Discovery" on page 97
◆ "Configuring via iSNS Server" on page 101

Configuring via Target Portal Discovery

To configure iSCSI on Windows using Microsoft iSCSI Initiator CLI, complete the following steps:

1. Add the Target Portal for each storage port.

C:\>iscsicli QAddTargetPortal 2001:db8:0:f108::1
Microsoft iSCSI Initiator Version 6.1 Build 7601

The operation completed successfully.


2. List the Target Portals.

C:\>iscsicli ListTargetPortals
Microsoft iSCSI Initiator Version 6.1 Build 7601

Total of 2 portals are persisted:

Address and Socket   : 2001:db8:0:f108::1 3260
Symbolic Name        :
Initiator Name       :
Port Number          : <Any Port>
Security Flags       : 0x0
Version              : 0
Information Specified: 0x0
Login Flags          : 0x0

Address and Socket   : 2001:db8:0:f109::1 3260
Symbolic Name        :
Initiator Name       :
Port Number          : <Any Port>
Security Flags       : 0x0
Version              : 0
Information Specified: 0x0
Login Flags          : 0x0

The operation completed successfully.
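For scripting, the portal addresses can be pulled out of this output with a short filter. The sketch below runs against an embedded copy of the sample output above (abbreviated), so it is self-contained for illustration:

```shell
# Extract the "Address and Socket" values from sample
# "iscsicli ListTargetPortals" output (embedded below for illustration).
portals=$(awk -F': ' '/Address and Socket/ {print $2}' <<'EOF'
Address and Socket : 2001:db8:0:f108::1 3260
Symbolic Name :
Address and Socket : 2001:db8:0:f109::1 3260
EOF
)
echo "$portals"
```

In practice you would pipe the live `iscsicli ListTargetPortals` output into the same awk filter.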

3. List the Targets behind the discovered Portals. The Target iqn is displayed.

C:\>iscsicli ListTargets
Microsoft iSCSI Initiator Version 6.1 Build 7601

Targets List:
iqn.1992-04.com.emc:50000972082431a4
iqn.1992-04.com.emc:50000972082431a0

The operation completed successfully.

4. Get the Target information.

C:\>iscsicli TargetInfo iqn.1992-04.com.emc:50000972082431a0
Microsoft iSCSI Initiator Version 6.1 Build 7601

Discovery Mechanisms : "SendTargets:*2001:db8:0:f108::1 0003260 Root\ISCSIPRT\0000_0 "

The operation completed successfully.

5. Log in to each Target. The Session Id is created.

C:\>iscsicli QLoginTarget iqn.1992-04.com.emc:50000972082431a0
Microsoft iSCSI Initiator Version 6.1 Build 7601


Session Id is 0xfffffa8007af4018-0x400001370000000c
Connection Id is 0xfffffa8007af4018-0xb
The operation completed successfully.

6. Display the Target Mappings assigned to all LUNs that the initiators have logged in to.

C:\>iscsicli ReportTargetMappings
Microsoft iSCSI Initiator Version 6.1 Build 7601

Total of 2 mappings returned
Session Id            : fffffa8007af4018-400001370000000c
Target Name           : iqn.1992-04.com.emc:50000972082431a0
Initiator             : Root\ISCSIPRT\0000_0
Initiator Scsi Device : \\.\Scsi9:
Initiator Bus         : 0
Initiator Target Id   : 0
Target Lun: 0x100 <--> OS Lun: 0x1
Target Lun: 0x200 <--> OS Lun: 0x2
…

Session Id            : fffffa8007af4018-400001370000000d
Target Name           : iqn.1992-04.com.emc:50000972082431a4
Initiator             : Root\ISCSIPRT\0000_0
Initiator Scsi Device : \\.\Scsi9:
Initiator Bus         : 0
Initiator Target Id   : 1
Target Lun: 0x0 <--> OS Lun: 0x0
Target Lun: 0x100 <--> OS Lun: 0x1
…

7. The mappings obtained through the QLoginTarget command are not persistent and will be lost at reboot. To have a persistent connection, use the PersistentLoginTarget command for each Target.

Note: The value T means the LUN is exposed as a device. Otherwise, the LUN is not exposed and the only operations that can be performed are SCSI Inquiry, SCSI Report LUNS, and SCSI Read Capacity, and only through the iSCSI discovery service since the operating system is not aware of the existence of the device.

C:\>iscsicli PersistentLoginTarget iqn.1992-04.com.emc:50000972082431a0 T * * * * * * * * * * * * * * * 0
Microsoft iSCSI Initiator Version 6.1 Build 7601

The operation completed successfully.


8. List the Persistent Targets.

C:\>iscsicli ListPersistentTargets
Microsoft iSCSI Initiator Version 6.1 Build 7601

Total of 2 persistent targets
Target Name           : iqn.1992-04.com.emc:50000972082431a0
Address and Socket    : 2001:0db8:0000:f108:0000:0000:0000:0001%0 3260
Session Type          : Data
Initiator Name        : Root\ISCSIPRT\0000_0
Port Number           : <Any Port>
Security Flags        : 0x0
Version               : 0
Information Specified : 0x20
Login Flags           : 0x8
Username              :

Target Name           : iqn.1992-04.com.emc:50000972082431a4
Address and Socket    : 2001:0db8:0000:f109:0000:0000:0000:0001%0 3260
Session Type          : Data
Initiator Name        : Root\ISCSIPRT\0000_0
Port Number           : <Any Port>
Security Flags        : 0x0
Version               : 0
Information Specified : 0x20
Login Flags           : 0x8
Username              :

The operation completed successfully.

9. Bind the Persistent Devices. This causes the iSCSI Initiator service to determine which disk volumes are currently exposed by the active iSCSI sessions for all initiators and to make that list persistent. The next time the iSCSI Initiator service starts, it waits for all of those volumes to be mounted before completing its service startup.

C:\>iscsicli BindPersistentDevices
Microsoft iSCSI Initiator Version 6.1 Build 7601

The operation completed successfully.

10. Display the list of volumes and devices that are currently persistently bound by the iSCSI initiator.

C:\>iscsicli ReportPersistentDevices
Microsoft iSCSI Initiator Version 6.1 Build 7601

Persistent Volumes
"\\?\scsi#disk&ven_emc&prod_power&#{4a54205a-c920-4e28-88c5-9a6296a74b0b}&emcp&power123#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}"


"\\?\scsi#disk&ven_emc&prod_power&#{4a54205a-c920-4e28-88c5-9a6296a74b0b}&emcp&power63#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}"

……

Configuring via iSNS Server

To configure iSCSI via iSNS Server, complete the following steps:

1. Set the iSNS Server IP address for both storage ports as described in "Configuring via iSNS Server" on page 93.

2. Add both iSNS Server interfaces.

C:\copa>iscsicli AddiSNSServer 2001:db8:0:f108::3
Microsoft iSCSI Initiator Version 6.1 Build 7601

The operation completed successfully.

3. List the iSNS Servers.

C:\copa>iscsicli ListiSNSServers
Microsoft iSCSI Initiator Version 6.1 Build 7601

2001:db8:0:f108::3
2001:db8:0:f109::3

The operation completed successfully.

4. Follow Step 3 on page 98 through Step 10 on page 100 in “Configuring via Target Portal Discovery.”

Configuring Jumbo frames

To configure Jumbo frames, set the MTU on the host, the switch (host-side and storage-side ports), and the storage port to 9000.

The switch port MTU can be set using the switch admin tool.

Contact your EMC Customer Service Engineer to set the storage port MTU.
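Once the MTU is set end to end, jumbo connectivity can be verified with a don't-fragment ping sized to fill one frame. For an IPv4 MTU of 9000, the largest ICMP payload is 9000 minus 20 bytes of IP header and 8 bytes of ICMP header. The ping commands below are standard, but are shown as comments since they require live hosts; the storage port address is a placeholder:

```shell
# Largest ICMP payload that fits one 9000-byte IPv4 frame:
payload=$((9000 - 20 - 8))
echo "$payload"    # 8972

# With that payload size and fragmentation disallowed, a successful
# ping confirms jumbo frames work end to end:
#   Windows:  ping -f -l 8972 <storage_port_ip>
#   Linux:    ping -M do -s 8972 <storage_port_ip>
```

If the ping fails with a "needs to be fragmented" error, some device in the path still has a smaller MTU.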

Setting MTU on a Windows host

The MTU can be changed by editing the HBA driver properties. Consult your driver documentation for more information.

The netsh command line scripting utility can also be used to set the MTU. The usage of the netsh utility described next applies to Windows 2008 Server and may not be applicable to other versions of Windows.

To set MTU on Windows, complete the following steps:

1. Show the MTU.

C:\>netsh interface ipv6 show subinterface

MTU     MediaSenseState   Bytes In    Bytes Out   Interface
------  ---------------   ---------   ---------   -------------
1500    1                 110592960   22062103    CORP
1500    1                 2073668     894650      1G iSCSI 1
1500    1                 796432      3343627     1G iSCSI 2

2. Change the MTU of "1G iSCSI 1" interface to 9000.

C:\>netsh interface ipv6 set subinterface "1G iSCSI 1" mtu=9000 store=persistent
Ok.

3. Show the updated MTU.

C:\>netsh interface ipv6 show subinterface

MTU     MediaSenseState   Bytes In    Bytes Out   Interface
------  ---------------   ---------   ---------   -------------
1500    1                 110592960   22062103    CORP
9000    1                 2073668     894650      1G iSCSI 1
1500    1                 796432      3343627     1G iSCSI 2
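The MTU change can also be confirmed from a script by filtering the netsh table for the interface of interest. A sketch against an embedded copy of the sample rows (interface names are from the example above):

```shell
# Report the MTU (first column) of the "1G iSCSI 1" row from sample
# "netsh interface ipv6 show subinterface" output, embedded below.
mtu=$(awk '/1G iSCSI 1/ {print $1}' <<'EOF'
9000  1  2073668  894650   1G iSCSI 1
1500  1  796432   3343627  1G iSCSI 2
EOF
)
echo "$mtu"
```

On a live host, pipe `netsh interface ipv6 show subinterface` into the same filter instead of the heredoc.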


Connecting an iSCSI Linux host to a VMAX array

Figure 37 shows a Linux host connected to a VMAX array. This scenario is used in this use case study. This section includes the following information:

◆ “Configuring storage port flags and an IP address on a VMAX array” on page 104

◆ “Configuring LUN Masking on a VMAX array” on page 111

◆ “Configuring an IP address on a Linux host” on page 114

◆ “Configuring CHAP on the Linux host” on page 117

◆ “Configuring iSCSI on a Linux host using Linux iSCSI Initiator CLI” on page 117

◆ “Configuring Jumbo frames” on page 119

◆ “Setting MTU on a Linux host” on page 119

Figure 37 Linux hosts connected to a VMAX array with 10 G connectivity

This setup consists of a Linux host connected to a VMAX array as follows:

1. The Linux host is connected via two paths with 10 G iSCSI and IPv4. CHAP Authentication is used.

2. The VMAX array is connected via two paths for 1 G and 10 G iSCSI each.

3. PowerPath is installed on the host.

[Figure 37 labels: Linux Server with PowerPath (eth0: 10.20.5.210, eth1: 10.20.20.210), connected through a switch over IPv4 subnets to VMAX ports SE 7G:0 (10.20.5.201) and SE 7H:0 (10.20.20.100)]


Configuring storage port flags and an IP address on a VMAX array

The following methods discussed in this section can be used to configure storage port flags and an IP address on a VMAX array:

◆ “Symmetrix Management Console” on page 104

◆ “CHAP” on page 108

◆ “Solutions Enabler” on page 110

Symmetrix Management Console

Note: For more details, refer to the EMC Symmetrix Management Console online help, available on Powerlink. Follow instructions to download the help.

To configure storage and port flags and an IP address on a VMAX array using the Symmetrix Management Console, complete the following steps:

1. Open the Symmetrix Management Console by using the IP address of the array.

2. In the Properties tab, left-hand pane, select Symmetrix Arrays > Directors > Gig-E, to navigate to the VMAX Gig-E storage port, as shown in Figure 38.

3. Right-click the storage port you want to configure, check Online, and select Port and Director Configuration > Set Port Attributes from the drop-down menu, as shown in Figure 38.


Figure 38 Set port attributes

IMPORTANT! Take the port offline if the IP address is being changed: select Port and Director Configuration and uncheck Online.


The Set Port Attributes dialog box displays, as shown in Figure 39.

Figure 39 Set Port Attributes dialog box

4. In the Set Port Attributes dialog box, select the following, as shown in Figure 39:

• Common_Serial_Number (C)
• SCSI_3 (SC3)
• SPC2_Protocol_Version (SPC2)
• SCSI_Support1 (OS2007)

Note: Refer to the appropriate host connectivity guide, available on Powerlink, for your operating system for the correct port attributes to set.

5. In the Set Port Attributes dialog box, enter the following, as shown in Figure 39:


• For IPv4, enter the IPv4 Address, IPv4 Default Gateway, and IPv4 Netmask.

• For IPv6, enter the IPv6 Addresses and IPv6 Net Prefix.

6. Click Add to Config Session List.

7. In the Symmetrix Management Console window, select the Config Session tab, as shown in Figure 40.

Figure 40 Config Session tab


8. In the My Active Tasks tab, click Commit All, as shown in Figure 41.

Figure 41 My Active Tasks, Commit All

CHAP

To configure CHAP, complete the following steps:

1. From the Symmetrix Management Console, right-click on the storage port you want to configure and select Port and Director Configuration > CHAP Authentication for CHAP-related information, as shown in Figure 42.


Figure 42 CHAP authentication

The following dialog box displays.

Figure 43 Director Port CHAP Authentication Enable/Disable dialog box

2. Click OK.


The following dialog box displays.

Figure 44 Director Port CHAP Authentication Set dialog box

3. Configure a Credential and a Secret; both must be set for CHAP to be operational.

Solutions Enabler

To configure storage port flags and an IP address on a VMAX array using Solutions Enabler, refer to the following sections:

◆ “Setting storage port flags and IP address” on page 110

◆ “Setting flags per initiator group” on page 111

◆ “Viewing flags setting for initiator group” on page 111

Setting storage port flags and IP address

Issue the following command:

symconfigure -sid <SymmID> -file <command file> preview|commit

where command file contains:

set port DirectorNum:PortNum [FlagName=enable|disable [, ...]] gige
    primary_ip_address=IPAddress
    primary_netmask=IPAddress
    default_gateway=IPAddress
    isns_ip_address=IPAddress
    primary_ipv6_address=IPAddress
    primary_ipv6_prefix=<0-128>
    [fa_loop_id=integer]
    [hostname=HostName];


For example:

Command file for enabling Common_Serial_Number (C), SCSI_3 (SC3), SPC2_Protocol_Version (SPC2) and SCSI_Support1 (OS2007) flags and setting the IPv4 address, netmask, and default gateway on ports 7G:0 and 7H:0:

set port 7G:0 C=enable, SC3=enable, SPC2=enable, OS2007=enable gige
    primary_ip_address=10.20.5.201
    primary_netmask=255.255.255.0
    default_gateway=10.20.5.1;

set port 7H:0 C=enable, SC3=enable, SPC2=enable, OS2007=enable gige
    primary_ip_address=10.20.20.100
    primary_netmask=255.255.255.0
    default_gateway=10.20.20.1;

Setting flags per initiator group

Issue the following command:

symaccess -sid <SymmID> -name <GroupName> -type initiator set ig_flags <on <flag> <-enable |-disable> | off [flag]>

For example:

Enabling Common_Serial_Number (C), SCSI_3 (SC3), SPC2_Protocol_Version (SPC2) and SCSI_Support1 (OS2007) flags for initiator group Linux10G_IG:

symaccess -sid 316 -name Linux10G_IG -type initiator set ig_flags on C,SC3,SPC2,OS2007 -enable

Viewing flags setting for initiator group

Issue the following command:

symaccess -sid <SymmID> -type initiator show <GroupName> -detail

For example:

symaccess -sid 316 -type initiator show Linux10G_IG -detail

Configuring LUN Masking on a VMAX array

The following methods discussed in this section can be used to configure LUN Masking on a VMAX array:

◆ “Using Symmetrix Management Console” on page 112

◆ “Using Solutions Enabler” on page 112

◆ “Using SYMCLI for VMAX” on page 114


Using Symmetrix Management Console

To create an initiator group, port group, storage group, and masking view using the Symmetrix Management Console, refer to the EMC Symmetrix Management Console online help, available on Powerlink. Follow instructions to download the help, then refer to the Storage Provisioning section, as shown in Figure 45.

Figure 45 EMC Symmetrix Management Console, Storage Provisioning

Using Solutions Enabler

To create an initiator group, port group, storage group, and masking view using Solutions Enabler, refer to the following sections:

◆ "Creating an initiator group" on page 112
◆ "Creating a port group" on page 113
◆ "Creating a storage group" on page 113
◆ "Creating masking view" on page 113

Creating an initiator group

Issue the following command:

symaccess -sid <SymmID> -type initiator -name <GroupName> create
symaccess -sid <SymmID> -type initiator -name <GroupName> -iscsi <iqn> add

iSCSI SAN Topologies TechBook

Page 113: TechBook: iSCSI SAN Topologies

Use Case Scenarios

For example:

symaccess -sid 3003 -type initiator -name Linux10G_IG create
symaccess -sid 3003 -type initiator -name Linux10G_IG -iscsi iqn.1994-05.com.redhat:1339be8c4613 add

Creating a port group

Issue the following command:

symaccess -sid <SymmID> -type port -name <GroupName> create
symaccess -sid <SymmID> -type port -name <GroupName> -dirport <DirectorNum>:<PortNum> add

For example:

symaccess -sid 3003 -type port -name Linux10G_PG create
symaccess -sid 3003 -type port -name Linux10G_PG -dirport 7G:0 add

Creating a storage group

Issue the following command:

symaccess -sid <SymmID> -type storage -name <GroupName> create
symaccess -sid <SymmID> -type storage -name <GroupName> add devs <SymDevStart>:<SymDevEnd>

For example:

symaccess -sid 3003 -type storage -name Linux10G_SG create
symaccess -sid 3003 -type storage -name Linux10G_SG add devs 816:842

Creating masking view

Issue the following command:

symaccess -sid <SymmID> create view -name <MaskingView> -ig <InitiatorGroup> -pg <PortGroup> -sg <StorageGroup>

For example:

symaccess -sid 3003 create view -name Linux10G_MV -ig Linux10G_IG -pg Linux10G_PG -sg Linux10G_SG

Listing masking view

Issue the following command:

symaccess -sid <SymmID> list view -name <MaskingView>

For example:

symaccess -sid 3003 list view -name Linux10G_MV

For more details, refer to the EMC Solutions Enabler Symmetrix Array Controls CLI Product Guide, available on Powerlink.


Using SYMCLI for VMAX

1. To enable CHAP on an iSCSI initiator, use the following form:

symaccess -sid SymmID -iscsi iqn enable chap

For example:

# symaccess -sid 3003 -iscsi iqn.1994-05.com.redhat:1339be8c4613 enable CHAP

2. To enable CHAP on a specific director and port, use the following form:

symaccess -sid SymmID [-dirport Dir:Port] enable chap

For example:

# symaccess -sid 3003 -dirport 7G:0 enable chap

3. To set the CHAP credential and secret on a director and port, use the following form:

symaccess -sid SymmID -dirport Dir:Port set chap -cred Credential -secret Secret

For example:

# symaccess -sid 3003 -dirport 7G:0 set chap -cred chap -secret abcdefgh

4. To disable CHAP on a specific director and port, use the following form:

symaccess -sid SymmID [-dirport Dir:Port] disable chap

5. To delete CHAP from a specific director and port, use the following form:

symaccess -sid SymmID [-dirport Dir:Port] delete chap
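The CHAP steps above can likewise be sequenced as a dry run that only prints each command; changing run() to execute "$@" would invoke symaccess for real (values taken from the example, and symaccess is assumed to be on the PATH when run live):

```shell
#!/bin/sh
# Dry-run sketch of the SYMCLI CHAP setup above, using the example's values.
SID=3003
run() { echo "$@"; }   # change the body to "$@" to actually run symaccess

run symaccess -sid "$SID" -dirport 7G:0 enable chap
run symaccess -sid "$SID" -dirport 7G:0 set chap -cred chap -secret abcdefgh
```

Reviewing the printed commands first helps avoid enabling CHAP on the wrong director port.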

Configuring an IP address on a Linux host

To configure an IP address on a Linux host, complete the following steps:


1. Issue the ifconfig command to verify the present IP addresses, as shown in Figure 46:

Figure 46 Verify IP addresses

2. Use the following commands to shut down the interfaces:

# ifconfig eth0 down
# ifconfig eth1 down

3. Use the following commands to set the IP addresses and bring the ports back up:

# ifconfig eth0 10.20.5.210 netmask 255.255.255.0 up
# ifconfig eth1 10.20.20.210 netmask 255.255.255.0 up

4. Check the parameters in the interface configuration file, located in:

/etc/sysconfig/network-scripts

Note: This folder contains all ifcfg-eth files. Make changes to the file appropriate to the interface being used.

For example, the following lists the properties of the interface eth0. To enable the IP address to persist across reboots, set "ONBOOT=yes".

[root@i2051210 network-scripts]# more ifcfg-eth0
DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=none


IPADDR=10.20.5.210
PREFIX=24
GATEWAY=10.20.20.1
DEFROUTE=no
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
GATEWAY=10.246.51.1
HWADDR=00:00:C9:C0:5E:90

5. Verify the IP address by issuing the following command:

[root@i2051210 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:00:C9:C0:5E:90
          inet addr:10.20.5.210  Bcast:10.20.5.255  Mask:255.255.255.0
          inet6 addr: fe80::200:c9ff:fec0:5e90/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4149 (4.0 KiB)  TX bytes:4604 (4.4 KiB)

eth1      Link encap:Ethernet  HWaddr 00:00:C9:C0:5E:92
          inet addr:10.20.20.210  Bcast:10.20.20.255  Mask:255.255.255.0
          inet6 addr: fe80::200:c9ff:fec0:5e92/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:243 (243.0 b)  TX bytes:4596 (4.4 KiB)

6. Add the IPv6 address, if needed, by using the following commands:

ifconfig eth0 inet6 add 2001:0db8:0:f101::1/64
ifconfig eth1 inet6 add 2001:0db8:0:f101::2/64


7. Ping the storage port to test connectivity, as shown in Figure 47.

Figure 47 Test connectivity

Configuring CHAP on the Linux host

To configure CHAP on the Linux host, complete the following steps:

1. Configure the Credential and Secret on the host (in /etc/iscsi/iscsid.conf for open-iscsi).

node.session.auth.authmethod = CHAP
node.session.auth.username = chap
node.session.auth.password = abcdefgh

2. Restart the iSCSI service.

service open-iscsi restart

Configuring iSCSI on a Linux host using Linux iSCSI Initiator CLI

Complete the following steps to configure iSCSI on a Linux host using Linux iSCSI Initiator CLI:

1. Issue the following commands to discover the target devices:

# iscsiadm -m discovery -t sendtargets -p 10.20.5.201
10.20.5.201:3260,1 iqn.1992-04.com.emc:50000972082eed98
# iscsiadm -m discovery -t sendtargets -p 10.20.20.100
10.20.20.100:3260,1 iqn.1992-04.com.emc:50000972082eedd8


2. Issue the following command to print out the nodes that have been discovered:

./iscsiadm -m node

# iscsiadm -m node
10.20.20.100:3260,1 iqn.1992-04.com.emc:50000972082eedd8
10.20.5.201:3260,1 iqn.1992-04.com.emc:50000972082eed98
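The node list can be turned into login commands mechanically. The sketch below builds one iscsiadm login command per line of the sample output (embedded for illustration) and only prints the commands; the IPv4 suffix stripping shown would need adjusting for IPv6 portals:

```shell
# Build an "iscsiadm --login" command for each "portal,tag target" line
# of sample "iscsiadm -m node" output (embedded below).
cmds=$(while read -r portal target; do
    ip=${portal%%:3260*}   # strip the :3260,<tag> suffix (IPv4 portals only)
    echo "iscsiadm --mode node --targetname $target --portal $ip --login"
done <<'EOF'
10.20.20.100:3260,1 iqn.1992-04.com.emc:50000972082eedd8
10.20.5.201:3260,1 iqn.1992-04.com.emc:50000972082eed98
EOF
)
echo "$cmds"
```

On a live host, feed the real `iscsiadm -m node` output into the loop and execute the generated commands instead of echoing them.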

3. Log in by taking the IP, port, and target name from the output above and running:

./iscsiadm -m node -T targetname -p ip:port -l

# iscsiadm --mode node --targetname iqn.1992-04.com.emc:50000972082eedd8 --portal 10.20.20.100 --login

Logging in to [iface: default, target: iqn.1992-04.com.emc:50000972082eedd8, portal: 10.20.20.100,3260]

Login to [iface: default, target: iqn.1992-04.com.emc:50000972082eedd8, portal: 10.20.20.100,3260] successful.

# iscsiadm --mode node --targetname iqn.1992-04.com.emc:50000972082eed98 --portal 10.20.5.201 --login

Logging in to [iface: default, target: iqn.1992-04.com.emc:50000972082eed98, portal: 10.20.5.201,3260]

Login to [iface: default, target: iqn.1992-04.com.emc:50000972082eed98, portal: 10.20.5.201,3260] successful.

4. Issue the following command to show all records in the discovery database and the targets discovered from each record:

iscsiadm -m discovery -P 1

# iscsiadm -m discovery -P 1
SENDTARGETS:
DiscoveryAddress: 10.20.20.100,3260
Target: iqn.1992-04.com.emc:50000972082eedd8
    Portal: 10.20.20.100:3260,1
    Iface Name: default
DiscoveryAddress: 10.20.5.200,3260
DiscoveryAddress: 10.20.5.201,3260
Target: iqn.1992-04.com.emc:50000972082eed98
    Portal: 10.20.5.201:3260,1
    Iface Name: default
iSNS:
No targets found.
STATIC:
No targets found.
FIRMWARE:
No targets found.
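Each record pairs a portal (address:port,tag) with a target IQN, which makes the output easy to post-process in a script. A minimal sketch, run here against an inlined copy of the node list from step 2:

```shell
# Sample "portal,tag iqn" records, as printed by iscsiadm -m node.
out='10.20.20.100:3260,1 iqn.1992-04.com.emc:50000972082eedd8
10.20.5.201:3260,1 iqn.1992-04.com.emc:50000972082eed98'

# Split each record into its portal and IQN, dropping the ",tag" suffix.
echo "$out" | while read -r portal iqn; do
  echo "target $iqn via portal ${portal%,*}"
done
```

On a live host, the sample string would be replaced by the actual output of iscsiadm -m node.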


Configuring Jumbo frames

To configure Jumbo frames, set the MTU on the host, the switch (host and storage side), and the storage port to 9000.

The switch port MTU can be set using the switch admin tool.

Contact your EMC Customer Service Engineer to set the storage port MTU.

Setting MTU on a Linux host

The MTU can be changed by editing the HBA driver properties. Consult your driver documentation for more information.

The ip and ifconfig command-line utilities can also be used to set the MTU. The usage described next applies to common Linux distributions and may not be applicable for other versions of Linux.

To set MTU on Linux, complete the following steps:

1. To show the current MTU, issue the following command.

Note: By default, the MTU is set to 1500 bytes.

ip link list

2. To change the MTU, issue the following command for the 10G iSCSI initiator Ethernet interface on the Linux host.

ifconfig eth0 mtu 9000


3. To make the changes to the MTU persistent upon reboot, add an MTU=9000 entry to the "ifcfg-eth*" file associated with the interface.

4. To show the updated MTU, issue the following command.

ip link list
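With the MTU raised on the host, switch, and storage ports, a common end-to-end check is a non-fragmenting ping sized to the new MTU: a 9000-byte MTU leaves 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes of ICMP payload. A sketch; the storage-port address 10.20.20.100 and Linux ping's -M do (don't-fragment) flag are carried over from the earlier Linux examples:

```shell
# Compute the largest ICMP payload that fits in one 9000-byte frame.
mtu=9000
payload=$((mtu - 20 - 8))   # subtract IP (20) and ICMP (8) headers
echo "max ICMP payload: $payload"

# On a live host, ping the storage port without fragmentation (commented out):
# ping -M do -s "$payload" -c 3 10.20.20.100
```

If the ping succeeds at this size but fails one byte larger, jumbo frames are working along the whole path.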


Configuring the VNX for block 1 Gb/10 Gb iSCSI port

This section contains the following information:

◆ “Prerequisites” on page 121

◆ “Configuring storage system iSCSI front-end ports” on page 122

◆ “Assigning an IP address to each NIC or iSCSI HBA in a Windows Server 2008” on page 127

◆ “Configuring iSCSI initiators for a configuration without iSNS” on page 130

◆ “Registering the server with the storage system” on page 146

◆ “Setting storage system failover values for the server initiators with Unisphere” on page 148

◆ “Configuring the storage group” on page 162

◆ “iSCSI CHAP authentication” on page 175

Figure 48 will be used in the examples presented in this section.

Figure 48 Windows host connected to a VNX array with 1 Gb/10 Gb connectivity

Prerequisites

Before you begin, you must complete the cabling of the iSCSI front-end data ports to the server ports.

Note: The 10 GbE iSCSI modules require EMC FLARE® Operating Environment (OE) version 04.29.000.5.0xx or later.

(Figure 48 labels: a Windows Server running PowerPath connects through a switch to the VNX over IPv4; host ports 10.1.1.98 and 192.168.1.98, storage ports 10.1.1.198 and 192.168.1.198.)


IMPORTANT! 1 GbE iSCSI ports require Ethernet LAN cables, and 10 GbE iSCSI ports require fibre optic cables for Ethernet transmission.

For 1 Gb transmission, you need CAT 5 Ethernet LAN cables for 10/100 transmission or CAT 6 cables. These cables can be up to 100 meters long.

For 10 Gb Ethernet transmission, you need fibre optic cables for a fibre optic infrastructure or active twinaxial cables for an active twinaxial infrastructure. EMC strongly recommends you use OM3 50 µm cables for all optical connections.

An active twinaxial infrastructure is supported for switch configurations only.

For cable specifications, refer to the Technical Specifications for your storage system. You can generate an up-to-date version of these specifications using the Learn about storage system link on the storage system support website.

For high availability:

◆ Connect one or more iSCSI front-end data ports on SP A to ports on the switch or router, and connect the same number of iSCSI front-end data ports on SP B to ports on the same switch or router, or on another switch or router if two are available.

◆ For a server with multiple NICs or iSCSI HBAs, connect one or more NIC or iSCSI HBA ports to ports on the switch or router, and connect the same number of NIC or iSCSI HBA ports to ports on the same switch or router, or on another switch or router if two are available.

Configuring storage system iSCSI front-end ports

To configure storage system iSCSI front-end ports, complete the following steps:

1. Start Unisphere by entering the IP address of one of the SPs on the storage system that you want to manage into an Internet browser.

2. Enter your user name and password.


3. Click Login.

4. From Unisphere, select System > Hardware > Storage Hardware.

Figure 49 Unisphere, System tab

5. Identify the storage system iSCSI front-end ports by clicking SPs > SP A/B > IO Modules > Slot > Port <#> in the Hardware window.

The example used here is SPs > SP A > IO Modules > Slot A4 > Port 0.

The Properties message box will display.


Figure 50 Message box

6. Click OK.

7. Highlight the iSCSI front-end port that you want to configure and click Properties.


The iSCSI Port Properties window displays.

Figure 51 iSCSI Port Properties window

8. Click Add in Virtual Port Properties to assign an IP address to the port. The iSCSI Virtual Port Properties window displays.


Figure 52 iSCSI Virtual Port Properties window

9. Click OK and then close all open dialog boxes.


A Warning message displays asking if you wish to continue.

Figure 53 Warning message

10. Click OK.

A message showing successful completion displays.

Figure 54 Successful message

11. Click OK.

The iSCSI Port Properties window displays the added virtual ports in the Virtual Port Properties area.

Assigning an IP address to each NIC or iSCSI HBA in a Windows Server 2008

To assign an IP address to each NIC or iSCSI HBA in a Windows Server 2008 that will be connected to the storage system, complete the following steps.


1. Click Start > Control Panel > Network and Sharing Center > Manage Network Connections.

The Network Connections window displays.

Figure 55 Control Panel, Network Connections window

2. Locate 10 GbE interfaces in the Network Connections dialog box.

3. Identify the NIC or iSCSI HBA for which you want to set the IP address (in this example, a QLogic 10 Gb PCI Ethernet Adapter) and right-click it.

The Local Area Connection Properties dialog box displays.


Figure 56 Local Area Connection Properties dialog box

4. Select the Internet Protocol Version 4 (TCP/IPv4) entry in the list and then click Properties.

The Internet Protocol Version 4 (TCP/IPv4) Properties dialog box displays.


Figure 57 Internet Protocol Version 4 (TCP/IPv4) Properties dialog box

5. In the General tab, select Use the following IP address and enter the appropriate IP address and subnet mask of the adapter in the IP address and Subnet mask fields.

6. Click OK and then close all open dialog boxes.

7. Repeat these steps for any other iSCSI adapters in the host.

Configuring iSCSI initiators for a configuration without iSNS

Before an iSCSI initiator can send data to or receive data from the storage system, you must configure the network parameters for the NIC or HBA iSCSI initiators to connect with the storage-system SP iSCSI targets.

You may need to install the Microsoft iSCSI Initiator software. This can be downloaded from http://www.microsoft.com.


Note: Some operating systems, such as Microsoft Windows 2008 (used in this example) have bundled the iSCSI initiator with the OS. As a result, it will not need to be installed and can be accessed directly from Start > Administrative Tools > iSCSI Initiator.

There are two ways to configure iSCSI initiators on a Windows server to connect to the storage-system iSCSI targets:

◆ Using Unisphere Server Utility

You can register the server's NICs or iSCSI HBAs with the storage system. Refer to “Using Unisphere Server Utility” on page 131.

◆ Using Microsoft iSCSI initiator

If you are an advanced user, you can configure iSCSI initiators to connect to the targets. Refer to “Using Microsoft iSCSI initiator” on page 140.

Using Unisphere Server Utility

To configure iSCSI initiators on a Windows server to connect to the storage-system iSCSI targets using the Unisphere Server Utility, complete the following steps:

1. On the server, open the Unisphere Server Utility. The EMC Unisphere Server Utility window displays.


Figure 58 EMC Unisphere Server Utility welcome window

2. Select Configure iSCSI Connections on this server and click Next.


The next window displays.

Figure 59 EMC Unisphere Server Utility window, Configure iSCSI Connections

3. Select Configure iSCSI Connections and click Next.


The iSCSI Targets and Connections window displays.

Figure 60 iSCSI Targets and Connections window

4. Select one of the following options to discover the iSCSI target ports on the connected storage systems:

• Discover iSCSI targets on this subnet


Scans the current subnet for all connected iSCSI storage-system targets. The utility scans the subnet in the range from 1 to 255. For example, if the current subnet is 10.1.1, the utility will scan the IP addresses from 10.1.1.1 to 10.1.1.255.
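The sweep described above is just an enumeration of host numbers 1 through 255 on the subnet. A minimal sketch of generating that address range for the 10.1.1 subnet from Figure 48 (the utility itself probes each address for an iSCSI target rather than merely listing it):

```shell
# Enumerate 10.1.1.1 through 10.1.1.255, the range the utility would scan.
subnet=10.1.1
addrs=$(seq -f "$subnet.%g" 1 255)

echo "$addrs" | head -1   # first address scanned
echo "$addrs" | tail -1   # last address scanned
echo "$addrs" | wc -l     # number of addresses in the sweep
```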

Figure 61 Discover iSCSI targets on this subnet

• Discover iSCSI targets for this target portal

Discovers targets known to the specified iSCSI SP data port.


Figure 62 Discover iSCSI targets for this target portal

5. Click Next.


The iSCSI Targets window displays.

Figure 63 iSCSI Targets window

6. For each target you want to log in to, complete the following:

a. In the iSCSI Targets window, select the IP address of the inactive target.

b. Under Login Options, select Also login to peer iSCSI target for High Availability (recommended) if the peer iSCSI target is listed.

c. Select a Server Network Adapter IP address from the drop-down list if you have the appropriate failover software, such as EMC PowerPath.

Note: The IP Address used should be the IP Address of the Adapter that is on the same Network as the target. In this case, you would select the IP Address of 10.1.1.98 to access the Target at the IP Address of 10.1.1.198.


d. If you selected Also login to peer iSCSI target for High Availability (recommended), leave the Server Network Adapter IP set to Default to allow the iSCSI initiator to automatically fail over to an available NIC in the event of a failure.

This option allows the utility to create a login connection to the peer target so if the target you selected becomes unavailable, data will continue to the peer target.

e. Click Logon to connect to the selected target.

A message displays showing the logon as successful.

Figure 64 Successful logon message

f. Click OK. The iSCSI Targets window (Figure 63 on page 137) displays again.

g. Click Next.

The Server Utility window displays.


Figure 65 Server registration window

7. In the server registration window, click Next to send the updated information to the storage system.

A message showing a successful update displays.

Note: If you have the host agent installed on the server, you will get an error message indicating that the host agent is running and you cannot use the server utility to update information to the storage system; the host agent will do this automatically.


Figure 66 Successfully updated message

8. Click Finish.

9. Repeat steps 2-8 for any additional iSCSI Targets.

Using Microsoft iSCSI initiator

To configure iSCSI initiators on a Windows server to connect to the storage-system iSCSI targets using the Microsoft iSCSI Initiator software, complete the following steps:

1. Open the Microsoft iSCSI Initiator properties dialog by clicking Start > Administrative Tools > iSCSI Initiator.

The Microsoft iSCSI Initiator Properties dialog box displays.


Figure 67 Microsoft iSCSI Initiator Properties dialog box

2. Add an iSCSI Target by clicking the Discovery tab and then Add Portal.

Figure 68 Discovery tab

The Add Target Portal dialog box displays.


Figure 69 Add Target Portal dialog box

3. Click Advanced.

The Advanced Settings dialog box displays.

Figure 70 Advanced Settings dialog box, General tab


a. In the Local Adapter field, choose Microsoft iSCSI Initiator from the pull-down list.

b. In the Source IP field, choose the IP Address of the adapter that will be used to access this target.

c. In the Target portal field, choose the IP address of the target that will be accessed by this source.

Note: The IP Address used should be the IP Address of the Adapter that is on the same Network as the target. In this case, you would select the IP Address of 10.1.1.98 to access the Target at the IP Address of 10.1.1.198.

d. Click OK. You are returned to the iSCSI Initiator Properties, Discovery tab.

Figure 71 iSCSI Initiator Properties dialog box, Discovery tab

4. Repeat steps 2-3 for any additional iSCSI Targets.


5. In the iSCSI Initiator Properties dialog box, click the Targets tab and the iSCSI Targets should be displayed as Inactive, as shown in Figure 72.

Figure 72 iSCSI Initiator Properties dialog box, Targets tab

6. Select the target in the list and click Logon….

The Log On to Target dialog box displays.

Figure 73 Log on to Target dialog box


7. Ensure that the Automatically restore this connection when the computer starts checkbox is selected. Also check the Enable multi-path box if PowerPath multi-path software is already installed on the host.

8. Click OK.

The iSCSI Initiator Properties dialog box, Targets tab displays again. The target should be shown as Connected.

Other iSCSI targets display as Inactive.

Figure 74 Target, Connected

9. Click OK.

10. Repeat steps 5-9 to configure additional iSCSI Targets.


Registering the server with the storage system

To register the server using the Unisphere Server Utility on a Windows server, complete the following steps:

1. On the server, run the Unisphere Server Utility by selecting Start > Programs > EMC > Unisphere > Unisphere Server Utility or Start > All Programs > EMC > Unisphere > Unisphere Server Utility or click the Unisphere Server Utility shortcut icon.

The EMC Unisphere Server Utility, welcome window displays.

Figure 75 EMC Unisphere Server Utility, welcome window

2. In the Unisphere Server Utility dialog box, select Configure iSCSI Connections on this server and click Next.

The utility automatically scans for all connected storage systems and lists them under Connected Storage Systems, as shown in Figure 76.


Figure 76 Connected Storage Systems

3. Locate the WWN of the NIC or iSCSI HBA you just installed. The NIC or iSCSI HBA should appear once for every SP port to which it is connected.

If the Unisphere Server Utility does not list your storage processors, verify that your server is properly connected and zoned to the storage system ports.

4. Click Next to register the server with the storage system.

The utility sends the server's name and the IP address of each NIC or iSCSI HBA to each storage system. Once the server has storage on the storage system, the utility also sends the device name and volume or file system information for each LUN (virtual disk) in the storage system that the server sees.


A message displays if the update is successful.

Figure 77 Successfully updated message

5. Click Finish to exit the utility.

Setting storage system failover values for the server initiators with Unisphere

There are two ways to set failover values for the server initiators with Unisphere:

◆ Using Failover Setup Wizard

You can configure failover mode for the host initiators. Refer to “Using Failover Setup Wizard” on page 148.

◆ Using Connectivity Status in Host Management

If you are an advanced user, you can configure failover mode for the host initiators via the Connectivity Status window. Refer to “Using Connectivity Status in Host Management” on page 156.

Using Failover Setup Wizard

To use the Unisphere Failover Setup wizard to set the storage system failover values for all NIC or iSCSI HBA initiators belonging to the server, complete the following steps:

1. From Unisphere, select All Systems > System List.


2. From the Systems page, select the storage system whose failover values you want to set.

3. Select the Hosts tab. The following window displays.

Figure 78 EMC Unisphere, Hosts tab

4. Under Wizards, select the Failover Wizard.


The Start Wizard dialog box displays.

Figure 79 Start Wizard dialog box

5. In the Start Wizard dialog box, read the introduction, and then click Next.

The Select Host dialog box displays.


Figure 80 Select Host dialog box

6. In the Select Host dialog box, select the server you just connected to the storage system and click Next.

The Select Storage System dialog box displays.


Figure 81 Select Storage System dialog box

7. Select the storage system and click Next.

The Specify Settings dialog box displays.


Figure 82 Specify Settings dialog box

8. Set the following values for the type of software running on the server.

For a Windows server or Windows virtual machine with PowerPath, set:

a. Initiator Type to CLARiiON Open

b. Array CommPath to Enabled

c. Failover Mode to:

– 4 if your PowerPath version supports ALUA.
– 1 if your PowerPath version does not support ALUA.

For information on which versions of PowerPath support ALUA, refer to the PowerPath release notes on the Powerlink website or to EMC Knowledgebase solution emc99467.


IMPORTANT! If you enter incorrect values, the storage system could become unmanageable and unreachable by the server, and the server's failover software could stop operating correctly.

If you configured your storage system iSCSI connections to your Windows virtual machine with NICs, set the storage system failover values for the virtual machine. If you configured your storage system iSCSI connections to your Hyper-V or ESX server, set the storage system failover values for the Hyper-V or ESX server.

If you have a non-Windows virtual machine or a Windows virtual machine with iSCSI HBAs, set the storage-system failover values for the Hyper-V or ESX server.

d. Click Next.

A Review and Commit Settings window displays.

Figure 83 Review and Commit Settings


9. Review the configuration and all settings.

• If the settings are incorrect, click Back until you return to the dialog box in which you need to re-enter the correct values.

• If the settings are correct, click Next.

If you clicked Next, the wizard displays a confirmation dialog box.

Figure 84 Failover Setup Wizard Confirmation dialog box

10. Click Yes to continue.

The wizard displays a summary of the values you set for the storage system.


Figure 85 Details from Operation dialog box

11. If the operation failed, return to the wizard. If the operation is successful, click Finish and close the wizard.

12. Reboot the server for the initiator records to take effect.

Using Connectivity Status in Host Management

To use the Connectivity Status window to set the storage system failover values for all NIC or iSCSI HBA initiators belonging to the server, complete the following steps:

1. From Unisphere, select All Systems > System List.

2. From the Systems page, select the storage system whose failover values you want to set.


3. Select the Hosts tab. The following window displays.

Figure 86 EMC Unisphere, Hosts tab

4. Under Host Management, select Connectivity Status.

The Connectivity Status window displays.

Figure 87 Connectivity Status Window, Host Initiators tab


5. In the Host Initiators tab, select the host name and expand it. The expanded hosts display.

Figure 88 Expanded hosts

6. Click Edit. The Edit Initiator window displays.

Figure 89 Edit Initiators window

7. Check the boxes of the initiators that you want to edit.


8. Set the following values for the type of software running on the server.

For a Windows server or Windows virtual machine with PowerPath, set:

a. Initiator Type to CLARiiON Open

b. Array CommPath to Enabled

c. Failover Mode to:

– 4 if your PowerPath version supports ALUA.
– 1 if your PowerPath version does not support ALUA.

For information on which versions of PowerPath support ALUA, refer to the PowerPath release notes on the Powerlink website or to EMC Knowledgebase solution emc99467.

IMPORTANT! If you enter incorrect values, the storage system could become unmanageable and unreachable by the server, and the server's failover software could stop operating correctly.

If you configured your storage system iSCSI connections to your Windows virtual machine with NICs, set the storage system failover values for the virtual machine. If you configured your storage system iSCSI connections to your Hyper-V or ESX server, set the storage system failover values for the Hyper-V or ESX server.

If you have a non-Windows virtual machine or a Windows virtual machine with iSCSI HBAs, set the storage-system failover values for the Hyper-V or ESX server.

d. Click OK. A confirmation dialog box displays.


Figure 90 Confirmation dialog box

9. Click Yes and close all windows. If the operation is successful, a Success message displays.

Figure 91 Success confirmation message

10. Click OK.


11. You can confirm the change by selecting the initiator and then clicking Detail in the Host Initiator tab of the Connectivity Status window.

Figure 92 Connectivity Status window, Host Initiators tab

Initiator details display in the Initiator Information window.

Figure 93 Initiator Information window


Configuring the storage group

Before you begin, you must have completed LUN creation according to your storage provisioning plan. For detailed information on LUN provisioning, refer to the VNX/CLARiiON documentation available on Powerlink.

1. Start Unisphere by entering the IP address of one of the SPs on the storage system that you want to manage into an Internet browser.

2. Enter your user name and password.

3. Click Login.

4. From Unisphere, select your system, as shown in Figure 94.

Figure 94 Select system


5. The following window displays, as shown in Figure 95.

Figure 95 Select Storage Groups


6. Select Hosts > Storage Groups in the top menu. The Storage Groups window displays, as shown in Figure 96.

Figure 96 Storage Groups window

7. If you have created storage groups, skip to Step 8. If not, complete the following steps:

a. From the task list, select Storage Groups > Create.

The Create Storage dialog box displays, as shown in Figure 97.

Figure 97 Create Storage dialog box


b. Enter a name for the Storage Group. In this example the name 10Gb_iSCSI_i2051098_Win is used.

c. Choose one of the following options:

– Click OK to create the new storage group and close the dialog box; or

– Click Apply to create the new storage group without closing the dialog box. This allows you to create additional storage groups.

A message displays showing that the storage group was created successfully, as shown in Figure 98.

Figure 98 Confirmation dialog box

d. Choose one of the following options:

– If you want to add LUNs or connect hosts now, click Yes.
– If you want to add LUNs on your own timeframe, click No and follow the next steps.

8. From the system page, select your system, then Hosts > Storage Groups.

9. To connect the servers/hosts, select the storage group you just created and choose one of the following options:

– Click Connect Hosts; or
– Open Properties by clicking Properties, or by right-clicking the selected storage group and selecting Properties, as shown in Figure 99.


Figure 99 Storage Group, Properties

10. Click the Hosts tab from the properties of the storage group to which you want to connect the servers, as shown in Figure 100.

Figure 100 Hosts tab

11. In the Host tab, select the available hosts you want to connect.


12. Click the arrow to move the host from the Available Hosts column to the Host to be Connected column and click Apply.

The host displays in the Host to be Connected column, as shown in Figure 101.

Figure 101 Hosts to be Connected column

13. Click OK. The main Unisphere window displays.


14. From the main Unisphere window, connect the LUNs to the storage group, as shown in Figure 102.

Figure 102 Connect LUNs

From the task list under Storage Groups, select a storage group to which you want to add LUNs and choose one of the following options:

– Select Connect LUNs; or
– Click the LUNs tab from the Properties of the storage group to which you want to add LUNs.


The LUNs tab displays, as shown in Figure 103.

Figure 103 LUNs tab

15. In the Available LUNs box, select the LUNs that you want to add and click Add, as shown in Figure 103.


The LUNs will appear in the Selected LUNs box, as shown in Figure 104.

Figure 104 Selected LUNs

16. Click Apply, as shown in Figure 104. A confirmation box displays, as shown in Figure 105.

Figure 105 Confirmation dialog box


17. Click Yes.

A message displays showing the operation was successful, as shown in Figure 106.

Figure 106 Success message box

18. Click OK. The LUNs are now displayed, as shown in Figure 107.

Figure 107 Added LUNs


Making LUNs visible to a Windows server or Windows virtual machine with NICs

To allow the Windows server access to the LUNs that you created, use Windows Computer Management to perform a rescan by completing the following steps.

1. Choose one of the following options to open the Computer Management window:

– Start > Computer Management

– Right-click My Computer > Manage

The Computer Management window displays, as shown in Figure 108.

Figure 108 Computer Management window

2. Under the Storage tree, select Disk Management.

3. From the tool bar, select Action > Rescan Disks.


The rescanned disks display, as shown in Figure 109.

Figure 109 Rescanned disks

Verifying that PowerPath for Windows servers sees all paths to the LUNs

If you do not already have PowerPath installed, then install PowerPath by referring to the appropriate PowerPath Installation and Administration Guide for your operating system. This guide is available on Powerlink.

1. On the Windows server, open the PowerPath Management Console by choosing one of the following options:

– Click the PowerPath monitor task bar icon; or

– Right-click the icon and select PowerPath Administrator

Figure 110 PowerPath icon

The EMC PowerPath Console screen displays, as shown in Figure 111.

Figure 111 EMC PowerPath Console screen

2. Select Disks in the left pane. The Disks screen displays, as shown in Figure 112.

Figure 112 Disks

3. Verify that the path metric for each LUN is n/n, where n is the total number of paths to the LUN. Our example shows 2/2.

iSCSI CHAP authentication

The Windows server and the VNX for block support the Challenge Handshake Authentication Protocol (CHAP) for iSCSI network security.

CHAP provides a method for the Windows server and the VNX for block to authenticate each other through the exchange of a shared secret (a security key, similar to a password), which is typically a string of 12 to 16 bytes.

IMPORTANT!

If CHAP security is not configured for the VNX for block, any computer connected to the same IP networks as the VNX for block iSCSI ports can read from or write to the VNX for block.

CHAP has two variants, one-way and reverse CHAP authentication:

◆ In one-way CHAP authentication, CHAP sets up the accounts that the Windows server uses to connect to the VNX for block. Only the VNX for block authenticates the Windows server.

◆ In reverse CHAP authentication, the VNX for block authenticates the Windows server and the Windows server also authenticates the VNX for block.

The CX-Series iSCSI Security Setup Guide provides detailed information regarding CHAP. This can be found on the EMC Online Support website.
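The challenge-response exchange that underlies CHAP can be illustrated with a short Python sketch. It follows RFC 1994, where the response is the MD5 digest of the CHAP identifier, the shared secret, and the challenge concatenated together. The secret and challenge values below are hypothetical examples for illustration only, not values from any VNX configuration.

```python
import hashlib
import os

def chap_response(chap_id: int, secret: bytes, challenge: bytes) -> bytes:
    """Compute a CHAP response per RFC 1994: MD5(ID || secret || challenge)."""
    return hashlib.md5(bytes([chap_id]) + secret + challenge).digest()

# Hypothetical values: a real iSCSI target issues the ID and challenge
# during login, and the secret is the 12-16 byte key configured on both
# the initiator and the storage system.
secret = b"MySecret12345678"   # shared secret (12 to 16 bytes recommended)
challenge = os.urandom(16)     # random challenge from the authenticator
response = chap_response(0x27, secret, challenge)

# The authenticator computes the same digest and compares; the secret
# itself never crosses the wire.
assert response == chap_response(0x27, secret, challenge)
print(len(response))  # MD5 digest is 16 bytes
```

Because only the digest is exchanged, an eavesdropper on the IP network sees the challenge and response but not the shared secret.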

Glossary

This glossary contains terms related to EMC products and EMC networked storage concepts.

A

access control A service that allows or prohibits access to a resource. Storage management products implement access control to allow or prohibit specific users. Storage platform products implement access control, often called LUN Masking, to allow or prohibit access to volumes by Initiators (HBAs). See also “persistent binding” and “zoning.”

active domain ID The domain ID actively being used by a switch. It is assigned to a switch by the principal switch.

active zone set The active zone set is the zone set definition currently in effect and enforced by the fabric or other entity (for example, the name server). Only one zone set at a time can be active.

agent An autonomous agent is a system situated within (and is part of) an environment that senses that environment, and acts on it over time in pursuit of its own agenda. Storage management software centralizes the control and monitoring of highly distributed storage infrastructure. The centralizing part of the software management system can depend on agents that are installed on the distributed parts of the infrastructure. For example, an agent (software component) can be installed on each of the hosts (servers) in an environment to allow the centralizing software to control and monitor the hosts.

alarm An SNMP message notifying an operator of a network problem.

any-to-any port connectivity A characteristic of a Fibre Channel switch that allows any port on the switch to communicate with any other port on the same switch.

application Application software is a defined subclass of computer software that employs the capabilities of a computer directly to a task that users want to perform. This is in contrast to system software that participates with integration of various capabilities of a computer, and typically does not directly apply these capabilities to performing tasks that benefit users. The term application refers to both the application software and its implementation which often refers to the use of an information processing system. (For example, a payroll application, an airline reservation application, or a network application.) Typically an application is installed “on top of” an operating system like Windows or Linux, and contains a user interface.

application-specific integrated circuit (ASIC) A circuit designed for a specific purpose, such as implementing lower-layer Fibre Channel protocols (FC-1 and FC-0). ASICs contrast with general-purpose devices such as memory chips or microprocessors, which can be used in many different applications.

arbitration The process of selecting one respondent from a collection of several candidates that request service concurrently.

ASIC family Different switch hardware platforms that utilize the same port ASIC can be grouped into collections known as an ASIC family. For example, the ED-64M and ED-140M in the Fuji ASIC family run different microprocessors, but both utilize the same port ASIC to provide Fibre Channel connectivity and are therefore in the same ASIC family. For interoperability concerns, it is useful to understand to which ASIC family a switch belongs.

ASCII ASCII (American Standard Code for Information Interchange), generally pronounced [aeski], is a character encoding based on the English alphabet. ASCII codes represent text in computers, communications equipment, and other devices that work with text. Most modern character encodings, which support many more characters, have a historical basis in ASCII.

audit log A log containing summaries of actions taken by a Connectrix Management software user that creates an audit trail of changes. Adding, modifying, or deleting user or product administration

values creates a record in the audit log that includes the date and time.

authentication Verification of the identity of a process or person.

B

backpressure The effect on the environment leading up to the point of restriction. See “congestion.”

BB_Credit See “buffer-to-buffer credit.”

beaconing Repeated transmission of a beacon light and message until an error is corrected or bypassed. Typically used by a piece of equipment when an individual Field Replaceable Unit (FRU) needs replacement. Beaconing helps the field engineer locate the specific defective component. Some equipment management software systems such as Connectrix Manager offer beaconing capability.

BER See “bit error rate.”

bidirectional In Fibre Channel, the capability to simultaneously communicate at maximum speeds in both directions over a link.

bit error rate Ratio of received bits that contain errors to total of all bits transmitted.

blade server A consolidation of independent servers and switch technology in the same chassis.

blocked port Devices communicating with a blocked port are prevented from logging in to the Fibre Channel switch containing the port or communicating with other devices attached to the switch. A blocked port continuously transmits the off-line sequence (OLS).

bridge A device that provides a translation service between two network segments utilizing different communication protocols. EMC supports and sells bridges that convert iSCSI storage commands from a NIC-attached server to Fibre Channel commands for a storage platform.

broadcast Sends a transmission to all ports in a network. Typically used in IP networks. Not typically used in Fibre Channel networks.

broadcast frames Data packet, also known as a broadcast packet, whose destination address specifies all computers on a network. See also “multicast.”

buffer Storage area for data in transit. Buffers compensate for differences in link speeds and link congestion between devices.

buffer-to-buffer credit The number of receive buffers allocated by a receiving FC_Port to a transmitting FC_Port. The value is negotiated between Fibre Channel ports during link initialization. Each time a port transmits a frame it decrements this credit value. Each time a port receives an R_Rdy frame it increments this credit value. If the credit value is decremented to zero, the transmitter stops sending any new frames until the receiver has transmitted an R_Rdy frame. Buffer-to-buffer credit is particularly important in SRDF and Mirror View distance extension solutions.
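The credit accounting described in this entry can be sketched in a few lines of Python (the class and method names are illustrative, not part of any EMC API):

```python
class BBCreditPort:
    """Minimal sketch of buffer-to-buffer credit flow control as
    described above: decrement the credit count on each frame sent,
    increment it on each R_RDY received, and stall at zero."""

    def __init__(self, negotiated_credits: int):
        # Value negotiated between the two FC_Ports at link initialization.
        self.credits = negotiated_credits

    def can_send(self) -> bool:
        return self.credits > 0

    def send_frame(self) -> bool:
        if not self.can_send():
            return False          # transmitter must wait for an R_RDY
        self.credits -= 1
        return True

    def receive_r_rdy(self):
        self.credits += 1         # receiver has freed a buffer

port = BBCreditPort(negotiated_credits=2)
assert port.send_frame() and port.send_frame()
assert not port.send_frame()      # credits exhausted: transmission stalls
port.receive_r_rdy()
assert port.send_frame()          # one buffer freed, one more frame allowed
```

This stall-at-zero behavior is why a sufficiently large credit value matters on long links: frames in flight consume credits until the corresponding R_RDY signals return.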

C

Call Home A product feature that allows the Connectrix service processor to automatically dial out to a support center and report system problems. The support center server accepts calls from the Connectrix service processor, logs reported events, and can notify one or more support center representatives. Telephone numbers and other information are configured through the Windows NT dial-up networking application. The Call Home function can be enabled and disabled through the Connectrix Product Manager.

channel With Open Systems, a channel is a point-to-point link that transports data from one point to another on the communication path, typically with high throughput and low latency that is generally required by storage systems. With Mainframe environments, a channel refers to the server-side of the server-storage communication path, analogous to the HBA in Open Systems.

Class 2 Fibre Channel class of service In Class 2 service, the fabric and destination N_Ports provide connectionless service with notification of delivery or nondelivery between the two N_Ports. Historically, Class 2 service is not widely used in Fibre Channel systems.

Class 3 Fibre Channel class of service Class 3 service provides a connectionless service without notification of delivery between N_Ports. (This is also known as datagram service.) The transmission and routing of Class 3 frames is the same

as for Class 2 frames. Class 3 is the dominant class of communication used in Fibre Channel for moving data between servers and storage and may be referred to as “Ship and pray.”

Class F Fibre Channel class of service Class F service is used for all switch-to-switch communication in a multiswitch fabric environment. It is nearly identical to Class 2 from a flow control point of view.

community A relationship between an SNMP agent and a set of SNMP managers that defines authentication, access control, and proxy characteristics.

community name A name that represents an SNMP community that the agent software recognizes as a valid source for SNMP requests. An SNMP management program that sends an SNMP request to an agent program must identify the request with a community name that the agent recognizes or the agent discards the message as an authentication failure. The agent counts these failures and reports the count to the manager program upon request, or sends an authentication failure trap message to the manager program.

community profile Information that specifies which management objects are available to what management domain or SNMP community name.

congestion Occurs at the point of restriction. See “backpressure.”

connectionless Non dedicated link. Typically used to describe a link between nodes that allows the switch to forward Class 2 or Class 3 frames as resources (ports) allow. Contrast with the dedicated bandwidth that is required in a Class 1 Fibre Channel Service point-to-point link.

Connectivity Unit A hardware component that contains hardware (and possibly software) that provides Fibre Channel connectivity across a fabric. Connectrix switches are examples of Connectivity Units. This term was popularized by the Fibre Alliance MIB and is sometimes abbreviated to connunit.

Connectrix management software The software application that implements the management user interface for all managed Fibre Channel products, typically the Connectrix-M product line. Connectrix Management software is a client/server application with the server running on the Connectrix service processor, and clients running remotely or on the service processor.

Connectrix service processor An optional 1U server shipped with the Connectrix-M product line to run the Connectrix Management server software and EMC remote support application software.

Control Unit In mainframe environments, a Control Unit controls access to storage. It is analogous to a Target in Open Systems environments.

core switch Occupies central locations within the interconnections of a fabric. Generally provides the primary data paths across the fabric and the direct connections to storage devices. Connectrix directors are typically installed as core switches, but may be located anywhere in the fabric.

credit A numeric value that relates to the number of available BB_Credits on a Fibre Channel port. See “buffer-to-buffer credit.”

D

DASD Direct Access Storage Device.

default Pertaining to an attribute, value, or option that is assumed when none is explicitly specified.

default zone A zone containing all attached devices that are not members of any active zone. Typically the default zone is disabled in a Connectrix-M environment, which prevents newly installed servers and storage from communicating until they have been provisioned.

Dense Wavelength Division Multiplexing (DWDM) A process that carries different data channels at different wavelengths over one pair of fiber-optic links. A conventional fiber-optic system carries only one channel over a single wavelength traveling through a single fiber.

destination ID A field in a Fibre Channel header that specifies the destination address for a frame. The Fibre Channel header also contains a Source ID (SID). The FCID for a port contains both the SID and the DID.

device A piece of equipment, such as a server, switch or storage system.

dialog box A user interface element of a software product typically implemented as a pop-up window containing informational messages and fields for modification. Facilitates a dialog between the user and the application. Dialog box is often used interchangeably with window.

DID An acronym used to refer to either Domain ID or Destination ID. This ambiguity can create confusion. As a result E-Lab recommends this acronym be used to apply to Domain ID. Destination ID can be abbreviated to FCID.

director An enterprise-class Fibre Channel switch, such as the Connectrix ED-140M, MDS 9509, or ED-48000B. Directors deliver high availability, failure ride-through, and repair under power to ensure maximum uptime for business-critical applications. Major assemblies, such as power supplies, fan modules, switch controller cards, switching elements, and port modules, are all hot-swappable.

The term director may also refer to a board-level module in the Symmetrix that provides the interface between host channels (through an associated adapter module in the Symmetrix) and Symmetrix disk devices. (This description is presented here only to clarify a term used in other EMC documents.)

DNS See “domain name service name.”

domain ID A byte-wide field in the three-byte Fibre Channel address that uniquely identifies a switch in a fabric. The three fields in an FCID are domain, area, and port. A distinct Domain ID is requested from the principal switch. The principal switch allocates one Domain ID to each switch in the fabric. A user may be able to set a Preferred ID which can be requested of the Principal switch, or set an Insistent Domain ID. If two switches insist on the same DID, one or both switches will segment from the fabric.
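The domain/area/port layout of the three-byte Fibre Channel address can be shown with a small Python sketch (the helper names and the sample FCID are hypothetical):

```python
def fcid_to_fields(fcid: int):
    """Split a 24-bit Fibre Channel address (FCID) into its three
    byte-wide fields: domain, area, and port, as described above."""
    return (fcid >> 16) & 0xFF, (fcid >> 8) & 0xFF, fcid & 0xFF

def fields_to_fcid(domain: int, area: int, port: int) -> int:
    """Reassemble an FCID from its domain, area, and port bytes."""
    return (domain << 16) | (area << 8) | port

# Hypothetical FCID 0x6F1A05: switch domain 0x6F, area 0x1A, port 0x05.
domain, area, port = fcid_to_fields(0x6F1A05)
assert (domain, area, port) == (0x6F, 0x1A, 0x05)
assert fields_to_fcid(domain, area, port) == 0x6F1A05
```

Because the domain byte is the high-order field, every N_Port attached to a given switch shares that switch's Domain ID in its FCID.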

domain name service name Host or node name for a system that is translated to an IP address through a name server. All DNS names have a host name component and, if fully qualified, a domain component, such as host1.abcd.com. In this example, host1 is the host name.

dual-attached host A host that has two (or more) connections to a set of devices.

E

E_D_TOV A time-out period within which each data frame in a Fibre Channel sequence transmits. This avoids time-out errors at the destination Nx_Port. This function facilitates high-speed recovery from dropped frames. Typically this value is 2 seconds.

E_Port Expansion Port, a port type in a Fibre Channel switch that attaches to another E_Port on a second Fibre Channel switch, forming an Interswitch Link (ISL). This link typically conforms to the FC-SW standards developed by the T11 committee, but might not support heterogeneous interoperability.

edge switch Occupies the periphery of the fabric, generally providing the direct connections to host servers and management workstations. No two edge switches can be connected by interswitch links (ISLs). Connectrix departmental switches are typically installed as edge switches in a multiswitch fabric, but may be located anywhere in the fabric.

Embedded Web Server A management interface embedded on the switch’s code that offers features similar to (but not as robust as) the Connectrix Manager and Product Manager.

error detect time out value Defines the time the switch waits for an expected response before declaring an error condition. The error detect time out value (E_D_TOV) can be set within a range of two-tenths of a second to one second using the Connectrix switch Product Manager.

error message An indication that an error has been detected. See also “information message” and “warning message.”

Ethernet A baseband LAN that allows multiple station access to the transmission medium at will without prior coordination and which avoids or resolves contention.

event log A record of significant events that have occurred on a Connectrix switch, such as FRU failures, degraded operation, and port problems.

expansion port See “E_Port.”

explicit fabric login In order to join a fabric, an N_Port must log in to the fabric (an operation referred to as FLOGI). Typically this is an explicit operation performed by the N_Port communicating with the F_Port of the switch, and is called an explicit fabric login. Some legacy Fibre Channel ports do not perform explicit login, and switch vendors perform login for such ports, creating an implicit login. Typically, logins are explicit.

F

FA Fibre Adapter, another name for a Symmetrix Fibre Channel director.

F_Port Fabric Port, a port type on a Fibre Channel switch. An F_Port attaches to an N_Port through a point-to-point full-duplex link connection. A G_Port automatically becomes an F_Port or an E_Port depending on the port initialization process.

fabric One or more switching devices that interconnect Fibre Channel N_Ports, and route Fibre Channel frames based on destination IDs in the frame headers. A fabric provides discovery, path provisioning, and state change management services for a Fibre Channel environment.

fabric element Any active switch or director in the fabric.

fabric login Process used by N_Ports to establish their operating parameters including class of service, speed, and buffer-to-buffer credit value.

fabric port A port type (F_Port) on a Fibre Channel switch that attaches to an N_Port through a point-to-point full-duplex link connection. An N_Port is typically a host (HBA) or a storage device like Symmetrix or CLARiiON.

fabric shortest path first (FSPF) A routing algorithm implemented by Fibre Channel switches in a fabric. The algorithm seeks to minimize the number of hops traversed as a Fibre Channel frame travels from its source to its destination.
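The hop-minimizing objective of FSPF can be illustrated with a breadth-first search over a toy fabric. This is a simplification: FSPF itself is a link-state protocol that computes least-cost paths (Dijkstra over link costs), which reduces to fewest hops when all links have equal cost. The switch names below are hypothetical.

```python
from collections import deque

def min_hop_path(fabric, src, dst):
    """Breadth-first search sketch of FSPF's hop-count objective:
    find a path from src to dst traversing the fewest ISLs."""
    parents = {src: None}
    queue = deque([src])
    while queue:
        switch = queue.popleft()
        if switch == dst:
            path = []             # walk parent links back to src
            while switch is not None:
                path.append(switch)
                switch = parents[switch]
            return path[::-1]
        for neighbor in fabric.get(switch, ()):
            if neighbor not in parents:
                parents[neighbor] = switch
                queue.append(neighbor)
    return None                   # dst unreachable

# Hypothetical four-switch fabric: two edge switches joined via two cores.
fabric = {"edge1": ["core1", "core2"], "core1": ["edge1", "edge2"],
          "core2": ["edge1", "edge2"], "edge2": ["core1", "core2"]}
assert min_hop_path(fabric, "edge1", "edge2") in (
    ["edge1", "core1", "edge2"], ["edge1", "core2", "edge2"])
```

Either core-switch path is two hops long; a real FSPF implementation would load-balance traffic across such equal-cost paths.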

fabric tree A hierarchical list in Connectrix Manager of all fabrics currently known to the Connectrix service processor. The tree includes all members of the fabrics, listed by WWN or nickname.

failover The process of detecting a failure on an active Connectrix switch FRU and the automatic transition of functions to a backup FRU.

fan-in/fan-out Term used to describe the server:storage ratio, where a graphic representation of a 1:n (fan-in) or n:1 (fan-out) logical topology looks like a hand-held fan, with the wide end toward n. By convention fan-out refers to the number of server ports that share a single storage port. Fan-out consolidates a large number of server ports on a fewer number of storage ports. Fan-in refers to the number of storage ports that a single server port uses. Fan-in enlarges the storage capacity used by a server. A fan-in or fan-out rate is often referred to as just the

n part of the ratio. For example, a 16:1 fan-out is also called a fan-out rate of 16; in this case, 16 server ports share a single storage port.

FCP See “Fibre Channel Protocol.”

FC-SW The Fibre Channel fabric standard. The standard is developed by the T11 organization whose documentation can be found at T11.org. EMC actively participates in T11. T11 is a committee within the InterNational Committee for Information Technology (INCITS).

fiber optics The branch of optical technology concerned with the transmission of radiant power through fibers made of transparent materials such as glass, fused silica, and plastic.

Either a single discrete fiber or a non spatially aligned fiber bundle can be used for each information channel. Such fibers are often called optical fibers to differentiate them from fibers used in non-communication applications.

fibre A general term used to cover all physical media types supported by the Fibre Channel specification, such as optical fiber, twisted pair, and coaxial cable.

Fibre Channel The general name of an integrated set of ANSI standards that define new protocols for flexible information transfer. Logically, Fibre Channel is a high-performance serial data channel.

Fibre Channel Protocol A standard Fibre Channel FC-4 level protocol used to run SCSI over Fibre Channel.

Fibre Channel switch modules The embedded switch modules in the backplane of the blade server. See “blade server.”

firmware The program code (embedded software) that resides and executes on a connectivity device, such as a Connectrix switch, a Symmetrix Fibre Channel director, or a host bus adapter (HBA).

F_Port Fabric Port, a physical interface within the fabric. An F_Port attaches to an N_Port through a point-to-point full-duplex link connection.

frame A set of fields making up a unit of transmission. Each field is made of bytes. The typical Fibre Channel frame consists of fields: Start-of-frame, header, data-field, CRC, end-of-frame. The maximum frame size is 2148 bytes.

frame header Control information placed before the data-field when encapsulating data for network transmission. The header provides the source and destination IDs of the frame.

FRU Field-replaceable unit, a hardware component that can be replaced as an entire unit. The Connectrix switch Product Manager can display status for the FRUs installed in the unit.

FSPF Fabric Shortest Path First, an algorithm used for routing traffic. This means that, between the source and destination, only the paths that have the least amount of physical hops will be used for frame delivery.

G

gateway address In TCP/IP, a device that connects two systems that use the same or different protocols.

gigabyte (GB) A unit of measure for storage size, loosely one billion (10^9) bytes. One gigabyte actually equals 1,073,741,824 bytes.
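The “loosely” versus “actually” distinction in this entry (and in the kilobyte and megabyte entries) is the decimal versus binary sense of the units, which can be checked directly:

```python
# Decimal (loose) vs binary (actual) interpretations of the storage
# size units defined in this glossary.
assert 10**9 == 1_000_000_000     # decimal gigabyte
assert 2**30 == 1_073_741_824     # binary gigabyte
assert 2**10 == 1_024             # binary kilobyte
assert 2**20 == 1_048_576         # binary megabyte
```

The gap between the two senses grows with the unit: a binary gigabyte is about 7.4% larger than a decimal one, which is why reported LUN capacities often differ from nominal sizes.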

G_Port A port type on a Fibre Channel switch capable of acting either as an F_Port or an E_Port, depending on the port type at the other end of the link.

GUI Graphical user interface.

H

HBA See “host bus adapter.”

hexadecimal Pertaining to a numbering system with base of 16; valid numbers use the digits 0 through 9 and characters A through F (which represent the numbers 10 through 15).

high availability A performance feature characterized by hardware component redundancy and hot-swappability (enabling non-disruptive maintenance). High-availability systems maximize system uptime while providing superior reliability, availability, and serviceability.

hop A hop refers to the number of InterSwitch Links (ISLs) a Fibre Channel frame must traverse to go from its source to its destination.

Good design practice encourages three hops or less to minimize congestion and performance management complexities.

host bus adapter A bus card in a host system that allows the host system to connect to the storage system. Typically the HBA communicates with the host over a PCI or PCI Express bus and has a single Fibre Channel link to the fabric. The HBA contains an embedded microprocessor with on board firmware, one or more ASICs, and a Small Form Factor Pluggable module (SFP) to connect to the Fibre Channel link.

I

I/O See “input/output.”

in-band management Transmission of monitoring and control functions over the Fibre Channel interface. You can also perform these functions out-of-band typically by use of the Ethernet to manage Fibre Channel devices.

information message A message telling a user that a function is performing normally or has completed normally. User acknowledgement might or might not be required, depending on the message. See also “error message” and “warning message.”

input/output (1) Pertaining to a device whose parts can perform an input process and an output process at the same time. (2) Pertaining to a functional unit or channel involved in an input process, output process, or both (concurrently or not), and to the data involved in such a process. (3) Pertaining to input, output, or both.

interface (1) A shared boundary between two functional units, defined by functional characteristics, signal characteristics, or other characteristics as appropriate. The concept includes the specification of the connection of two devices having different functions. (2) Hardware, software, or both, that links systems, programs, or devices.

Internet Protocol See “IP.”

interoperability The ability to communicate, execute programs, or transfer data between various functional units over a network. Also refers to a Fibre Channel fabric that contains switches from more than one vendor.

interswitch link (ISL) Interswitch link, a physical E_Port connection between any two switches in a Fibre Channel fabric. An ISL forms a hop in a fabric.

IP Internet Protocol, the TCP/IP standard protocol that defines the datagram as the unit of information passed across an internet and provides the basis for connectionless, best-effort packet delivery service. IP includes the ICMP control and error message protocol as an integral part.

IP address A unique string of numbers that identifies a device on a network. The address consists of four groups (octets) of numbers delimited by periods. (This is called dotted-decimal notation.) All resources on the network must have an IP address. A valid IP address is in the form nnn.nnn.nnn.nnn, where each nnn is a decimal number in the range 0 to 255.
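The dotted-decimal format described here can be checked with a short Python sketch (the helper name is illustrative):

```python
def is_valid_ipv4(address: str) -> bool:
    """Check the dotted-decimal form described above: four decimal
    groups, each in the range 0 to 255, separated by periods."""
    groups = address.split(".")
    if len(groups) != 4:
        return False
    return all(g.isdigit() and 0 <= int(g) <= 255 for g in groups)

assert is_valid_ipv4("192.168.1.50")
assert not is_valid_ipv4("256.1.1.1")   # group out of range
assert not is_valid_ipv4("10.0.0")      # too few groups
```

This is the address form used when configuring VNX iSCSI ports and server NICs earlier in this chapter.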

ISL Interswitch link, a physical E_Port connection between any two switches in a Fibre Channel fabric.

K

kilobyte (K) A unit of measure for storage size, loosely one thousand bytes. One kilobyte actually equals 1,024 bytes.

L

laser A device that produces optical radiation using a population inversion to provide light amplification by stimulated emission of radiation and (generally) an optical resonant cavity to provide positive feedback. Laser radiation can be highly coherent temporally, spatially, or both.

LED Light-emitting diode.

link The physical connection between two devices on a switched fabric.

link incident A problem detected on a fiber-optic link; for example, loss of light, or invalid sequences.

load balancing The ability to distribute traffic over all network ports that are the same distance from the destination address by assigning different paths to different messages. Increases effective network bandwidth. EMC PowerPath software provides load-balancing services for server I/O.

logical volume A named unit of storage consisting of a logically contiguous set of disk sectors.

Logical Unit Number(LUN)

A number, assigned to a storage volume, that (in combination with the storage device node's World Wide Port Name (WWPN)) represents a unique identifier for a logical volume on a storage area network.

M

MAC address Media Access Control address, the hardware address of a device connected to a shared network.

managed product A hardware product that can be managed using the Connectrix Product Manager. For example, a Connectrix switch is a managed product.

management session Exists when a user logs in to the Connectrix Management software and successfully connects to the product server. The user must specify the network address of the product server at login time.

media The disk surface on which data is stored.

media access control See “MAC address.”

megabyte (MB) A unit of measure for storage size, loosely one million (10^6) bytes. One megabyte actually equals 1,048,576 bytes.

MIB Management Information Base, a related set of objects (variables) containing information about a managed device and accessed through SNMP from a network management station.

multicast Multicast is used when multiple copies of data are to be sent to designated, multiple, destinations.

multiswitch fabric Fibre Channel fabric created by linking more than one switch or director together to allow communication. See also “ISL.”

multiswitch linking Port-to-port connections between two switches.

N

name server (dNS) A service known as the distributed Name Server provided by a Fibre Channel fabric that provides device discovery, path provisioning, and

state change notification services to the N_Ports in the fabric. The service is implemented in a distributed fashion, for example, each switch in a fabric participates in providing the service. The service is addressed by the N_Ports through a Well Known Address.

network address A name or address that identifies a managed product, such as a Connectrix switch, or a Connectrix service processor on a TCP/IP network. The network address can be either an IP address in dotted-decimal notation, or a Domain Name Service (DNS) name as administered on a customer network. All DNS names have a host name component and (if fully qualified) a domain component, such as host1.emc.com. In this example, host1 is the host name and emc.com is the domain component.

nickname A user-defined name representing a specific WWxN, typically used in a Connectrix-M management environment. The analog in the Connectrix-B and MDS environments is alias.

node The point at which one or more functional units connect to the network.

N_Port Node Port, a Fibre Channel port implemented by an end device (node) that can attach to an F_Port or directly to another N_Port through a point-to-point link connection. HBAs and storage systems implement N_Ports that connect to the fabric.

NVRAM Nonvolatile random access memory.

O

offline sequence (OLS) The OLS Primitive Sequence is transmitted to indicate that the FC_Port transmitting the Sequence is:

a. initiating the Link Initialization Protocol

b. receiving and recognizing NOS

c. or entering the offline state

OLS See “offline sequence (OLS)”.

operating mode Regulates what other types of switches can share a multiswitch fabric with the switch under consideration.


operating system Software that controls the execution of programs and that may provide such services as resource allocation, scheduling, input/output control, and data management. Although operating systems are predominantly software, partial hardware implementations are possible.

optical cable A fiber, multiple fibers, or a fiber bundle in a structure built to meet optical, mechanical, and environmental specifications.

OS See “operating system.”

out-of-band management Transmission of monitoring/control functions outside of the Fibre Channel interface, typically over Ethernet.

oversubscription The ratio of bandwidth required to bandwidth available. When the ports, paired in any random fashion, cannot all sustain full duplex at full line rate, the switch is oversubscribed.
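As a rough illustration of this ratio (a sketch, not from this TechBook; the port counts and line rates below are hypothetical), required bandwidth can be compared with available bandwidth:

```python
def oversubscription_ratio(host_ports: int, host_gbps: float,
                           storage_ports: int, storage_gbps: float) -> float:
    """Ratio of bandwidth required (host side) to bandwidth
    available (storage side). A value greater than 1.0 means
    the configuration is oversubscribed."""
    required = host_ports * host_gbps          # aggregate host demand
    available = storage_ports * storage_gbps   # aggregate storage supply
    return required / available

# Hypothetical fan-in: twelve 4 Gb/s host ports sharing two 8 Gb/s storage ports
print(oversubscription_ratio(12, 4.0, 2, 8.0))  # 3.0, i.e., 3:1 oversubscription
```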

P

parameter A characteristic element with a variable value that is given a constant value for a specified application. Also, a user-specified value for an item in a menu; a value that the system provides when a menu is interpreted; data passed between programs or procedures.

password (1) A value used in authentication or a value used to establish membership in a group having specific privileges. (2) A unique string of characters known to the computer system and to a user who must specify it to gain full or limited access to a system and to the information stored within it.

path In a network, any route between any two nodes.

persistent binding Use of server-level access control configuration information to persistently bind a server device name to a specific Fibre Channel storage volume or logical unit number, through a specific HBA and storage port WWN. The address of a persistently bound device does not shift if a storage target fails to recover during a power cycle. This function is the responsibility of the HBA device driver.

port (1) An access point for data entry or exit. (2) A receptacle on a device to which a cable for another device is attached.


port card Field replaceable hardware component that provides the connection for fiber cables and performs specific device-dependent logic functions.

port name A symbolic name that the user defines for a particular port through the Product Manager.

preferred domain ID An ID configured by the fabric administrator. During the fabric build process, a switch requests permission from the principal switch to use its preferred domain ID. The principal switch can deny this request by providing an alternate domain ID only if there is a conflict for the requested domain ID. Typically, a principal switch grants the non-principal switch its requested preferred domain ID.

principal downstream ISL The ISL to which each switch will forward frames originating from the principal switch.

principal ISL The principal ISL is the ISL that frames destined to, or coming from, the principal switch in the fabric will use. An example is an RDI frame.

principal switch In a multiswitch fabric, the switch that allocates domain IDs to itself and to all other switches in the fabric. There is always one principal switch in a fabric. If a switch is not connected to any other switches, it acts as its own principal switch.

principal upstream ISL The ISL to which each switch will forward frames destined for the principal switch. The principal switch does not have any upstream ISLs.

product (1) Connectivity Product, a generic name for a switch, director, or any other Fibre Channel product. (2) Managed Product, a generic hardware product that can be managed by the Product Manager (a Connectrix switch is a managed product). Note distinction from the definition for “device.”

Product Manager A software component of Connectrix Manager software, such as a Connectrix switch product manager, that implements the management user interface for a specific product. When a product instance is opened from the Connectrix Manager software products view, the corresponding product manager is invoked. The product manager is also known as an Element Manager.


product name A user-configurable identifier assigned to a Managed Product. Typically, this name is stored on the product itself. For a Connectrix switch, the Product Name can also be accessed by an SNMP Manager as the System Name. The Product Name should align with the host name component of a Network Address.

products view The top-level display in the Connectrix Management software user interface that displays icons of Managed Products.

protocol (1) A set of semantic and syntactic rules that determines the behavior of functional units in achieving communication. (2) A specification for the format and relative timing of information exchanged between communicating parties.

R

R_A_TOV See "resource allocation time out value."

remote access link The ability to communicate with a data processing facility through a remote data link.

remote notification The system can be programmed to notify remote sites of certain classes of events.

remote user workstation A workstation, such as a PC, using Connectrix Management software and Product Manager software that can access the Connectrix service processor over a LAN connection. A user at a remote workstation can perform all of the management and monitoring tasks available to a local user on the Connectrix service processor.

resource allocation time out value (R_A_TOV) A value used to time-out operations that depend on a maximum time that an exchange can be delayed in a fabric and still be delivered. The resource allocation time-out value (R_A_TOV) can be set within a range of two-tenths of a second to 120 seconds using the Connectrix switch product manager. The typical value is 10 seconds.

S

SAN See "storage area network (SAN)."

segmentation A non-connection between two switches. Numerous reasons exist for an operational ISL to segment, including interop mode incompatibility, zoning conflicts, and domain overlaps.


segmented E_Port E_Port that has ceased to function as an E_Port within a multiswitch fabric due to an incompatibility between the fabrics that it joins.

service processor See “Connectrix service processor.”

session See “management session.”

single attached host A host that only has a single connection to a set of devices.

small form factor pluggable (SFP) An optical module implementing a shortwave or longwave optical transceiver.

SMTP Simple Mail Transfer Protocol, a TCP/IP protocol that allows users to create, send, and receive text messages. SMTP protocols specify how messages are passed across a link from one system to another. They do not specify how the mail application accepts, presents or stores the mail.

SNMP Simple Network Management Protocol, a TCP/IP protocol that generally uses the User Datagram Protocol (UDP) to exchange messages between a management information base (MIB) and a management client residing on a network.

storage area network (SAN) A network linking servers or workstations to disk arrays, tape backup systems, and other devices, typically over Fibre Channel and consisting of multiple fabrics.

subnet mask Used by a computer to determine whether another computer with which it needs to communicate is located on a local or remote network. The network mask depends upon the class of networks to which the computer is connecting. The mask indicates which digits to look at in a longer network address and allows the router to avoid handling the entire address. Subnet masking allows routers to move the packets more quickly. Typically, a subnet may represent all the machines at one geographic location, in one building, or on the same local area network.
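The local-versus-remote decision described above can be sketched with Python's standard `ipaddress` module (a minimal illustration, not part of the TechBook; the addresses are made up): the mask is applied to both addresses, and matching network portions mean no router hop is needed.

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, mask: str) -> bool:
    """Apply the subnet mask to both addresses; if the masked
    (network) portions match, the two hosts are on the same
    local network and traffic need not cross a router."""
    net_a = ipaddress.ip_network(f"{ip_a}/{mask}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{mask}", strict=False)
    return net_a.network_address == net_b.network_address

print(same_subnet("192.168.1.10", "192.168.1.200", "255.255.255.0"))  # True
print(same_subnet("192.168.1.10", "192.168.2.10", "255.255.255.0"))   # False
```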

switch priority Value configured into each switch in a fabric that determines its relative likelihood of becoming the fabric’s principal switch.


T

TCP/IP Transmission Control Protocol/Internet Protocol. TCP/IP refers to the protocols that are used on the Internet and most computer networks. TCP refers to the Transport layer, which provides flow control and connection services. IP refers to the Internet Protocol level, where addressing and routing are implemented.

toggle To change the state of a feature/function that has only two states. For example, if a feature/function is enabled, toggling changes the state to disabled.

topology Logical and/or physical arrangement of switches on a network.

trap An asynchronous (unsolicited) notification of an event originating on an SNMP-managed device and directed to a centralized SNMP Network Management Station.

U

unblocked port Devices communicating with an unblocked port can log in to a Connectrix switch or a similar product and communicate with devices attached to any other unblocked port if the devices are in the same zone.

unicast Unicast routing provides one or more optimal paths between any two switches that make up the fabric. (This is used to send a single copy of the data to a designated destination.)

upper layer protocol (ULP) The protocol user of FC-4, including IPI, SCSI, IP, and SBCCS. In a device driver, ULP typically refers to the operations that are managed by the class level of the driver, not the port level.

URL Uniform Resource Locator, the addressing system used by the World Wide Web. It describes the location of a file or server anywhere on the Internet.

V

virtual switch A Fibre Channel switch function that allows users to subdivide a physical switch into multiple virtual switches. Each virtual switch consists of a subset of ports on the physical switch, and has all the properties of a Fibre Channel switch. Multiple virtual switches can be connected through ISLs to form a virtual fabric or VSAN.


virtual storage area network (VSAN) An allocation of switch ports that can span multiple physical switches and form a virtual fabric. A single physical switch can sometimes host more than one VSAN.

volume A general term referring to an addressable logically contiguous storage space providing block I/O services.

VSAN Virtual Storage Area Network.

W

warning message An indication that a possible error has been detected. See also "error message" and "information message."

World Wide Name (WWN) A unique identifier, even on global networks. The WWN is a 64-bit number (XX:XX:XX:XX:XX:XX:XX:XX). The WWN contains an OUI, which uniquely identifies the equipment manufacturer. OUIs are administered by the Institute of Electrical and Electronics Engineers (IEEE). The Fibre Channel environment uses two types of WWNs: a World Wide Node Name (WWNN) and a World Wide Port Name (WWPN). Typically, the WWPN is used for zoning (the path provisioning function).
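For an IEEE NAA format 1 WWN (the common 10:00:... form), the OUI occupies the third through fifth octets; other NAA formats place it at a different offset, so the sketch below assumes format 1 only, and the example WWN is made up. This is an illustration, not part of the TechBook:

```python
def oui_from_wwn(wwn: str) -> str:
    """Extract the 24-bit OUI from an IEEE NAA format 1 WWN
    (10:00:OUI:serial). The OUI identifies the manufacturer."""
    octets = wwn.lower().split(":")
    if len(octets) != 8:
        raise ValueError("expected a 64-bit WWN as 8 colon-separated octets")
    return ":".join(octets[2:5])   # octets 3-5 hold the OUI in format 1

# Hypothetical WWPN; the OUI is the middle three octets
print(oui_from_wwn("10:00:00:60:69:51:8F:2A"))  # 00:60:69
```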

Z

zone An information object implemented by the distributed Name Server (dNS) of a Fibre Channel switch. A zone contains a set of members that are permitted to discover and communicate with one another. The members can be identified by a WWPN or port ID. EMC recommends the use of WWPNs in zone management.

zone set An information object implemented by the distributed Name Server (dNS) of a Fibre Channel switch. A zone set contains a set of zones. A zone set is activated against a fabric, and only one zone set can be active in a fabric.

zonie A storage administrator who spends a large percentage of his workday zoning a Fibre Channel network and provisioning storage.

zoning Zoning allows an administrator to group several devices by function or by location. All devices connected to a connectivity product, such as a Connectrix switch, may be configured into one or more zones.


Index

B

bridged solutions 60

C

CHAP 49
Congestion
  network 28

D

digests 50

E

EMC native iSCSI targets 53

I

Internet Protocol Security (IPsec) 40
IP
  overview 20
IPsec
  and tunneling 40
  terminology 41
IPv6 29
  addressing 32
  features 29
  IPsec 31
  larger address space 30
  packet 37
  transition mechanisms 38
iSCSI
  discovery 46
  error recovery 47
  overview 44
  security 48
  solution features, comparison 73
  technology 42, 44
iSCSI targets
  configuring 58

K

KRB5 (Kerberos V5) 49

N

Network congestion 28

S

SPKM1 & 2 (Simple Public Key GSS-API Mechanism) 50
SRP (Secure Remote Password) 49

T

TCP
  error recovery 25
  overview 18
  terminology 21
