
NetApp University

SAN Implementation Workshop Exercise Guide

NetApp University - Do Not Distribute


NETAPP UNIVERSITY

SAN Implementation Workshop Exercise Guide Course Number: STRSW-ED-ILT-SAN-IMPWKSHP

Catalog Number: STRSW-ED-ILT-SAN-IMPWKSHP-EG


ATTENTION The information contained in this guide is intended for training use only. This guide contains information and activities that, while beneficial for the purposes of training in a closed, non-production environment, can result in downtime or other severe consequences and therefore are not intended as a reference. This guide is not a technical reference and should not, under any circumstances, be used in production environments. To obtain reference materials, refer to the NetApp product documentation on the NOW site at http://now.netapp.com.

COPYRIGHT © 2008 NetApp. All rights reserved. Printed in the U.S.A. Specifications subject to change without notice.

No part of this book covered by copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of the copyright owner.

NetApp reserves the right to change any products described herein at any time and without notice. NetApp assumes no responsibility or liability arising from the use of products or materials described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product or materials does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.

The product described in this manual may be protected by one or more U.S. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).

TRADEMARK INFORMATION NetApp, the NetApp logo, and Go further, faster, FAServer, NearStore, NetCache, WAFL, DataFabric, FilerView, SecureShare, SnapManager, SnapMirror, SnapRestore, SnapVault, Spinnaker Networks, the Spinnaker Networks logo, SpinAccess, SpinCluster, SpinFS, SpinHA, SpinMove, SpinServer, and SpinStor are registered trademarks of Network Appliance, Inc. in the United States and other countries. Network Appliance, Data ONTAP, ApplianceWatch, BareMetal, Center-to-Edge, ContentDirector, gFiler, MultiStore, SecureAdmin, Smart SAN, SnapCache, SnapDrive, SnapMover, Snapshot, vFiler, Web Filer, SpinAV, SpinManager, SpinMirror, and SpinShot are trademarks of NetApp, Inc. in the United States and/or other countries.

Apple is a registered trademark and QuickTime is a trademark of Apple Computer, Inc. in the United States and/or other countries.

Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the United States and/or other countries.

RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered trademarks and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the United States and/or other countries.

All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.

NetApp is a licensee of the CompactFlash and CF Logo trademarks.


EXERCISE TABLE OF CONTENTS

MODULE 1: SAN DEPLOYMENTS .......................................................... E1-1
MODULE 2: HOST SOFTWARE STACK ...................................................... E2-1
MODULE 3: SAN IMPLEMENTATION PHASES ................................................ E3-1
MODULE 4: FC SWITCHING CONCEPTS .................................................... E4-1
MODULE 5: FC LINUX ................................................................. E5-1
MODULE 6: FC AND IP SOLARIS ........................................................ E6-1
MODULE 7: FC AND IP VMWARE ......................................................... E7-1
APPENDIX A ......................................................................... A-1
APPENDIX B ......................................................................... B-1


SAN Deployments


MODULE 1: SAN DEPLOYMENTS

Exercise

Module 1: SAN Deployments

Estimated Time: None

EXERCISE There is no exercise for this module.


Host Software Stack


MODULE 2: HOST SOFTWARE STACK

Exercise

Module 2: Host Software Stack

Estimated Time: None

EXERCISE There is no exercise for this module.


SAN Implementation Phases


MODULE 3: SAN IMPLEMENTATION PHASES

Exercise

Module 3: SAN Implementation Phases

Estimated Time: None

EXERCISE There is no exercise for this module.


FC Switching


MODULE 4: FC SWITCHING CONCEPTS

Exercise

Module 4: FC Switching Concepts

Estimated Time: 40 minutes

EXERCISE 1: DISABLE AND SET UP AN FC SWITCH

OVERVIEW In this exercise, you will disable an FC switch and set it up as if it came out of the box. Note that FC connectivity is disrupted during this process.

TIME ESTIMATE

20 minutes


START OF EXERCISE

STEP ACTION

1. Use PuTTY or establish a Telnet connection to log on to the FC switch console in your pod as the admin user (Brocade default password for admin is: password).

Enter the following command at the Brocade FC switch prompt to disable the switch.

san201_brocade:admin> switchDisable

Enter the following command to verify that the switch is disabled.

san201_brocade:admin> switchshow
switchType:     34.0
switchState:    Offline
switchMode:     Native
switchRole:     Disabled
switchDomain:   1 (unconfirmed)
switchId:       fffc01
switchWwn:      10:00:00:05:1e:05:10:ff
zoning:         OFF
switchBeacon:   OFF
Area Port Media Speed State
==============================
  0   0   id    N4   No_Light  Disabled
  1   1   id    N4   No_Light  Disabled
  2   2   id    N4   No_Light  Disabled
  3   3   id    N4   No_Light  Disabled
  4   4   id    N4   No_Sync   Disabled
  5   5   id    N4   No_Sync   Disabled
  6   6   id    N4   In_Sync   Disabled
  7   7   id    N4   In_Sync   Disabled
  8   8   id    N4   In_Sync   Disabled
  9   9   id    N4   In_Sync   Disabled
 10  10   id    N4   No_Light  Disabled
 11  11   id    N4   No_Light  Disabled
 12  12   id    N4   No_Light  Disabled
 13  13   id    N4   No_Light  Disabled
 14  14   id    N4   No_Light  Disabled
 15  15   id    N4   No_Light  Disabled

2. Enter the following command to set the switch name. Use the host name of the switch in your pod, for example san201_brocade for pod 201.

san201_brocade:admin> switchName "san<pod#>_brocade"

3. Enter the following command to configure the IP networking parameters of the switch. IMPORTANT: Press Enter to keep all parameters at their current values.


san205_brocade:admin> ipaddrset

Log out of the console server and log in by way of a Telnet connection to confirm IP network connectivity to the FC switch.

4. Enter the following command to configure Domain_ID and switch PID format:

san205_brocade:admin> configure

Configure...
 Fabric parameters (yes, y, no, n): [no] y
  Domain: (1..239) [1] 1
  BB credit: (1..27) [16]
  R_A_TOV: (4000..120000) [10000]
  E_D_TOV: (1000..5000) [2000]
  WAN_TOV: (0..30000) [0]
  MAX_HOPS: (7..19) [7]
  Data field size: (256..2112) [2112]
  Sequence Level Switching: (0..1) [0]
  Disable Device Probing: (0..1) [0]
  Suppress Class F Traffic: (0..1) [0]
  SYNC IO mode: (0..1) [0]
  VC Encoded Address Mode: (0..1) [0]
  Switch PID Format: (0..2) [1] 1
  Per-frame Route Priority: (0..1) [0]
  Long Distance Fabric: (0..1) [0]

(select defaults for remaining prompts)

5. Enter the following command to set the time zone (offset from UTC):

san201_brocade:admin> tstimezone -5
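To confirm the setting, tstimezone run with no operands should display the current time zone offset (a sketch; the exact behavior and output format depend on the Fabric OS version):

san201_brocade:admin> tstimezone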

6. Enter the following command to enable the switch:

san201_brocade:admin> switchEnable

7. Enter the following command to confirm that the switch has been enabled:


san201_brocade:admin> switchshow
switchName:     san201_brocade
switchType:     34.0
switchState:    Online
switchMode:     Native
switchRole:     Principal
switchDomain:   1
switchId:       fffc01
switchWwn:      10:00:00:05:1e:04:e9:96
zoning:         OFF
switchBeacon:   OFF
Area Port Media Speed State
==============================
  0   0   id    N2   Online    F-Port 50:0a:09:81:86:b8:24:a1
  1   1   id    N2   Online    F-Port 50:0a:09:82:86:b8:24:a1
  2   2   id    N2   Online    F-Port 50:0a:09:81:96:b8:24:a1
  3   3   id    N2   Online    F-Port 50:0a:09:82:96:b8:24:a1
  4   4   id    N4   Online    F-Port 21:00:00:e0:8b:86:30:3c
  5   5   id    N4   Online    F-Port 21:01:00:e0:8b:a6:30:3c
  6   6   id    N4   Online    F-Port 10:00:00:00:c9:58:29:62
  7   7   id    N4   Online    F-Port 10:00:00:00:c9:58:29:63
  8   8   id    N4   Online    F-Port 10:00:00:00:c9:43:f2:d6
  9   9   id    N4   Online    F-Port 10:00:00:00:c9:43:f2:d7
 10  10   id    N4   No_Light
 11  11   id    N4   No_Light
 12  12   id    N4   No_Light
 13  13   id    N4   No_Light
 14  14   id    N4   No_Light
 15  15   id    N4   No_Light

END OF EXERCISE


EXERCISE 2: INSPECT FC FABRIC CONFIGURATION

OVERVIEW

In this exercise, you inspect the FC fabric configuration to ensure that the hosts and the storage system are properly connected into the switch.

TIME ESTIMATE

30 minutes

START OF EXERCISE

STEP ACTION

1. Use PuTTY or establish a Telnet connection to log on to the FC switch in your pod as the admin user. (The Brocade default password for admin is: password.)

Enter the following command at the Brocade FC switch prompt to view the current nodes connected to the switch as well as other FC switch parameters:

san201_brocade:admin> switchshow
switchName:     san201_brocade
switchType:     34.0
switchState:    Online
switchMode:     Native
switchRole:     Principal
switchDomain:   1
switchId:       fffc01
switchWwn:      10:00:00:05:1e:04:e9:96
zoning:         OFF
switchBeacon:   OFF
Area Port Media Speed State
==============================
  0   0   id    N2   Online    F-Port 50:0a:09:81:86:b8:24:a1
  1   1   id    N2   Online    F-Port 50:0a:09:82:86:b8:24:a1
  2   2   id    N2   Online    F-Port 50:0a:09:81:96:b8:24:a1
  3   3   id    N2   Online    F-Port 50:0a:09:82:96:b8:24:a1
  4   4   id    N4   Online    F-Port 21:00:00:e0:8b:86:30:3c
  5   5   id    N4   Online    F-Port 21:01:00:e0:8b:a6:30:3c
  6   6   id    N4   Online    F-Port 10:00:00:00:c9:58:29:62
  7   7   id    N4   Online    F-Port 10:00:00:00:c9:58:29:63
  8   8   id    N4   Online    F-Port 10:00:00:00:c9:43:f2:d6
  9   9   id    N4   Online    F-Port 10:00:00:00:c9:43:f2:d7
 10  10   id    N4   No_Light
 11  11   id    N4   No_Light
 12  12   id    N4   No_Light


 13  13   id    N4   No_Light
 14  14   id    N4   No_Light
 15  15   id    N4   No_Light

Observe that several FC ports have remote FC nodes connected. These nodes are all connected to F-Ports (fabric ports) on the switch.

Observe also that this FC switch is not zoned (zoning: OFF).

According to the output you get from switchshow, how many FC switches are there in this FC fabric? ________________________________
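If you want to double-check your answer, the Brocade fabricShow command (not required for this exercise) lists every switch that is a member of the fabric:

san201_brocade:admin> fabricShow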

You will now identify each block of F-ports.

2. Use PuTTY or establish a Telnet connection to your Solaris host prompt and enter the following command to view the FC initiator WWPNs of the FC HBA ports on your host:

$ fcinfo hba-port

Observe that the WWPNs of your Solaris host start with 21:00 and 21:01 (for ports 0 and 1). You should be able to locate the WWPNs of your host in the output of switchshow on the Brocade FC switch. They should be connected to a fabric port (F-Port) and online.

Write down (or copy and paste into a new text file) the WWPNs of the FC initiator ports of your Solaris host:

Port 0 WWPN: __________________________________________________

Port 1 WWPN: __________________________________________________

To which port on the Brocade FC switch do the Solaris FC HBA ports connect?

Solaris FC HBA Port 0 connects to Brocade port _______________________

Solaris FC HBA Port 1 connects to Brocade port _______________________

3. While still at the prompt of your Solaris host, enter the following command to view the target WWPNs assigned to the target FC ports on the storage controller:

rsh <storage_ctlr> fcp show adapter

Repeat the command for both storage controllers in the dual controller storage system. Observe that the WWPNs of your storage controllers start with 50:0a.
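If you prefer to capture the output of both controllers in one pass, a small loop like the following works from the Solaris host prompt (a sketch; storage1 and storage2 stand in for your pod's actual controller host names):

for ctlr in storage1 storage2; do
    echo "=== $ctlr ==="
    rsh $ctlr fcp show adapter
done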


Write down (or copy and paste into a new text file) the WWPNs of the FC target ports of your storage system:

Storage 1 Port 0c WWPN: __________________________________________

Storage 1 Port 0d WWPN: __________________________________________

Storage 2 Port 0c WWPN: __________________________________________

Storage 2 Port 0d WWPN: __________________________________________

To which port on the Brocade FC switch does each storage system FC target port connect?

Storage 1 FC Port 0c connects to Brocade port _______________________

Storage 1 FC Port 0d connects to Brocade port _______________________

Storage 2 FC Port 0c connects to Brocade port _______________________

Storage 2 FC Port 0d connects to Brocade port _______________________

4. Use PuTTY or establish a Telnet connection to your VMware ESX Server host prompt and enter the following command to view the FC initiator WWPNs of the FC HBA ports on your host.

$ esxcfg-info | grep -i "port number"

Observe that the WWPNs of your ESX Server host start with 10:00. However, you will see only two WWPNs starting with 10:00 on your ESX Server, although four WWPNs starting with 10:00 are connected to the Brocade switch. Because both the VMware ESX Server host and the Linux host in your pod use Emulex FC HBAs, the WWPNs of both hosts start with 10:00, so you need to look at the remaining digits of each WWPN to determine where it connects on the Brocade switch.

Write down (or copy and paste into a new text file) the WWPNs of the FC initiator ports of your VMware ESX Server host:

Port 0 WWPN: __________________________________________________

Port 1 WWPN: __________________________________________________

To which port on the Brocade FC switch do the VMware ESX Server FC HBA ports connect?

ESX FC HBA Port 0 connects to Brocade port _______________________

ESX FC HBA Port 1 connects to Brocade port _______________________


Use PuTTY or establish a Telnet connection to your Linux host prompt and enter the following command to view the FC initiator WWPNs of the FC HBA ports on your host. Replace <#> with the SCSI host number of the HBA port.

$ cat /sys/class/scsi_host/host<#>/port_name
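If you are not sure which host numbers belong to the FC HBAs, the following loop (a sketch; only FC SCSI hosts expose a port_name attribute, and host numbering varies from system to system) prints the WWPN for every SCSI host that has one:

for f in /sys/class/scsi_host/host*/port_name; do
    echo "$f: $(cat $f)"
done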

Write down (or copy and paste into a new text file) the WWPNs of the FC initiator ports of your Linux host:

Port 0 WWPN: __________________________________________________

Port 1 WWPN: __________________________________________________

To which port on the Brocade FC switch do the Linux host FC HBA ports connect?

Linux FC HBA Port 0 connects to Brocade port _______________________

Linux FC HBA Port 1 connects to Brocade port _______________________

END OF EXERCISE


EXERCISE 3: ZONE AN FC SWITCH

OVERVIEW:

In this exercise, you save the current configuration of the FC switch into a file on a server host. Next, you zone the FC switch and save the configuration with zoning enabled. Then, you reload the original configuration from the server host back onto the FC switch.

TIME ESTIMATE

90 minutes

START OF EXERCISE

STEP ACTION

1. Use PuTTY or establish a Telnet connection to log on to the FC switch console in your pod as the admin user. (The Brocade default password for admin is: password.)

Enter the following command to view the current FC zones on the Brocade FC switch:

san201_brocade:admin> cfgshow
Defined configuration:
 no configuration defined
Effective configuration:
 no configuration in effect

Observe that there are no FC zones defined on the switch. Also, there are no configurations defined, and no configurations in effect.

2. You will now save the current configuration of the FC switch to a file on your workstation.

Browse to the IP address of the FC switch in your pod.

http://san<pod#>_brocade


The Brocade SwitchExplorer GUI will appear.

Click the Admin button. You will be prompted to enter a user name and password. Use “admin” and “password.” The Switch Admin dialog box will appear. (Be sure that pop-up windows are not blocked for this site in Microsoft Internet Explorer.)

Click the Configure main tab and select the Upload/Download sub-tab as shown here.


Type the IP address of the Solaris host in your pod, in the Host IP text box.

Enter root in the User Name text box and passwd in the Password text box.

Type /tmp/san<pod#>FCSwitchConfig in the File Name box.


Click Apply.

Click Yes.

Observe the Upload/Download Progress bar and the message log. You should see a message reporting ConfigUpload completed successfully as shown here.


Click Close to close the Switch Admin dialog box.

You can log on to your Solaris host and run ls /tmp/san<pod#>FCSwitchConfig to verify that the FC switch configuration has been uploaded to the host.

You can also have a QUICK look at the configuration text file uploaded.

3. Now you will create FC aliases for the WWPNs of the FC initiator ports of your Solaris, ESX, and Linux hosts and for the FC target ports of your dual storage controller system.

Use PuTTY or establish a Telnet connection to the Brocade FC switch in your pod, using admin/password for the user name and password. Replace the WWPNs between quotation marks in the following commands with the WWPNs of your hosts and storage system that you recorded in Exercise 2.


Create two aliases, each containing two target ports on the storage system:

san201_brocade:admin> aliCreate "STO_FAS3050_cl1", "50:0a:09:81:86:b8:24:a1; 50:0a:09:82:86:b8:24:a1"
san201_brocade:admin> aliCreate "STO_FAS3050_cl2", "50:0a:09:81:96:b8:24:a1; 50:0a:09:82:96:b8:24:a1"

Create an alias for the two Solaris FC initiator ports:

san201_brocade:admin> aliCreate "SRV_SOLARIS1_c1", "21:00:00:e0:8b:86:30:3c; 21:01:00:e0:8b:a6:30:3c"

Create an alias for the two Linux FC initiator ports:

san201_brocade:admin> aliCreate "SRV_LINUX1_c1", "10:00:00:00:c9:58:29:62; 10:00:00:00:c9:58:29:63"

Create an alias for the two VMware ESX Server FC initiator ports:

san201_brocade:admin> aliCreate "SRV_ESX1_c1", "10:00:00:00:c9:43:f2:d6; 10:00:00:00:c9:43:f2:d7"

Display the aliases:

san201_brocade:admin> aliShow

Observe that the aliases you created each contain two WWPNs. When you specify more than one WWPN, separate them with ";". You can also assign just one WWPN to an alias; in that case, simply put the single WWPN between quotation marks.
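For comparison, an alias that contains a single WWPN would look like the following (SRV_SOLARIS1_p0 is an illustrative name only and is not used later in this exercise):

san201_brocade:admin> aliCreate "SRV_SOLARIS1_p0", "21:00:00:e0:8b:86:30:3c"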

4. Now you will create FC zones for each host.

Create an FC zone containing the Solaris host and the storage system:

san201_brocade:admin> zoneCreate "ZNE_SOLARIS1", "SRV_SOLARIS1_c1; STO_FAS3050_cl1; STO_FAS3050_cl2"

Create an FC zone containing the Linux host and the storage system:


san201_brocade:admin> zoneCreate "ZNE_LINUX1", "SRV_LINUX1_c1; STO_FAS3050_cl1; STO_FAS3050_cl2"

Create an FC zone containing the VMware ESX Server host and the storage system:

san201_brocade:admin> zoneCreate "ZNE_ESX1", "SRV_ESX1_c1; STO_FAS3050_cl1; STO_FAS3050_cl2"

Display the FC zones:

san201_brocade:admin> zoneShow

Observe that the storage system is part of all FC zones.

5. You now create a configuration that contains all FC zones.

san201_brocade:admin> cfgCreate "PROD_SWITCH1", "ZNE_SOLARIS1"

san201_brocade:admin> cfgAdd "PROD_SWITCH1", "ZNE_LINUX1"

san201_brocade:admin> cfgAdd "PROD_SWITCH1", "ZNE_ESX1"

Save the configuration:

san201_brocade:admin> cfgSave (answer 'y' to the prompt)

Activate the configuration:

san201_brocade:admin> cfgEnable "PROD_SWITCH1" (answer 'y' to the prompt)

Verify that the configuration is effective (active):

san201_brocade:admin> cfgShow

Save the configuration to the Solaris host. Refer to the first section of this exercise, where you specified upload connection parameters in the Brocade SwitchExplorer GUI. Use the same connection parameters and the FTP transfer option.


san201_brocade:admin> configUpload

Observe that the switch configuration file uploaded to the Solaris host is named config.txt by default and is created in the root directory "/". You can verify this by logging on to your Solaris host and running the ls / command.

6. You will use the Brocade SwitchExplorer GUI to inspect the zoning of the FC switch.

Browse to the IP address of the FC switch in your pod and click the Zone Admin button in the lower-left part of the GUI.

http://san<pod#>_brocade


You will be prompted to authenticate. Enter admin for user and password for password.

7. The Zone Admin screen will appear.

Observe the Alias, Zone, QuickLoop, Fabric Assist, and Config tabs.


Click the Zone tab.

Observe that there are three FC zones available in the Name drop-down list. These are the FC zones you created previously at the Brocade FC switch prompt. As you select each zone, observe the Aliases change in the Zone Members list on the left.


Click the Config tab.

Observe that there is only one configuration available in the Name drop-down list. This is the configuration you created previously at the Brocade FC switch prompt. Observe also that this configuration is reported as the effective configuration, as shown in the Effective Config field in the upper-right portion of the screen.

In this exercise you created FC aliases, zones, and a configuration by using the Brocade CLI. You could also use the Brocade SwitchExplorer GUI to manage the FC switch, including FC zones and configurations.

8. Now you will reload the original configuration of the FC switch that you saved at the beginning of this exercise, effectively returning the switch to the state it was in before you created FC zones. First, use PuTTY or establish a Telnet connection to your FC switch prompt and enter the following command to disable the switch. The switch must be disabled while a new configuration is being downloaded into it.


san201_brocade:admin> switchDisable

Next, browse back to the IP address of the FC switch in your pod:

http://san<pod#>_brocade

You should be back in the Brocade SwitchExplorer GUI.

Click the Admin button. You will be prompted to enter a user name and password. Use “admin” and “password”. The Switch Admin dialog box will appear. (Be sure that pop-up windows are not blocked for this site in Internet Explorer).


Click the Configure main tab and select the Upload/Download sub-tab as shown here.

Select the Config Download to Switch radio button. This radio button would not be available if the FC switch was not disabled.


Type the IP address of the Solaris host in your pod, in the Host IP text box.

Enter root in the User Name text box and passwd in the Password text box.

Type /tmp/san<pod#>FCSwitchConfig in the File Name box.

Click Apply.


Make sure that you are downloading the configuration file of the switch in YOUR pod (double-check the pod number in the file name) and confirm the prompt below.

Observe the Upload/Download Progress bar while the download is in progress.


You should see that the download completed successfully in the status bar as shown here.

Now check whether the FC zone configuration was cleared:

san201_brocade:admin> cfgShow

You should still see all the zones you created above, even though you downloaded the initial (empty) configuration from the host to the FC switch. A configuration download is cumulative: nothing is removed from the switch, and the FC zones and configurations in the downloaded file are simply added to the ones already on the switch. Because the initial configuration contained no FC zones, nothing was added either.

9. If you wanted to remove the existing FC zones on the FC switch to have a clean-sweep download instead of a cumulative download, you would need to complete the following steps BEFORE the download:


san201_brocade:admin> cfgDisable (answer 'y' to the prompt)
san201_brocade:admin> cfgClear (answer 'y' to the prompt)
san201_brocade:admin> cfgSave (answer 'y' to the prompt)

Next, re-enable the switch:

san201_brocade:admin> switchEnable

Verify that the switch has been re-enabled and that zoning is back to OFF:

san201_brocade:admin> switchShow

END OF EXERCISE


FC Linux


MODULE 5: FC LINUX

Exercise

Module 5: FC Linux

Estimated Time: 20 minutes

EXERCISE 4: VERIFY LINUX HOST COMPATIBILITY WITH THE NETAPP SAN SUPPORT MATRIX

OVERVIEW:

You will log in to your systems and check the version of your OS, whether any HBA drivers are currently installed, and the version of those drivers. If they are not the correct version, you will update them, either by downloading them from the Web or by installing them from the <class_files> location. You will also confirm that the correct multipathing RPMs are installed and, if they are not, update the files as needed.

OBJECTIVES

By the end of this exercise, you should be able to:

• Discover and document the host OS, HBA, HBA driver, and firmware versions on the host

• Discover the server platform type

• Read and interpret the compatibility matrix to confirm the correct setup


TIME ESTIMATE

10 minutes

START OF EXERCISE

STEP ACTION

1. SSH into your group’s host using PuTTY or some similar utility.

• Log in as root (password provided by instructor)

2. Check and document the version of the OS.

• Discover what type of Linux is installed:

uname -a

What is the kernel build number of the host? ____________________________________________________________

• Discover the release of Linux:

cat /etc/redhat-release

What is the OS version of the host? ____________________________________________________________

3. Check if FC HBAs are present:

lspci | grep -i Fibre
or
lspci -vv
or
dmesg | grep -i lpfc*
or
dmesg | grep -i qla*

lspci: Lists information about devices connected to the PCI system bus.

dmesg: Displays kernel boot and driver messages, which show whether an FC HBA driver has been loaded. (A short sketch of commands for checking driver and firmware versions follows the questions below.)

• Are there FC HBAs installed? _____________________________________

• What brand of FC HBA is installed? ___________________________________________________________
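The exercise objectives also call for documenting the HBA driver and firmware versions. The following commands usually work for Emulex lpfc HBAs (a sketch; the sysfs attribute names can vary with the driver release):

modinfo lpfc | grep -i version         # driver version from the module metadata
cat /sys/class/scsi_host/host*/fwrev   # firmware revision reported by the lpfc driver, one line per HBA port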

4. Browse to the NetApp SAN Support Matrix available at http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/NetAppSANSupport_August2007RevA.pdf#page=72 and look at line item 31 in the Red Hat Linux section.

• Is the configuration that you have documented so far compatible with the support matrix? _________________________________________________________


• Does the current support matrix allow SnapDrive for UNIX with your configuration? _____________________________________________________________

• Does your configuration support the Linux Logical Volume Manager? _____________________________________________________________

END OF EXERCISE


EXERCISE 5: INSTALL NETAPP HOST UTILITIES KIT FOR RED HAT ENTERPRISE LINUX

OVERVIEW:

In this exercise, you will install the correct NetApp Host Utilities for your environment. You will then remove any previously installed HBA drivers and install the supported driver and utilities for the HBAs that are installed. Once they are installed, you will set the HBA and driver parameters; this includes unloading the driver and updating the modprobe.conf file. Finally, you will record the worldwide port names (WWPNs) for future reference.

OBJECTIVES:

At the end of this exercise, you should be able to:

• Discover and document the host OS, HBA, HBA driver, and firmware versions on the host

• Discover the server platform type

• Read and interpret the compatibility matrix to confirm correct setup

TIME ESTIMATE:

20 minutes

START OF EXERCISE

STEP ACTION

1. Confirm that there is no previous version of the Host Utilities installed (default location /opt/sanlun/bin):

cd /opt/sanlun/bin
ls

If this directory does not exist, move on to Step 2. If it does exist, use the following command to remove the previous installation:

./uninstall

2. The NetApp Host Utilities are available for download from the NOW site at http://now.netapp.com/NOW/download/software/sanhost_linux/Linux/.

For this exercise, the NetApp Host Utilities have been provided for you in the <class_files> location. Replace the <class_files> string with the exact location specified by your instructor.


• Decompress and extract the Host Utilities file (located in <class_files>):

cp <class_files>/netapp_linux_host_utils_3_0.tar.gz /tmp
cd /tmp
gunzip netapp_linux_host_utils_3_0.tar.gz
tar -xvf netapp_linux_host_utils_3_0.tar

• The files are extracted to the netapp_linux_host_utils_3_0 subdirectory of your current working directory.

3. Run the install script to install the Host Utilities:

cd netapp_linux_host_utils_3_0
./install

• The diagnostic scripts are installed in the /opt/netapp/santools directory.

4. Check for previously installed FC HBA drivers. If previous FC HBA drivers are not found, move to Step 5.

First, verify whether the LPFC driver is loaded in the kernel. Run:

modprobe -c | grep lpf

If it is, unload it using:

modprobe -r lpfc

Next, verify whether the LPFC driver package is installed. Run:

rpm -qa | grep lpf

NOTE: The drivers may be installed by the OS, but the full utilities may not be available. For that reason, it is suggested that you reinstall the full driver suite.

• For Emulex drivers, change to the directory where the driver installer files are located (see Steps 5 and 6) and run the ./lpfc-install --uninstall command to remove the Emulex driver.

5. Decompress and extract the Emulex FC HBA driver compressed archive file:

cp -R <class_files>/Emulex /tmp
cd /tmp/Emulex
gunzip lpfc_2.6_driver_kit-8.0.16.27-1.tar.gz
tar -xvf lpfc_2.6_driver_kit-8.0.16.27-1.tar

• The files are extracted to the lpfc_2.6_driver_kit-8.0.16.27-1 directory within the working directory.


6. Move to the driver installer directory (/tmp/Emulex/lpfc_2.6_driver_kit-8.0.16.27-1). It is always a best practice to have a quick look at the README file before installing a driver. Next, run the driver setup script:

cd lpfc_2.6_driver_kit-8.0.16.27-1
./lpfc-install

• No options are needed.

• The installation procedure will take some time to complete. The installation steps involved are:

• Installing the lpfcdriver_2.6 driver package for Emulex FC HBA

• Building the LPFC driver: this implies rebuilding the driver in kernel space and installing the driver as a dynamically loadable kernel module

• Updating the ramdisk so that the LPFC driver is loaded into the kernel at boot time; observe that the installation program saves the current ramdisk image using a file name ending with the .elx extension

• Updating the modprobe.conf configuration file with the parameters required by the LPFC driver; observe that the installation program saves the current modprobe.conf file using a file name ending with the .elx extension

7. Verify that the LPFC driver was successfully built and installed as a kernel driver module:

ls /lib/modules/<kernel_build_number>/kernel/drivers/scsi/lpfc

<kernel_build_number>: Recall from the "Host Configuration Check" exercise that you can find the kernel build number by using the uname -a command.

8. Reboot the Linux host to allow the LPFC driver to be loaded in the kernel upon reboot.

reboot

9. Verify that the LPFC FC HBA driver was successfully loaded in the Linux kernel:

modprobe -c | grep lpfc


If the module is not already loaded, load it using modprobe:

modprobe -v lpfc

10. This step is informational only. You can read through it, but do not run the commands shown.

Device mapper multipathing (DM-MP) is used in these lab exercises. No special LPFC driver settings are required for supported Emulex FC HBAs with dm-multipath. However, if dm-multipath were not used, the following steps would need to be completed:

Unload the LPFC driver.

modprobe -r lpfc

Edit the /etc/modprobe.conf configuration file and add the following parameters:

options lpfc lpfc_nodev_tmo=180

Reload the LPFC driver module in the kernel.

modprobe -v lpfc

Update the ramdisk image with the new LPFC parameter.

/usr/src/lpfc/lpfc-install --createramdisk

Reboot the Linux host to boot from the updated ramdisk image.

reboot

11. You will install the Emulex HBAnyware utility now. First change directory to the Emulex driver and utilities class files directory and extract the Emulex Linux Applications tar file.

cd /tmp/Emulex
tar -xvf elxlinuxapps-3.0a14-8.0.16.27-1-1.tar

The files will be extracted in the “ElxLinuxApps-3.0a14-8.0.16.27-1” directory.

Next, install the Emulex Linux Applications.

cd ElxLinuxApps-3.0a14-8.0.16.27-1
./install

When prompted, select "Local Mode" for the mode of operation of HBAnyware.

When prompted, type “y” (yes) to allow the user to change the operation mode of HBAnyware using the set_operating_mode script.

12. Record the WWPN for each port that is listed. You can use either:

cat /sys/class/scsi_host/host<#>/port_name     (Note: Replace <#> with the port number)

or

/usr/sbin/lpfc/lputil

or

/usr/sbin/hbanyware/hbacmd listwwpns

or

sanlun fcp show adapter

WWPN Port 0: ______________________________________________________
WWNN Port 0: ______________________________________________________

WWPN Port 1: ______________________________________________________
WWNN Port 1: ______________________________________________________

END OF EXERCISE


EXERCISE 6: CONFIGURE FCP SERVICE AND DISCOVER LUN ON LINUX HOST USING FCP

OVERVIEW:

In this exercise, you will complete the HBA setup by configuring your system for dm-multipathing. Once that is complete, you will create a file system on a LUN and mount it. Then you will create another LUN, assign it to an igroup, and access it as a raw device.

OBJECTIVES:

By the end of this exercise, you should be able to:

• Configure the host for multipathing
• Mount and access LUNs
• Create a file system on a LUN
• Access a LUN as a raw device

TIME ESTIMATE:

45 minutes

START OF EXERCISE

STEP ACTION

1. To configure dm-Multipathing:

• Create a backup copy of the multipath.conf file located in the /etc directory:

cp /etc/multipath.conf /etc/multipath.conf.old

• Open the multipath.conf file with vi:

vi /etc/multipath.conf

Activate insert mode by pressing the Insert key.

• Comment out the first devnode_blacklist section (to comment out a line, add the '#' character at the beginning of that line):

# devnode_blacklist {
#     devnode "*"
# }

• In the second devnode_blacklist section, remove all comment characters (the # signs from the devnode lines down to the closing } sign) to activate the section.


• At the end of the blacklist, add the local SCSI devices to be excluded:

devnode "sd[a]$"

The $ sign anchors the pattern so that only /dev/sda is excluded; without it, paths such as /dev/sdab would also be excluded when a high number of LUNs is in use. (A consolidated sketch of the edited blacklist appears at the end of this step.)

• Edit the device-specific section at the end of the file. You may leave the current devices section commented out, or remove it altogether. Copy and paste the section that looks like the one below from the <class_files>/multipath.devs file into the /etc/multipath.conf file:

devices {
    device {
        vendor                  "NETAPP "
        product                 "LUN"
        path_grouping_policy    group_by_prio
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            "/opt/netapp/santools/mpath_prio_ontap /dev/%n"
        features                "1 queue_if_no_path"
        path_checker            readsector0
        failback                immediate
    }
}

• Save the changes to the file and close it. Press the Esc key (to exit insert mode), then type :w (to write the file) and :q (to quit the editor).

• Add the multipath service so that it starts automatically after reboot:

chkconfig --add multipathd
chkconfig multipathd on

• Verify at which runlevels multipathd will be loaded during the boot procedure.


chkconfig --list | grep multipathd

• Reload the Emulex driver:

/sbin/modprobe -v lpfc

• Start the Device Mapper Multipath (DM-MP) daemon (multipathd) manually:

/etc/init.d/multipathd start
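After these edits, the active blacklist section of /etc/multipath.conf should look roughly like the sketch below. The exact devnode regular expressions depend on the Red Hat release and the sample file it ships, so treat everything except the sd[a]$ line as illustrative:

devnode_blacklist {
    # exclude pseudo and non-SAN devices (illustrative patterns; check your distribution's sample file)
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    # exclude only the local boot disk; the trailing $ keeps /dev/sdab and later SAN devices visible
    devnode "sd[a]$"
}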

2. To view a list of available LUNs:

• Rescan the Emulex HBAs to confirm that all mapped LUNs are discovered:

/usr/sbin/lpfc/lun_scan all

• Use the sanlun command in the /opt/netapp/santools directory:

sanlun lun show all

• This will display all LUNs that have been discovered by the HBAs. You may see multiple paths to the same LUN depending on the paths available.

• Note the four /dev/sdX labels for each path to the fslun LUN that was discovered:

fslun/host1   /dev/sd___   /dev/sd___   /dev/sd___   /dev/sd___
fslun/host2   /dev/sd___   /dev/sd___   /dev/sd___   /dev/sd___

It seems that there are eight paths to each LUN. Why?

Hint: Run “rsh <storage_ctrler_ip> fcp config” on the target storage controller.

3. View the dm-Multipath configuration and mapped devices.

Using the multipath command, you can see the dm-Multipath configuration:

multipath -v3 -d -ll | more

o -d runs the command in dry-run mode so nothing is updated

o -v provides detailed information

Example output:

[root@kc105b1 ~]# multipath -v3 -d -ll
load path identifiers cache
ux_socket_connect error
#
# all paths in cache :


#
dm-0 blacklisted
dm-1 blacklisted
md0 blacklisted
ram0 blacklisted
ram10 blacklisted
ram11 blacklisted
ram9 blacklisted
sda blacklisted
#
# all paths :
#

• Take a look at the device mapper device directory and observe the DM-MP devices currently available:

ls -l /dev/mapper

Example output:

[root@san102rh ~]# ls -l /dev/mapper
total 0
crw------- 1 root root 10, 63 Jul 31 19:34 control

• Refresh the DM-MP devices currently configured on the host:

multipath

• Take another look at the device mapper device directory. New devices should appear for the three LUNs you discovered earlier. Observe that these devices are named mpath#, where # indicates the order in which the devices were created:

ls /dev/mapper

Example output:

[root@san102rh ~]# ls -l /dev/mapper

total 0

crw------- 1 root root 10, 63 Jul 31 19:34 control

brw-rw---- 1 root disk 253, 2 Jul 31 19:34 mpath0

brw-rw---- 1 root disk 253, 1 Jul 31 19:34 mpath1

brw-rw---- 1 root disk 253, 0 Jul 31 19:34 mpath2

• View a list of devices that are mapped:

multipath -d -l


Example output:

[root@kc106b9-e0 ~]# multipath -d -l
mpath0 (360a98000433461504e342d4244645735)
[size=500 MB][features="1 queue_if_no_path"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 1:0:1:0 sdb 8:16 [active][ready]
 \_ 1:0:2:0 sdc 8:32 [active][ready]
\_ round-robin 0 [enabled]
 \_ 1:0:3:0 sdd 8:48 [active][ready]
 \_ 1:0:4:0 sde 8:64 [active][ready]

NOTE: The /dev/mapper devices are persistent across reboots, but the /dev/sdX devices are not. Now, for each LUN, you need to correlate the /dev/sdX devices with the single mpath# DM-MP device.
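One quick way to do the correlation is to search the multipath map output for a specific sdX device and print the lines just above the match, which include the mpath name (a sketch; adjust the device name and the number of context lines as needed):

multipath -d -l | grep -B 6 " sdc "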

• What is the mpath # for the fslun? ________________________________________________________

4. Create a file system on the fslun and mount the device.

• Use the df command to view the currently mounted devices:

df

• Use the mkfs command to set up the file system:

mkfs -t ext3 /dev/mapper/mpathX     (X = the mpath number of the fslun)

• General command form: mkfs -t type /dev/mapper/device

• type is the file system type, for example ext2 or ext3.

• device is the multipath device name of the LUN in the /dev/mapper directory (for example, mpath0).

• Create a mountpoint in the /mnt directory:

mkdir /mnt/fslun

• Modify the /etc/fstab file to map the mountpoint to the LUN. NOTE: Each field of the entry should be separated by a tab, and the last two zeros should be separated by a space.

Entry format: device mount_point type defaults 0 0

Example entry:

/dev/mapper/(mpath for fslun)   /mnt/fslun   ext3   defaults 0 0


• Mount the LUN:

mount /mnt/fslun

• Verify that the mountpoint has been created by using the df command to view the mounted devices:

df

5. Create a directory tree, and files within the tree.

• Create a file in fslun to test write access to the LUN:

cd /mnt/fslun
touch test.txt
echo "NetApp rocks" > truth.txt

• Confirm that the files exist:

ls /mnt/fslun

6. A LUN can be accessed as a raw device by using the /dev/mapper device that Linux created. The raw device can then be presented directly to an application.

• Once the LUNs on the storage controller are mapped to the host system, and the HBAs are refreshed, the raw device will show up as another device in the /dev/mapper location.
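For example, to verify raw read access to such a device without creating a file system on it, you could read a little data from it with dd (illustrative only; replace mpathX with the mpath device that corresponds to your raw LUN):

dd if=/dev/mapper/mpathX of=/dev/null bs=4k count=256    # read 1 MB from the raw multipath device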

7. Use the Linux Volume Manager to create a volume group from the two raw disk devices that are available (rawlun and rawlun2). You will then create a logical volume with an ext3 file system, and prove it is accessible by writing directories to it.

Discover the device names assigned to rawlun and rawlun2:

sanlun lun show all | grep rawlun

Match the device labels from the previous command to the corresponding multipath devices:

multipath -d -l

For example: mpath0, mpath1

What is the mpath # of rawlun? ____________________________________

What is the mpath # of rawlun2? ___________________________________


• Activate the Linux Volume Manager (LVM) prompt to create logical volumes:

lvm

• Create two LVM physical volumes for rawlun and rawlun2:

pvcreate /dev/mapper/(mpath of rawlun) /dev/mapper/(mpath of rawlun2)

• List the physical volumes available:

pvs

• Create a volume group named lvmvg, provisioned by the physical volumes created above:

vgcreate lvmvg /dev/mapper/(mpath of rawlun) /dev/mapper/(mpath of rawlun2)

• List the available volume groups:

vgs

• Create a logical volume named datalv, provisioned by the volume group created above. Observe that although we aggregated two 500-MB LUNs into the lvmvg volume group, we use only 700 MB of the 1000 MB available. The remaining 300 MB can later be used to expand the logical volume, or to create a new logical volume provisioned by the same volume group (a sketch of such an expansion follows this step).

lvcreate -L700 -ndatalv lvmvg

• List the available logical volumes:

lvs

• Exit the LVM utility program:

exit

• List the contents of the device mapper device directory and observe that the lvmvg-datalv volume you just created now shows up as a device-mapper device:

ls /dev/mapper
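As an aside (not part of this exercise), the leftover 300 MB could later be used to grow datalv. A minimal sketch, assuming the resize tools shipped with your Red Hat release support growing ext3:

lvm lvextend -L +300M /dev/lvmvg/datalv    # grow the logical volume into the free space
resize2fs /dev/mapper/lvmvg-datalv         # then grow the ext3 file system to match
                                           # (older RHEL 4 releases use ext2online instead of resize2fs)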


• Create a file system on the new logical volume:

mkfs -t ext3 /dev/mapper/lvmvg-datalv

• Mount the new logical volume by first editing the /etc/fstab file.

Entry format: device mount_point type defaults 0 0

Example entry:

/dev/mapper/lvmvg-datalv   /mnt/lvmvol   ext3   defaults 0 0

• Create a mountpoint:

mkdir /mnt/lvmvol

• Mount the logical volume:

mount /dev/mapper/lvmvg-datalv /mnt/lvmvol

• Verify that the mountpoint has been created:

df

END OF EXERCISE


EXERCISE 7: LUN CLONING AND CLEANUP

OVERVIEW:

In this exercise, you will run a script that creates a clone of the fslun that was mounted and written to in the previous exercises. You will then make changes to files on the clone and compare them to the files on the source to verify that the clone is independent of the source. After verifying the differences, you will run a script to destroy the cloned LUN and then flush the multipath maps on the host to clean up.

OBJECTIVES:

At the end of this exercise, you should be able to:

• Connect to and mount a LUN clone
• Verify that modifications made to the clone have no effect on the source LUN
• Clean up the host after the LUNs have been destroyed on the storage system

TIME ESTIMATE:

20 minutes

START OF EXERCISE

STEP ACTION

1. Verify that files exist on the fslun mount

cd /mnt/fslun
ls -a

• If files do not exist in the data directory, or the directories do not exist, create a directory structure and create files to verify correct clone creation

2. Run the script to create a clone of the fslun on the storage system:

<class_files>/basiclunclone.sh <storage_ctrler_ip>

• Normally you would confirm that the data has been quiesced and that the file system has been unmounted to guarantee a consistent snapshot before running this script.

• Feel free to have a look at the basiclunclone.sh script to see the commands run on the NetApp storage controller to clone the fslun.

• If you get errors when running the script, you may need to run the “dos2unix basiclunclone.sh” command to remove ^M characters at the end of each line.
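For orientation, a clone script like this typically issues Data ONTAP commands along the following lines (a hypothetical sketch; the volume name, LUN paths, snapshot name, and igroup name are illustrative and are not the actual contents of basiclunclone.sh):

rsh <storage_ctrler_ip> snap create sanvol clone_base
rsh <storage_ctrler_ip> lun clone create /vol/sanvol/fslun-clone -b /vol/sanvol/fslun clone_base
rsh <storage_ctrler_ip> lun map /vol/sanvol/fslun-clone linux_igroup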


3. Rescan the HBAs, verify that the LUN clone was discovered, and view the paths used by DM-mapper

• Rescan the FC HBAs to confirm that all mapped LUNs are discovered.
/usr/sbin/lpfc/lun_scan all

• Use the sanlun command to verify that the LUN was discovered.
sanlun lun show | grep fslun
What are the sdX mappings that are tied to the cloned LUN? (list 2)
_________________________________________________________

• View a list of devices that are claimed by DM-Multipath.
multipath -d -l
What is the mpath # that is tied to the newly cloned LUN (fslun-clone)?
_________________________________________________________

4. Create a mountpoint and mount the new LUN clone. Edit files and use the diff command to verify that the changes are localized. There should be no changes on the source LUN after you modify the file on the cloned LUN.

• Create a mountpoint mkdir /mnt/fslun-clone

• Mount the clone and edit a file created in a previous lab (for example, truth.txt).
mount /dev/mapper/(cloned lun mpath#) /mnt/fslun-clone

• cd /mnt/fslun-clone

• echo "NetApp storage is the best" > truth.txt

• Use the diff command to verify that the changes occurred only in the cloned LUN and had no effect on the source LUN.
diff /mnt/fslun/truth.txt /mnt/fslun-clone/truth.txt

5. Unmount the cloned LUN and flush the mappings within the DM-Multipath configuration

• Unmount the clone.
cd /

umount /mnt/fslun-clone

• Flush the unused device mappings (linked to unmounted volumes)


multipath -F

• Verify that the device mappings were removed for fslun-clone.
multipath -d -l

ls /dev/mapper

• Note: If you run "multipath" with no arguments at this point, the device mappings will be re-created for LUNs that are discovered (even if these LUNs are unmounted). Thus, "multipath" and "multipath -F" toggle between "device mappings for fslun-clone there" and "device mappings for fslun-clone not there."
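
If you want to observe this toggling for yourself, the following short sequence (using only commands already used in this exercise) re-creates the mapping and then flushes it again:

multipath          # re-creates device mappings for all discovered LUNs, including fslun-clone
ls /dev/mapper     # the fslun-clone mapping is back
multipath -F       # flushes the unused mappings again
ls /dev/mapper     # the fslun-clone mapping is gone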

6. Run the script to destroy the cloned LUN on the storage system, then rescan the HBAs, and verify that the mapping for the cloned LUN has been removed

• Execute the script to destroy the cloned LUN <class_files>/lundestroy.sh <storage_ctrler_ip>

o If you get errors when running the script, you may need to run the "dos2unix lundestroy.sh" command to remove ^M characters at the end of each line.

• Inspect the LUNs currently available on the host and observe that an "<Unknown>" LUN shows up. This is fslun-clone. It shows up as "<Unknown>" because the LUN does NOT exist anymore on the storage system. However, the device files are still there on the Linux host.
sanlun lun show all

Note the four /dev/sdX labels for each path to the “<Unknown>” LUN

<unknown>/host1 /dev/sd___ /dev/sd___ /dev/sd___ /dev/sd___ <unknown>/host2 /dev/sd___ /dev/sd___ /dev/sd___ /dev/sd___

• Clean up the dangling Linux device files on the host. Replace <X> in the command below with the device identifiers noted above for the "<unknown>" LUN. Important: ENSURE that you only REMOVE the <UNKNOWN> DEVICES.
rm /dev/sd<X>

• Use the sanlun command to confirm that the LUN clone (fslun-clone) is no longer available on the host. The "<Unknown>" LUN should be gone.
sanlun lun show all


• Use the multipath command to confirm that the devices have been removed from the mapper configuration.
multipath -v3

END OF EXERCISE


MODULE 6: FC AND IP SOLARIS

Exercise

Module 6: FC and IP Solaris

Estimated Time: 3 hours

EXERCISE 8: VERIFY SOLARIS HOST COMPATIBILITY WITH THE NETAPP SAN SUPPORT MATRIX

OVERVIEW:

In this exercise, you will verify that the Solaris host is compatible with the NetApp SAN Support Matrix – Solaris for FCP and iSCSI. The NetApp SAN Support Matrix is available as a PDF on http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config and is also available as a database application at http://now.netapp.com/matrix/mtx/login.do.

OBJECTIVES:

By the end of this exercise, you should be able to:
• Interpret a particular line in the NetApp SAN Support Matrix - Solaris
• Verify whether or not the Solaris host complies with the NetApp SAN Support Matrix (FCP) - Solaris
• Verify whether or not the Solaris host complies with the NetApp SAN Support Matrix (iSCSI) - Solaris

TIME ESTIMATE:

20 minutes


START OF EXERCISE

TASK 1: INTERPRET A PARTICULAR LINE IN THE NETAPP SAN SUPPORT MATRIX - SOLARIS

STEP ACTION

1. Open the NetApp FC SAN Support Matrix – Solaris available at:

http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config

Each line item in the NetApp FC SAN Support Matrix – Solaris represents a supported configuration that combines several hardware and software elements on the host and on the storage system. The hardware and software combination on the host and on the storage system was tested and certified by NetApp. Each line item has the following columns:

1) No.: Line item number of a particular supported configuration

2) Protocol: Block storage protocol supported (either FCP or iSCSI)

3) Notes: References to matrix footnotes; these notes should not be overlooked

4) Host Utilities: Supported version of the NetApp Host Utilities Kit for Solaris

5) Host OS: Supported version of the Solaris operating system

6) Server Platform: Supported CPU and hardware architecture of the Solaris host

7) SW Initiator: Supported iSCSI software initiator

8) SW Initiator Version: Supported version of the iSCSI Software initiator

9) Host Bus: Supported host bus type and expansion slot type

10) HBA Model: Supported model of the FC or iSCSI HBA; HBA must comply with the type of host bus (the same HBA models can be available for several host bus types)

11) HBA Driver / FW: Supported driver and firmware of the HBA

12) Volume Manager: Supported host volume manager

13) Multipath: Supported host multipathing software solution


14) File System: Supported host file systems

15) Host Cluster: Supported host cluster manager solutions

16) Solaris Virtual: Supported Solaris virtual servers

17) Data ONTAP: Version of Data ONTAP supported with this host configuration

18) Cfmode: Supported cluster failover modes (applicable to FCP only)

19) NDU: Whether or not a nondisruptive upgrade is supported

20) SAN Boot: Whether or not SAN boot is supported

21) SnapDrive: Whether or not SnapDrive is supported and, if so, which version

2. Keep in mind that the NetApp FC SAN Support Matrix is also available as a searchable database here: http://now.netapp.com/matrix/mtx/login.do

TASK 2: VERIFY WHETHER OR NOT THE SOLARIS HOST COMPLIES WITH THE NETAPP SAN SUPPORT MATRIX (FCP) – SOLARIS

Your Solaris host is running Solaris 10 Update3. All QLogic FC HBA drivers and Solaris FC software stack components are included by default with Sol10_Update3. However, the default driver and firmware may not be the best suited for the particular FC HBA and Solaris host hardware used. It is always a best practice to verify the firmware and the driver version of the FC HBA to make sure it is supported by the NetApp support matrix.

This task shows you how to verify that the packages required for the Solaris FC software stack components are installed on your host.

You will need to complete these steps on the Solaris host.

STEP ACTION

1. Consider the line item 140 in the NetApp SAN Support Matrix – Solaris (July 2007) available at:

http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/NetAppSANSupport_July2007RevB.pdf#page=95

This is a supported FCP configuration. You need to ensure that your Solaris host


complies with this configuration.

1) No.: This is line item number 140.

2) Protocol: This line item shows a supported FCP SAN configuration.

3) Notes: References to matrix footnotes 1, 17, 19, 21, 22, 23, 27, 28, 29, 32, 35. Take a moment to review these footnotes.

2. 4) Host Utilities: Version 4.0 of the Solaris host utilities is supported

You need to download and install the FCP Solaris Host Utilities 4.2 for Native OS from the NOW site (now.netapp.com). Install the santoolkit_solaris_sparc_3.4.tar.Z package. This package has already been copied into the <class_files> directory on your host. Replace the <class_files> string with the exact directory specified by your instructor. You will install the FCP Solaris Host Utilities 4.2 for Native OS in the next lab exercise.

3. 5) Host OS: the OS version supported is Solaris 10 Update3 (32-, 64-bit).

6) Server Platform: the supported CPUs are Sun “UltraSPARC T1.”

You need to ensure that the Solaris server is one of the supported Server platforms.

You also need to make sure that the version of the Solaris operating system installed on the host is the one listed in the Host OS column.

Enter the following commands to view the platform and operating system version of your Solaris host.

# uname -a

SunOS san102sun 5.10 Generic_118833-33 sun4u sparc SUNW,Sun-Fire-V215

# cat /etc/release

Solaris 10 11/06 s10s_u3wos_10 SPARC

Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.

Use is subject to license terms.

Assembled 14 November 2006

The output items in bold indicate that the Solaris host named san102sun is a


Sun SPARC system running Solaris 10 Update3.

Here are some other Solaris commands that you can use to find this information: prtdiag, prtconf.

OPTIONAL: If specific Solaris operating system patches were required, you could also enter the following command to view and maintain patches installed on the host.

# patchadd -p | more

Patch: 121430-10 Obsoletes: 121435-04 121437-02 Requires: Incompatibles: Packages: SUNWlur SUNWluu

Patch: 121306-02 Obsoletes: Requires: Incompatibles: Packages: SUNWlur

Patch: 113886-27 Obsoletes: Requires: Incompatibles: Packages: SUNWglrt SUNWglrtu SUNWglsrz SUNWgldp SUNWglsr

Patch: 120235-01 Obsoletes: Requires: 119254-03 Incompatibles: Packages: SUNWluzone

Patch: 121428-03 Obsoletes: Requires: 120235-01 Incompatibles: Packages: SUNWluzone

Patch: 113887-27 Obsoletes: Requires: Incompatibles: Packages: SUNWglrtx SUNWglsrx SUNWgldpx

You can also use the showrev -p command to view patches installed on the Solaris host.

4. 7) SW Initiator: “N/A” indicates that the type of software initiator is irrelevant. Software initiators are only relevant with iSCSI.

8) SW Initiator Version: “N/A” indicates that the version of the Solaris iSCSI software initiator is irrelevant for this configuration. Software initiators are only relevant with iSCSI.

5. 9) Host Bus: “PCI-Express” indicates that the type of the expansion bus is PCI-e.

10) HBA Model: “QLogic QLE2460 and QLE2462” are the FC HBAs that are supported with this configuration.

You can use the prtdiag Solaris command to find out more about the QLogic


FC HBA, the PCI slot it is being installed in, and its status.

# prtdiag | grep qlc

Bustype Mhz Slot/Status Name/Path

pciex 188 +SER-RIGHT/PCI0 SUNW,qlc-pci1077,138 (scsi-f+ QLA24xx

okay /pci@1e,600000/pci@0/pci@8/SUNW,qlc@0

pciex 188 +SER-RIGHT/PCI0 SUNW,qlc-pci1077,138 (scsi-f+ QLA24xx

okay /pci@1e,600000/pci@0/pci@8/SUNW,qlc@0,1

11) HBA Driver / FW: “SAN Foundation Software (SFS) distributed with Sol10 u3 qlc (SunFC QLogic FCA v20060630-2.16 / 4.0.22” indicates that the driver and firmware of the HBA that are supported with this configuration are the ones distributed with Solaris 10 Update3.

qlc (SunFC QLogic FCA v20060630-2.16: shows that this is the “qlc” driver distributed by Sun as Sun/QLogic driver v2.16.

4.0.22: shows that the firmware required is 4.0.22.

You need to verify that the FC HBA model installed complies with the type of host bus supported. The same HBA models can be available for several host bus types. For example, in this case, you need to use QLogic QLE2460 (single port) or QLE2462 (dual port) HBAs. The QLE family of QLogic FC HBAs works with PCI-e buses. In contrast, the QLA family of QLogic FC HBAs works with PCI-X buses. Check the QLogic Web site for more details about their families of FC HBAs. The model of FC HBA should be verified before the installation. Once the FC HBA is installed, you can use the QLogic SANSurfer FC HBA CLI utility program (/usr/sbin/scli) to verify the model of the HBA installed on your host and the status of the FC HBA.

The Sun/QLogic FC HBA driver and utilities are installed by default with Solaris 10 Update3. You can enter the following command to verify that the Sun/QLogic FC HBA driver and utilities are installed on your host:

# pkginfo | grep SUNWqlc

system SUNWqlc Qlogic ISP 2200/2202 Fibre Channel


Device Driver (root)

system SUNWqlcu Qlogic Fibre Channel Adapter Utilities (usr)

If using a PCI-Express QLogic FC HBA on Solaris 10 Update 3, you need to verify that a driver alias exists in /etc/driver_aliases for the qlc driver.

# cat /etc/driver_aliases | grep pci1077

qlc "pci1077,2200"

qlc "pci1077,2300"

qlc "pci1077,2312"

qlc "pci1077,2422"

qlc "pci1077,2432"

qus "pci1077,1016"

Ensure that the line in bold exists in /etc/driver_aliases on your Solaris host. This is a specific problem with QLogic QLE2462 (PCIe) FC HBAs on Solaris 10 Update3. The problem was introduced by Solaris patch 119130-26.
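
If the required alias is missing on your host, it can typically be added with the update_drv command followed by a device scan. This is only a hedged sketch: the alias string "pci1077,2432" below is an illustration and may not match how your HBA is enumerated, so verify the correct value for your HBA (and consult the relevant Sun/QLogic release notes) before changing driver aliases.

# update_drv -a -i '"pci1077,2432"' qlc
# devfsadm -i qlc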

The QLogic SANSurfer FC CLI utility program is NOT installed by default with Solaris 10 Update3. You can download the QLogic SANSurfer FC CLI from the QLogic Web site. The package to download is scli-<version>.SPARC-X86.Solaris.pkg. This package is available in the <class_files> directory on your Solaris host. Ask your instructor about the exact location of the <class_files> directory. Run the following commands to install the QLogic SANSurfer FC CLI utility program on your host:

Copy the installation package into the temporary directory.

# cp <class_files>/QLogic/scli-1.06.16-50.SPARC-X86.Solaris.pkg /tmp
# cd /tmp

Install the package from the temporary directory. Choose "1" for SPARC below, and answer "y" to the confirmation prompt:

# pkgadd -d scli-1.06.16-50.SPARC-X86.Solaris.pkg

The following packages are available:
1 QLScli QLogic SANsurfer FC CLI (HBA Configuration Utility) (sparc) 1.06.16 Build 50
2 QLSclix QLogic SANsurfer FC CLI (HBA Configuration Utility) (x86) 1.06.16 Build 50
Select package(s) you wish to process (or 'all' to process all packages). (default: all) [?,??,q]: 1
…
Installation of <QLScli> was successful.

Enter the following command to view information about the FC HBA installed on your host using the QLogic SANSurfer FC CLI utility program. Choose the menu items shown in bold below:

# scli

Scanning QLogic FC HBA(s) and device(s) ... |

SANsurfer FC HBA CLI

v1.06.16 Build 50

Main Menu

1: Display System Information

2: Display HBA Settings

3: Display HBA Information

4: Display Device List

5: Display LUN List

6: Configure HBA Settings

7: Boot Device

8: HBA Utilities

9: Flash Beacon

10: Diagnostics

11: Statistics


12: Help

13: Quit

Enter Selection: 3

HBA Information - Display Menu

1: HBA Information

2: HBA VPD

NOTE:: 0 to return to Main Menu

Enter Selection: 1

HBA Information - Display Menu

1: Select an HBA Port

2: Select All HBA Ports

3: Return to Previous Menu

NOTE:: 0 to return to Main Menu

Enter Selection: 2

--------------------------------------------------------------------

Host Name : san102sun

HBA Model : QLE2462

Port : 0

OS Instance : 0

Node Name : 20-00-00-E0-8B-93-01-8E

Port Name : 21-00-00-E0-8B-93-01-8E

Port ID : 01-08-00


Serial Number : RFC0646M81537

Driver Version : qla-20070212-2.19

FCode Version : 1.08

Firmware Version : 4.00.27

OptionROM BIOS Version : 1.04

OptionROM FCode Version : 1.08

OptionROM EFI Version : 1.00

OptionROM Firmware Version : 4.00.12

Actual Connection Mode : Point to Point

Actual Data Rate : 2 Gbps

PortType (Topology) : FPort

Device Target Count : 4

HBA Status : Online

Press <Enter> to continue:

--------------------------------------------------------------------

Host Name : san102sun

HBA Model : QLE2462

Port : 1

OS Instance : 1

Node Name : 20-01-00-E0-8B-B3-01-8E

Port Name : 21-01-00-E0-8B-B3-01-8E

Port ID : 01-09-00

Serial Number : RFC0646M81537

Driver Version : qla-20070212-2.19


FCode Version : 1.08

Firmware Version : 4.00.27

OptionROM BIOS Version : 1.04

OptionROM FCode Version : 1.08

OptionROM EFI Version : 1.00

OptionROM Firmware Version : 4.00.12

Actual Connection Mode : Point to Point

Actual Data Rate : 2 Gbps

PortType (Topology) : FPort

Device Target Count : 4

HBA Status : Online

Observe the FC HBA model, port number, driver version, firmware version, and status. At this point, enter "0" followed by "13" to exit back to the Solaris prompt.

The Solaris SAN Foundation Software (SFS) is installed by default with Solaris 10 Update3. With previous versions of Solaris, such as Solaris 9, it needs to be installed separately. The SAN Foundation Software (SFS) is also known as Sun StoreEdge SAN Foundation Software. You can enter the following commands to verify that the components of the Solaris SAN Foundation Software (SFS) are installed on your host:

# pkginfo | grep SUNWfc

system SUNWfchba Sun Fibre Channel Host Bus Adapter Library

system SUNWfchbar Sun Fibre Channel Host Bus Adapter Library (root)

system SUNWfcip Sun FCIP IP/ARP over Fibre Channel Device Driver

system SUNWfcmdb Fibre Channel adb macros and mdb modules


system SUNWfcp Sun FCP SCSI Device Driver

system SUNWfcprt Fibre Channel HBA Port utility

system SUNWfcsm FCSM driver

system SUNWfctl Sun Fibre Channel Transport layer

# pkginfo | grep SUNWcfc

system SUNWcfcl Common Fibre Channel HBA API Library (Usr)

system SUNWcfclr Common Fibre Channel HBA API Library (Root)

The Sun/QLogic FC HBA driver module is loaded by default in Solaris 10 Update3. You can enter the following commands to verify that the Sun/QLogic FC HBA driver module is loaded on your host:

# modinfo -c | grep qlc
Id Loadcnt Module Name State
97 1 qlc LOADED/INSTALLED

If the Sun/QLogic (OEM) FC HBA driver module is not LOADED, you can enter the following command to load it:

# modload /kernel/drv/sparcv9/qlc

If the Sun/QLogic (OEM) FC HBA driver module is not INSTALLED, you can use the add_drv Solaris operating system command to install the driver, or simply re-install/setup the SUNWqlc (QLogic ISP 2200/2202 Fibre Channel Device Driver) package.

To see the name of the FC HBA driver you can run the modinfo command without any arguments. This will show all the drivers installed on the Solaris host along with their name, ID, and address where they were loaded.

The Solaris FC fabric device configuration service is enabled by default in Solaris 10 Update3. You can enter the following commands to verify that the Solaris FC fabric device configuration service is enabled and started on your host:

# svcs | grep fc-fabric


online 21:40:22 svc:/system/device/fc-fabric:default

If the Solaris FC fabric device configuration service is not online, you can start it up using the following command:

# svcadm enable /system/device/fc-fabric

Once the service is enabled, it should start up after each system reboot. If it does not start up, run svcs -vx to look for errors related to Solaris 10 services.

6. 12) Volume Manager: “Sun ZFS” indicates that Sun’s ZFS file system is supported with this configuration. Note that although “Sun SVM" is not listed as a supported volume manager on this configuration line (140), it is listed on configuration lines 79, 83, 90 and 94. We will use Sun SVM in this workshop. For more information about the new Sun ZFS file system (and volume manager) see: http://en.wikipedia.org/wiki/ZFS.

The Solaris Volume Manager is installed by default with Solaris 10 Update3. You can enter the following commands to verify that the Solaris Volume Manager is installed on your host:

# pkginfo | grep SUNWlv

system SUNWlvma Solaris Volume Management APIs

system SUNWlvmg Solaris Volume Management Application

system SUNWlvmr Solaris Volume Management (root)

# pkginfo | grep SUNWmd

system SUNWlvma Solaris Volume Management APIs

system SUNWmdar Solaris Volume Manager Assistant (Root)

system SUNWmdau Solaris Volume Manager Assistant (Usr)

system SUNWmddr SVM RCM Module

system SUNWmdr Solaris Volume Manager, (Root)

system SUNWmdu Solaris Volume Manager, (Usr)

# pkginfo | grep SUNWvol

system SUNWvolr Volume Management, (Root)

system SUNWvolu Volume Management, (Usr)


7. 13) Multipath: “Sun Traffic Manager MPxIO” indicates that this

configuration is supported with Solaris MPxIO native multipathing solution. Solaris Native MPxIO components are built into Solaris 10 Update3.

8. 14) File System: “UFS” indicates that Sun Unix File System (UFS) is supported with this configuration.

Enter the following command to view the default file system on your Solaris host:

# cat /etc/default/fs

LOCAL=ufs

9. 15) Host Cluster: "Sun Cluster 3.1 Update 4 and Oracle 9i, 10g RAC"

indicates that Sun Clusters and Oracle RAC cluster management solutions are supported with this configuration. We do not use any of these cluster management solutions in these lab exercises.

10. 16) Host Virtual: “Containers” indicates that Solaris virtual servers, also known as virtual host containers are supported with this configuration. We do not use virtual host containers in these lab exercises.

11. 17) Data ONTAP: "7.0.5, 7.1.1, 7.2, 7.2.1" shows the versions of Data ONTAP that are supported with this configuration (replace <storage_ctlr> with the name of your storage controller).

Enter one of the following commands to verify the version of Data ONTAP on your storage controllers:

# rsh <storage_ctlr> sysconfig

NetApp Release 7.2.1: Sun Dec 10 01:33:06 PST 2006

OR

# rsh <storage_ctlr> version

12. 18) Cfmode: "SSI" indicates that the single system image cluster failover mode (CFMODE) is the only CFMODE supported with this configuration. Enter the following commands to verify that cluster failover is enabled and to verify which cluster failover mode is being used on the storage system (replace <storage_ctlr> with the name of your storage controller):

# rsh <storage_ctlr> cf status

Cluster enabled, filer-p is up.

# rsh <storage_ctlr> fcp show cfmode

fcp show cfmode: single_image

13. 19) NDU: "Minor" indicates that nondisruptive upgrades are only supported

between minor Data ONTAP releases (for example, from 7.2 to 7.2.1).

14. 20) SAN Boot: “Yes” indicates that booting from a SAN disk device is supported with this configuration.

15. 21) SnapDrive: “SDU 2.2” indicates that this configuration is supported with SnapDrive for UNIX version 2.2

TASK 3: VERIFY WHETHER OR NOT THE SOLARIS HOST COMPLIES WITH THE NETAPP SAN SUPPORT MATRIX (ISCSI) - SOLARIS

STEP ACTION

1. Consider the line item 515 in the NetApp SAN Support Matrix – Solaris (July 2007) available at: http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/NetAppSANSupport_July2007RevB.pdf#page=95

This is a supported iSCSI configuration. You need to ensure that your Solaris host complies with this configuration. Some steps are similar to the ones in Task 2 (FCP). Feel free to skip those steps.

1) No.: This is line item number 515.

2) Protocol: This line item shows a supported iSCSI SAN configuration.

3) Notes: References to matrix footnotes 500, 501 and 503:

500: Software Initiator is supported in a Guest OS on top of VMware ESX Server 3.0X.

501: MPxIO support only for active-active (round-robin) configurations

503: iSNS is not supported with Solaris 10 Update3 software iSCSI initiator due to known issues with Solaris iSCSI initiator.


2. 4) Host Utilities: Version 3.0.1 of the Solaris host utilities is supported.

You need to download and install the iSCSI Solaris Host Utilities 3.0.1 for Native OS from the NetApp on the Web (NOW) site (now.netapp.com). Install the santoolkit_solaris_sparc_3.4.tar.Z package. This package has already been copied into the <class_files> directory on your host. We will install the iSCSI Solaris Host Utilities 3.0.1 for Native OS in the next lab exercise.

3. 5) Host OS: “Solaris 10 Update3” indicates that the OS version supported is Solaris 10 Update3 (32-bit).

6) Server Platform: “SPARC” and “AMD64” indicate that the CPUs supported are Sun SPARC and AMD64 Opteron (x86).

You could run the same commands here as the ones you run to verify the server platform for FCP, previously. There is no need to run the commands again, though.

4. 7) SW Initiator: “Solaris s/w initiator” indicates that this configuration is supported with the Solaris iSCSI software initiator. You need to ensure that the Solaris iSCSI software initiator is installed.

8) SW Initiator Version: “N/A” indicates that the version of the Solaris iSCSI software initiator is irrelevant for this configuration. The Solaris iSCSI software initiator is part of the Solaris 10 Update3 installation. Whichever version of the Solaris iSCSI software initiator ships with Solaris 10 Update3 installation is the supported version.

The Sun iSCSI Device Driver is installed by default with Solaris 10 Update3. You can enter the following command to ensure that the Sun iSCSI Device Driver is installed and to verify the version and installation status of the Sun iSCSI Device Driver:

# pkginfo -l SUNWiscsir

PKGINST: SUNWiscsir

NAME: Sun iSCSI Device Driver (root)

CATEGORY: system

ARCH: sparc

VERSION: 11.10.0,REV=2005.01.04.14.31

BASEDIR: /


VENDOR: Sun Microsystems, Inc.

DESC: Sun iSCSI Device Driver

PSTAMP: bogglidite20060509113726

INSTDATE: Feb 13 2000 18:34

HOTLINE: Please contact your local service provider

STATUS: completely installed

FILES: 19 installed pathnames

13 shared pathnames

13 directories

2 executables

1239 blocks used (approx)

The Sun iSCSI Device Driver module is loaded by default in Solaris 10 Update3. You can enter the following commands to verify that the Sun iSCSI Device Driver module is loaded on your host:

# modinfo -c | grep iscsi

Id Loadcnt Module Name State

81 1 iscsi LOADED/INSTALLED

If the Sun iSCSI Device Driver module is not LOADED, you can enter the following command to load it:

# modload /kernel/drv/sparcv9/iscsi

If the Sun iSCSI Device Driver module is not INSTALLED, you can use the add_drv Solaris operating system command to install the driver, or simply re-install/setup the SUNWiscsir (Sun iSCSI Device Driver (root)) package.

The Sun iSCSI Management Utilities are installed by default with Solaris 10 Update3. You can enter the following command to ensure that the Sun iSCSI Management Utilities are installed and to verify the version and installation status of the Sun iSCSI Management Utilities:


# pkginfo -l SUNWiscsiu

PKGINST: SUNWiscsiu

NAME: Sun iSCSI Management Utilities (usr)

CATEGORY: system

ARCH: sparc

VERSION: 11.10.0,REV=2005.01.04.14.31

BASEDIR: /

VENDOR: Sun Microsystems, Inc.

DESC: Sun iSCSI Management Utilities

PSTAMP: bogglidite20060421153221

INSTDATE: Feb 13 2000 18:34

HOTLINE: Please contact your local service provider

STATUS: completely installed

FILES: 15 installed pathnames

5 shared pathnames

5 directories

5 executables

1005 blocks used (approx)

5. 9) Host Bus: “N/A” indicates that the type of the expansion bus is irrelevant for this configuration.

10) HBA Model: “N/A” indicates that the HBA model is irrelevant for this configuration.

11) HBA Driver / FW: “N/A” indicates that the driver and firmware of the HBA are irrelevant for this configuration.

No HBA needs to be installed for this configuration because we are using a software initiator. Thus, the model, driver, firmware and bus type of the HBA are irrelevant.


6. 12) Volume Manager: “N/A” indicates that the type and version of volume manager are irrelevant for this configuration.

7. 13) Multipath: “MPxIO, IP/MP” indicates that this configuration is supported both with Solaris Native MPxIO and with IP Multipathing (IP/MP) multipathing solutions. Solaris Native MPxIO components are built into Solaris 10 Update3.

8. 14) File System: “Sun UFS” indicates that the Sun Unix File System (UFS) is supported with this configuration

Enter the following command to view the default file system on your Solaris host:

# cat /etc/default/fs

LOCAL=ufs

9. 15) Host Cluster: “No” indicates that host cluster manager solutions are not supported with this configuration.

10. 16) Host Virtual: “No” indicates that Solaris virtual servers are not supported with this configuration.

11. 17) Data ONTAP: "7.0.5, 7.1.1, 7.2, 7.2.1" shows the versions of Data ONTAP that are supported with this configuration (replace <storage_ctlr> with the name of your storage controller):

Enter the following command to verify the version of Data ONTAP on your storage controllers:

# rsh <storage_ctlr> sysconfig

NetApp Release 7.2.1: Sun Dec 10 01:33:06 PST 2006

...

OR

# rsh <storage_ctlr> version

12. 18) Cfmode: “N/A” indicates that the cluster failover mode (CFMODE) supported is irrelevant with this configuration. The CFMODE is only relevant with FCP.

13. 19) NDU: “Minor” indicates that nondisruptive upgrades are only supported between minor Data ONTAP releases (for example, from 7.2 to 7.2.1).


14. 20) SAN Boot: “No” indicates that booting from a SAN disk device is not supported in this configuration.

15. 21) SnapDrive: “SDU 2.2” indicates that this configuration is supported with SnapDrive for UNIX version 2.2

END OF EXERCISE

EXERCISE 9: INSTALL NETAPP HOST UTILITIES KIT (ISCSI AND FCP) FOR SOLARIS FOR NATIVE OS

OVERVIEW:

In this exercise, you will install the NetApp SAN Toolkit (iSCSI and FCP) for Solaris for Native OS. This kit is currently distributed on the NOW site (now.netapp.com) under two product names:

1) FCP Solaris Host Utilities Kit 4.1 for Native OS (santoolkit_solaris_sparc_3.3.tar.Z)

2) iSCSI Solaris™ Host Utilities Kit 3.0.1 for Native OS (santoolkit_solaris_sparc_3.4.tar.Z)

OBJECTIVES:

By the end of this exercise, you should be able to install the latest version of the kit.

TIME ESTIMATE:

20 minutes

START OF EXERCISE

TASK 1: INSTALL THE ISCSI SOLARIS HOST UTILITIES KIT FOR NATIVE OS

STEP ACTION

1. NOTE: This step is shown here for documentation purposes only. The iSCSI Solaris™ Host Utilities Kit 3.0.1 for Native OS package has already been copied into the <class_files> directory on your Solaris host. Replace the <class_files> string with the exact directory specified by your instructor.

In this step, you just need to copy the


santoolkit_solaris_sparc_3.4.tar.Z file from <class_files> to /tmp and proceed to Step 2.

The iSCSI Solaris™ Host Utilities Kit 3.0.1 for Native OS can be downloaded from the NetApp on the Web (NOW) site (now.netapp.com). Save the file to the /tmp directory on your Solaris host. The file to download is either:

Sun SPARC CPU: santoolkit_solaris_sparc_3.4.tar.Z

or

AMD64 (Opteron) CPU: santoolkit_solaris_amd_3.4.tar.Z

To determine the CPU of your Solaris host, run: cat /etc/release or uname -a

For Sun SPARC CPU you should get an output similar to:

# uname -a

SunOS sun220r-fak01 5.10 Generic_118833-33 sun4u sparc SUNW,Ultra-60

# cat /etc/release

Solaris 10 11/06 s10s_u3wos_10 SPARC

Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.

Use is subject to license terms.

Assembled 14 November 2006

For Sun x86 CPU (AMD64, which is also known as Opteron) you should get an output similar to:

# uname -a

SunOS sunx2100-fak03 5.10 Generic_118855-33 i86pc i386 i86pc

# cat /etc/release


Solaris 10 11/06 s10x_u3wos_10 X86

Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.

Use is subject to license terms.

Assembled 14 November 2006

2. Change directory to /tmp.

cd /tmp

3. Enter the following command to uncompress the file:

/usr/local/bin/gunzip santoolkit_solaris_sparc_3.4.tar.Z

4. Enter the following command to extract the package file:

tar -xvf santoolkit_solaris_sparc_3.4.tar

5. Enter the following command to install the NTAPSANTool.pkg package:

pkgadd -d <path>/NTAPSANTool.pkg

<path> is the directory that contains the NTAPSANTool.pkg package file previously extracted from the tar file. If the pkg file is in the directory where pkgadd is run, you can omit the <path>/ part.

6. Follow the prompts to complete the installation of the NTAPSANTool.pkg package. You may get some messages about files being already installed on the system and being used by another package. Choose “y” to install conflicting files. Answer “y” to all prompts.

7. Observe the various components of the iSCSI Solaris™ Initiator Host Utilities Kit 3.0.1 for Native OS.
ls /opt/NTAP/SANToolkit/bin

Please keep in mind that some components of the kit are needed for iSCSI, others are needed for FCP, and some are needed for both iSCSI and FCP.


8. Add the bin directory of the host utilities kit to the system path. PATH=/opt/NTAP/SANToolkit/bin:$PATH

export PATH

9. Enter the following command to verify that the bin directory is in the system path: which sanlun

You should get an output similar to: # which sanlun

/opt/NTAP/SANToolkit/bin/sanlun

OPTIONAL TROUBLESHOOTING STEPS

STEP ACTION

1. Enter the following command to obtain information about your Solaris host: solaris_info

You should get an output similar to: # solaris_info

.........

Solaris system info is in directory /tmp/netapp/ntap_sol_info

Compressed file is /tmp/netapp/ntap_sol_info.tar.Z

Please send this file to Network Appliance for analysis

2. Enter the following command to change directory to the results output directory created by the solaris_info utility: cd /tmp/netapp/ntap_sol_info

3. Observe the various items explored and documented by the solaris_info utility. Run:

ls /tmp/netapp/ntap_sol_info


4. Take a look at the lpfc.pkg file. This file contains the output of the "pkginfo -l lpfc" command run by solaris_info. This command provides installation status and version information for the Emulex FC HBA driver package.

Look at the lpfc.pkg file in /tmp/netapp/ntap_sol_info.

cat /tmp/netapp/ntap_sol_info/lpfc.pkg

You should get an output similar to: # cat /tmp/netapp/ntap_sol_info/lpfc.pkg

ERROR: information for "lpfc" was not found

#

This shows that the output of the pkginfo -l lpfc command run by solaris_info is empty. It means that there is currently no lpfc (Emulex FC HBA driver) package installed on the Solaris host. Is this normal?

END OF EXERCISE


EXERCISE 10: PROVISION VM WITH NETAPP FLEXCLONE

OVERVIEW

In this exercise you will provision two new virtual machines using NetApp® FlexClone® technology:

• First, you will provision a new virtual machine by cloning the VMFS data store hosting the virtual disk of an existing virtual machine.

• Second, you will provision another new virtual machine by cloning the raw device (RDM storage) of another existing virtual machine.

Provisioning new virtual machines by cloning existing ones using VMware technology can be time-consuming and generate a great deal of load on your ESX server and storage device, since data is copied. In this lab you will use FlexClone technology to rapidly provision new datastores and virtual machines.

OBJECTIVES

When you have completed this exercise, you should be able to do the following:

• Clone a VMFS datastore using NetApp FlexClone
• Discover a VM in a NetApp FlexClone clone
• Add a VM from a NetApp FlexClone clone into the ESX inventory
• Split the NetApp FlexClone clone from the original
• Clone a physical-mode RDM using NetApp FlexClone
• Inspect the files of a VM provisioned by RDM


START OF EXERCISE

TASK 1: CREATE FLEXCLONE OF EXISTING VMFS DATASTORE

In this task you will clone an existing VMware data store using NetApp FlexClone technology.

STEP ACTION

1. Use PuTTY (or another Telnet client) to connect to the prompt of the target storage controller in your pod and use the vol clone Data ONTAP command to create a FlexClone clone of your FCP datastore volume:

> vol clone create esx_fcp_vol1_clone -s volume -b esx_fcp_vol1

NOTE: You may need to license FlexClone if it is not licensed yet.

You should get output similar to:

Creation of clone volume ‘esx_fcp_vol1_clone’ has completed. LUN /vol/esx_fcp_vol1_clone/LUN has been taken offline to prevent map conflicts after a copy or move operation.

This shows that the FlexClone esx_fcp_vol1_clone of the volume esx_fcp_vol1 was successfully created and that LUNs hosted by the clone volume were taken offline to avoid mapping conflicts with LUNs hosted by the source esx_fcp_vol1 volume.

2. Use the lun map Data ONTAP command to map the LUN to the esx_fcp_ig igroup:

> lun map /vol/esx_fcp_vol1_clone/LUN esx_fcp_ig

Notice that Data ONTAP automatically assigns LUN id=2 to the LUN in the cloned volume, since there already are two LUNs mapped with id=0


and id=1 to the esx_fcp_ig igroup.

3. Take the LUN online now:

> lun online /vol/esx_fcp_vol1_clone/LUN

4. Use the lun show and the lun show -m Data ONTAP commands to verify that the LUN is online and mapped to the esx_fcp_ig initiator group.

TASK 2: DISCOVER VM IN NETAPP FLEXCLONE AND ADD VM TO ESX

In this task you will discover a virtual machine in a NetApp FlexClone and add that virtual machine to the ESX server inventory. This virtual machine is a clone of an existing virtual machine.

STEP ACTION

1. If you are already logged on to the VIC GUI, skip to the next step.

Open a Remote Desktop Connection to start up the Virtual Infrastructure Client (VIC) GUI on the remote Windows RDP host. The VIC GUI prompts you for a server, user name, and password. At this point you can log in either as Administrator to the VirtualCenter Server software suite, which is installed on the Windows RDP host (the localhost), or as root to the VMware ESX server directly. The VirtualCenter Server software suite features are not needed for the first part of the class, so we will be logging on directly to the ESX server to keep it simpler. Log in as root to the remote VMware ESX server using the host name or IP address supplied by your instructor.

You get the warning shown below. This warns you that changes made to the ESX server directly, in this VIC GUI session, may not be visible to VIC GUI sessions logged in to the VirtualCenter server. This is ok, since there is no other VIC GUI session logged in to the VirtualCenter server at this point.


2. Click the san<pod#>esx (or local host) server in the ESX Inventory tree. Click the Configuration tab. Select Storage Adapters from the Hardware menu.

Select the first vmhbaX port in the LP11000 4-GB Fibre Channel Host Adapter and click the Rescan... hyperlink in the upper-right corner of the screen.


Make sure to select “Scan for New VMFS Volumes.”

Notice that the third FCP LUN appears with LUN id 2.


Repeat the rescan procedure for the second FC HBA vmhbaX port.

NOTE: You may need to run the process twice for each FC port: once to find the LUN and a second time to discover the datastore.

3. To verify that the datastore has been discovered, go to the “Storage (SCSI, SAN, NFS)” heading under Hardware.

The datastore will automatically be renamed to something different than the production datastore (it should be something like snap-00000001-FCVMFS)

If you do not see a cloned datastore with a name like snap-00000001-FCVMFS in the Storage list, you need to ensure that LVM.EnableResignature is set to 1.

Select Advanced Settings in the Software section of the Configuration tab and click LVM. Then scroll down the parameter list and set LVM.EnableResignature to 1. This allows VMFS datastores with the same signature to be “restamped” by VMware ESX with a new volume signature. This is necessary, since you discover the same VMFS data store, with the same VMFS signature on the same host, whenever you discover a LUN in a cloned volume. Hence, you need to allow ESX to restamp data stores with new signatures.
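
If you prefer the ESX service console to the VIC GUI, the same advanced setting can also be changed from the command line. This is a hedged example for ESX 3.x; the GUI procedure above is the one this lab assumes:

> esxcfg-advcfg -s 1 /LVM/EnableResignature
> esxcfg-advcfg -g /LVM/EnableResignature

The second command simply reads the value back so that you can confirm the change.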


You can then select the “Storage (SCSI, SAN, NFS)” heading under Hardware and click Refresh… You should now see the snap-00000001-FCVMFS cloned datastore in the list as shown below.


Question 1: Why does snap-00000001-FCVMFS appear in the “Storage (SCSI, SAN and NFS)” list? What is this exactly?

_________________________________________________________________

4. Perform the following steps to add one of the virtual machines hosted by the cloned VMFS datastore to the ESX server inventory:

Right-click the datastore and rename it to FCVMFSCLONE.

Right-click the FCVMFSCLONE datastore and select Browse Datastore.

Open the W2K3FCVMFS directory, right-click the W2K3FCVMFS.vmx file, and select "Add to inventory."

In the window that opens, you will be asked to name the VM. Name it W2K3FCVMFSCLONE.

For the virtual machine inventory location, choose your data center or local host and click Next.

Review the options and click Finish.


Close the Datastore browser window and notice the new, cloned, VM appears in the ESX inventory tree.

You have now created a VM replica that is running on a zero-space cloned LUN.

5. Connect via SSH to the prompt of your ESX server and use the ls command to inspect the cloned VMFS data store:

> ls /vmfs/volumes

You should see a directory for each of these: FCVMFS, iSCSIVMFS, NFS, and the new cloned FCVMFSCLONE data stores.

Use the ls command again to inspect the contents of the source FCVMFS datastore and its clone FCVMFSCLONE data store:

> ls /vmfs/volumes/FCVMFS /vmfs/volumes/FCVMFSCLONE

For both data stores, you should see a directory for each virtual machine file hosted by that data store. Notice that the virtual machines are the same, since FCVMFSCLONE is a clone of FCVMFS. Keep in mind that you only added one of the VMs hosted by the FCVMFSCLONE to the ESX inventory tree. You could add them all to the ESX inventory, if needed.

TASK 3: SPLIT FLEXCLONE FROM BACKING SNAPSHOT

In this task you will split the FlexClone clone from its backing Snapshot™ copy and remove the backing Snapshot copy.

STEP ACTION

1. Use PuTTY (or another Telnet client) to connect to the prompt of the target storage controller in your pod and use the vol clone split Data ONTAP command to split the FlexClone clone from its parent volume:

> vol clone split start esx_fcp_vol1_clone


NOTE: You may need to license FlexClone if it is not licensed yet.

You should get output similar to:

Clone volume ‘esx_fcp_vol1_clone’ will be split from its parent. Monitor system log or use ‘vol clone split status’ for progress.

This operation could take a few minutes to complete.

2. Use the vol clone split status Data ONTAP command to verify the progress of the split operation:

> vol clone split status

No clone status.

When you get the “No clone status.” output, the clone split operation is complete.

3. Delete the backing Snapshot copy, as it is not needed anymore. The name of the backing Snapshot copy should be similar to clone_esx_fcp_vol1_clo.1. You can use the snap list command to verify the name.

> snap delete esx_fcp_vol1 clone_esx_fcp_vol1_clo.1


TASK 4: CREATE A FLEXCLONE CLONE OF AN EXISTING VM WITH A RAW DEVICE

In this task you will clone an existing raw device that provisions a VMware virtual machine using NetApp FlexClone technology.

STEP ACTION

1. Use PuTTY (or another Telnet client) to connect to the prompt of the target storage controller in your pod and use the vol clone Data ONTAP command to create a FlexClone clone of the NetApp volume hosting the FCP raw device used by the W2K3FCRDM virtual machine (RDM):

> vol clone create esx_fcp_vol2_clone -s volume -b esx_fcp_vol2

NOTE: You may need to license FlexClone if it is not licensed yet.

You should get output similar to:

Creation of clone volume ‘esx_fcp_vol2_clone’ has completed. LUN /vol/esx_fcp_vol2_clone/LUN has been taken offline to prevent map conflicts after a copy or move operation.

This shows that the FlexClone clone esx_fcp_vol2_clone of the volume esx_fcp_vol2 was successfully created and that LUNs hosted by the clone volume were taken offline to avoid mapping conflicts with LUNs hosted by the source esx_fcp_vol2 volume.

2. Use the lun map Data ONTAP command to map the LUN to the esx_fcp_ig igroup:

> lun map /vol/esx_fcp_vol2_clone/LUN esx_fcp_ig

Notice that Data ONTAP automatically assigns LUN id=3 to the LUN in the cloned volume, since there already are three LUNs mapped with id=0, id=1 and id=2 to the esx_fcp_ig igroup.


3. Take the LUN online now:

> lun online /vol/esx_fcp_vol2_clone/LUN

4. Use the lun show and the lun show -m Data ONTAP commands to verify that the LUN is online and mapped to the esx_fcp_ig initiator group.

TASK 5: CREATE VM FROM RAW DEVICE IN NETAPP FLEXCLONE

In this task, you will discover a virtual machine in a raw LUN hosted by a NetApp FlexClone and add that virtual machine to the ESX server inventory. Since the source volume used for the NetApp FlexClone clone contained a raw device provisioning an existing virtual machine, the cloned raw device also contains the data of the existing virtual machine. Thus, we use the VM data in the cloned raw device to create a new virtual machine provisioned by the cloned raw device. The new virtual machine is an exact replica of the existing virtual machine. Keep in mind that as long as the FlexClone clone is not split from its backing Snapshot copy, there is almost no extra space taken on storage for the FlexClone clone: no new space consumed by the cloned raw device and no new space consumed by the cloned virtual machine.
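
If you want to see this from the storage side, two hedged examples of Data ONTAP commands (using the volume name from Task 4) that show how little additional space the clone consumes while it is still backed by the Snapshot copy are:

> df -A
> vol clone split estimate esx_fcp_vol2_clone

The first command reports aggregate space usage, which barely changes when the clone is created; the second estimates how much space a future split would consume, which is roughly the data currently shared with the parent volume.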

STEP ACTION

1. If you are already logged on to the VIC GUI, skip to the next step.

Open a Remote Desktop Connection to start up the Virtual Infrastructure Client (VIC) GUI on the remote Windows RDP host. The VIC GUI prompts you for a server, user name, and password. At this point you can log in either as Administrator to the VirtualCenter Server software suite, which is installed on the Windows RDP host (the local host), or as root to the VMware ESX server directly. The VirtualCenter Server software suite features are not needed for the first part of the class, so we will be logging in directly to the ESX server to keep it simpler. Log in as root to the remote VMware ESX server using the host name or IP address supplied by your instructor.

You get the warning shown below. This warns you that changes made to the ESX server directly, in this VIC GUI session, may not be visible to VIC GUI sessions logged in to the VirtualCenter server. This is ok, since there is no other VIC GUI session logged in to the VirtualCenter server at this point.


2. Click the san<pod#>esx (or local host) server in the ESX Inventory tree. Click the Configuration tab. Select Storage Adapters from the Hardware menu.

Select the first vmhbaX port in the LP11000 4GB Fibre Channel Host Adapter, then click the Rescan... hyperlink in the upper-right corner of the screen.

Clear the “Scan for New VMFS Volumes” check box, since there is no VMFS datastore on the raw LUN that hosts the files of the cloned VM.
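If you prefer the ESX service console to the GUI, the same rescan can typically be triggered from the command line. A sketch only; the vmhba names below are assumptions, so use the names shown for your LP11000 ports:

esxcfg-rescan vmhba1
esxcfg-rescan vmhba2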


Notice that the fourth FCP LUN has LUN ID 3.

Repeat the rescan procedure for the second FC HBA vmhbaX port.

Question 1: Why do you not see a snap-0000000X-FCRDM entry in the “Storage (SCSI, SAN and NFS)” list here, as you did in task 2 above?

________________________________________________________________

3. You should still have the SAN<pod#>esx (or local host) branch selected in the Inventory browsing tree. Click the Summary tab, and then click the New Virtual Machine link in the Commands section.

Select Custom and click Next.

Name the Virtual Machine W2K3FCRDMCLONE. Select Next.

Select FCVMFS to store the configuration file (.vmx) and the RDMP (raw device mapping pointer) file. Click Next.

Keep the default selection of Microsoft Windows as the guest operating system and Microsoft Windows Server 2003, Enterprise Edition, as the version. Click Next.

Keep the default selection of 1 for the Number of Virtual Processors and click Next.

Keep the default memory size of 256 MB for the virtual machine. Click Next.

Keep the defaults on the Choose Networks screen and select Next.

Keep the defaults on the Select I/O Adapter Types screen and select Next.

Select Raw Device Mappings on the Select a Disk screen and click Next.

Select FC LUN 3 (/vmfs/devices/disks/vmhba0:0:3:0) in the “Select and Configure a Raw LUN” screen and click Next.


Question 1: How do you know that vmhba0:0:3 is FC LUN 3?

_________________________________________________________________

Select “Store with Virtual Machine” and click Next.

Select Physical as the compatibility mode and click Next.


NOTE: Although you are cloning from a physical-mode RDM, the new RDM does not have to be physical.

Keep the defaults on the Specify Advanced Options screen and select Next.

Review the parameters and click Finish.


4. Notice that the new Virtual Machine named W2K3FCRDMCLONE shows up in the “Inventory” browsing tree. This virtual machine is provisioned by FC LUN 3. FC LUN 3 is a clone of FC LUN 1. Recall that FC LUN 1 is a raw device that provisions the W2K3FCRDM virtual machine.


Both of these virtual machines (W2K3FCRDM and W2K3FCRDMCLONE) are provisioned by raw LUNs (FC LUN 1 and FC LUN 3), and both store their configuration file (.vmx) and the pointer to their RDM in the same VMFS file system, named FCVMFS. Recall that FCVMFS is also used as the datastore for the W2K3FCVMFS virtual machine.

Optional step: You can use PuTTY to log in to your ESX server and change the directory to /vmfs/volumes/FCVMFS. Then use the ls command to view the virtual machines that use the FCVMFS datastore. You should see the new W2K3FCRDMCLONE VM in the list.
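A minimal sketch of that optional check from the ESX service console. The directory and VM folder names are assumptions based on this exercise, and RDM pointer files normally carry a -rdm.vmdk suffix:

cd /vmfs/volumes/FCVMFS
ls -l
ls -l W2K3FCRDMCLONE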


EXERCISE SUMMARY

You created two new virtual machines provisioned by storage cloned using NetApp FlexClone technology:

1. W2K3FCVMFSCLONE

a. Cloned from W2K3FCVMFS VM

b. Clone split from its backing Snapshot copy

2. W2K3FCRDMCLONE

a. Cloned from W2K3FCRDM VM

b. Clone not split from its backing Snapshot copy (sharing storage: near-zero additional space required for the W2K3FCRDMCLONE VM)

Notice that both new VMs were cloned while the parent VM was shut down. If the parent VM needs to be up during the clone procedure, particularly while the NetApp backing Snapshot copy is being created, the parent VM needs to be quiesced to ensure data consistency. You will learn more about data consistency and about how to quiesce a VM in the next module.

END OF EXERCISE


EXERCISE 11: DISCOVER A LUN ON THE SOLARIS HOST USING FCP

OVERVIEW:

In this exercise, you will see how to discover LUNs accessed with FCP on Solaris in a Solaris MPxIO multipathing environment.

OBJECTIVES:

By the end of this exercise, you should be able to:

• Inspect LUNs and igroups created on the target storage controller

• Enable ALUA on the igroup to which the LUNs are mapped

• Discover the LUNs on the Solaris host

• Observe the LUNs and the multiple paths to them that are managed by MPxIO

• Observe the underlying paths for the devices by using sanlun and luxadm, and learn how to map host-side paths to storage system HBA ports

• Label the LUN and use it

TIME ESTIMATE:

40 minutes

START OF EXERCISE

TASK 1: INSPECT LUNS AND IGROUPS CREATED ON THE TARGET STORAGE CONTROLLER

You will need to complete the following steps on your Solaris host, replacing <storage-ctrl> with the name (or the IP address) of your storage controller.

STEP ACTION

1. Enter the following command to identify the WWPNs of the FC HBA initiator ports qlc0 and qlc1 of your Solaris host:

$ sanlun fcp show adapter

qlc0 210000e08b922bf4

qlc1 210100e08bb22bf4


NOTE: The HBA vendor-supplied utilities and the fcinfo Solaris command can also be used to obtain the WWPNs of the HBAs.
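For example, on Solaris 10 the bundled fcinfo utility prints one "HBA Port WWN" line per initiator port; a minimal sketch:

$ fcinfo hba-port | grep 'HBA Port WWN'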

Now enter the following command to ensure that the FC initiators of the Solaris host can see the target FC ports on the storage controller:

$ rsh <storage-ctrl> fcp show initiator

Initiators connected on adapter 7a:

Portname Group

21:01:00:e0:8b:b2:2b:f4

21:00:00:e0:8b:92:2b:f4

...

21:01:00:e0:8b:ae:fb:7e

21:00:00:e0:8b:8e:fb:7e

21:01:00:e0:8b:a8:ab:76

21:00:00:e0:8b:88:ab:76

Initiators connected on adapter 7b:

Portname Group

21:01:00:e0:8b:b2:2b:f4

21:00:00:e0:8b:92:2b:f4

...

21:01:00:e0:8b:ae:fb:7e

21:00:00:e0:8b:8e:fb:7e

21:01:00:e0:8b:a8:ab:76

21:00:00:e0:8b:88:ab:76


Identify the WWPNs of the FC initiator ports of your Solaris host by looking at the part of each WWPN that varies from host to host.

2. Enter the following command to confirm that the FC initiators of your Solaris host are members of the solaris_fcp_ig igroup:

$ rsh <storage-ctrl> igroup show -v

solaris_fcp_ig (FCP):

OS Type: solaris

Member: 21:00:00:e0:8b:92:2b:f4 (logged in on: 7a,7b,vtic)

Member: 21:01:00:e0:8b:b2:2b:f4 (logged in on: 7a,7b,vtic)

Note that ALUA is not enabled by default for the igroup.

TASK 2: ENABLE ALUA ON THE IGROUP TO WHICH THE LUNS ARE MAPPED

You will need to complete the following steps on your Solaris host, replacing <storage-ctrl> with the name (or IP address) of your storage controller.

STEP ACTION

1. Enter the following command on the target storage controller to enable ALUA for the solaris_fcp_ig igroup.

$ rsh <storage-ctrl> igroup set solaris_fcp_ig alua yes

$ rsh <storage-ctrl> igroup show -v

solaris_fcp_ig (FCP):

OS Type: solaris

Member: 21:00:00:e0:8b:92:2b:f4 (logged in on: 7a,7b,vtic)

Member: 21:01:00:e0:8b:b2:2b:f4 (logged in on: 7a,7b,vtic)

ALUA: Yes

Observe that ALUA support is now enabled for the solaris_fcp_ig igroup.

Enter the following command on the target storage controller to confirm that the LUN(s) are mapped to the solaris_fcp_ig igroup:

$ rsh <storage-ctrl> lun show -m

LUN path Mapped to LUN ID Protocol

------------------------------------------------------------------

/vol/solarisvol1/lunC solaris_fcp_ig 0 FCP

/vol/solarisvol1/lunD solaris_fcp_ig 1 FCP

Enter the following command on the target storage controller to confirm that the cluster failover mode (CFMODE) is single image and clustering is enabled.

$ rsh <storage-ctrl> fcp show cfmode

fcp show cfmode: single_image

$ rsh <storage-ctrl> cf status

Cluster enabled, nau-dev2 is up.


TASK 3: DISCOVER THE LUNS ON THE SOLARIS HOST

You will need to complete the following steps on the Solaris host.

STEP ACTION

1. Enter the cfgadm -l command to see the different attachment points (controllers) on the Solaris host.

$ cfgadm -l

Ap_Id Type Receptacle Occupant Condition

c0 scsi-bus connected configured unknown

c1 scsi-bus connected unconfigured unknown

c2 fc-fabric connected unconfigured unknown

c3 fc-fabric connected unconfigured unknown

usb0/1 unknown empty unconfigured ok

usb0/2 unknown empty unconfigured ok

We see that c2 and c3 are the controllers that connect to the storage controller through the FC fabric (Type fc-fabric).

2. Enter the following command to see the individual attachment points in more detail:

$ cfgadm -al

Ap_Id Type Receptacle Occupant Condition

c0 scsi-bus connected configured unknown

c0::dsk/c0t0d0 disk connected configured unknown

c0::dsk/c0t1d0 disk connected configured unknown

c1 scsi-bus connected unconfigured unknown

c2 fc-fabric connected unconfigured unknown

c2::210100e08bb22bf4 unknown connected unconfigured unknown

...

c2::500a098186a7af35 disk connected unconfigured unknown

c2::500a098196a7af35 disk connected unconfigured unknown

c2::500a098286a7af35 disk connected unconfigured unknown

c2::500a098296a7af35 disk connected unconfigured unknown

c3 fc-fabric connected unconfigured unknown

c3::210000e08b922bf4 unknown connected unconfigured unknown

...

c3::500a098186a7af35 disk connected unconfigured unknown

c3::500a098196a7af35 disk connected unconfigured unknown

c3::500a098286a7af35 disk connected unconfigured unknown

c3::500a098296a7af35 disk connected unconfigured unknown

The WWPNs beginning with 50:0a:09 in this output are the WWPNs of the target FC ports on your storage controllers. Note that the output on your host may also contain the WWPNs of other FC initiator and FC target ports if the FC switch is not zoned. For example, any WWPNs starting with 10:00 are likely FC initiator ports on Emulex FC HBAs in the Linux and ESX Server hosts. You can identify the WWPNs of your storage controllers by executing fcp show adapter on each storage controller. Observe also that the status is shown as "unconfigured" in the output of cfgadm -al.

NetApp University - Do Not Distribute

Page 109: Strsw Ed Ilt San Impwkshp Exerciseguide

E6-51 SAN Implementation Workshop: FC and IP Solaris © 2008 NetApp. This material is intended for training use only. Not authorized for reproduction purposes.

STEP ACTION

$ rsh <storage-ctrl-1> fcp show adapter

Slot: 7a

Description: Fibre Channel Target Adapter 7a (Dual-channel, QLogic 2312 (2352) rev. 2)

Adapter Type: Local

Status: ONLINE

FC Nodename: 50:0a:09:80:86:a7:af:35 (500a098086a7af35)

FC Portname: 50:0a:09:81:96:a7:af:35 (500a098196a7af35)

Standby: No

Slot: 7b

Description: Fibre Channel Target Adapter 7b (Dual-channel, QLogic 2312 (2352) rev. 2)

Adapter Type: Local

Status: ONLINE

FC Nodename: 50:0a:09:80:86:a7:af:35 (500a098086a7af35)

FC Portname: 50:0a:09:82:96:a7:af:35 (500a098296a7af35)

Standby: No

$ rsh <storage-ctrl-2> fcp show adapter

Slot: 7a

Description: Fibre Channel Target Adapter 7a (Dual-channel, QLogic 2312 (2352) rev. 2)

Adapter Type: Local


Status: ONLINE

FC Nodename: 50:0a:09:80:86:a7:af:35 (500a098086a7af35)

FC Portname: 50:0a:09:81:86:a7:af:35 (500a098186a7af35)

Standby: No

Slot: 7b

Description: Fibre Channel Target Adapter 7b (Dual-channel, QLogic 2312 (2352) rev. 2)

Adapter Type: Local

Status: ONLINE

FC Nodename: 50:0a:09:80:86:a7:af:35 (500a098086a7af35)

FC Portname: 50:0a:09:82:86:a7:af:35 (500a098286a7af35)

Standby: No

3. You have just seen that the Solaris host bus controllers connected to the fabric are c2 and c3. You can also verify this by using the sanlun fcp show adapter -v command on the Solaris host.

$ sanlun fcp show adapter -v

adapter name: qlc0

WWPN: 210000e08b922bf4

WWNN: 200000e08b922bf4

driver name: qlc


model: QLE2462

model description: QLogic PCI-Express 4Gb FC, Dual Channel

serial number: Not Available

hardware version: Not Available

driver version: 20070212-2.19

firmware version: 4.0.27

Number of ports: 1 of 2

port type: Fabric

port state: Operational

supported speed: 1 GBit/sec, 2 GBit/sec, 4 GBit/sec

negotiated speed: 4 GBit/sec

OS device name: /dev/cfg/c2

adapter name: qlc1

WWPN: 210100e08bb22bf4

WWNN: 200100e08bb22bf4

driver name: qlc

model: QLE2462

model description: QLogic PCI-Express 4Gb FC, Dual Channel

serial number: Not Available

hardware version: Not Available

driver version: 20070212-2.19

firmware version: 4.0.27

Number of ports: 2 of 2


port type: Fabric

port state: Operational

supported speed: 1 GBit/sec, 2 GBit/sec, 4 GBit/sec

negotiated speed: 4 GBit/sec

OS device name: /dev/cfg/c3

4. Configure the LUNs by using cfgadm -c configure cX, where X is the controller number that you obtained from the previous cfgadm and sanlun fcp commands.

In this example, the controllers are c2 and c3, so execute:

$ cfgadm -c configure c2

$ cfgadm -c configure c3

$ cfgadm -al

Ap_Id Type Receptacle Occupant Condition

c0 scsi-bus connected configured unknown

c0::dsk/c0t0d0 disk connected configured unknown

c0::dsk/c0t1d0 disk connected configured unknown

c1 scsi-bus connected unconfigured unknown

c2 fc-fabric connected configured unknown

c2::210100e08bb22bf4 unknown connected unconfigured unknown

c2::500a098186a7af35 disk connected configured unknown

c2::500a098196a7af35 disk connected configured unknown

c2::500a098286a7af35 disk connected configured unknown

c2::500a098296a7af35 disk connected configured unknown

c3 fc-fabric connected configured unknown

c3::210000e08b922bf4 unknown connected unconfigured unknown


c3::500a098186a7af35 disk connected configured unknown

c3::500a098196a7af35 disk connected configured unknown

c3::500a098286a7af35 disk connected configured unknown

c3::500a098296a7af35 disk connected configured unknown

usb0/1 unknown empty unconfigured ok

usb0/2 unknown empty unconfigured ok

Observe that the disks on controllers c2 and c3 are now “configured.”
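If a host has many fc-fabric attachment points, you can configure them all in one pass instead of naming each controller. A minimal sketch, assuming a Bourne-compatible shell and the standard cfgadm output columns; it is equivalent to running the two cfgadm -c configure commands above by hand:

for ap in `cfgadm -l | awk '$2 == "fc-fabric" {print $1}'`; do
    cfgadm -c configure $ap
done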

5. Execute the sanlun lun show command to see that the LUNs have been discovered on the Solaris host. Only LUNs from your storage controller are discovered because only the initiator groups on your storage controllers contain the WWPNs of the FC initiator ports on your Solaris host.

$ sanlun lun show

filer: lun-pathname device filename adapter protocol lun size lun state

nau-dev1: /vol/solarisvol1/lunC /dev/rdsk/c4t60A98000433461504E342D4A66586252d0s2 qlc1 FCP 3g (3221225472) GOOD

nau-dev1: /vol/solarisvol1/lunD /dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2 qlc1 FCP 10g (10737418240) GOOD

Observe the consolidated device file names assigned by MPxIO to the NetApp LUNs lunC and lunD.

You can also use the format operating system command to ensure that Solaris sees the LUNs.

$ format

Searching for disks...done

AVAILABLE DISK SELECTIONS:

0. c0t0d0 <SEAGATE-ST336706LC-010A cyl 26123 alt 2 hd 4 sec 686>

/pci@1c,600000/scsi@2/sd@0,0

1. c0t1d0 <SEAGATE-ST336706LC-010A cyl 26123 alt 2 hd 4 sec 686>

/pci@1c,600000/scsi@2/sd@1,0

2. c4t60A98000433461504E342D4A69796C2Dd0 <NETAPP-LUN-0.2 cyl 5118 alt 2 hd 16 sec 256>

/scsi_vhci/ssd@g60a98000433461504e342d4a69796c2d

3. c4t60A98000433461504E342D4A66586252d0 <NETAPP-LUN-0.2 cyl 1534 alt 2 hd 16 sec 256>

/scsi_vhci/ssd@g60a98000433461504e342d4a66586252

Observe that both LUNs are seen by format.

Key in CTRL-C to exit the format program.


TASK 4: OBSERVE THE LUN(S) AND MULTIPLE PATHS TO THEM MANAGED BY MPXIO

You will need to complete the following steps on the Solaris host.

STEP ACTION

1. Enter the following command to view the LUNs and the multiple paths that lead to them:

# sanlun lun show all -p

ONTAP_PATH: nau-dev1:/vol/solarisvol1/lunC

LUN: 0

LUN Size: 3g (3221225472)

Host Device: /dev/rdsk/c4t60A98000433461504E342D4A66586252d0s2

LUN State: GOOD Filer_CF_State: Cluster Enabled

Multipath_Policy: Native Multipath-provider: Sun Microsystems

TPGS flag: 0x10 Filer Status: TARGET PORT GROUP SUPPORT ENABLED

Target Port Group : 0x1001

Target Port Group State: Active/optimized

Vendor unique Identifier : 0x10 (2GB FC)

Target Port Count: 0x2

Target Port ID : 0x1

Target Port ID : 0x2

Target Port Group : 0x3002

Target Port Group State: Active/non-optimized

Vendor unique Identifier : 0x30 (2GB FC)

Target Port Count: 0x2

Target Port ID : 0x101

Target Port ID : 0x102

ONTAP_PATH: nau-dev1:/vol/solarisvol1/lunD

LUN: 1

LUN Size: 10g (10737418240)

Host Device: /dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2

LUN State: GOOD Filer_CF_State: Cluster Enabled


Multipath_Policy: Native Multipath-provider: Sun Microsystems

TPGS flag: 0x10 Filer Status: TARGET PORT GROUP SUPPORT ENABLED

Target Port Group : 0x1001

Target Port Group State: Active/optimized

Vendor unique Identifier : 0x10 (2GB FC)

Target Port Count: 0x2

Target Port ID : 0x1

Target Port ID : 0x2

Target Port Group : 0x3002

Target Port Group State: Active/non-optimized

Vendor unique Identifier : 0x30 (2GB FC)

Target Port Count: 0x2

Target Port ID : 0x101

Target Port ID : 0x102

Observe that the sanlun command shows the paths that are optimized versus non-optimized.

Observe also that ALUA is supported, as shown by "TARGET PORT GROUP SUPPORT ENABLED."

Notice that, in an MPxIO environment, it is not obvious how to identify the multiple paths to the LUN from the output of sanlun lun show -p. The sanlun command does not show the underlying paths, because Sun MPxIO masks them and presents each LUN as a single consolidated MPxIO device. However, you will see in a few moments that the Target Port IDs can be used to identify the paths to the LUN. Solaris also provides operating system commands that can be used for this purpose.

Question: Why is Target Port Group 0x1001 shown as active/optimized, whereas Target Port Group 0x3002 is shown as active/non-optimized?

Hint: Identify to which storage controller each target port group is attached (refer to Step 3 and Step 4 in Task 5 below).
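One such operating system command is mpathadm, which ships with Solaris 10 MPxIO and exposes the underlying paths and target port groups of a consolidated device. A sketch only; the device name below is the example name used in this guide, so substitute your own:

$ mpathadm list lu
$ mpathadm show lu /dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2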


2. You can use the format Solaris operating system command to view the devices that have multiple paths. All such devices have device paths that start with /scsi_vhci:

# format

1. c3t500A098296A7AF35d2 <NETAPP-LUN-0.2 cyl 5118 alt 2 hd 16 sec 384>

/pci@1d,700000/QLGC,qlc@1,1/fp@0,0/ssd@w500a098296a7af35,2

2. c4t60A98000433461504E342D4A69796C2Dd0 <NETAPP-LUN-0.2 cyl 5118 alt 2 hd 16 sec 256>

/scsi_vhci/ssd@g60a98000433461504e342d4a69796c2d

Device 1 (c3t500...) is a NetApp LUN that is NOT multipathed.

Device 2 (c4t60...) is a NetApp LUN that IS multipathed.

Note the differences in the device paths.

Key in CTRL-C to exit the format program.


TASK 5: OBSERVE THE UNDERLYING DEVICE PATHS USING SANLUN AND LUXADM AND LEARN HOW TO MAP THE HOST SIDE PATHS TO STORAGE SYSTEM HBA PORTS

You will need to complete the following steps both on the Solaris host and on the target storage controller.

STEP ACTION

1. Enter the following command to identify the MPxIO device file names:

$ luxadm probe

No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):

Node WWN:500a098086a7af35 Device Type:Disk device

Logical Path:/dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2

Node WWN:500a098086a7af35 Device Type:Disk device

Logical Path:/dev/rdsk/c4t60A98000433461504E342D4A66586252d0s2

2. Enter the following command to view the device properties for a particular MPxIO device:

$ luxadm display /dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2

DEVICE PROPERTIES for disk: /dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2

Vendor: NETAPP

Product ID: LUN

Revision: 0.2

Serial Num: C4aPN4-Jiyl-

Unformatted capacity: 10240.000 MBytes

Read Cache: Enabled

Minimum prefetch: 0x0

Maximum prefetch: 0x0

Device Type: Disk device

Path(s):

/dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2

/devices/scsi_vhci/ssd@g60a98000433461504e342d4a69796c2d:c,raw

Controller /devices/pci@1d,700000/QLGC,qlc@1,1/fp@0,0

Device Address 500a098196a7af35,1

Host controller port WWN 210100e08bb22bf4

Class primary

State ONLINE

Controller /devices/pci@1d,700000/QLGC,qlc@1,1/fp@0,0

Device Address 500a098296a7af35,1

Host controller port WWN 210100e08bb22bf4

Class primary

State ONLINE

Controller /devices/pci@1d,700000/QLGC,qlc@1,1/fp@0,0

Device Address 500a098186a7af35,1

Host controller port WWN 210100e08bb22bf4

Class secondary

State ONLINE

Controller /devices/pci@1d,700000/QLGC,qlc@1,1/fp@0,0

Device Address 500a098286a7af35,1

Host controller port WWN 210100e08bb22bf4

Class secondary

State ONLINE

Controller /devices/pci@1d,700000/QLGC,qlc@1/fp@0,0

Device Address 500a098196a7af35,1

Host controller port WWN 210000e08b922bf4

Class primary

State ONLINE

Controller /devices/pci@1d,700000/QLGC,qlc@1/fp@0,0

Device Address 500a098296a7af35,1

Host controller port WWN 210000e08b922bf4

Class primary

State ONLINE

Controller /devices/pci@1d,700000/QLGC,qlc@1/fp@0,0

Device Address 500a098186a7af35,1

Host controller port WWN 210000e08b922bf4


Class secondary

State ONLINE

Controller /devices/pci@1d,700000/QLGC,qlc@1/fp@0,0

Device Address 500a098286a7af35,1

Host controller port WWN 210000e08b922bf4

Class secondary

State ONLINE

Observe the target WWPN of each path (the Device Address), the class of each path (primary or secondary), and the state of each path (ONLINE or OFFLINE).

There are eight paths to the LUN.

3. You can also identify the paths from the host to the storage controller manually by comparing the output of the fcp show adapter -v command run on the storage controller with the output of the sanlun lun show all -p command run on the Solaris host.

$ sanlun lun show all -p

ONTAP_PATH: nau-dev1:/vol/solarisvol1/lunC

LUN: 0

LUN Size: 3g (3221225472)

Host Device: /dev/rdsk/c4t60A98000433461504E342D4A66586252d0s2

LUN State: GOOD Filer_CF_State: Cluster Enabled

Multipath_Policy: Native Multipath-provider: Sun Microsystems

TPGS flag: 0x10 Filer Status: TARGET PORT GROUP SUPPORT ENABLED

Target Port Group : 0x1001

Target Port Group State: Active/optimized

Vendor unique Identifier : 0x10 (2GB FC)

Target Port Count: 0x2

Target Port ID : 0x1

Target Port ID : 0x2

Target Port Group : 0x3002


Target Port Group State: Active/non-optimized

Vendor unique Identifier : 0x30 (2GB FC)

Target Port Count: 0x2

Target Port ID : 0x101

Target Port ID : 0x102

Observe the Target Port ID numbers in the output above.

4. Enter the following command to identify each Target Port ID with a specific target FC port on the storage controller.

$ rsh <storage-ctrl> fcp show adapter -v

Slot: 0c

Description: Fibre Channel Target Adapter 0c (Dual-channel, QLogic 2312 (2352) rev. 2)

Status: ONLINE

Host Port Address: 010000

Firmware Rev: 3.3.19

PCI Bus Width: 64-bit

PCI Clock Speed: 33 MHz

FC Nodename: 50:0a:09:80:86:a7:af:35 (500a098086a7af35)

FC Portname: 50:0a:09:81:96:a7:af:35 (500a098196a7af35)

Cacheline Size: 8

FC Packet Size: 2048

External GBIC: No

Data Link Rate: 2 GBit

Adapter Type: Local

Fabric Established: Yes

Connection Established: PTP

Mediatype: auto

Partner Adapter: None

Standby: No

Target Port ID: 0x1

Slot: 0d


Description: Fibre Channel Target Adapter 0d (Dual-channel, QLogic 2312 (2352) rev. 2)

Status: ONLINE

Host Port Address: 010100

Firmware Rev: 3.3.19

PCI Bus Width: 64-bit

PCI Clock Speed: 33 MHz

FC Nodename: 50:0a:09:80:86:a7:af:35 (500a098086a7af35)

FC Portname: 50:0a:09:82:96:a7:af:35 (500a098296a7af35)

Cacheline Size: 8

FC Packet Size: 2048

External GBIC: No

Data Link Rate: 2 GBit

Adapter Type: Local

Fabric Established: Yes

Connection Established: PTP

Mediatype: auto

Partner Adapter: None

Standby: No

Target Port ID: 0x2

In this example, we found that FC target ports 0x1 and 0x2 belong to nau-dev1, the first storage controller in the dual-controller system. You can compare these Target Port IDs with the ones shown on the Solaris host by sanlun lun show -p to map each Target Port Group, and therefore each path, to a specific storage controller and its target FC ports. If we ran fcp show adapter -v on nau-dev2, we would find FC target ports 0x101 and 0x102.

END OF EXERCISE


EXERCISE 12: LABEL LUN AS SOLARIS DISK USING FORMAT

OVERVIEW:

In this exercise, you will see how to label a LUN as a Solaris disk using the format operating system command.

TIME ESTIMATE:

10 minutes

START OF EXERCISE

TASK 1: INSPECT LUNS AND IGROUPS CREATED ON THE TARGET STORAGE CONTROLLER

You will need to complete the following steps on your Solaris host to label both NetApp LUNs as Solaris disks.

STEP ACTION

1. $ format

Searching for disks...done

AVAILABLE DISK SELECTIONS:

0. c0t0d0 <SEAGATE-ST336706LC-010A cyl 26123 alt 2 hd 4 sec 686>

/pci@1c,600000/scsi@2/sd@0,0

1. c4t60A98000433461504E342D4A69796C2Dd0 <NETAPP-LUN-0.2 cyl 5118 alt 2 hd 16 sec 256>

/scsi_vhci/ssd@g60a98000433461504e342d4a69796c2d

2. c4t60A98000433461504E342D4A66586252d0 <NETAPP-LUN-0.2 cyl 1534 alt 2 hd 16 sec 256>

/scsi_vhci/ssd@g60a98000433461504e342d4a66586252

Specify disk (enter its number): 1

selecting c4t60A98000433461504E342D4A69796C2Dd0

[disk formatted]

Disk not labeled. Label it now? Y

format> disk

AVAILABLE DISK SELECTIONS:


0. c0t0d0 <SEAGATE-ST336706LC-010A cyl 26123 alt 2 hd 4 sec 686>

/pci@1c,600000/scsi@2/sd@0,0

1. c4t60A98000433461504E342D4A69796C2Dd0 <NETAPP-LUN-0.2 cyl 5118 alt 2 hd 16 sec 256>

/scsi_vhci/ssd@g60a98000433461504e342d4a69796c2d

2. c4t60A98000433461504E342D4A66586252d0 <NETAPP-LUN-0.2 cyl 1534 alt 2 hd 16 sec 256>

/scsi_vhci/ssd@g60a98000433461504e342d4a66586252

Specify disk (enter its number): 2

selecting c4t60A98000433461504E342D4A66586252d0

[disk formatted]

Disk not labeled. Label it now? y

Using format, you could rearrange the partitions on the NetApp LUNs, if need be. However, for the purpose of this lab exercise, simply exit the format program. Enter the following command at the format> prompt to exit back to the Solaris operating system prompt.

format> quit
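If you want to confirm that the labels were written, you can print the new VTOC of either LUN from the Solaris prompt. A sketch; substitute the device names recorded on your host:

$ prtvtoc /dev/rdsk/c4t60A98000433461504E342D4A69796C2Dd0s2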

END OF EXERCISE


EXERCISE 13: CREATE A SUN SVM VOLUME (PART 1: USING SVM CLI)

OVERVIEW:

In this lab exercise, you will create a Sun SVM volume provisioned by a NetApp LUN by using the Sun SVM command-line interface (the metadb and metainit commands).

TIME ESTIMATE:

20 minutes

START OF EXERCISE

TASK 1: IDENTIFY THE CONSOLIDATED MPXIO DEVICE FILE NAME ASSIGNED TO NETAPP LUNS

You will need to complete the following steps on the Solaris host.

STEP ACTION

1. Enter the following command to view the NetApp LUNs available on the Solaris host and record the host device paths for both FCP LUNs:

sanlun lun show

lunC

Host Device: /dev/rdsk/c____________________________________________

lunD

Host Device: /dev/rdsk/c____________________________________________

You will need to know the consolidated device file name assigned to lunD by MPxIO in a few moments to create Sun SVM state database replicas on it.

You will need to know the consolidated device file name assigned to lunC by MPxIO in a few moments in Sun SVM to provision the SVM volume.

NOTE: To ensure that you use the correct MPxIO consolidated device file names, copy and paste them from sanlun output into a text file so they are available when you need to look them up.


TASK 2: IDENTIFY SLICES OF LOCAL DISKS THAT CAN BE USED TO STORE SVM STATE DB REPLICAS

You will need to complete either Step 1 or Step 2 on your Solaris host. Step 1 uses format; Step 2 uses prtvtoc.

STEP ACTION

1. Sun recommends creating SVM state database replicas on local disks. However, the replicas can also be stored on NetApp LUNs. There are some advantages to storing them on NetApp LUNs, such as the ability to take hourly Snapshot copies that preserve the state of the Sun SVM metadata. The only caveat when storing them on NetApp LUNs is that the LUN holding the SVM metadata must be available before the SVM metadata needs to be accessed during the boot process. This is usually the case, because the FCP and iSCSI drivers are loaded fairly early in the boot process.

Enter the following command to start the format program:

format

Next, select the disk to work with. Choose the disk that corresponds to lunD, and then choose partition to access the partition menu. Finally, choose print to look at available partitions (slices) on the disk corresponding to lunD. You should get an output similar to:

partition> p
Current partition table (original):
Total disk cylinders available: 498 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders       Size          Blocks
  0       root    wm       0 -  31      32.00MB    (32/0/0)     65536
  1       swap    wu      32 -  63      32.00MB    (32/0/0)     65536
  2     backup    wu       0 - 497     498.00MB    (498/0/0)  1019904
  3 unassigned    wm       0               0       (0/0/0)          0
  4 unassigned    wm       0               0       (0/0/0)          0
  5 unassigned    wm       0               0       (0/0/0)          0
  6        usr    wm      64 - 497     434.00MB    (434/0/0)   888832
  7 unassigned    wm       0               0       (0/0/0)          0

Observe that slice 6 contains most of the free space on lunD. This is the slice that you will use to store the Sun SVM state database replicas.

Type "quit" (or just "q") twice to exit the partition menu and return to the Solaris prompt.

2. You can also use the prtvtoc Solaris operating system command to view the current partition table of a disk. Enter the command below to view the partition table of the disk corresponding to lunD. Fill in the blank with the MPxIO consolidated device name that you noted above for lunD, without its trailing slice number (the s2 that follows the blank supplies slice 2, which represents the whole disk).

prtvtoc /dev/rdsk/c_____________________________________s2

* /dev/rdsk/c1t60A9800043346D5A6334437564635262d0s2 partition map
*
* Dimensions:
*     512 bytes/sector
*     128 sectors/track
*      16 tracks/cylinder
*    2048 sectors/cylinder
*     500 cylinders
*     498 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      2    00          0      65536     65535
       1      3    01      65536      65536    131071
       2      5    01          0    1019904   1019903
       6      4    00     131072     888832   1019903

Observe slice 6, which contains 888832 sectors of 512 bytes each (434 MB).

TASK 3: CREATE SUN SVM STATE DATABASE REPLICAS USING THE METADB COMMAND

You will need to complete the following steps on your Solaris host.

STEP ACTION

1. Enter the command below to create three Sun SVM state database replicas on slice 6 of the disk corresponding to lunD. Fill in the blank with the MPxIO device name assigned to lunD.

metadb -a -f -c 3 c____________________________________s6

2. Enter the following command to view information about the Sun SVM state database replicas:

metadb -i
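If you ever need to undo this step, replicas can be removed again with the -d option. A sketch; use the same device name that you filled in above, and note that -f is required only when deleting the last remaining replicas:

metadb -d -f c____________________________________s6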


TASK 4: CREATE A SUN SVM VOLUME PROVISIONED BY A NETAPP LUN USING THE METAINIT COMMAND

You will need to complete the following steps on the Solaris host.

STEP ACTION

1. Enter the command below to create a Sun SVM volume named d0. This volume is provisioned by the consolidated MPxIO device corresponding to lunC that you identified in task 1. That device is in fact a NetApp LUN.

metainit d0 1 1 c______________________________________s6

The "1 1" arguments specify one stripe of one disk slice. This effectively creates a concatenated volume provisioned by a single slice, identified by its MPxIO device file name.

Observe that we use slice 6 (s6), which represents the usr slice of the disk. It contains most of the space of that disk. You could add any slices of the disk to the SVM volume. You could even create different volumes with other slices of the disk.
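For comparison, the same metainit syntax can express other layouts. A hedged sketch with hypothetical placeholders (<sliceE> and <sliceF> are not devices from this lab):

metainit d1 2 1 <sliceE> 1 <sliceF>    (a concatenation: two stripes of one slice each)
metainit d2 1 2 <sliceE> <sliceF>      (a stripe: one stripe that is two slices wide)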

2. Go to the UNIX prompt of your Solaris host and enter the following command to view the device file name created by the metainit command in the previous step for the new Sun SVM volume named d0:

$ ls /dev/md/rdsk

d0

$

3. Enter the following command to display information about the SVM volume you have just created:

$ metastat

END OF EXERCISE


EXERCISE 13: CREATE A SUN SVM VOLUME (PART 2: USING THE SUN SMC GUI)

OVERVIEW:

This portion of the exercise is shown for information purposes only.

Complete Exercise 13: Create a Sun SVM Volume (Part 1: Using SVM CLI).

In this exercise, you will create a Sun SVM volume provisioned by a NetApp LUN. You will be using the Solaris Management Console GUI. Although we provision a Sun SVM volume with a LUN that is accessed using FCP in this lab exercise, we could also provision the Sun SVM volume with a LUN accessed using iSCSI. We could even mix and match LUNs accessed with FCP and LUNs accessed with iSCSI in the same Sun SVM volume.

OBJECTIVES:

By the end of this exercise, you should be able to:

• Prepare to start and start up the Solaris Management Console GUI

• Identify the consolidated MPxIO device file name assigned to the NetApp LUN

• Create Sun SVM state database replicas

• Create a Sun SVM volume provisioned by a NetApp LUN

If you do not have Hummingbird Exceed or another Windows X server solution installed on your Windows workstation, you will need to install one. Alternatively, you can perform the tasks of this lab exercise with the Sun SVM CLI (metadb commands) instead of the Sun SVM GUI. Currently, the Sun SVM GUI is available only as a UNIX X Window application; future versions may support a Web interface. Complete the "Create Sun SVM Volume (SVM CLI)" version of Lab 6 FCP to perform this lab using the Sun SVM CLI.

START OF EXERCISE

TASK 1: PREPARE TO START AND START UP THE SOLARIS MANAGEMENT CONSOLE GUI

You will need to complete the following steps on the Solaris host or on your Windows workstation.

STEP ACTION

1. Start up Hummingbird Exceed or any other Windows X-Server application on your Windows workstation.

Start->Programs->Hummingbird Connectivity v8.0->Exceed->Exceed


2. Enter the commands below to set the current X display on your Solaris server. This effectively sends all UNIX X Window displays to the Exceed X server running on your workstation. Make sure to replace silviu-lxp.hq.netapp.com with the host name or IP address of your workstation.

$ ping silviu-lxp.hq.netapp.com

silviu-lxp.hq.netapp.com is alive

$ export DISPLAY=silviu-lxp.hq.netapp.com:0

$ echo $DISPLAY

silviu-lxp.hq.netapp.com:0

3. Enter the following command on your Solaris server to start up the Solaris Management Console GUI on your Solaris host:

$ smc &

4. The Solaris Management Console 2.1 GUI appears on your Windows workstation. All tasks that involve the SMC GUI are performed on your Windows workstation; however, the commands that the SMC GUI runs are actually executed on your Solaris host.


5. Click the navigation keys to expand the This Computer tab and then the Storage tab.


6. Click Disks to view the available disks on your host. You are prompted to log in as root. Type the password of the root user on your Solaris host and click OK.


7. Observe the disks available on your host.

There are two local disks: c0t0d0 and c0t1d0. In the example above there are three MPxIO consolidated devices (the device file names starting with c3 and c4). Keep in mind that the device file names are likely to be different on your host.

TASK 2: IDENTIFY THE CONSOLIDATED MPXIO DEVICE FILE NAME ASSIGNED TO THE NETAPP LUN

You will need to complete the following steps on the Solaris host.

STEP ACTION

1. The consolidated device file name assigned to lunC by MPxIO appears in the sanlun output below (the Host Device line). This device is one of the disks listed in the SMC GUI in the previous task, and it is the device file name that you will need in a few moments in Sun SVM to provision the SVM volume.

LUN Size: 3g (3221225472)

Host Device: /dev/rdsk/c4t60A98000433461504E342D4A66586252d0s2

LUN State: GOOD Filer_CF_State: Cluster Enabled

Multipath_Policy: Native Multipath-provider: Sun Microsystems

TPGS flag: 0x10 Filer Status: TARGET PORT GROUP SUPPORT ENABLED

Target Port Group : 0x1001

Target Port Group State: Active/optimized

Vendor unique Identifier : 0x10 (2GB FC)

Target Port Count: 0x2

Target Port ID : 0x101

Target Port ID : 0x102

Target Port Group : 0x3002

Target Port Group State: Active/non-optimized

Vendor unique Identifier : 0x30 (2GB FC)

Target Port Count: 0x2

Target Port ID : 0x1

Target Port ID : 0x2

IMPORTANT: The device file name is different on your host. Make sure to use the device file name as it appears on your host. It is best to simply copy and paste the device file name wherever it is needed.

TASK 3: CREATE SUN SVM STATE DATABASE REPLICAS

You will need to complete the following steps on your Windows workstation.

STEP ACTION

1. Click the “Enhanced Storage” tab.


2. Now expand the Enhanced Storage tab by clicking the navigation key, and then click State Database Replicas.


Observe that there are no Sun SVM state database replicas currently created on this host. Nothing shows up in the main window, and the status bar displays "0 Replicas."

Observe also the Information window that provides contextual information in the Sun SMC GUI.

3. Click the Action menu and select Create Replicas…

When prompted to specify a Disk Set, leave it set to None and click Next.

You are now prompted to Select Components:


The slices (partitions) of all disks on your host are shown in the “Available” list. We need to select slices on local disks to store the Sun SVM state database replicas.

4. Select slices 6 and 7 on the first local disk (c0t0d0) and slices 3 and 4 on the second local disk (c0t1d0). It is recommended to spread out the Sun SVM state database replicas across multiple disks and multiple SCSI controllers.


Because you selected c0t0d0s6, c0t0d0s7, c0t1d0s3, and c0t1d0s4, you are spreading the Sun SVM state database replicas across two slices on disk 1 and two slices on disk 2. Unfortunately, both disks are on the same controller (c0). More than half of the SVM state database replicas must be available at any time. You can create several replicas on each slice to compensate for the lack of separate disk controllers.

Click Next.

5. This is where you specify the replica length (the number of 512-byte blocks) and the number of replicas on each slice. Keep the default number of blocks (8192) and enter "3" for three replicas on each slice.


Click Next.

6. Observe the Sun SVM CLI commands (metadb commands) that the Sun SMC GUI will run to create the Sun SVM state database replicas on the local slices c0t0d0s6, c0t0d0s7, c0t1d0s3, and c0t1d0s4. These commands could be run at the UNIX prompt on the Solaris host instead of using the Sun SMC GUI.
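Those GUI-generated commands are roughly equivalent to the following CLI sketch (slice names taken from this example; -c sets the number of replicas per slice, -l sets the replica length in 512-byte blocks, and -f is needed only when no replicas exist yet):

metadb -a -f -c 3 -l 8192 c0t0d0s6 c0t0d0s7
metadb -a -c 3 -l 8192 c0t1d0s3 c0t1d0s4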


Click Finish.

7. Click the View menu and select "Refresh" to view the Sun SVM state database replicas that you have just created.


Observe that there are three replicas on each slice.

QUESTION: Are we in danger of running out of space on disk 2, slice 3 (c0t1d0s3), for the three Sun SVM state database replicas?

Hints:

X = Find out the size of each replica

Y = Find out the size of c0t1d0s3

Is 3X < Y?
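One way to work through the hint, assuming the default replica length kept in the previous step: X = 8192 blocks × 512 bytes = 4 MB per replica, so three replicas need 3X = 12 MB on the slice. Compare that with the size of c0t1d0s3 reported by format or prtvtoc on your host to find Y.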


TASK 4: CREATE A SUN SVM VOLUME PROVISIONED BY A NETAPP LUN

You will need to complete the following steps on your Windows workstation.

STEP ACTION

1. Click the Volumes tab.

Observe that there are no Sun SVM volumes currently available on this host. Nothing shows up in the main window, and the status bar shows a count of zero.

2. Click the Action menu and select Create Volume…

Choose “Don’t Create State Database Replicas,” because we have just created them. Click Next.

When prompted to specify a Disk Set, leave it set to None and click Next.

Choose Volume Type “Concatenation (RAID-0)” and click Next.

Keep the default Volume Name “d0” and click Next.


You are now prompted to Select Components.

The slices (partitions) of all disks on your host are shown in the Available list. This time, you need to select the slice of the NetApp LUN that will provision the Sun SVM volume.

3. Select the slice that corresponds to slice 2 of the consolidated MPxIO device file name assigned to the NetApp LUN lunC, which you identified in Task 2 above. You choose slice 2 because this slice represents the whole disk. In a previous step, you labeled lunC as a Solaris disk with format and put most of the free space on slice 6 of lunC, because you were not planning to use the disk with multiple partitions reserved for different purposes. Alternatively, instead of putting most of the free space of lunC in a single partition (slice 6), you could have partitioned lunC into several different partitions and then added those partitions independently to Sun SVM volumes.

In our example this is slice c4t60A98000433461504E342D4A66586252d0s2. Keep in mind that the device file name and thus, the slice name, is different on your Solaris host. Make sure to choose the device file name as it appeared on your host in Task 2.


Click Next.

4. Keep the default No Hot Spare Pool. Click Next.

5. Observe the Sun SVM CLI command (metainit) that the Sun SMC GUI will run to create the Sun SVM volume named d0, provisioned by slice 2 of lunC (represented by its consolidated MPxIO device file name). This command could be run at the UNIX prompt on the Solaris host instead of using the Sun SMC GUI.
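That GUI-generated command is roughly equivalent to the following CLI sketch (the device name is the example name from this guide; yours will differ):

metainit d0 1 1 c4t60A98000433461504E342D4A66586252d0s2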


Click Finish.

6. Click the View menu and select "Refresh" to view the Sun SVM volume d0 that you have just created.


7. Go to the UNIX prompt of your Solaris host and enter the following command to view the device file name created by the metainit command in the previous step for the new Sun SVM volume named d0:

$ ls /dev/md/rdsk

d0

$

END OF EXERCISE


EXERCISE 14: CREATE A UNIX FILE SYSTEM ON A SUN SVM VOLUME PROVISIONED BY A NETAPP LUN

OVERVIEW:

In this exercise, you will create a UNIX File System (UFS) on a Sun SVM volume that is provisioned by a NetApp LUN. This LUN was previously discovered on the Solaris host using FCP.

OBJECTIVES:

By the end of this exercise, you should be able to:

• Inspect the raw Sun SVM volume provisioned by the NetApp LUN on the Solaris host

• Create a UFS on the raw Sun SVM volume provisioned by the NetApp LUN

• Mount the UFS onto the active file system on the Solaris host

• Test writing access to the NetApp LUN

• Add an entry to the virtual file system table to mount the LUN persistently across reboots

TIME ESTIMATE: 15 minutes

START OF EXERCISE

You will need to complete the following steps on the Solaris host.

STEP ACTION

1. Enter the following command to look at the Sun SVM volumes available on your Solaris host:

ls /dev/md/rdsk

You should get an output similar to:

bash-3.00# ls /dev/md/rdsk

d0

bash-3.00#

Observe the d0 Sun SVM volume that we created in the previous lab exercise. Observe also that we are listing the contents of /dev/md/rdsk; this is the raw device directory for Sun SVM metadevices (md).

2. Enter the following command to install a UFS on the d0 Sun SVM volume that is provisioned by a NetApp LUN:


newfs /dev/md/rdsk/d0

newfs: construct a new file system /dev/md/rdsk/d0: (y/n)? y

/dev/md/rdsk/d0: 6283264 sectors in 1534 cylinders of 16 tracks, 256 sectors

3068.0MB in 62 cyl groups (25 c/g, 50.00MB/g, 8192 i/g)

super-block backups (for fsck -F ufs -o b=#) at:

32, 102688, 205344, 308000, 410656, 513312, 615968, 718624, 821280, 923936,

5325856, 5428512, 5531168, 5633824, 5736480, 5839136, 5941792, 6044448,

6147104, 6249760

QUESTION 1: On which slice (partition) of the Solaris MPxIO disk device did you create the file system?

HINT: Find out which slices you added to the d0 Sun SVM volume.

3. Enter the following command to create a mountpoint for the UFS created on the d0 Sun SVM volume.

mkdir -p /mnt/lunC

Observe that we name the mountpoint using the name of the NetApp LUN that is provisioning the d0 Sun SVM volume.

4. Enter the following command to mount the file system on d0 (provisioned by lunC) onto the active Solaris file system:

mount /dev/md/dsk/d0 /mnt/lunC

Observe that we use the block device path (/dev/md/dsk) here; the raw path (/dev/md/rdsk) was used for newfs, but mount requires the block device.

5. Enter the following command to test writing to Sun SVM volume d0, which is provisioned by NetApp LUN lunC:

touch /mnt/lunC/test_write.txt

6. Enter the following command to verify that the file test_write.txt was successfully created in Sun SVM volume d0:

ls -la /mnt/lunC

drwxr-xr-x 3 root root 512 Jan 23 13:50 .

drwxr-xr-x 3 root sys 512 Jan 23 13:46 ..

drwx------ 2 root root 8192 Jan 23 13:43 lost+found

-rw-r--r-- 1 root root 0 Jan 23 13:50 test_write.txt

7. This step is OPTIONAL. If you need to have the d0 Sun SVM volume automatically mounted after a system reboot, you need to add an entry in the Virtual File System Table file, in /etc/vfstab.

Add the following line to the /etc/vfstab file to persistently mount d0 across system reboots:

/dev/md/dsk/d0 - /mnt/lunC ufs no yes -

Observe the comments at the beginning of the vfstab file; they explain each field.
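For reference, a minimal sketch of the standard Solaris vfstab field order is shown below; the values simply annotate the example entry above (many configurations instead put the raw device, for example /dev/md/rdsk/d0, in the device-to-fsck column and use a numeric fsck pass when boot-time fsck is wanted):

#device to mount  device to fsck  mount point  FS type  fsck pass  mount at boot  mount options
/dev/md/dsk/d0    -               /mnt/lunC    ufs      no         yes            -

After adding the entry, you can check it by unmounting and then remounting by mount point only (umount /mnt/lunC followed by mount /mnt/lunC), which forces Solaris to resolve the device from /etc/vfstab.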

END OF EXERCISE


EXERCISE 15: TEST ACCESS TO LUN DURING FAILURE

OVERVIEW:

In this exercise, you will test access to a LUN during an FC path failure.

OBJECTIVES:

By the end of this exercise, you should be able to:

• Identify the FC switch ports where your Solaris FC HBA connects

• Disable the FC switch port where the first Solaris FC HBA port connects

• Test writing access to LUN on the Solaris host

• Re-enable the FC switch port where the first Solaris FC HBA port connects

TIME ESTIMATE:

20 minutes

START OF EXERCISE

TASK 1: IDENTIFY THE FC SWITCH PORTS WHERE YOUR SOLARIS FC HBA CONNECTS

You will need to complete the following steps on the Solaris host.

STEP ACTION

1. Enter the following command on the Solaris host to view the WWPNs of the FC HBA installed on your host. NOTE: You could also use the QLogic SANSurfer CLI utility (/usr/bin/scli) to perform this step.

sanlun fcp show adapter

You should get an output similar to:

$ sanlun fcp show adapter

qlc0 WWPN:210000e08b922bf4

qlc1 WWPN:210100e08bb22bf4

Observe the digits that differ between the WWPNs of the two FC HBA ports on the Solaris host.

IMPORTANT: The WWPNs are different on your host. Record the WWPNs of your host here, emphasizing the digits that differ between qlc0 and qlc1:

qlc0:____________________________________

qlc1:____________________________________

TASK 2: VERIFY SOLARIS HOST PORTS

You will need to complete the following steps on the Brocade FC switch.

STEP ACTION

1. Enter the following command on the Brocade switch console to view the WWPNs connected to each port. Locate the two F-Ports where your Solaris host connects.

switchshow

For example:

nau-48k:root> switchshow

Index Slot Port Address Media Speed State Proto

===================================================

...

68 7 4 014400 id N4 Online F-Port 21:00:00:e0:8b:92:2b:f4

69 7 5 014500 id N4 Online F-Port 21:01:00:e0:8b:b2:2b:f4

...

In this example, we were looking for 210000e08b922bf4 (qlc0) and for 210100e08bb22bf4 (qlc1).

Observe that IN THIS EXAMPLE, the FC initiator port qlc0 on the Solaris host connects to port 4 on the Brocade switch. FC initiator port qlc1 on the Solaris host connects to port 5 on the Brocade switch.

IMPORTANT: Make sure to properly identify the slot and port where YOUR Solaris host connects. If in doubt, ASK YOUR INSTRUCTOR. Record your slot and port number here:

qlc0 connected to slot:____________ port:___________

TASK 3: DISABLE THE FC SWITCH PORT WHERE THE FIRST SOLARIS FC HBA PORT CONNECTS

You will need to complete the following steps on the Brocade FC switch.

STEP ACTION

1. IMPORTANT: Make sure to properly identify the slot and port where YOUR Solaris host connects. You do not want to disable the port of someone else’s host. If in doubt, ASK YOUR INSTRUCTOR.

Enter the following command to disable port 4 in slot 7 on the Brocade Director switch. Slot 7, port 4 is where the qlc0 FC HBA connects IN THIS EXAMPLE. Make sure to identify the slot and port where YOUR Solaris host qlc0 FC HBA port connects.

portdisable 7/4

If you are not using a director FC switch, you just need to specify the port number: portdisable 4

2. Enter the following command on the Brocade switch console to ensure that the slot and port connected to your Solaris qlc0 FC HBA is disabled.

switchshow

For example:

nau-48k:root> switchshow

Index Slot Port Address Media Speed State Proto

===================================================

...

68 7 4 014400 id N4 No_Sync Disabled

69 7 5 014500 id N4 Online F-Port 21:01:00:e0:8b:b2:2b:f4

...


TASK 4: TEST ACCESS TO LUN ON THE SOLARIS HOST

You will need to complete the following steps on the Solaris host.

STEP ACTION

1. Enter the following command to verify that some of the paths to the LUNs are currently unusable: cfgadm -al

You should see that some of the paths leading to devices of type "disk" (that is, "cX::500…" FC targets) are reported as "unavailable," "unusable," or "failed."
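For orientation, a hedged illustration of what cfgadm -al might report while the port is disabled is shown below; the controller numbers and target port WWNs are placeholders and will differ in your pod:

Ap_Id                          Type         Receptacle   Occupant     Condition
c2                             fc-fabric    connected    configured   unknown
c2::500a098187f93622           disk         connected    configured   unusable
c3                             fc-fabric    connected    configured   unknown
c3::500a098197f93622           disk         connected    configured   ok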

2. Enter the following command to test writing to Sun SVM volume d0 that is provisioned by NetApp LUN lunC while some of the FC paths to lunC are broken due to FC switch port failure. (Actually, you just disabled the FC switch port.)

touch /mnt/lunC/test_write_during_fail.txt

3. Enter the following command to verify that the file test_write_during_fail.txt was successfully created in the Sun SVM volume d0:

ls -la /mnt/lunC

drwxr-xr-x 3 root root 512 Jan 23 13:50 .

drwxr-xr-x 3 root sys 512 Jan 23 13:46 ..

drwx------ 2 root root 8192 Jan 23 13:43 lost+found

-rw-r--r-- 1 root root 0 Jan 23 13:50 test_write.txt

-rw-r--r-- 1 root root 0 Jan 23 13:50 test_write_during_fail.txt


TASK 5: RE-ENABLE THE FC SWITCH PORT WHERE THE FIRST SOLARIS FC HBA PORT CONNECTS

You will need to complete the following steps on the Brocade FC switch.

STEP ACTION

1. IMPORTANT: Make sure to properly identify the slot and port where YOUR Solaris host connects. If in doubt, ASK YOUR INSTRUCTOR.

Enter the following command to enable port 4 in slot 7 on the Brocade Director switch. Slot 7, port 4 is where the qlc0 FC HBA connects IN THIS EXAMPLE. Make sure to identify the slot and port where YOUR Solaris host qlc0 FC HBA port connects.

portenable 7/4

If you are not using a director FC switch, you just need to specify the port number: portenable 4

2. Enter the following command on the Brocade switch console to ensure that the slot and port connected to your Solaris qlc0 FC HBA is re-enabled:

switchshow

For example:

nau-48k:root> switchshow

Index Slot Port Address Media Speed State Proto

===================================================

...

68 7 4 014400 id N4 Online F-Port 21:00:00:e0:8b:92:2b:f4

69 7 5 014500 id N4 Online F-Port 21:01:00:e0:8b:b2:2b:f4

...

3. Enter the following command on the Solaris host to verify that all paths to the LUNs are now usable on the host:

cfgadm -al

You should see that all paths leading to devices of type "disk" (that is, "cX::500…" FC targets) are now reported as "ok" or "unknown." You should NOT see "unavailable," "unusable," or "failed" in the "Condition" column.

END OF EXERCISE


EXERCISE 16: CONFIGURE ISCSI SERVICE ON THE SOLARIS HOST

OVERVIEW:

In this exercise, you will configure the iSCSI service on the NetApp storage system and on the Solaris host, preparing the Solaris iSCSI software initiator (and MPxIO) for access to LUNs on a NetApp storage system.

TIME ESTIMATE:

20 minutes

START OF EXERCISE

TASK 1: INSPECT THE CURRENT STATE OF THE ISCSI SERVICE ON THE SOLARIS HOST.

You will need to complete the following steps on the Solaris host.

STEP ACTION

1. Enter the command below to obtain the iSCSI node name of your Solaris host. The iSCSI initiator node name would have been needed if you had to create the initiator groups on the storage controller yourself; a hedged sketch of how it is used follows the output below.

iscsiadm list initiator-node

You should get an output similar to: # iscsiadm list initiator-node

Initiator node name: iqn.1986-03.com.sun:01:00801784624b.458c0eaf

Initiator node alias: -

Login Parameters (Default/Configured):

Header Digest: NONE/-

Data Digest: NONE/-

Authentication Type: NONE

RADIUS Server: NONE

RADIUS access: unknown

Configured Sessions: 1
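For reference, a minimal hedged sketch of how this node name is typically used to create an iSCSI initiator group on the storage controller is shown below. The igroup name and OS type match the names used later in this lab, and the node name is the example value above; in this class the igroups may already have been created for you, so do not run these commands unless instructed:

$ rsh <storage_ctlr> igroup create -i -t solaris solaris_iscsi_ig iqn.1986-03.com.sun:01:00801784624b.458c0eaf

$ rsh <storage_ctlr> igroup show solaris_iscsi_ig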


2. Record the iSCSI initiator node name shown in Step 1:

3. Enter the following command to list the current iSCSI target discovery parameters on your Solaris host:

iscsiadm list discovery

You should get an output similar to: # iscsiadm list discovery

Discovery:

Static: disabled

Send Targets: disabled

iSNS: disabled

TASK 2: CONFIGURE THE ISCSI SERVICE ON THE NETAPP STORAGE SYSTEM

You will need to complete the following steps on your Solaris host by replacing <storage_ctlr> with the name of your storage controller.

STEP ACTION

1. Enter the following command to ensure that the iSCSI protocol is licensed on the storage controller:

$ rsh <storage_ctlr> license

iscsi site IKVAREM

2. Enter the command below to disable iSCSI traffic on the e0a Ethernet interface. It is recommended to disable iSCSI traffic on the default e0a management interface.

$ rsh <storage_ctlr> iscsi interface disable e0a

3. Enter the following command to enable the iSCSI service on the storage controller:

$ rsh <storage_ctlr> iscsi start

Tue Jan 16 18:00:16 GMT [Filer1: iscsi.service.startup:info]: iSCSI service startup


iSCSI service started

4. Enter the command below to see the iSCSI interfaces currently enabled on the storage controller. Make sure that the e0a interface is disabled for iSCSI traffic.

$ rsh <storage_ctlr> iscsi interface show

Interface e0a disabled

Interface e0b enabled

Interface e0c enabled

Interface e0d enabled

Observe that interface e0a is reserved for general purpose TCP/IP traffic to the storage controller. Thus, interface e0a is disabled for iSCSI traffic. VLANs can also be used on the switch to isolate iSCSI traffic from general purpose TCP/IP traffic.
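As a hedged illustration only (the VLAN ID, IP address, and netmask below are assumptions, and this lab does not require it), isolating iSCSI traffic on a tagged VLAN interface in Data ONTAP 7G looks roughly like this:

$ rsh <storage_ctlr> vlan create e0c 100

$ rsh <storage_ctlr> ifconfig e0c-100 192.168.100.10 netmask 255.255.255.0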

5. Enter the following command to see the iSCSI Target Portal Groups (TPGs) currently available on the storage controller:

$ rsh <storage_ctlr> iscsi tpgroup show

TPGTag Name Member Interfaces

1000 e0a_default e0a

1001 e0b_default e0b

1002 e0c_default e0c

1003 e0d_default e0d

Why are there four different iSCSI TPGs on this storage controller?

6. Run the following command to view the IP address assigned to each Ethernet interface on your storage controller:

$ rsh <storage_ctlr> ifconfig -a


TASK 3: CONFIGURE THE ISCSI SERVICE ON THE SOLARIS HOST

You will need to complete the following steps on the Solaris host.

STEP ACTION

1. Enter the following command to set the discovery address for iSCSI targets on the target storage controller:

iscsiadm add discovery-address <e0b_ip_address>:3260

<e0b_ip_address> is the IP address of the e0b Ethernet interface on the target storage controller in your pod.

2. Enter the following command to verify that the discovery addresses were properly set up: iscsiadm list discovery-address

You should get an output similar to:

# iscsiadm list discovery-address

Discovery Address: 10.61.170.25:3260

3. Enter the following command to have a quick look at the general syntax and options of the Solaris iscsiadm command:

iscsiadm

4. Enter the following command to enable dynamic iSCSI target discovery: iscsiadm modify discovery --sendtargets enable

The console of the storage controller should output a message similar to:

Filer1> Wed Jan 17 16:47:48 GMT [Filer1: iscsi.notice:notice]: ISCSI: New session from initiator iqn.1986-03.com.sun:01:00801784624b.458c0eaf at IP addr 10.61.170.21

This shows that the Solaris iSCSI software initiator has logged in to discover iSCSI targets on the storage controllers.

5. Enter the following command to verify that dynamic iSCSI target discovery was properly set up: iscsiadm list discovery


You should get an output similar to: # iscsiadm list discovery

Discovery:

Static: disabled

Send Targets: enabled

iSNS: disabled

#

6. Enter the following command to look at the scsi_vhci.conf file:

cat /kernel/drv/scsi_vhci.conf

If you see the text below in scsi_vhci.conf, you need to complete Step 8. Otherwise, you can just read through Step 8.

# Added by NetApp to enable MPxIO for Data ONTAP LUNs

device-type-scsi-options-list =

"NETAPP LUN", "symmetric-option";

symmetric-option = 0x1000000;

7. The iSCSI Solaris Host Utilities 3.0.1 does not support Asymmetric Logical Unit Access (ALUA) with iSCSI. While ALUA is currently supported in the iSCSI Solaris Host Utilities 3.0, if you upgrade to iSCSI Solaris Host Utilities 3.0.1 or Solaris 10 Update 3, ALUA will not be supported by NetApp.

Going forward, it is recommended to use the Solaris iSCSI software initiator with MPxIO and without ALUA for iSCSI on Solaris.

Because ALUA must be turned off for iSCSI, you need to disable ALUA on the Solaris initiator groups on the storage controller (including the FC initiator groups) using the following command: igroup set <igroup_name> alua off. If you are provisioning NetApp LUNs from the Solaris host using both FC and iSCSI from the SAME host, because ALUA needs to be disabled for that host, you need to manage the multiple FC paths manually, using the Solaris mpathadm command.

To enable multipathing, you must execute the mpxio_set script provided by the Host Utilities to configure the Sun StorEdge Traffic Manager. You do this by adding the storage system's vendor ID and product ID (VID/PID) to the Sun StorEdge Traffic Manager configuration file.


The format of the entries in this file is very specific. To ensure that the entry is correct, the Host Utilities includes the mpxio_set script to automatically add the required storage vendor specific configuration variables. This script was placed in the /opt/NTAP/SANToolkit/bin directory when you installed the Host Utilities.

To add the NetApp VID/PID (Vendor ID/Product ID) lines to the scsi_vhci.conf file, enter the following command:

/opt/NTAP/SANToolkit/bin/mpxio_set -e

W A R N I N G

This script will modify /kernel/drv/scsi_vhci.conf

to add Vendor ID information for your storage system.

You should only run this script if you are using MPxIO

multipathing AND you have NOT enabled ALUA for this host's

igroup on the filer.

Do you wish to continue (y/n)?---> y

The original version of the scsi_vhci.conf file has been saved

to /kernel/drv/scsi_vhci.conf.1170105443

/kernel/drv/scsi_vhci.conf has been updated. Please reboot now

Once you have run the mpxio_set -e command, you need to reboot the Solaris host for the changes to take effect. Enter the following command:

reboot -- -r

8. Enter the following command to enable MPxIO:

stmsboot -e

This command will warn you that a change will be made to the configuration files. Answer 'Y' to this question to update the configuration files. Then, it will ask you to reboot. Answer 'N' to avoid rebooting with stmsboot. Enter the following command instead to reboot and reconfigure the host for MPxIO:


reboot -- -r

9. Enter the following command to explore the iSCSI targets that Solaris found: iscsiadm list target -v | more

NOTE: If you do not see the IP address of any iSCSI target in the output of this command, run devfsadm -C to clean up dangling device links and rerun the iscsiadm list target -v command as shown above.

Also, verify that the LUNs are mapped to the correct initiator group and that the initiator group contains the correct IQN. If the LUNs are already mapped to the correct igroup, but the igroup does not contain the correct IQN, you may need to unmap and remap the LUNs so that the mapping reflects the correct iSCSI IQN.
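If a mapping does need to be corrected, a hedged sketch using the volume, LUN, and igroup names that appear later in this lab is shown below; the IQN suffix is a placeholder, so verify the real names with lun show -m and igroup show before changing anything:

$ rsh <storage_ctlr> igroup add solaris_iscsi_ig iqn.1986-03.com.sun:01:<your_host_suffix>

$ rsh <storage_ctlr> lun unmap /vol/solarisvol1/lunA solaris_iscsi_ig

$ rsh <storage_ctlr> lun map /vol/solarisvol1/lunA solaris_iscsi_ig 0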

10. Q1: Consider the output of the command run in the previous step. Why are some iSCSI targets shown as connected to a certain IP address whereas other targets are shown as not connected?

Q2: The Target Portal Group 1000 (TPGT: 1000) is not shown (discovered) at all on the Solaris host. Why?

Q3: The Target Portal Group 1003 (TPGT: 1003) is not shown (discovered) at all on the Solaris host. Why?

Hints:

TPGT is the Target Portal Group Tag


Use the iscsi tpgroup show Data ONTAP command in conjunction with the ifconfig -a command on the target and partner storage controllers to find the answers.

END OF EXERCISE

EXERCISE 17: DISCOVER LUN ON HOST USING ISCSI

OVERVIEW:

In this exercise, you will learn how to discover a new LUN on a Solaris host using the iSCSI protocol. The Solaris host uses native MPxIO to manage multiple paths to the LUN. You will also learn how to interpret the output of the sanlun lun show -p command in a Solaris native MPxIO environment and how to use the iscsiadm list target Solaris command.

OBJECTIVES:

By the end of this exercise, you should be able to:

• Inspect LUNs and igroups created on the target storage controller

• Discover LUN 0 and LUN 1 on the Solaris host

• Check if MPxIO is working properly on the host

• Observe the multiple paths between the host and the storage controller

• Use native Solaris commands to check MPxIO

• Label the LUN and run newfs

TIME ESTIMATE:

40 minutes

START OF EXERCISE

TASK 1: INSPECT LUNS AND IGROUPS CREATED ON THE TARGET STORAGE CONTROLLER

You will need to complete the following steps on your Solaris host by replacing <storage_ctlr> with the name of your storage controller.


STEP ACTION

1. Enter the following command to inspect the LUNs available on the target storage controller and the way they are mapped to the initiator groups:

$ rsh <storage_ctlr>.rtp.netapp.com lun show -m

LUN path Mapped to LUN ID Protocol

-----------------------------------------------------------------------

/vol/solarisvol1/lunA solaris_iscsi_ig 0 iSCSI

/vol/solarisvol1/lunB solaris_iscsi_ig2 1 iSCSI

Observe that the LUN named lunA is mapped with lun_id 0 to an iSCSI igroup named solaris_iscsi_ig. The LUN named lunB is mapped with lun_id 1 to an iSCSI igroup named solaris_iscsi_ig2.

2. Enter the following command to inspect the initiator groups currently available on the target storage controller: $ rsh <storage_ctlr>.rtp.netapp.com igroup show -v

solaris_iscsi_ig (iSCSI):

OS Type: solaris

Member: iqn.1986-03.com.sun:01:san201.00000201 (logged in on: e0b, e0c)

Observe the "Member" iSCSI node name shown in this example. This is the iSCSI node name of the iSCSI software initiator on the Solaris host.

Ensure that the iSCSI node name shown in the output of the igroup show command is the iSCSI node name that you recorded previously when you ran the iscsiadm list initiator-node command on the Solaris host.

Make sure that ALUA and the MPxIO entries in scsi_vhci.conf are NOT enabled at the same time. If scsi_vhci.conf contains the NetApp entries, disable ALUA on the igroup. You can tell whether ALUA is enabled by looking at the igroup show -v output. In this example, ALUA is not enabled.
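As a hedged illustration, on Data ONTAP releases that support ALUA the igroup show -v output includes an ALUA line, so an ALUA-enabled igroup would look roughly like the following (names follow this lab's example; the exact output format may differ by release):

solaris_iscsi_ig (iSCSI):

OS Type: solaris

Member: iqn.1986-03.com.sun:01:san201.00000201 (logged in on: e0b, e0c)

ALUA: Yes

If the ALUA line is absent or reads "ALUA: No," as in the output above, ALUA is not enabled for that igroup.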


TASK 2: DISCOVER LUN 0 AND LUN 1 ON THE SOLARIS HOST

You will need to complete the following steps on the Solaris host.

STEP ACTION

1. Enter the following command to discover the new LUNs on the Solaris host using the iSCSI protocol:

devfsadm -i iscsi

2. Use the sanlun command to see if the LUNs have been discovered; you can also use format or iscsiadm.

sanlun lun show

You should get an output similar to:

filer: lun-pathname device filename adapter protocol lun size lun state

san201f1: /vol/solarisvol1/lunA /dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2 iscsi0 iSCSI 500m (524288000) GOOD

san201f1: /vol/solarisvol1/lunB /dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2 iscsi0 iSCSI 500m (524288000) GOOD

You can see that the LUNs have been discovered by the host.

3. Enter the following native Solaris command to verify that the LUNs have been discovered by the Solaris host:

iscsiadm list target -S

You should get an output similar to:

Target: iqn.1992-08.com.netapp:sn.101196961

Alias: -

TPGT: 1002

ISID: 4000002a0000

Connections: 1

LUN: 1

Vendor: NETAPP

Product: LUN

OS Device Name:


/dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2

LUN: 0

Vendor: NETAPP

Product: LUN

OS Device Name: /dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2

Target: iqn.1992-08.com.netapp:sn.101196961

Alias: -

TPGT: 1001

ISID: 4000002a0000

Connections: 1

LUN: 1

Vendor: NETAPP

Product: LUN

OS Device Name: /dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2

LUN: 0

Vendor: NETAPP

Product: LUN

OS Device Name: /dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2

Observe that the LUNs are discovered through the Target Portal Groups identified by Tag 1001 (TPGT) and Tag 1002 (TPGT) on the iSCSI target node iqn.1992-08.com.netapp:sn.101196961. These Target Portal Groups correspond to the e0b and e0c Ethernet interfaces on the target storage controller. You can verify this by issuing the iscsi portal show or the iscsi tpgroup show command on the storage controller.

san201f1> iscsi portal show

Network portals:

IP address TCP Port TPGroup Interface

10.254.135.101 3260 1001 e0b

10.254.135.121 3260 1002 e0c


TASK 3: CHECK IF MPXIO IS WORKING PROPERLY ON THE HOST

You will need to complete the following steps on your Solaris host by replacing <storage_ctlr> with the name of your storage controller.

STEP ACTION

1. If the Sun StorEdge Traffic Manager (MPxIO) is working, you should see a long disk name similar to the following:

/dev/rdsk/c5t60A980004334686568343771474A4D42d0s2

2. You should also see as many LUNs as you mapped to the host. NOTE: LUNs that are offline are not visible, though.

In this example, we had two LUNs mapped to the host by way of iSCSI initiator groups. If MPxIO were NOT working, we would see four LUNs, one LUN per path. Enter the following command to check and confirm:

format

Searching for disks...done

AVAILABLE DISK SELECTIONS:

0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>

/pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@0,0

1. c3t60A9800043346D525A4A47494E586B74d0 <NETAPP-LUN-0.2 cyl 498 alt 2 hd 16 sec 128>

/scsi_vhci/ssd@g60a9800043346d525a4a47494e586b74

2. c3t60A9800043346D525A4A47494E58726Ed0 <NETAPP-LUN-0.2 cyl 498 alt 2 hd 16 sec 128>

/scsi_vhci/ssd@g60a9800043346d525a4a47494e58726e

Specify disk (enter its number):

Observe the long consolidated MPxIO path names for the virtual disks (NetApp LUNs). This shows that MPxIO is working as intended. If MPxIO were not working, we would see the drives with just a cXtXdX notation, similar to:

0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>

/pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@0,0

Also, MPxIO devices have physical path names starting with /scsi_vhci, as opposed to /pci@1e and so on; note the difference in the output above.


3. You can also use sanlun as shown below: sanlun lun show

filer: lun-pathname device filename adapter protocol lun size lun state

san201f1: /vol/solarisvol1/lunA /dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2 iscsi0 iSCSI 500m (524288000) GOOD

san201f1: /vol/solarisvol1/lunB /dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2 iscsi0 iSCSI 500m (524288000) GOOD

TASK 4: OBSERVE THE MULTIPLE PATHS BETWEEN THE HOST AND THE STORAGE CONTROLLER

You will need to complete the following steps on your Solaris host by replacing <storage_ctlr> with the name of your storage controller.

STEP ACTION

1. sanlun lun show -v

filer: lun-pathname device filename adapter protocol lun size lun state

san201f1: /vol/solarisvol1/lunA /dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2 iscsi0 iSCSI 500m (524288000) GOOD

Serial number: C4mRZJGINXhN

Filer iSCSI IP address: 10.254.135.121

Filer iSCSI port number 3260

Filer iSCSI adapter name: ism_sw1

Filer iSCSI portal group: 1002

Filer IP address: 10.254.135.101

10.254.135.121

Filer volume name:solarisvol1 FSID:0x1ad2968

Filer qtree name:/vol/solarisvol1 ID:0x0

Filer snapshot name: ID:0x0

LUN partition table permits multiprotocol access: no

why: there is no valid disk label on this disk.

LUN has valid label: no

san201f1: /vol/solarisvol1/lunB /dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2 iscsi0 iSCSI 500m (524288000) GOOD

Serial number: C4mRZJGINXkt

Filer iSCSI IP address: 10.254.135.101

Filer iSCSI port number 3260

Filer iSCSI adapter name: ism_sw1

Filer iSCSI portal group: 1001

Filer IP address: 10.254.135.101

10.254.135.121

Filer volume name:solarisvol1 FSID:0x1ad2968

Filer qtree name:/vol/solarisvol1 ID:0x0

Filer snapshot name: ID:0x0

LUN partition table permits multiprotocol access: no

why: there is no valid disk label on this disk.

LUN has valid label: no

2. Observe that the LUNs are discovered by way of TPGTs 1001 and 1002, which correspond to interfaces e0b and e0c.

TASK 5: USING NATIVE SOLARIS COMMANDS TO CHECK MPXIO

You can also use native Solaris commands to view the paths.

STEP ACTION

1. iscsiadm list target -S

Target: iqn.1992-08.com.netapp:sn.101196961

Alias: -

TPGT: 1002

ISID: 4000002a0000

Connections: 1

LUN: 1

Vendor: NETAPP

Product: LUN

OS Device Name: /dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2

LUN: 0

Vendor: NETAPP

Product: LUN

OS Device Name: /dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2

Target: iqn.1992-08.com.netapp:sn.101196961


Alias: -

TPGT: 1001

ISID: 4000002a0000

Connections: 1

LUN: 1

Vendor: NETAPP

Product: LUN

OS Device Name: /dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2

LUN: 0

Vendor: NETAPP

Product: LUN

OS Device Name: /dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2

2. The mpathadm command is a very useful native Solaris utility that can be used to inspect LUNs and the paths to them for both FCP and iSCSI.

Enter the following command to list the LUNs on the host:

mpathadm list lu

/dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2

Total Path Count: 2

Operational Path Count: 2

/dev/rdsk/c3t60A9800043346D525A4A47494E586B74d0s2

Total Path Count: 2

Operational Path Count: 2

Enter the following command to look at the details about a particular LUN on the host: mpathadm show lu /dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2

Logical Unit: /dev/rdsk/c3t60A9800043346D525A4A47494E58684Ed0s2

mpath-support: libmpscsi_vhci.so

Vendor: NETAPP

Product: LUN

Revision: 0.2

Name Type: unknown type


Name: 60a9800043346d525a4a47494e58684e

Asymmetric: no

Current Load Balance: round-robin

Logical Unit Group ID: NA

Auto Failback: on

Auto Probing: NA

Paths:

Initiator Port Name: iqn.1986-03.com.sun:01:san201.00000201,4000002a00ff

Target Port Name: 4000002a0000,iqn.1992-08.com.netapp:sn.101196961,1002

Override Path: NA

Path State: OK

Disabled: no

Initiator Port Name: iqn.1986-03.com.sun:01:san201.00000201,4000002a00ff

Target Port Name: 4000002a0000,iqn.1992-08.com.netapp:sn.101196961,1001

Override Path: NA

Path State: OK

Disabled: no

Target Ports:

Name: 4000002a0000,iqn.1992-08.com.netapp:sn.101196961,1002

Relative ID: 0

Name: 4000002a0000,iqn.1992-08.com.netapp:sn.101196961,1001

Relative ID: 0

Again, observe the target portal groups being used in the output above.

TASK 6: LABEL THE LUN

You will need to complete the following steps on the Solaris host.

STEP ACTION

1. Enter the following command to label the LUNs as Solaris disks. Make sure to select the disks corresponding to lunA and lunB (use the output of sanlun lun show to match each LUN to its disk number). Once you have labeled both lunA and lunB, enter quit (or just "q") to exit back to the Solaris prompt.

format

Searching for disks...done

c3t60A9800043354274465A4450596A3037d0: configured with capacity of 498.00MB
c3t60A9800043354274465A445059697A2Fd0: configured with capacity of 498.00MB

AVAILABLE DISK SELECTIONS:

0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>

/pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@0,0

1. c3t60A9800043354274465A4450596A3037d0 <NETAPP-LUN-0.2 cyl 498 alt 2 hd 16 sec 128>

/scsi_vhci/ssd@g60a9800043354274465a4450596a3037

2. c3t60A9800043354274465A445059697A2Fd0 <NETAPP-LUN-0.2 cyl 498 alt 2 hd 16 sec 128>

/scsi_vhci/ssd@g60a9800043354274465a445059697a2f

Specify disk (enter its number): 1

selecting c3t60A9800043354274465A4450596A3037d0
[disk formatted]
Disk not labeled. Label it now? Y

format> disk 2

selecting c3t60A9800043354274465A445059697A2Fd0
[disk formatted]
Disk not labeled. Label it now? Y

format> quit

Next you can run newfs on any slices that you have created to create a UNIX file system on that slice. You will do this in a subsequent lab exercise.

END OF EXERCISE


EXERCISE 18: CREATE A UNIX FILE SYSTEM ON A LUN ACCESSED WITH ISCSI

OVERVIEW:

In this exercise, you will create a UNIX File System (UFS) on one of the LUNs previously discovered on the Solaris host using iSCSI.

OBJECTIVES:

By the end of this exercise, you should be able to:

• Inspect the raw disk devices created for NetApp LUNs on the Solaris host

• Create a UFS on one of the NetApp LUNs

• Mount the UFS onto the active file system on the Solaris host

• Test writing access to the NetApp LUN

• Add an entry in the virtual file system table (/etc/vfstab) to mount the LUN persistently across reboots

TIME ESTIMATE:

20 minutes

START OF EXERCISE

You will need to complete the following steps on the Solaris host:

STEP ACTION

1. Enter the following command to look at NetApp LUNs using the sanlun utility provided by the NetApp iSCSI Utilities Kit:

sanlun lun show

You should get an output similar to:

bash-3.00# sanlun lun show

filer: lun-pathname device filename adapter protocol lun size lun state

Filer1: /vol/solarisvol1/lunA /dev/rdsk/c1t60A9800043346C4C564A396F472F6B63d0s2 0 iSCSI 500m (524288000) GOOD

Filer1: /vol/solarisvol1/lunB /dev/rdsk/c1t60A9800043346C4C564A397163314164d0s2


0 iSCSI 500m (524288000) GOOD

Observe the consolidated raw disk device file name given by MPxIO to each of the NetApp LUNs. IMPORTANT: These device file names are different on your host. Make sure to use the device file names as they show up on your host in the following steps.

2. IMPORTANT: In the following steps, make sure to use the device file name as it appears on your host.

Enter the following command to install a UFS on the MPxIO consolidated device created for lunA:

newfs /dev/rdsk/c1t60A9800043346C4C564A396F472F6B63d0s2

newfs: construct a new file system /dev/rdsk/c1t60A9800043346C4C564A396F472F6B63d0s2: (y/n)? y

/dev/rdsk/c1t60A9800043346C4C564A396F472F6B63d0s2: 1019904 sectors in 498 cylinders of 16 tracks, 128 sectors

498.0MB in 32 cyl groups (16 c/g, 16.00MB/g, 7680 i/g)

super-block backups (for fsck -F ufs -o b=#) at:

32, 32928, 65824, 98720, 131616, 164512, 197408, 230304, 263200, 296096,

721696, 754592, 787488, 820384, 853280, 886176, 919072, 951968, 984864,

1017760

Observe that we chose to install a UFS on slice 2 (partition 2) of the disk. This is the slice that represents the whole disk. Alternatively, we could install different file systems on each partition of the disk by choosing different slices of the MPxIO consolidated disk device with the newfs command.

Keep in mind that the newfs command creates a file system of the default file system type on the Solaris host. To view the default file system type on your host, look at the /etc/default/fs file: cat /etc/default/fs. If the file system type you wish to create is not listed in /etc/default/fs, you can use the mkfs -F <FS_type> command instead of the newfs command.
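A hedged illustration of both commands is shown below; LOCAL=ufs is the typical Solaris 10 default, the device name is the lunA device from this lab, and the trailing operand is a size in sectors (matching the newfs output above), which mkfs accepts for ufs. Do not run mkfs against a device that already holds data you need.

cat /etc/default/fs

LOCAL=ufs

mkfs -F ufs /dev/rdsk/c1t60A9800043346C4C564A396F472F6B63d0s2 1019904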

3. Enter the following command to create a mountpoint for the UFS created on lunA:

mkdir -p /mnt/lunA

4. Enter the following command to mount the lunA onto the active Solaris file system:

mount /dev/dsk/c1t60A9800043346C4C564A396F472F6B63d0s2 /mnt/lunA

Observe that we use the dsk path at this point because we now have a file system created on lunA.

5. Enter the following command to test writing to lunA:

touch /mnt/lunA/test_write_into_lunA.txt

6. Enter the following command to verify that the file test_write_into_lunA.txt was successfully created in lunA:

ls -la /mnt/lunA

drwxr-xr-x 3 root root 512 Jan 23 13:50 .

drwxr-xr-x 3 root sys 512 Jan 23 13:46 ..

drwx------ 2 root root 8192 Jan 23 13:43 lost+found

-rw-r--r-- 1 root root 0 Jan 23 13:50 test_write_into_lunA.txt

7. This step is OPTIONAL. If you need to have the NetApp LUN automatically mounted after a system reboot, you need to add an entry in the Virtual File System Table file, in /etc/vfstab.

IMPORTANT: Make sure to use the device file name as it appears on your host.


Add the following line to the /etc/vfstab file to persistently mount lunA across system reboots:

/dev/dsk/c1t60A9800043346C4C564A396F472F6B63d0s2 - /mnt/lunA ufs no yes -

Observe the comments at the beginning of the vfstab file; they explain each field.

END OF EXERCISE

EXERCISE 19: CLONE A LUN

OVERVIEW:

In this exercise, you will clone a LUN.

OBJECTIVES:

By the end of this exercise, you should be able to:

• Create a Snapshot of the volume that contains the LUN to be cloned

• Clone a LUN backed by the Snapshot created previously

• Map a LUN clone to the same initiator group as the original LUN

• Discover a LUN clone on Solaris host

• Split a LUN clone from its backing Snapshot

• Delete the backing Snapshot

TIME ESTIMATE:

20 minutes


START OF EXERCISE

TASK 1: CREATE A SNAPSHOT OF THE VOLUME THAT CONTAINS THE LUN TO BE CLONED


TASK 2: CLONE A LUN BACKED BY THE SNAPSHOT CREATED PREVIOUSLY

You will need to complete the following steps on your Solaris host by replacing <storage_ctlr> with the name of your storage controller.

STEP ACTION

1. Enter the command below to create a Snapshot copy of the NetApp volume where lunA sits. This Snapshot will be used as the backing Snapshot for the LUN clone.

$ rsh <storage_ctlr> snap create solarisvol1 snap_lunA_clone

2. Enter the command below to clone the lunA using the Snapshot snap_lunA_clone of the solarisvol1 volume.

$ rsh <storage_ctlr> lun clone create /vol/solarisvol1/lunA_clone -b /vol/solarisvol1/lunA snap_lunA_clone

3. Enter the following command to view the Snapshot copies of the solarisvol1 volume:

$ rsh <storage_ctlr> snap list solarisvol1

Observe that the status of the snap_lunA_clone Snapshot is (busy,LUNs). This is due to the Snapshot being used by the clone of lunA that we just created in the previous step.

4. Enter the following command to view available LUNs:

$ rsh <storage_ctlr> lun show

/vol/solarisvol1/lunA 500m (524288000) (r/w, online, mapped)

/vol/solarisvol1/lunA_clone 500m (524288000) (r/w, online)

/vol/solarisvol1/lunB 500m (524288000) (r/w, online, mapped)

Observe that the status of lunA_clone is (r/w, online). lunA_clone is not mapped to an initiator group.

You can also verify this using the lun show -m command. lunA_clone is not shown at all by lun show -m because it is not currently mapped.

$ rsh <storage_ctlr> lun show -m

LUN path Mapped to LUN ID Protocol

-----------------------------------------------------------------------

/vol/solarisvol1/lunA solaris_iscsi_ig 0 iSCSI

/vol/solarisvol1/lunB solaris_iscsi_ig2 1 iSCSI

5. Enter the following command to view available LUNs in verbose format:

$ rsh <storage_ctlr> lun show -v

/vol/solarisvol1/lunA 500m (524288000) (r/w, online, mapped)

Serial#: C4lLVJ9oG/kc

Share: none

Space Reservation: enabled

Multiprotocol Type: solaris

Maps: solaris_iscsi_ig=0


/vol/solarisvol1/lunA_clone 500m (524288000) (r/w, online)

Serial#: C4lLVJ9vfk5G

Backed by: /vol/solarisvol1/.snapshot/snap_lunA_clone/lunA

Share: none

Space Reservation: enabled

Multiprotocol Type: solaris

/vol/solarisvol1/lunB 500m (524288000) (r/w, online, mapped)

Serial#: C4lLVJ9qc1Ad

Share: none

Space Reservation: enabled

Multiprotocol Type: solaris

Maps: solaris_iscsi_ig2=1

Observe that lunA_clone is backed by the snap_lunA_clone Snapshot of the solarisvol1 volume. Observe also that lunA and lunA_clone have different serial numbers. Thus, they will be recognized as two different disks on the host.

TASK 3: MAP LUN CLONE TO THE SAME INITIATOR GROUP AS THE ORIGINAL LUN

You will need to complete the following steps on the NetApp1 target storage controller.

STEP ACTION

1. Enter the following command to map the LUN clone to the same initiator group as the original LUN:

$ rsh <storage_ctlr> lun map /vol/solarisvol1/lunA_clone solaris_iscsi_ig

2. Enter the following command to view available LUNs:

$ rsh <storage_ctlr> lun show

/vol/solarisvol1/lunA 500m (524288000) (r/w, online, mapped)

/vol/solarisvol1/lunA_clone 500m (524288000) (r/w, online, mapped)


/vol/solarisvol1/lunB 500m (524288000) (r/w, online, mapped)

Observe that lunA_clone is now mapped to an initiator group. You can also verify this using the lun show –m command.

$ rsh <storage_ctlr> lun show -m

LUN path Mapped to LUN ID Protocol

-----------------------------------------------------------------------

/vol/solarisvol1/lunA solaris_iscsi_ig 0 iSCSI

/vol/solarisvol1/lunB solaris_iscsi_ig2 1 iSCSI

/vol/solarisvol1/lunA_clone solaris_iscsi_ig 3 iSCSI

TASK 4: DISCOVER LUN CLONE ON SOLARIS HOST

You will need to complete the following steps on the Solaris host.

STEP ACTION

1. Enter the following command to discover the new LUN clone on the Solaris host using the iSCSI protocol:

devfsadm -i iscsi

2. Enter the following command to view the NetApp LUNs available on the Solaris host:

sanlun lun show

You should get an output similar to:

bash-3.00# sanlun lun show

filer:lun-pathname device filename adapter protocol lun size lun state

Filer1: /vol/solarisvol1/lunA /dev/rdsk/c1t60A9800043346C4C564A396F472F6B63d0s2 0 iSCSI 500m (524288000) GOOD

Filer1: /vol/solarisvol1/lunB /dev/rdsk/c1t60A9800043346C4C564A397163314164d0s2 0 iSCSI 500m (524288000) GOOD


Filer1: /vol/solarisvol1/lunA_clone /dev/rdsk/c1t60A9800043346C4C564A3976666B3547d0s2 0 iSCSI 500m (524288000) GOOD

Observe that lunA_clone is shown as any other LUN on the Solaris host.

3. Enter the following command to create a mountpoint for the lunA_clone LUN:

mkdir -p /mnt/lunA_clone

4. Enter the following command to mount the file system on lunA_clone onto the active file system.

IMPORTANT: Make sure to use the lunA_clone device file name as it appears on your host.

mount /dev/dsk/c1t60A9800043346C4C564A3976666B3547d0s2 /mnt/lunA_clone

5. Enter the following command to view the contents of /mnt/lunA_clone:

ls -la /mnt/lunA_clone

Observe that the contents of /mnt/lunA_clone are the same as the contents of /mnt/lunA at this point. However, lunA can be changed independently from lunA_clone from now on.

6. Enter the following command to write to lunA:

touch /mnt/lunA/test2_write_into_lunA.txt

7. Enter the following command to write to lunA_clone:

touch /mnt/lunA_clone/test_write_into_lunA_clone.txt

8. Enter the following commands to view and compare the contents of lunA and lunA_clone:

ls -la /mnt/lunA

drwxr-xr-x 3 root root 512 Jan 24 14:05 .

drwxr-xr-x 5 root sys 512 Jan 24 11:23 ..


drwx------ 2 root root 8192 Jan 23 13:43 lost+found

-rw-r--r-- 1 root root 0 Jan 24 14:05 test2_write_into_lunA.txt

-rw-r--r-- 1 root root 0 Jan 23 13:50 test_write_into_lunA.txt

ls -la /mnt/lunA_clone

drwxr-xr-x 3 root root 512 Jan 24 12:36 .

drwxr-xr-x 5 root sys 512 Jan 24 11:23 ..

drwx------ 2 root root 8192 Jan 23 13:43 lost+found

-rw-r--r-- 1 root root 0 Jan 23 13:50 test_write_into_lunA.txt

-rw-r--r-- 1 root root 0 Jan 24 12:36 test_write_into_lunA_clone.txt

Observe that the contents are now different. Keep in mind, though, that much of the space occupied by lunA and by lunA_clone in solarisvol1/.snapshot/snap_lunA_clone is still shared.

TASK 5: SPLIT LUN CLONE FROM ITS BACKING SNAPSHOT

You will need to complete the following steps on the NetApp1 target storage controller. Keep in mind that in most use cases, it is not necessary to split the LUN clone from its backing Snapshot. Also, the split can happen while the LUN is being used.

STEP ACTION

1. Enter the following command to split the lunA_clone from the snap_lunA_clone Snapshot. The LUN clone split process involves copying off the blocks that are shared between lunA and lunA_clone.

lun clone split start /vol/solarisvol1/lunA_clone

2. While the LUN is being split, get a status of the process.

lun clone split status /vol/solarisvol1/lunA_clone


If the splitting occurs too quickly for you to get the status, you should see: lun clone split status: /vol/solarisvol1/lunA_clone: LUN is not a clone

3. Enter the following command to view available LUNs in verbose format:

lun show -v

/vol/solarisvol1/lunA 500m (524288000) (r/w, online, mapped)

Serial#: C4lLVJ9oG/kc

Share: none

Space Reservation: enabled

Multiprotocol Type: solaris

Maps: solaris_iscsi_ig=0

/vol/solarisvol1/lunA_clone 500m (524288000) (r/w, online)

Serial#: C4lLVJ9vfk5G

Share: none

Space Reservation: enabled

Multiprotocol Type: solaris

Maps: solaris_iscsi_ig=3

/vol/solarisvol1/lunB 500m (524288000) (r/w, online, mapped)

Serial#: C4lLVJ9qc1Ad

Share: none

Space Reservation: enabled

Multiprotocol Type: solaris

Maps: solaris_iscsi_ig2=1

Observe that lunA_clone is no longer backed by a Snapshot.


TASK 6: DELETE THE BACKING SNAPSHOT

STEP ACTION

1. View the available Snapshot copies.

snap list solarisvol1

The status of the snap_lunA_clone Snapshot should not be (busy,LUNs) at this point because we split lunA_clone from its backing Snapshot, snap_lunA_clone. If the snap_lunA_clone Snapshot is still shown as (busy,LUNs), you can use the lun snap usage solarisvol1 snap_lunA_clone Data ONTAP command to verify whether there are any subsequent Snapshot copies that depend on the snap_lunA_clone Snapshot. In this case, you need to delete the subsequent Snapshot copies before deleting snap_lunA_clone.

2. Enter the following command to delete the backing Snapshot.

snap delete solarisvol1 snap_lunA_clone
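To confirm the deletion, list the Snapshot copies again; snap_lunA_clone should no longer appear in the output:

snap list solarisvol1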

END OF EXERCISE


MODULE 7: FC AND IP VMWARE

Exercise

Module 7: FC and IP VMware

Estimated Time: 6 hours

EXERCISE 20: CONNECT VMWARE TO A NETAPP FC SAN ENVIRONMENT

OVERVIEW:

In this exercise, you will gain hands-on experience working in a basic VMware FC SAN setup, dealing with the installation of the host utilities, FC HBA driver installation and configuration, multipathing policy setup, and understanding how the host interacts with the storage system.

OBJECTIVES:

By the end of this exercise, you should be able to:

• Understand and interpret the Compatibility Matrix to confirm a supported installation

• Install HBA drivers and support software on a VMware ESX Server

• Install the NetApp Host Utilities package on a VMware ESX Server

• Configure the FC HBA parameters to optimal values recommended by NetApp

• Configure multipathing policy

TIME ESTIMATE:

45 minutes


START OF EXERCISE

TASK 1: HOST CONFIGURATION CHECK

STEP ACTION

1. SSH into your group’s host using PuTTY or some similar utility.

• Log in as root (password provided by instructor)

2. Check and document the version of the OS.

• Discover what Linux version the ESX Server is based on: uname -a

What is the kernel build number of the host? ______________________________________________________________

• Discover the release of the VMware ESX Server: cat /etc/vmware-release

• What is the OS version of the host? ______________________________________________________________

3. Check if FC HBAs are present: lspci | grep -i Fibre or lspci -vv or dmesg | grep -i lpfc* or dmesg | grep -i qla*

lspci: Lists information about devices connected to the PCI system bus

dmesg: Displays kernel ring buffer messages, which include the FC HBA driver load messages

• Are there FC HBAs installed? _____________________________________

• What brand of FC HBA is installed? _____________________________________________________________
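As a hedged illustration of the lspci check in this step, an Emulex FC HBA typically appears in the lspci output with lines similar to the following; the PCI addresses and the exact model strings are assumptions and will differ on your host:

0a:01.0 Fibre Channel: Emulex Corporation LP11000 4Gb Fibre Channel Host Adapter

0a:02.0 Fibre Channel: Emulex Corporation LP11000 4Gb Fibre Channel Host Adapter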

4. • Browse to the NetApp SAN Support Matrix available at: http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/NetAppSANSupport_August2007RevA.pdf#page=72 and look at line item 74 in the VMware ESX Server section.

• Is the configuration that you have documented so far compatible with the support matrix? ____________________________________________________________


• Does the current support matrix allow SnapDrive for UNIX with your configuration? _____________________________________________________________

• Which Volume Manager is supported with this configuration?

TASK 2: NETAPP HOST UTILITIES INSTALLATION AND FC HBA CONFIGURATION

STEP ACTION

1. Confirm that there is no previous version of the Host Utilities installed (default location /opt/sanlun/bin).

cd /opt/sanlun/bin
ls

If this folder does not exist, move on to Step 2. If it does, use the following command to remove it: ./uninstall

2. The NetApp Host Utilities are available for download at the following location on the NOW site. http://now.netapp.com/NOW/download/software/sanhost_esx/ESX/

The NetApp Host Utilities have been provided for you in the <class_files> location provided by the instructor.

• Decompress and extract the Host Utilities file:

cp <class_files>/netapp_fcp_esx_host_utilities_3_0.tar.gz /tmp
cd /tmp
gunzip netapp_fcp_esx_host_utilities_3_0.tar.gz
tar -xvf netapp_fcp_esx_host_utilities_3_0.tar

• The files will be extracted to the “netapp_fcp_esx_host_utilities_3_0” subdirectory of your current working directory.

3. Enter the following command to install the NetApp Host Utilities. Answer "yes" to the prompt asking to open TCP ports through the ESX firewall.

cd netapp_fcp_esx_host_utilities_3_0
./install

• The diagnostic scripts are installed to the /opt/netapp/santools directory.

4. Ensure that the Emulex LightPulse FC (LPFC) driver is loaded in the ESX Server kernel.

modprobe -c | grep lpf


Observe that the FC HBA driver module is named “lpfcdd_732” and an alias named “scsi_hostadapter” points to it. You can either use the driver module name or the alias in the following command.

If the driver module is not already loaded, load it using modprobe: modprobe -v scsi_hostadapter

5. Verify that the timeout value for the LPFC driver is set to 120.

esxcfg-module -g lpfcdd_732

You should get an output similar to:

lpfcdd_732 enabled = 1 options = 'lpfc_nodev_tmo=120'

If the “lpfc_nodev_tmo” option is not set to 120, run the following command to set it to 120:

esxcfg-module -s "lpfc_nodev_tmo=120" lpfcdd_732

esxcfg-boot -b

Reboot the ESX Server host

reboot

NOTE: the “lpfc_nodev_tmo” option is normally set to 120 as part of the installation of the NetApp FC HUK for VMware ESX Server 3.0. Thus, you do not have to set it manually if you installed the HUK.

NOTE: If a Windows guest OS is set up to access NetApp storage in this ESX Server, the DiskTimeoutValue (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue) will need to be set to 190 in the Windows registry.
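A minimal sketch of setting this value from an administrative command prompt inside the Windows guest is shown below (this assumes the standard reg.exe tool; the data value 190 decimal comes from the note above, and the guest typically needs a reboot for the change to take effect):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 190 /f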

6. Record the WWPN for each port on the FC HBA.

/usr/sbin/esxcfg-info | grep -i "Node Number"

WWNN Port0:_______________________________________________________
WWNN Port1:_______________________________________________________

/usr/sbin/esxcfg-info | grep -i "Port Number"

WWPN Port0:_______________________________________________________
WWPN Port1:_______________________________________________________


Take a QUICK look at the other information items displayed by the esxcfg-info VMware command.

7. Run the following command provided by the NetApp FC HUK for VMware ESX Server 3.0 to collect information about your ESX Server:

/opt/netapp/santools/esx_info fcp

You should get an output similar to:

Gathering RPM information.........................DONE
Gathering ESX Server information..................DONE
Gathering FCP information.........................DONE
Done gathering information
ESX Server system info is in directory /tmp/netapp/netapp_esx_info
Compressed file is /tmp/netapp/netapp_esx_info.tar.gz
Please send this file for analysis

Take a quick look at the information items dumped into the /tmp/netapp/netapp_esx_info directory by this command.

TASK 3: CONFIGURING MULTIPATHING

STEP ACTION

1. Query the configuration of the FC HBA currently installed:

/opt/netapp/santools/config_hba --query

Observe that you obtain the same output as with the esxcfg-module -g lpfcdd_732 command that you ran earlier. This command has the same syntax and works the same way for both Emulex and QLogic FC HBAs.

2. Configure the FC HBA parameters to the optimal values recommended by NetApp:

/opt/netapp/santools/config_hba --configure

This command has the same effect as the command esxcfg-module -s lpfc_nodev_tmo=120 lpfcdd_732. The config_hba --configure command has the same syntax and works the same way for both Emulex and QLogic FC HBAs.

3. Query the current multipathing configuration: /opt/netapp/santools/config_mpath --query


4. This step is informational only. You can read through it, but do not run the commands shown.

You can configure the multipathing policy using the config_mpath command provided by the NetApp Host Utilities Kit for VMware ESX 3.0. For example, to configure multipathing that balances the load among all of the primary paths and to make the configuration persistent across ESX Server reboots, you can run the following command:

/opt/netapp/santools/config_mpath --primary --loadbalance --persistent

END OF EXERCISE

EXERCISE 21: VMWARE STORAGE OPTIONS USING FC

OVERVIEW:

The objective of this exercise is to provide hands-on experience with a basic VMware FC SAN setup: installing the host utilities, installing and configuring the FC HBA driver, setting up the multipathing policy, and understanding how the host interacts with the storage system.

OBJECTIVES:

At the end of this exercise, you should be able to understand and interpret the compatibility matrix to confirm a supported installation.

TIME ESTIMATE:

90 minutes

START OF EXERCISE

TASK 1: CREATE IGROUPS, VOLUMES, AND LUNS FOR FCP

STEP ACTION

1. Open a Remote Desktop Connection to start the Virtual Infrastructure Client on the remote VMware client host or simply double-click the Virtual Infrastructure Client icon on your Windows desktop. Log in as root to the remote VMware ESX Server using the host name or IP address supplied by your instructor.

In your Virtual Infrastructure Client, select the Configuration tab and click Storage Adapters from the Hardware menu. Observe the FC worldwide port names (WWPNs) of each port on your Emulex LP11000 4-GB Fibre Channel Host Adapter in the SAN Identifier column. Write down the vmhba number and the WWPNs for both ports; you will need them to create initiator groups on the storage controller.

Port 0: vmhba________

Port 1: vmhba________

WWPN Port0: _____________________________________________

WWPN Port1: _____________________________________________

2. Now you add an initiator group on the target storage controller.

Navigate to FilerView. Expand LUNs and Initiator Groups. Select Add.

Name the initiator group esx_fcp_ig.

Set the Type to FCP and the operating system to VMware.

Locate the WWPNs by navigating to the Virtual Infrastructure Client and selecting Storage Adapters in the Hardware section of the Configuration tab. Then, click the vmhbaX adapters noted above and look at the SAN Identifier column. Type (or copy and paste in) the WWPN of each FC HBA port into the Initiators list in FilerView as shown below. You can also simply refer to the WWPNs noted above for each FC initiator port on your ESX Server host.


Click Add.

You will get a message indicating that the initiator group was successfully created.
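If you prefer the Data ONTAP command line to FilerView, an equivalent sketch is shown below (the two WWPNs are placeholders for the values you recorded for your FC HBA ports):

igroup create -f -t vmware esx_fcp_ig 10:00:00:00:c9:xx:xx:x0 10:00:00:00:c9:xx:xx:x1
igroup show esx_fcp_ig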

3. Next, add a volume from FilerView or the command line. Instructions are provided for use with FilerView.

Select Volumes and Add. The Volume Wizard appears.

Select Next.

Select Flexible and click Next.

Name the volume esx_fcp_vol1.

Keep Language set to POSIX and select Next.

The containing aggregate should be aggr1. The volume should be 2GB.

Set Space Guarantee to none.

Select Next.

Review the summary and click Commit.
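The equivalent Data ONTAP command line for this volume, as a sketch using the values above, is:

vol create esx_fcp_vol1 -s none aggr1 2g
vol status esx_fcp_vol1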


4. Now you create a LUN in FilerView or with the command line. Instructions are provided for use with FilerView.

Select LUNs and Add.

The path to the LUN should be /vol/esx_fcp_vol1/LUN

Set the LUN Protocol Type to VMware.

Set the size of the LUN to 1500 MB.

Leave Space reservation checked on.

Click Add.

5. Add another LUN using FilerView or the command line. Instructions are provided for use with FilerView.

Select LUNs and Add.

The path to the LUN should be /vol/esx_fcp_vol1/LUN2.

Set the LUN Protocol Type to Windows.

Set the size of the LUN to 50 MB.

Leave Space reservation checked on.

Click Add.
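From the Data ONTAP command line, the same two LUNs could be created as follows (a sketch; space reservation is enabled by default, which matches leaving it checked in FilerView):

lun create -s 1500m -t vmware /vol/esx_fcp_vol1/LUN
lun create -s 50m -t windows /vol/esx_fcp_vol1/LUN2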

6. Map the LUN to the initiator group you previously created.

Select LUNs and Manage.

Click the /vol/esx_fcp_vol1/LUN and select Map LUN.

Select Add Groups to Map.

Select the esx_fcp_ig initiator group and select Add.

Leave the LUN ID blank.

Click Apply. A message appears indicating that the mapping was successful.

Repeat the mapping steps for LUN /vol/esx_fcp_vol1/LUN2.
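The command-line equivalent of this mapping is sketched below; when no LUN ID is supplied, Data ONTAP assigns the lowest available ID, which matches leaving the LUN ID blank in FilerView:

lun map /vol/esx_fcp_vol1/LUN esx_fcp_ig
lun map /vol/esx_fcp_vol1/LUN2 esx_fcp_ig
lun show -m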

7. Select Manage from the LUNs menu. Notice the /vol/esx_fcp_vol1/LUN and the /vol/esx_fcp_vol1/LUN2 LUNs that you just created and mapped to the FCP initiator group named esx_fcp_ig.


TASK 2: DISCOVER LUNS ON VMWARE ESX SERVER USING FCP

STEP ACTION

1. Return to the Virtual Infrastructure Client. You should be on the Configuration tab. Select Storage Adapters from the Hardware menu.

Rescan the first vmhbaX port in the LP11000 4-GB Fibre Channel Host Adapter section by right-clicking the first vmhbaX port and selecting Rescan.

Repeat the rescan procedure for the second FC HBA vmhbaX port.


Observe that there are four FC targets discovered on the vmhba0 FC HBA port and four other FC targets discovered on the vmhba1 FC HBA port. That is eight targets in total, corresponding to eight paths to each LUN that you previously created and mapped to the esx_fcp_ig initiator group. Why are there eight paths to the LUNs? ___________________________________________________________________

___________________________________________________________________

Observe that each path to a LUN is identified by a “vmhbaX:X:X” triplet.

The first number in “vmhba0:0:0” is the number of the port on the FC HBA.

The second number is the SCSI target on the vmhba0 FC HBA port.

The third number is the LUN number. This is the LUN ID that you used when you mapped the LUN to the esx_fcp_ig initiator group.


Observe that in this example LUN 0 and LUN 1 are discovered on vmhba0 and vmhba1. However, on your ESX Server host, the vmhba adapter numbers may be different. For example, if you have a local SCSI adapter with a local disk attached to the local SCSI bus, then the local SCSI adapter will likely show up as vmhba0, and the Emulex FC HBA ports would then show up as vmhba1 and vmhba2.

Look at the SCSI Target 0 for each LUN and record their vmhba adapter number below:

LUN 0 (1.5GB): vmhba__:0:0

LUN 1 (50MB): vmhba__:0:1

2. The canonical path is the path to a given LUN that is first discovered by ESX. That path also becomes the ESX name of the LUN. Run:

/opt/netapp/santools/sanlun lun show

Observe that the vmkdisk name given to the LUN corresponds to the canonical path shown in the Virtual Infrastructure Client.

3. Now inspect the paths to one of the LUNs using the Virtual Infrastructure Client.

Click the first path of the first vmhbaX adapter.


Next, right-click the path and select Manage paths… The Manage Paths dialog box is displayed.


Observe that vmhba0:0:0 is currently the preferred, active path to LUN 0. You need to verify that this path goes to the primary storage controller (that is, the controller hosting the LUN being accessed). To do this, verify that the target WWPN (the SAN Identifier) of this path is a WWPN on the primary storage controller.

Using PuTTY or another Telnet client, log on to each target storage controller and run the fcp show adapter Data ONTAP command. Look at the FC Portname entries and ensure that one of the WWPNs displayed on the storage controller owning LUN 0 is the WWPN that shows up as the active preferred path in the Virtual Infrastructure Client. If that is not the case, locate a path that targets one of the WWPNs on the primary storage controller, then click Change… in the Manage Paths dialog box and check Preferred.
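For example, on each storage controller you might run the following (the prompts san1> and san2> are placeholders for your pod's controllers; lun show -m confirms which controller actually owns and maps the LUN):

san1> fcp show adapter
san1> lun show -m
san2> fcp show adapter
san2> lun show -m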

END OF TASK 2


TASK 3: CREATE (FORMAT) A VMFS DATASTORE PROVISIONED BY LUN ACCESSED THROUGH FCP

STEP ACTION

1. Now, format one of the LUNs as VMFS.

Select Storage (SCSI, SAN, and NFS) from the Hardware menu.

Select Add Storage from the upper right-hand corner of the screen.

2. The Add Storage window appears.

Click Next.

3. Select LUN 0, the 1.5-GB LUN. In this example, vmhba0:0:0 is selected.


However, in your case the LUN may be on a different vmhba adapter. Please refer to the vmhba adapter number you recorded earlier.

Observe that only LUN 0 of the FC HBA appears as an available choice. Why?

Hint: Minimum VMFS datastore size.

Click Next.

4. Observe that the current disk layout is blank. Click Next.

5. Name the datastore FC VMFS. Click Next.


6. Accept the defaults on the Disk/LUN Formatting screen.

Click Next.


7. The Summary screen appears. Review the proposed disk layout and click Finish.

Notice that the Create VMFS datastore is in progress (in the Recent Tasks section of the screen).


Once completed, the FC VMFS appears in the list of Storage:


TASK 4: CREATE VM WITH VMDK IN A VMFS DATASTORE PROVISIONED BY LUN ACCESSED THROUGH FCP

STEP ACTION

1. Select the Summary tab and click New Virtual Machine in the Commands pane.

The New Virtual Machine Wizard appears.


Select Typical and click Next.

2. Name the Virtual Machine Win2003 FC VMFS.

Click Next.

Select the FC VMFS.


Click Next.

3. Select Microsoft Windows as the Guest Operating System and select the version Microsoft Windows Server 2003, Enterprise Edition.


Click Next.

4. Select 1 Virtual Processor and click Next.

Leave 256 as the Virtual Memory size for the machine and click Next.

Accept the defaults for Choose Networks and click Next.


If you had multiple networks, you would use this screen to select a different network. In this example, the defaults are accepted.

5. On the Define Virtual Disk Capacity screen, set the Disk Size to 0.5 GB. Click Next.


6. Review the defaults on the Summary screen and click Finish.


7. After the Create Virtual Machine task is complete, observe the new Win2003 FC VMFS virtual machine created on your ESX Server.


TASK 5: CREATE VM WITH RDM STORAGE PROVISIONED BY RAW LUN ACCESSED THROUGH FCP

STEP ACTION

1. Select the Summary tab and click New Virtual Machine in the Commands pane.

The New Virtual Machine Wizard appears.


Select Custom and click Next. You need to select “Custom” here to be able to provision the new VM using raw device mapping (RDM) instead of using a typical VMFS datastore.

2. Name the Virtual Machine Win2003 FC RDM.

Click Next.

3. Select a location where the vmx file and the pointer to the RDM will be located. Observe that the dialog box asks to select a “datastore in which to store the files for the virtual machine.” When using RDM storage, only the vmx VM configuration file and the pointer to the RDM will be stored in the datastore you select here.


Select FC VMFS.

Click Next.

4. Select Microsoft Windows as the Guest Operating System and select the version Microsoft Windows Server 2003, Standard Edition.

Click Next.

5. Select 1 as the Number of Virtual Processors. Click Next.

6. Leave 256 MB as the memory for the virtual machine and click Next.

7. Accept the defaults for Choose Networks and click Next.


If you had multiple networks, you would use this screen to select a different network. In this example, the defaults are accepted.

8. Leave LSI Logic as the default adapter and click Next.

9. Select Raw Device Mappings and click Next.


10. Select LUN 1 from the list and click Next.


Question: Why does only one LUN (LUN 1, 50 MB, which is vmhba0:0:1) show up in this list when you know that you created two LUNs accessed through FCP (LUN 0 = 1.5 GB and LUN 1 = 50 MB)?

Hint: Think of the fact that RDM stands for Raw Device Mapping.

11. Select Store with Virtual Machine and click Next.

12. Select Physical compatibility mode. Click Next.

Physical mode is used for NetApp SnapManager products. Virtual mode is used to take VMFS Snapshot copies.

13. Leave all options as default on the Specify Advanced Options screen. Click Next.


14. Review the parameters and click Finish.

END OF EXERCISE


EXERCISE 22: CONFIGURATION OF VMWARE VIRTUAL INFRASTRUCTURE

OVERVIEW:

In this exercise, you will establish FC connections between the virtual machine (VM) and the storage. In addition, you will learn to create VMs.

OBJECTIVES:

By the end of this exercise, you should be able to:

• Verify virtual switch information

• Add VMkernel devices to your virtual infrastructure

• Open a firewall to allow iSCSI traffic

TIME ESTIMATE:

50 minutes

START OF EXERCISE

TASK 1: VERIFY VIRTUAL SWITCHES

STEP ACTION

1. Open a Remote Desktop Connection to start up the Virtual Infrastructure Client on the remote VMware client host. Log in as root to the remote VMware ESX Server using the host name or IP address supplied by your instructor.


2. Select the Configuration tab. Select Networking from the Hardware list on your screen.

You should see Virtual Switch vSwitch0 as shown here. Observe that vSwitch0 is currently used for both the Service Console and for the VM Network.


Typically, in production environments vSwitch0 is used for the Service Console, and a separate vSwitch1 is used for the Virtual Machine Network.

3. You need to add vSwitch1 for the Virtual Machine Network. You should still be in the Configuration tab. In the upper-right corner is an option to add networking. Select Add Networking.

The Add Network Wizard appears.

Select Virtual Machine and click Next.

4. Select Create a virtual switch and click Next


Observe that the Create a virtual switch option automatically selects the second NIC (vmnic1) installed on the ESX Server host.

5. Click Next in the Connection Settings screen.

6. Click Finish in the Summary screen.


Observe that we have two virtual machine networks defined now: one on vSwitch0 and one on vSwitch1.
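For reference, the same virtual switch could also be created from the ESX Server service console, as sketched below (assuming vmnic1 is the unused NIC, as shown above):

esxcfg-vswitch -a vSwitch1                              # create vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1                       # link physical NIC vmnic1 to it
esxcfg-vswitch -A "Virtual Machine Network" vSwitch1    # add the virtual machine port group
esxcfg-vswitch -l                                       # list virtual switches to confirm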

Observe that the two virtual machines, named “Win2003 FC VMFS” and “Win2003 FC RDM” respectively, which you created earlier, are using the VM Network on vSwitch0. You need to reassign the network of those two VMs onto the “Virtual Machine Network” on vSwitch1.


7. Click the Win2003 FC VMFS VM entry in the ESX inventory tree.

Next, click the Edit Settings hyperlink in the Commands box.


Select the Network Adapter entry in the Hardware list and pick Virtual Machine Network from the Network Connection list as shown here. Click OK to exit the VM Properties dialog box.

The “Virtual Machine Network” is the VM network on vSwitch1. You basically change the VMs to use the VM network on vSwitch1 instead of the VM network on vSwitch0.

Repeat the actions of Step 7 for the “Win2003 FC RDM” virtual machine.


8. Click the san<pod#>esx server entry in the ESX inventory tree again. Observe that the two VMs are now using the VM network on vSwitch1 instead of the VM network on vSwitch0.

Now you need to remove the VM Network entry from vSwitch0. Click the Properties hyperlink next to vSwitch0.


9. Select the VM Network entry in the list and click the Remove button.

10. Click Close to close the vSwitch0 Properties dialog box.


11. This is what you should now see in the Networking screen.

END OF TASK 1


TASK 2: ADD VMKERNEL DEVICE

STEP ACTION

1. Open a Remote Desktop Connection to start up the Virtual Infrastructure Client on the remote VMware client host. Log in as root to the remote VMware ESX Server using the host name or IP address supplied by your instructor.

2. Select the Configuration tab. Select Networking from the Hardware list on your screen.

You should see Virtual Switch vSwitch0 as shown here. Observe that vSwitch0 is currently used for both the Service Console and for the VM Network.

Typically, in production environments vSwitch0 is used for the Service Console, and a separate vSwitch1 is used for the Virtual Machine Network.

3. You need to add vSwitch1 for the Virtual Machine Network. You should still be in the Configuration tab. In the upper-right corner is an option to add networking. Select Add Networking.


The Add Network Wizard appears.

Select Virtual Machine and click Next.

4. Select Create a virtual switch and click Next.


Observe that the Create a virtual switch option automatically selects the second NIC (vmnic1) installed on the ESX Server host.

5. Click Next in the Connection Settings screen.


6. Click Finish in the Summary screen.

Observe that we have two virtual machine networks defined now: one on vSwitch0 and one on vSwitch1.

Observe that the two virtual machines, named “Win2003 FC VMFS” and “Win2003 FC RDM” respectively, which you created earlier, are using the VM Network on vSwitch0. You need to reassign the Network of those two VMs onto the “Virtual Machine Network” on vSwitch1.
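A VMkernel port, which ESX uses for software iSCSI and NFS traffic, can also be created from the service console. A minimal sketch follows; the port group name, IP address, and netmask are placeholders, so use the values supplied by your instructor:

esxcfg-vswitch -A "VMkernel" vSwitch1                              # add a VMkernel port group to vSwitch1
esxcfg-vmknic -a -i 192.168.100.50 -n 255.255.255.0 "VMkernel"     # assign an IP address to the VMkernel port
esxcfg-vmknic -l                                                   # list VMkernel NICs to confirm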


TASK 3: OPEN THE FIREWALL FOR THE SOFTWARE ISCSI CLIENT

STEP ACTION

1. You should still be in the Configuration tab. From the Software list on the left select Security Profile.

2. Select Properties from the upper right-hand corner of the screen. The Firewall Properties window appears.

3. Check the box for the Software iSCSI Client entry. Click OK. The Software iSCSI Client appears under Outgoing Connections.

You will use the iSCSI client in the next lab.

4. Verify that the Software iSCSI Client is listed under Outgoing Connections.
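The same change can be made or verified from the service console, as sketched below (swISCSIClient is the ESX 3.x firewall service name for the software iSCSI client):

esxcfg-firewall -q swISCSIClient    # query the current state of the service
esxcfg-firewall -e swISCSIClient    # enable (open) the outgoing iSCSI port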

END OF EXERCISE

EXERCISE 23: VMWARE STORAGE OPTIONS USING ISCSI


OVERVIEW:

In this exercise, you will establish connections between the virtual machine (VM) and the storage. In addition, you will learn how to create VMs.

OBJECTIVES:

By the end of this exercise, you should be able to:

• Create igroups, volumes, and LUNs
• Discover LUNs using iSCSI
• Connect to NFS storage
• Create (format) a VMFS datastore accessed through iSCSI
• Create (format) a VMFS datastore accessed through NFS
• Create a VM provisioned by a VMFS datastore accessed through iSCSI
• Create a VM provisioned by RDM with a raw LUN accessed through iSCSI

TIME ESTIMATE:

90 minutes

START OF EXERCISE

TASK 1: CREATE IGROUPS, VOLUMES, AND LUNS FOR ISCSI

STEP ACTION

1. Open a Remote Desktop Connection to start up the Virtual Infrastructure Client on the remote VMware client host. Log in as root to the remote VMware ESX Server using the hostname or IP address supplied by your instructor.

In your Virtual Infrastructure Client, select the Configuration tab and click Storage (iSCSI, SAN, and NFS) from the Hardware menu.

Notice that you may have a VMFS datastore named storage1. This VMFS datastore is mounted onto /vmfs/volumes/46769262-a9ce61d4-2da7-00145e231ed9 on the ESX host used for this example. You can establish a Telnet session to your ESX Server and run ls /vmfs at the VMware ESX (Linux) prompt to see the available VMFS datastores.

By default, VMware ESX Server discovers SCSI and NAS-attached local disks and creates VMFS datastores on them.
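To see the same information from the service console, a quick sketch (run as root on the ESX Server host) is:

ls /vmfs/volumes    # each datastore appears as a friendly-name symbolic link to its UUID directory
vdf -h              # service console variant of df; also lists VMFS and NFS datastores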


2. Next, you enable iSCSI in order to create a LUN accessed by iSCSI.

Navigate to the Virtual Infrastructure Client. On the Configuration tab, select Storage Adapters.

Select the iSCSI Software Adapter and click Properties.


3. The iSCSI Initiator Properties window appears. Select Configure.


Check Enabled and click OK.

4. Select the Dynamic Discovery tab in the iSCSI Initiator Properties window.

Click Add.


Enter the target IP address specified by your instructor. Keep in mind that this is the IP address of the first iSCSI target on the storage controller, not the management IP address. Click OK.

Once you click OK, an iSCSI session is opened between the VMware host and the target storage controller. This can be verified on the storage controller using the iscsi session show Data ONTAP command.

Observe also the list of iSCSI discovery addresses, which are used by the VMware iSCSI software initiator to discover iSCSI targets dynamically:


Click Close to exit the iSCSI Initiator Properties.

5. Now you add an initiator group on the target storage controller.

Navigate to FilerView. Expand LUNs and Initiator Groups. Select Add.

Name the initiator group esx_iscsi_ig.

Set the Type to iSCSI and the operating system to VMware.


Locate the iqn number by navigating to the Virtual Infrastructure Client and selecting Storage Adapters in the Hardware section of the Configuration tab. Then, click the iSCSI Software Adapter and look at the Details window as shown below.

Write the iqn in the space provided below.

iqn = _________________________________


Type (or even better, copy and paste) the iqn into the Initiators section of FilerView.

Click Add.

You will get a message indicating that the initiator group was successfully created.
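If you prefer the Data ONTAP command line, an equivalent sketch is shown below (the iqn is a placeholder for the initiator name you recorded above):

igroup create -i -t vmware esx_iscsi_ig iqn.1998-01.com.vmware:san206esx
igroup show esx_iscsi_ig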

6. Next, you add a volume from FilerView or the command line. Instructions are provided for use with FilerView.

Select Volumes and Add. The Volume Wizard appears.

Select Next.

Select Flexible and click Next.

Name the volume esx_iscsi_vol1

Keep Language set to POSIX and select Next.

The containing aggregate should be aggr1. The volume should be 50 GB.


Set Space Guarantee to none.

Select Next.

Review the summary and click Commit.

7. Add another volume using FilerView or the command line. Instructions are provided for use with FilerView.

Select Volumes and Add. The Volume Wizard appears.

Select Next.

Select Flexible and click Next.

Name the volume esx_iscsi_vol2 and select Next.

Keep Language set to POSIX.

The containing aggregate should be aggr1. The volume should be 9 GB.

Set Space Guarantee to none.

Select Next.

Review the summary and click Commit.
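From the Data ONTAP command line, the two volumes could be created as follows (a sketch using the values above):

vol create esx_iscsi_vol1 -s none aggr1 50g
vol create esx_iscsi_vol2 -s none aggr1 9g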

8. Now create a LUN in FilerView or with the command line. Instructions are provided for use with FilerView.

Select LUNs and Add.

The path to the LUN should be /vol/esx_iscsi_vol1/LUN.

Set the LUN Protocol Type to VMware.

Set the size of the LUN to 20 GB.

Uncheck Space reservation.

Click Add.

9. Add another LUN using FilerView or the command line. Instructions are provided for use with FilerView.

Select LUNs and Add.

The path to the LUN should be /vol/esx_iscsi_vol2/LUN.

Set the LUN Protocol Type to Windows.

Set the size of the LUN to 5 GB.

Uncheck Space reservation.

Click Add.
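The equivalent Data ONTAP commands are sketched below; the -o noreserve option matches turning space reservation off above:

lun create -s 20g -t vmware -o noreserve /vol/esx_iscsi_vol1/LUN
lun create -s 5g -t windows -o noreserve /vol/esx_iscsi_vol2/LUN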


10. Map the LUNs to the initiator group you previously created.

Select LUNs and Manage.

Click /vol/esx_iscsi_vol1/LUN and /vol/esx_iscsi_vol2/LUN respectively and select Map LUN.

Select Add Groups to Map.

Select the esx_iscsi_ig initiator group and select Add.

Give the LUNs IDs of 0 and 1 respectively.

Click Apply. A message appears indicating that the mapping was successful.

11. Select Manage from the LUNs menu. Notice the /vol/esx_iscsi_vol1/LUN and /vol/esx_iscsi_vol2/LUN LUNs that you just created and mapped to the iSCSI initiator group named esx_iscsi_ig.
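The command-line equivalent of the mapping, as a sketch using the LUN IDs above, is:

lun map /vol/esx_iscsi_vol1/LUN esx_iscsi_ig 0
lun map /vol/esx_iscsi_vol2/LUN esx_iscsi_ig 1
lun show -m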

END OF TASK 1


TASK 2: DISCOVER LUNS USING ISCSI, CONNECT TO NFS STORAGE

STEP ACTION

1. Return to the Virtual Infrastructure Client. You should be on the Configuration tab. Select Storage Adapters from the Hardware menu.

Rescan the Storage Adapters by right-clicking on the iSCSI Software Initiator and selecting Rescan.

De-select Scan for New VMFS Volumes. You will only scan for new storage devices here.

Click OK.


Notice that both of your LUNs are now visible in the Details section of the window.

Note the path for each of your LUNs and write them below.

LUN 0 = ________________________ LUN 1= ________________________

In the example above, the paths are vmhba40:0:0 and vmhba40:0:1.

“vmhba40” is the name assigned by VMware to the HBA. In this case it is a virtual HBA: the ESX iSCSI software initiator.

The second number in “vmhba40:0:0” is the SCSI target on the vmhba40 HBA.

The third number is the LUN number. This is the LUN ID that you used when you mapped the LUN to the esx_iscsi_ig initiator group.


TASK 3: CREATE A VMFS DATASTORE ACCESSED THROUGH ISCSI

STEP ACTION

1. Next, format the 20-GB LUN as VMFS.

Select Storage (SCSI, SAN, and NFS) from the Hardware menu.

Select Add Storage from the upper right-hand corner of the screen.


2. The Add Storage window appears.

Click Next.


3. Select LUN 0, the 20-GB LUN. In this example, vmhba40:0:0 is selected.

Click Next.

4. Observe that the current disk layout is blank. Click Next.

5. Name the datastore iSCSI VMFS. Click Next.


6. Accept the defaults on the Disk/LUN Formatting screen.

Click Next.


7. The Summary screen appears. Review the proposed disk layout and click Finish.

Notice that the Create VMFS datastore is in progress (in the Recent Tasks section of the screen).


Once completed, the iSCSI VMFS appears in the list of Storage:

TASK 4: CONNECT TO NFS STORAGE

STEP ACTION

1. Create a flexible volume using either FilerView or the command line with the following characteristics:

Volume Name: esx_nfs_vol1
Size: 2 GB
Containing Aggregate: aggr1
Space Guarantee: none

2. Establish a Telnet session to the storage system. Type vol status. Notice that the esx_nfs_vol1 is present in the list of volumes.

3. Type exportfs. Notice that esx_nfs_vol1 has the following values:

/vol/esx_nfs_vol1 -sec=sys,rw,nosuid


If you do not see any output when you type exportfs, run exportfs -a and then run exportfs again. Also, make sure NFS is licensed on the storage controller.

4. Add anon=0 to this list of values by typing the following:

exportfs -io anon=0 /vol/esx_nfs_vol1

5. Type exportfs again. Notice that anon=0 is present in the list. This allows the volume to be mounted by root.
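Note that exportfs -io changes the export in memory only. If you want the anon=0 option to survive a reboot of the storage controller, a sketch (assuming the Data ONTAP 7.x exportfs -p syntax) is to make the rule persistent, which also updates /etc/exports:

exportfs -p sec=sys,rw,nosuid,anon=0 /vol/esx_nfs_vol1
exportfs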

6. Next, add a VMFS volume on the NFS NetApp volume.

Return to the Virtual Infrastructure Client and select the Configuration tab.

Select Storage (SCSI, SAN, and NFS) from the Hardware menu.

Select Add Storage.

Select Network File System and click Next.

7. Type the IP address of the storage system (supplied by your instructor). The folder name should be /vol/esx_nfs_vol1.


Name the datastore NFS.


8. A summary screen appears with the parameters you selected. Review the parameters and click Finish.


Notice that the NFS datastore now appears in your list of storage.


TASK 5: CREATE A NEW RDM VIRTUAL MACHINE

STEP ACTION

1. Select the Summary tab and click New Virtual Machine in the Commands pane.


The New Virtual Machine Wizard appears.

Select Custom and click Next. You need to select Custom here to be able to provision the new VM using raw device mapping (RDM) instead of using a typical VMFS datastore.

2. Name the Virtual Machine Win2003 iSCSI RDM.

Click Next.

3. Select a location where the vmx file and the pointer to the RDM will be located.

Observe that the dialog box asks to select a “datastore in which to store the files for the virtual machine.” When using RDM storage, only the vmx VM configuration file and the pointer to the RDM will be stored in the datastore you select here.

Observe also that there are several datastores where the vmx file and the pointer to the RDM could be stored: “storage1” is a datastore that corresponds to a local SCSI disk; “NFS” is the NFS datastore that you previously created by mounting a NetApp NFS volume on ESX; “iSCSI VMFS” is the VMFS datastore that you previously created, which is provisioned by a NetApp LUN accessed through iSCSI.

You could store the vmx file and the pointer to the RDM in any of these VMFS datastores. It is a good idea to keep vmx files and pointers to RDMs in a VMFS datastore reserved for this purpose and clearly identified as such.

Select iSCSI VMFS.

Click Next.

4. Select Microsoft Windows as the Guest Operating System and select Microsoft Windows Server 2003, Enterprise Edition.

Click Next.

5. Select 1 as the Number of Virtual Processors. Click Next.

6. Type 512 MB as the memory for the virtual machine and click Next.


7. Accept the defaults for Choose Networks and click Next.

If you had multiple networks, you would use this screen to select a different network. In this example, the defaults are accepted.

8. Leave LSI Logic as the default adapter and click Next.


9. Select Raw Device Mappings and click Next.


10. Select LUN 1 from the list and click Next.

Question: Why do you only have one LUN (LUN 1) showing up in this list when in fact you know that you created two LUNs accessed through iSCSI?


The Configuration/Storage Adapters window shown below lists the two iSCSI LUNs you previously created: vmhba40:0:0 and vmhba40:0:1. The New Virtual Machine Wizard shows only LUN1 (vmhba40:0:1) as an available choice. Why?

Answer: You previously created a VMFS file system on LUN0 (vmhba40:0:0). You named that VMFS file system iSCSI VMFS. Observe that iSCSI VMFS is listed in the Configuration/Storage window. Because LUN0 (vmhba40:0:0) already contains a VMFS file system, it cannot be used as raw LUN storage (RDM) for the virtual machine you are creating now. Thus, the only real choice for RDM storage for this VM is the second LUN (LUN1, vmhba40:0:1), which is still raw. Keep in mind, though, that ESX will use the iSCSI VMFS file system to store the vmx file and the pointer to the RDM. You selected the iSCSI VMFS as the vmx file and RDM pointer storage in a previous step.

11. Select Store with Virtual Machine and click Next.

12. Select Virtual compatibility mode. Click Next.

Physical mode is used for NetApp SnapManager products. Virtual mode is used to take VMFS snapshot copies.


13. Leave all options as default on the Specify Advanced Options screen.

Click Next.

14. Review the parameters and click Finish.


15. Right-click the Win2003 iSCSI RDM virtual machine and select Open Console.


The Win2003 iSCSI RDM Virtual Machine Console is displayed.

Select the green start arrow within the console window. The machine will start.


If the machine does not start due to licensing problems, ensure that your ESX Server has a valid license file installed as shown below:

Also make sure that the license is enabled under ESX Server License Type. Observe that in this case, although a license file is installed, the license is not enabled yet. To enable it, click “Edit,” which is located next to ESX Server License Type.


The ESX Server License Type dialog box is displayed.

Select “ESX Server Standard” and click OK. Now your license should show up enabled as shown below in the “ESX Server License Type” section:


TASK 6: CREATE A NEW VMFS VIRTUAL MACHINE

STEP ACTION

1. Click the san<pod#>esx server in the ESX Inventory tree. Select the Summary tab and click New Virtual Machine in the Commands pane.


The New Virtual Machine Wizard appears.

Select Typical and click Next.


2. Name the Virtual Machine Win2003 iSCSI VMFS.

Click Next.

Select the iSCSI VMFS.

Click Next.


3. Select Microsoft Windows as the Guest Operating System and select Microsoft Windows Server 2003, Enterprise Edition.

Click Next.

4. Select 1 Virtual Processor and click Next.

Type 512 as the virtual memory size for the machine and click Next.


Accept the defaults for Choose Networks and click Next.

If you had multiple networks, you would use this screen to select a different network. In this example, the defaults are accepted.


5. On the Define Virtual Disk Capacity screen, set the Disk Size to 4 GB. Click Next.


6. Review the defaults on the Summary screen and select Finish.


7. After the Create Virtual Machine task is complete, observe the new Win2003 iSCSI VMFS virtual machine created on your ESX Server.

There are four virtual machines created on the ESX Server at this point. Two VMs are accessing their storage through iSCSI and two other VMs are accessing their storage through FCP:

FCP

1) Win2003 FC RDM

a. Using LUN1 (vmhba1:0:1) as a raw device mapping (RDM)

b. Using the FC VMFS datastore (on LUN0, vmhba1:0:0) to store the vmx file and the pointer to the RDM

2) Win2003 FC VMFS

a. Using FC VMFS datastore (on LUN0, vmhba1:0:0) as VMFS storage

The FC VMFS datastore (on LUN0, vmhba1:0:0) is used both as VMFS storage for the Win2003 FC VMFS virtual machine and as vmx file and RDM pointer repository for the Win2003 FC RDM virtual machine.


iSCSI

3) Win2003 iSCSI RDM

a. Using LUN1 (vmhba40:0:1) as a raw device mapping (RDM)

b. Using the iSCSI VMFS datastore (on LUN0, vmhba40:0:0) to store the vmx file and the pointer to the RDM

4) Win2003 iSCSI VMFS

a. Using iSCSI VMFS datastore (on LUN0, vmhba40:0:0) as VMFS storage

The iSCSI VMFS datastore (on LUN0, vmhba40:0:0) is used both as VMFS storage for the Win2003 iSCSI VMFS virtual machine and as vmx file and RDM pointer repository for the Win2003 iSCSI RDM virtual machine.

8. Use PuTTY to connect to your ESX Server and cd to /vmfs/volumes/iSCSI VMFS. Use the ls command to view the contents of /vmfs/volumes/iSCSI VMFS. Observe that there is a directory for each VM in the iSCSI VMFS datastore.
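For reference only, the commands for this step might look like the following on the ESX service console (a minimal sketch assuming the datastore is named exactly "iSCSI VMFS"; the space in the name must be escaped):

# change into the iSCSI VMFS datastore and list its contents
cd /vmfs/volumes/iSCSI\ VMFS
ls -l
# expect one subdirectory per virtual machine stored in this datastore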

END OF EXERCISE

EXERCISE 24: STORAGE MANAGEMENT

OVERVIEW:

In this exercise, you will create NetApp and VMware snapshots, and you will create NetApp FlexClone volumes to provision RDM and VMFS datastores.

OBJECTIVES:

By the end of this exercise, you should be able to:
• Create a VMware snapshot
• Create a NetApp Snapshot
• Use a quiesced snapshot to create an RDM FlexClone volume
• Create a VMFS FlexClone volume

TIME ESTIMATE:

60 minutes

START OF EXERCISE

TASK 1: CREATE VMWARE SNAPSHOTS

STEP ACTION

1. Establish a Telnet session to the server and cd to /vmfs/volumes/iSCSI VMFS/Win2003 iSCSI VMFS.

You should see a number of files in this directory similar to those shown here.

[root@san206esx Win2003 iSCSI VMFS]# ls -l
total 4194496
-rw------- 1 root root 4294967296 Sep 7 05:41 Win2003 iSCSI VMFS-flat.vmdk
-rw------- 1 root root        322 Sep 7 05:41 Win2003 iSCSI VMFS.vmdk
-rw------- 1 root root          0 Sep 7 05:41 Win2003 iSCSI VMFS.vmsd
-rwxr-xr-x 1 root root       1015 Sep 7 05:41 Win2003 iSCSI VMFS.vmx
-rw------- 1 root root        262 Sep 7 05:41 Win2003 iSCSI VMFS.vmxf

If the Win2003 iSCSI VMFS virtual machine (VM) is powered on, you will also see a swap file (memory) and an NVRAM file (BIOS). The swap file is removed and re-created each time the VM is powered off or on. The NVRAM file is created the first time the VM is powered on and remains on disk afterward. Do not confuse the VMware NVRAM file (VM BIOS configuration information) with the NetApp NVRAM card.

-rw------- 1 root root 268435456 Sep 7 22:41 Win2003 iSCSI VMFS-6b62d795.vswp
-rw------- 1 root root      8664 Sep 7 22:41 Win2003 iSCSI VMFS.nvram

Leave this Telnet session open.

2. Now, take a VMware snapshot. From the Virtual Infrastructure Client, right-click Win2003 iSCSI VMFS and select Snapshot and Take Snapshot.

3. When the Take Virtual Machine Snapshot window appears, name the snapshot Snapshot1. Note that the VMware snapshot does not immediately occur. The virtual machine is placed in a consistent state and then changes are written to a log file. A NetApp Snapshot occurs much more quickly.

4. Return to the Telnet session and type ls. You should see several new files in the output, including:
Win2003 iSCSI VMFS-Snapshot1.vmsn
Win2003 iSCSI VMFS-000001.vmdk
Win2003 iSCSI VMFS-000001-delta.vmdk

Viewing this directory will let you know if active snapshots are present.

5. Take a second snapshot by repeating Step 2 and Step 3. Name this snapshot Snapshot2.

6. Return to the Telnet session and type ls. Notice that new files were created for Snapshot2:
Win2003 iSCSI VMFS-Snapshot2.vmsn
Win2003 iSCSI VMFS-000002.vmdk
Win2003 iSCSI VMFS-000002-delta.vmdk

7. You can also view active VMware snapshots using the Virtual Machine Snapshot Manager. Right-click the Virtual Machine name and select Snapshot and Snapshot Manager…

TASK 2: CREATE NETAPP SNAPSHOT COPIES

STEP ACTION

1. Now take a NetApp Snapshot of the Win2003 iSCSI RDM. Establish a Telnet session to your server. Type the following commands:

vmware-cmd -l

This lists all of the vmx files. Find your Win2003 iSCSI RDM vmx file.

vmware-cmd <full_path_to_your_RDM_vmx_file> createsnapshot backup quiesce

Be sure to escape the blanks in <full_path_to_your_RDM_vmx_file> using "\ " (a backslash followed by a space) instead of just " ".

This places the RDM in a quiesced state. The VM is now in hot backup mode.
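For reference only, a minimal sketch of this step, assuming the vmx file lives under /vmfs/volumes/iSCSI VMFS/Win2003 iSCSI RDM/ (your path may differ; check the output of vmware-cmd -l):

# list all registered virtual machines and their vmx paths
vmware-cmd -l
# quiesce the VM and create a VMware snapshot named "backup" (blanks in the path are escaped)
vmware-cmd /vmfs/volumes/iSCSI\ VMFS/Win2003\ iSCSI\ RDM/Win2003\ iSCSI\ RDM.vmx createsnapshot backup quiesce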

2. Open FilerView for your storage system and select Volumes and Manage.

Notice the esx_iscsi_vol2 volume, which is the volume hosting the raw LUN1 (vmhba40:0:1) that you used to provision the datastore of the Win2003 iSCSI RDM virtual machine.

3. Select Snapshots and Add. Select the esx_iscsi_vol2 volume and name the snapshot Quiesced Snapshot. Click Add.

4. Select Manage under Snapshots. Notice that the quiesced snapshot is now present. You will use this snapshot later in this exercise.

5. Now you need to take the Win2003 iSCSI RDM VM out of the quiesced state (out of hot backup mode). Establish a Telnet session to your server. Type the following command:

vmware-cmd <full_path_to_your_RDM_vmx_file> removesnapshots

NOTE: Be sure to use the same <full_path_to_your_RDM_vmx_file> as above.
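For reference only, using the same assumed path as in Step 1 of this task, the command might look like this:

# remove the VMware snapshot and take the VM out of hot backup mode
vmware-cmd /vmfs/volumes/iSCSI\ VMFS/Win2003\ iSCSI\ RDM/Win2003\ iSCSI\ RDM.vmx removesnapshots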

TASK 3: CREATE NETAPP FLEXCLONE

PART 1 – NETAPP FLEXCLONE OF A VIRTUAL MACHINE PROVISIONED BY AN RDM DATASTORE

STEP ACTION

1. Create a FlexClone using FilerView or the command-line interface. Instructions are provided here for FilerView.

Select Volumes, then FlexClones and Create.

NOTE: If the FlexClone link does not appear in the Volumes section, check that flex_clone is licensed on the storage controller.

The FlexClone Wizard appears:

Name the clone RDM_FlexClone. The Parent Volume should be esx_iscsi_vol2.

Leave the Space Guarantee set to volume. Click Next.

2. Select the Quiesced Snapshot as the Parent Volume Snapshot. Click Next.

3. Review the summary and select Commit.

4. When the message appears that the clone was created successfully, select Close.
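NOTE: For reference only, a roughly equivalent command-line sequence on the storage controller might look like the sketch below. It assumes the names used in this lab and that the snapshot name can be quoted; the FilerView procedure above remains the procedure for this exercise.

# verify that FlexClone is licensed
license
# create the clone from the quiesced snapshot of the parent volume
vol clone create RDM_FlexClone -s volume -b esx_iscsi_vol2 "Quiesced Snapshot"
# confirm the new clone volume exists
vol status RDM_FlexClone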

5. Select Manage from the FlexClones menu. Notice that the RDM_FlexClone volume was created.

6. Select Manage from the LUNs menu. Notice that the /vol/RDM_FlexClone/LUN is created offline. You need to bring it online.

Select the /vol/RDM_FlexClone/LUN from the list and click Online. Confirm by clicking OK. The LUN is brought online.

7. Next, map the LUN to the initiator.

Select Map LUN. Click Add Groups to Map and select the esx_iscsi_ig initiator group. Click Add.

Give the LUN a LUN ID of 5 and click Apply.

A message appears indicating that the mapping was successful.

Select Manage from the LUNs menu to verify that the /vol/RDM_FlexClone/LUN is mapped and online:
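For reference only, Steps 6 and 7 can also be performed from the storage controller command line; a minimal sketch assuming the lab's names:

# bring the cloned LUN online
lun online /vol/RDM_FlexClone/LUN
# map it to the ESX software iSCSI initiator group with LUN ID 5
lun map /vol/RDM_FlexClone/LUN esx_iscsi_ig 5
# verify the mapping
lun show -m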

8. Return to the Virtual Infrastructure Client, click the SAN<pod#>esx server tree branch, and select Storage Adapters from the Hardware menu (Configuration tab).

Right-click the iSCSI Software Adapter (vmhba40) and select Rescan… from the pop-up menu. Do not scan for new VMFS Volumes. Click OK.

Observe that the new LUN 5 (vmhba40:0:5) is available. This is the LUN 5 hosted by the /vol/RDM_FlexClone clone volume.
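For reference only, the same rescan can be triggered from the ESX service console (assuming the software iSCSI adapter is vmhba40, as in this lab):

# rescan the software iSCSI adapter for new LUNs
esxcfg-rescan vmhba40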

9. You should still have the SAN<pod#>esx branch selected in the Inventory browsing tree. Click the Summary tab, and then click the New Virtual Machine link in the Commands section.

Select Custom and click Next.

Name the Virtual Machine FlexClone of Win2003RDM. Select Next.

Select the iSCSI VMFS as the location for storing the configuration file (.vmx) and the pointer to the RDM. Click Next.

Select Microsoft Windows as the Guest Operating System and Microsoft Windows Server 2003, Enterprise Edition as the version. Click Next.

Select 1 as the Number of Virtual Processors and click Next.

Select 512 MB as the Virtual Machine’s memory size. Click Next.

Leave the defaults on the Choose Networks screen and select Next.

Leave the defaults on the Select I/O Adapter Types screen and select Next.

Select Raw Device Mappings on the Select a Disk screen and click Next.

Select LUN 5 and click Next.

Select Store with Virtual Machine and click Next.

Select Virtual as the Compatibility Mode and click Next.

Leave the defaults on the Specify Advanced Options screen and select Next.

Review the parameters and click Finish.

You can watch the progress of the Virtual Machine creation in the Recent Tasks portion of the screen.

10. Observe the new Virtual Machine named “FlexClone of Win2003RDM” appear in the “Inventory” browsing tree. This virtual machine is provisioned by iSCSI LUN 5, which is a clone of iSCSI LUN 1.

Both of these virtual machines have VMware datastores provisioned by raw LUNs (LUN 1 and LUN 5, respectively). Both of these virtual machines store their configuration file (.vmx) and the pointer to their RDM datastore in the same VMFS file system, named iSCSI VMFS. Recall that the iSCSI VMFS is also used as a datastore for the Win2003 iSCSI VMFS virtual machine.

Optional task: You can use PuTTY to log on to your ESX Server and cd to /vmfs/volumes/iSCSI\ VMFS. Next, use the ls command to view the virtual machines that are using the iSCSI VMFS file system. You should see FlexClone of Win2003RDM appear in the list.

Question: Why isn’t the VMware snapshot, named “backup,” visible in the FlexClone of RDM file system?

Answer: Because the VMware snapshot is created and handled within the VMFS file system that stores the raw device mapping file (the RDM pointer file), that is, the iSCSI VMFS datastore on LUN 0. The snapshot data is not written to the raw LUN 1 itself, so it does not appear on LUN 5, which is a clone of LUN 1.

PART 2 – NETAPP FLEXCLONE OF A VIRTUAL MACHINE PROVISIONED BY A VMFS DATASTORE

STEP ACTION

1. Finally, you will clone an entire VMFS LUN.

Click the SAN<pod#>esx branch in the “Inventory” browsing tree. Click the Configuration tab, and then click the Storage (SCSI, SAN, and NFS) link in the Hardware section.

You will create a FlexClone of the volume that provisions the iSCSI VMFS LUN, that is, the /vol/esx_iscsi_vol1 volume on the NetApp storage system.

2. Open FilerView on your storage system.

You will use the /vol/esx_iscsi_vol1 volume as source for the FlexClone. This is the volume that hosts the LUN 0 (vmhba40:0:0), which provisions the iSCSI VMFS VMware datastore as shown in the screenshot below.

In FilerView, select Volumes and FlexClones.

Click Create. The FlexClone Wizard appears. Select Next.

Name the clone VMFS_FlexClone. Use the esx_iscsi_vol1 Parent Volume.

Leave the Space Reservation set to Volume.

NOTE: Make sure you use the esx_iscsi_vol1 volume, not the esx_nfs_vol1 that is selected by default.

Click Next.

Select Create new for the Parent Volume Snapshot. Click Next.

Review the summary and select Commit.

When a message appears that the FlexClone volume was successfully created, click Close.

Click Manage in the FlexClones section to view the new FlexClone volume. You can also see the FlexClone volume in Volumes/Manage.

3. Select Manage from the LUNs menu. Notice that the FlexClone LUN (/vol/VMFS_FlexClone/LUN) is present.

Now bring the LUN online and map it to an igroup.

Select the /vol/VMFS_FlexClone/LUN and click Online. Click OK to confirm.

Select Map LUN and Add Groups to Map. Select the esx_iscsi_ig initiator group and click Add.

Provide a LUN ID of 6 and click Apply. You should get a message indicating that the mapping was successful.

Select Manage from the LUNs menu.

Verify that the /vol/VMFS_FlexClone/LUN is online and mapped to the esx_iscsi_ig initiator group with a LUN ID of 6 as shown above.
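For reference only, the command-line equivalent of this step parallels Part 1; a minimal sketch assuming the lab's names (omitting the snapshot name makes Data ONTAP create a new base snapshot for the clone):

# clone the volume that hosts the iSCSI VMFS LUN
vol clone create VMFS_FlexClone -s volume -b esx_iscsi_vol1
# bring the cloned LUN online and map it with LUN ID 6
lun online /vol/VMFS_FlexClone/LUN
lun map /vol/VMFS_FlexClone/LUN esx_iscsi_ig 6
# verify the mapping
lun show -m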

4. Return to the Virtual Infrastructure Client.

You should still be in the Configuration tab of your SAN<pod#>esx server.

Click the SAN<pod#>esx branch in the Inventory browsing tree. Click the Configuration tab, and then click the Storage (SCSI, SAN, and NFS) link in the Hardware section.

Click Advanced Settings from the Software section.

Select LVM. Change the value for LVM.EnableResignature to 1. Click OK.

This setting makes it possible to present a cloned LUN back to the same ESX Server. ESX restamps the cloned LUN's VMFS with a new signature.
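For reference only, the same advanced setting can be changed from the ESX service console (a sketch; the VI Client procedure above is the one used in this lab):

# check the current resignature setting, then enable it
esxcfg-advcfg -g /LVM/EnableResignature
esxcfg-advcfg -s 1 /LVM/EnableResignature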

5. Select Storage Adapters. Rescan the iSCSI Software Adapter by right-clicking on the vmhba40 iSCSI adapter and selecting Rescan…

If you click the Rescan… link instead of right-clicking the adapter, click OK when you are prompted to scan for both new storage devices and new VMFS volumes.

NOTE: If the process times out, rescan only once for New Storage Devices and only once for New VMFS Volumes.

Notice that the LUN 6 is now present.

This LUN is shown as LUN 6 (vmhba40:0:6) because you mapped the cloned LUN with LUN ID 6 to the initiator group esx_iscsi_ig (which contains the IQN of the iSCSI software initiator on your ESX Server). LUN 6 was cloned as part of the FlexClone volume that you created above from the esx_iscsi_vol1 volume, which contained LUN 0. In short, LUN 6 is a clone of LUN 0.

6. Select Storage (SCSI, SAN, and NFS) from the Hardware menu.

Notice that snap-00000002-iSCSI VMFS now appears in the Storage list.

Question: Why does snap-00000002-iSCSI VMFS appear in the Storage (SCSI, SAN and NFS) list? What is this exactly?

Answer: When you cloned the storage, which is provisioning the iSCSI VMFS datastore, you created a new volume containing an EXACT replica of iSCSI LUN0 (which is provisioning the iSCSI VMFS datastore). Therefore, upon rescan of the iSCSI bus, you discovered a “new” LUN (LUN 6), which already contains a VMFS datastore because it is an exact replica of LUN 0 (containing the original iSCSI VMFS). So, you end up with two “copies” of the original iSCSI VMFS: one on iSCSI LUN0 (=vmhba40:0:0 = iSCSI VMFS) and one on iSCSI LUN6 (=vmhba40:0:6 = snap-00000002-iSCSI VMFS).

Question: Why did you not see a similar snap-0000000X-iSCSI RDM entry appear in the Storage (SCSI, SAN and NFS) list for the FlexClone of Win2003RDM that you created earlier?

Answer: When you cloned the storage that provisions the iSCSI RDM datastore, you created a new volume containing an EXACT replica of iSCSI LUN 1 (which provisions the RDM datastore). Because LUN 1 is an RDM datastore, it does NOT contain a VMFS file system (it is a raw LUN). Upon rescan of the iSCSI bus, you discovered a "new" LUN (LUN 5). However, contrary to LUN 6, LUN 5 does NOT contain a VMFS file system. Thus, no VMFS entry appears in the Storage (SCSI, SAN and NFS) list for the FlexClone of Win2003RDM.

You may now right-click the new datastore and select Browse Datastore to create new VMs.

END OF EXERCISE

Appendix A

NETAPP UNIVERSITY

SAN Implementation Workshop Appendix A Course Number: STRSW-ED-ILT-SAN-IMPWKSHP

Catalog Number: STRSW-ED-ILT-SAN-IMPWKSHP-EG

ANSWERS: CONFIGURE ISCSI SERVICE ON THE SOLARIS HOST

MODULE 6: FC & IP SOLARIS EXERCISE: LAB 16 - CONFIGURE ISCSI SERVICE ON SOLARIS AND ON NETAPP

TASK 2: CONFIGURE THE ISCSI SERVICE ON THE NETAPP STORAGE SYSTEM

You will need to complete the following steps on your Solaris host, replacing <storage_ctlr> with the name of your storage controller.

STEP ACTION

5. Enter the following command to see the iSCSI Target Portal Groups (TPGs) currently available on the storage controller:

$ rsh <storage_ctlr> iscsi tpgroup show
TPGTag  Name         Member Interfaces
1000    e0a_default  e0a
1001    e0b_default  e0b
1002    e0c_default  e0c
1003    e0d_default  e0d

Why are there 4 different iSCSI Target Portal Groups on this storage controller?

By default, Data ONTAP assigns each Ethernet interface to its own Target Portal Group (TPG). You can create new TPGs and assign interfaces to new TPGs. As interfaces are assigned to new TPGs, they are removed from the default TPG.
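For reference only, a new TPG could be created and an interface assigned to it with commands like the following (tpg_lab is a hypothetical group name, not part of this exercise):

# create a new target portal group and move interface e0c into it
rsh <storage_ctlr> iscsi tpgroup create tpg_lab
rsh <storage_ctlr> iscsi tpgroup add tpg_lab e0c
# e0c is removed from its default TPG and now belongs to tpg_lab
rsh <storage_ctlr> iscsi tpgroup show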

TASK 3: CONFIGURE THE ISCSI SERVICE ON THE SOLARIS HOST

You will need to complete the following steps on the Solaris host.

STEP ACTION

10. Q1: Consider the output of the command run in the previous step. Why are some iSCSI targets shown as connected to a certain IP address whereas other targets are shown as not connected?

Some iSCSI targets are shown as connected to a certain IP address whereas other targets are shown as not connected, since some Ethernet interfaces on the storage controllers may be disconnected or down. Also, some iSCSI interfaces on the storage controllers may be disabled.

Q2: The Target Portal Group 1000 (TPGT: 1000) is not shown (discovered) at all on the Solaris host. Why?

Target Portal Group 1000 (TPGT: 1000) is not shown (discovered) at all on the Solaris host, since TPG 1000 contains an Ethernet interface (e0a) which is currently disabled for iSCSI.

Q3: The Target Portal Group 1003 (TPGT: 1003) is not shown (discovered) at all on the Solaris host. Why?

Target Portal Group 1003 (TPGT: 1003) is not shown (discovered) at all on the Solaris host, since TPG 1003 contains an Ethernet interface (e0d) which is currently down and disconnected from the network.
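For reference only, the interface states behind these answers can be checked on the storage controller with commands like the following (a sketch, assuming the default interface names e0a through e0d):

# show which interfaces are enabled for iSCSI
rsh <storage_ctlr> iscsi interface show
# show link/IP status of a specific interface
rsh <storage_ctlr> ifconfig e0d
# an interface disabled for iSCSI (such as e0a here) can be enabled with:
rsh <storage_ctlr> iscsi interface enable e0a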

Appendix B

NETAPP UNIVERSITY

SAN Implementation Workshop Appendix B Course Number: STRSW-ED-ILT-SAN-IMPWKSHP

Catalog Number: STRSW-ED-ILT-SAN-IMPWKSHP-EG

[Figure (rendered as unreadable diagram text in the extraction; the diagram appears twice): LUN provisioning from a NetApp storage system to a Solaris host. The NetApp storage system side shows aggregates aggr0 and aggr1, the WAFL root, the root volume /vol/vol0 (/etc, /home), the volume /vol/ProdVol with the qtree /vol/ProdVol/AppXQT (LUN0.lun, LUN1.lun) and /vol/ProdVol/LUN2.lun, and the volume /vol/TestVol (LUN3.lun, LUN4.lun). The LUNs are presented to the Solaris host over both FCP and iSCSI paths. The Solaris host side shows local disks (/dev/rdsk/c0t0d0, /dev/rdsk/c0t1d0), OS devices for the NetApp LUNs (such as /dev/rdsk/c1t0d3, /dev/rdsk/c1t1d3, /dev/rdsk/c1t2d3, and /dev/rdsk/c2t60A98000...d0), Veritas DMP devices (/dev/vx/rdmp/c1t0d3, /dev/vx/rdmp/c1t0d4) and Sun MPxIO devices, the Veritas disk group vxdg1 with VxVM volume vxvol1 (used raw or as a mounted file system), Sun SVM logical volumes /dev/rdsk/md/d0 and /dev/rdsk/md/d1 (used raw or as mounted file systems), and file systems mounted on the host at /mnt/testing, /mnt/AppX, and /mnt/AppY.]

[Fig. 1 and Fig. 2: Data ONTAP Default (one TPG for each iSCSI target), supported by Solaris. Both figures show the Solaris iSCSI software initiator on the Solaris host connecting to the NetApp storage system's Ethernet interfaces (e0a, e0b, e0c, e0d). The iSCSI target network portals e0b, e0c, and e0d belong to the iSCSI target portal groups TPGT 1001, TPGT 1002, and TPGT 1003, respectively. Fig. 2 additionally labels an iSCSI connection and an iSCSI session between the initiator and a target portal.]

[Figure (duplicated in the extracted layout): NetApp Storage System - LUN Clone. Shown are aggregates aggr0 and aggr1, the WAFL root, the root volume /vol/vol0 (/etc, /home), and the volume /vol/TestVol containing /vol/TestVol/LUN3.lun, /vol/TestVol/LUN4.lun, and the LUN clone /vol/TestVol/LUN3_clone.lun, which is based on the Snapshot copy /vol/TestVol/.snapshot/TestVolSnap (containing LUN3.lun and LUN4.lun). The LUN3 clone is online but is NOT mapped to an initiator group.]

[Figure (duplicated in the extracted layout): NetApp Storage System - Snapshot. Shown are aggregates aggr0 and aggr1, the WAFL root, the root volume /vol/vol0 (/etc, /home), and the volume /vol/TestVol containing /vol/TestVol/LUN3.lun and /vol/TestVol/LUN4.lun. A Snapshot copy of the volume, /vol/TestVol/.snapshot/TestVolSnap, preserves copies of LUN3.lun and LUN4.lun.]

[Figure: NetApp Storage System - VOL Clone. Shown are aggregates aggr0 and aggr1, the WAFL root, the root volume /vol/vol0 (/etc, /home), and the volume /vol/TestVol containing /vol/TestVol/LUN3.lun and /vol/TestVol/LUN4.lun. A FlexClone volume /vol/TestVolClone (containing /vol/TestVolClone/LUN3.lun and /vol/TestVolClone/LUN4.lun) is based on the automatically created base Snapshot copy /vol/TestVol/.snapshot/clone_TestVolClone.1. The LUNs in the volume clone are offline and NOT mapped to initiator groups.]
