Technical Marketing Solutions Guide
Nimble Storage SmartStack
Getting Started Guide for
Fibre Channel Connectivity
NIMBLE STORAGE SMARTSTACK TECHNICAL MARKETING GUIDE
Document Revision
Date Revision Description (author)
4/28/2015 1.0 Draft release (Steve Sexton)
6/04/2015 1.1 QA and TSE review (Steve Sexton)
7/09/2015 1.2 Additional QA and SE review (Steve Sexton)
THIS TECHNICAL REPORT IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN
TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS
IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.
Nimble Storage: All rights reserved. Reproduction of this material in any manner whatsoever without the
express written permission of Nimble Storage is strictly prohibited.
Table of Contents

OVERVIEW
  AUDIENCE
  ASSUMPTIONS
  LIMITATIONS AND OTHER CONSIDERATIONS
CONFIGURING UCSM SERVICE PROFILE SETUP
  VHBAS
  VNICS
  NETWORK CABLING
UCS FC TOPOLOGY CONFIGURATION
  UCS FC OPERATION MODE
  NIMBLE STORAGE FIRMWARE, CISCO UCS FIRMWARE, AND HOST DRIVER REQUIREMENTS
OPTION 1 - NIMBLE STORAGE CONTROLLERS CONNECTED DIRECTLY TO THE FABRIC INTERCONNECTS (LOCAL ZONING)
OPTION 2 - NIMBLE STORAGE CONTROLLERS ATTACHED TO A STANDARD FC SWITCH WITH FABRIC INTERCONNECTS IN FC END HOST MODE
OPTION 3 - NIMBLE STORAGE CONTROLLERS ATTACHED TO AN ACCESS LAYER SWITCH (E.G., NEXUS 5K) WITH INTERCONNECTS IN FC SWITCH MODE
SUMMARY
Table of Figures

Nimble Storage controllers connected directly to the Fabric Interconnects (local zoning)

Figure 1 – Equipment Tab FI Operation Mode
Figure 2 – FC direct attach with local zoning enabled
Figure 3 – FC mode type for FC local zoning
Figure 4 – Verify Unified port type (FC or Ethernet)
Figure 5 – Verify Unified port type (FC or Ethernet)
Figure 6 – WWN pool sequential setup
Figure 7 – WWN pool block setup
Figure 8 – WWPN pool A sequential setup
Figure 9 – WWPN pool A sequential setup
Figure 10 – WWPN pool B sequential setup
Figure 11 – WWN pool A sequential setup
Figure 12 – WWNN / WWPN pool summary
Figure 13 – create vHBA for FI-A
Figure 14 – create vHBA for FI-B
Figure 15 – Service Profile vHBA summary
Figure 16 – Nimble Storage target port WWPN
Figure 17 – Creating FC boot policy
Figure 18 – creating boot policy (identify vHBAs)
Figure 19 – creating boot policy (identify target WWPNs for FC5.1)
Figure 20 – creating boot policy (identify target WWPNs for FC6.1)
Figure 21 – creating boot policy (identify installation media)
Figure 22 – Identify target endpoints for local zoning
Figure 23 – Identify target endpoint connectivity for FI-A (VSAN 100)
Figure 24 – Identify target endpoint connectivity for FI-B (VSAN 200)
Figure 25 – Create vHBA initiator group for fc0
Figure 26 – Create vHBA initiator group for fc0 (summary)
Figure 27 – Create vHBA initiator group for fc1 (summary)
Figure 28 – vHBA FC login status
Figure 29 – Nimble Storage FC session list
Figure 30 – Map installation media for OS install

Nimble Storage controllers attached to a standard FC switch with FIs in FC End Host mode

Figure 31 – FC End Host mode topology with upstream switches
Figure 32 – FC End Host mode topology with upstream switches
Figure 33 – Configuring unified port type
Figure 34 – Configuring Dual VSAN configuration
Figure 35 – Configuring FC Uplink interfaces
Figure 36 – Creating a WWNN sequential pool
Figure 37 – Creating a WWNN block
Figure 38 – Creating a WWPN sequential pool for FI-A
Figure 39 – Creating WWPN block for FI-A
Figure 40 – Creating a WWPN sequential pool for FI-B
Figure 41 – Creating a WWPN block for FI-A
Figure 42 – WWNN / WWPN summary
Figure 43 – Create vHBA fc0
Figure 44 – Create vHBA fc1
Figure 45 – Service profile summary
Figure 46 – Nimble Storage target WWPN listing
Figure 47 – Create FC SANboot policy (vHBA fc0)
Figure 48 – Create FC SANboot policy summary
Figure 49 – Create FC SANboot policy (add WWPN target ports for VSAN 100)
Figure 50 – Create FC SANboot policy (add WWPN target ports for VSAN 200)
Figure 51 – Create FC SANboot policy – add install media
Figure 52 – Verify FC session connectivity from Nimble Storage array
Figure 53 – Map installation media

Nimble Storage controllers attached to an access layer switch (e.g., Nexus 5K) with FIs in FC Switch mode

Figure 54 – FC Switch mode with upstream zoning
Figure 55 – Verify FC operation mode is Switch
Figure 56 – Verify Unified port type
Figure 57 – Dual VSAN configuration
Figure 58 – Enable FC Uplink Trunking
Figure 59 – WWNN sequential pool configuration
Figure 60 – Create WWN block
Figure 61 – WWPN sequential pool configuration for FI-A
Figure 62 – WWPN block configuration for FI-A
Figure 63 – WWPN sequential pool configuration for FI-B
Figure 64 – WWPN block configuration for FI-B
Figure 65 – WWNN / WWPN summary
Figure 66 – vHBA fc0 creation
Figure 67 – vHBA fc1 creation
Figure 68 – vHBA summary
Figure 69 – Nimble Storage target WWPN
Figure 70 – Creating vHBA fc0
Figure 71 – vHBA summary
Figure 72 – Create boot policy - Configure VSAN 100 targets
Figure 73 – Create boot policy - Configure VSAN 200 targets
Figure 74 – Create boot policy - Add install media
Figure 75 – vHBA FC login
Figure 76 – FC session login
Figure 77 – Map installation media
OVERVIEW
The Nimble Storage SmartStack is an example of a converged infrastructure for virtualization, built from the basic components of storage, network, compute, and hypervisor. Once the basic environment is assembled, the specific use cases for the virtualized infrastructure are left to the reader.
Audience
This Getting Started Guide was developed to help new SmartStack administrators quickly set up a Nimble Storage, Cisco UCS, and VMware ESXi 5 environment as defined in many of the Nimble Storage SmartStack solutions.
This document is not intended to be a complete implementation or customization guide. Where choices
are available, we will identify the optimal or chosen methods applicable to the particular area.
If the reader has further questions, please contact Nimble Storage, Cisco or VMware technical support.
This guide is intended for administrators and architects new to Nimble Storage UCS SmartStack solution
configurations. It will cover most of the basic setup steps and considerations for a reference architecture
style deployment.
Assumptions
General knowledge of Cisco UCS and UCSM
Familiarity with Nimble Storage UI and basic setup tasks
Familiarity with standard Fibre Channel concepts
This guide will not address all of the possible configuration options for Cisco UCSM. Where the configuration has an impact on the operation of the Nimble Storage solution, details, options and recommendations will be provided.
Limitations and Other Considerations
This is a typical setup / “how to” guide. It does not cover all of the customization options which could be applied to the Nimble Storage array, Cisco UCSM, or the host OS. Step-by-step setup is covered, and the example screen shots and settings should be sufficient for the reader to apply the right changes to implement the steps outlined in this guide.
Configuring UCSM Service Profile setup
This section covers the key aspects of connecting the Nimble Storage array in the same manner used for the SmartStack solutions. The first step is to create the appropriate Service Profile. We will work from a copy of an existing profile or create a new one. Listed below are the basic resources which need to be assigned to this profile.
vHBAs:
For the SmartStack configuration, we typically define two vHBAs for data traffic. One vHBA needs to
have a presence in each Fabric Interconnect. Each Fabric Interconnect will have connectivity to one
VSAN. Pay attention to these key attributes:
Fabric ID
VSAN
Adapter Policy
WWPN pool assignment
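The two-vHBA layout above can be sketched as data and sanity-checked. This is an illustrative Python snippet only; the vHBA names, VSAN IDs, and policy names are assumed example values, not a UCSM API.

```python
# Illustrative sketch of the SmartStack two-vHBA layout; values are
# hypothetical examples, not output from UCS Manager.
VHBAS = [
    {"name": "fc0", "fabric_id": "A", "vsan": 100,
     "adapter_policy": "VMWare", "wwpn_pool": "FI-A_wwpn"},
    {"name": "fc1", "fabric_id": "B", "vsan": 200,
     "adapter_policy": "VMWare", "wwpn_pool": "FI-B_wwpn"},
]

def check_vhba_layout(vhbas):
    """Each Fabric Interconnect carries one vHBA, and each vHBA sits
    in its own VSAN."""
    fabrics = [v["fabric_id"] for v in vhbas]
    vsans = [v["vsan"] for v in vhbas]
    return sorted(fabrics) == ["A", "B"] and len(set(vsans)) == len(vsans)

print(check_vhba_layout(VHBAS))  # True
```

The check encodes the rule from the text: one vHBA per fabric, one VSAN per fabric.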
vNICs:
For the FC SmartStack configuration, we typically define at least one vNIC in the service profile. These
address basic management connectivity to the host, inter-host cluster traffic, as well as any upstream
public facing traffic.
Network Cabling
Data Ports - FC5.1 and FC6.1 from each Nimble Storage controller. Note: FC5.1 and FC6.1 could be
other FC ports in your Storage array.
Management ports – connect the 1GbE (eth1 and eth2) NIC ports on the motherboard to the same
management network used by other devices connected to the UCS.
Server ports – Any UCSM managed B-series or C-series hosts need to be connected to the FI. In the
below examples you will see a B-series chassis with dual IOMs connected to the Fabric Interconnects.
Again, this is not a comprehensive guide. The full customization and setup of these ports is outside the scope of this document. Refer to the UCS Manager configuration guide for guidance:
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/gui/config/guide/2-2/b_UCSM_GUI_Configuration_Guide_2_2.html
UCS FC Topology configuration
This guide is based on the SmartStack configuration approach to deploying UCS. Alternate UCS network topologies are possible. Note: each topology below involves both a physical connectivity choice and a logical selection of the UCS FC operation mode. SmartStack solutions can be attached to Cisco UCS systems in one of three ways. (Click on the hyperlink to walk through the details of the configuration.)
Option 1 - Nimble Storage controllers connected directly to the Fabric Interconnect ports (local zoning) as FC Storage ports. FC operational mode is Switch with local zoning enabled. All FLOGI and FC zoning operations are local to the FIs. No FC upstream connectivity is permitted.
Option 2 - Nimble Storage controllers attached to an FC switch (e.g., Nexus 5K) with Fabric Interconnects in FC End Host mode. The upstream switch (e.g., Nexus 5K) is in turn attached to the Cisco Fabric Interconnect (FI) ports as FC or FCoE Uplink ports. The FC operation mode is End Host. All FLOGI and FC zoning management operations are performed at the upstream FC switch.
Option 3 - Nimble Storage controllers attached to an access layer switch (e.g., Nexus 5K) with Interconnects in FC Switch mode. The upstream switch (Nexus 5K) is attached to the Cisco Fabric Interconnect (FI) ports as FC or FCoE Uplink ports. The FC operation mode is Switch. FLOGI operations are performed at the FIs, while zoning management operations occur at the upstream FC switch. Note: In this configuration the data path from the UCS blades to the Nimble Storage array does NOT traverse the upstream switch; the upstream switch is used only for zoning management operations.
Note: any FCoE uplink configuration requires proper QoS configuration, which is not covered in this guide. Refer to the Cisco UCSM administration guide for details.
Nimble Storage SmartStack solutions are a joint solution between Nimble Storage and Cisco Systems. This Getting Started Guide is not intended to cover all of the various scenarios and options for connecting FC storage to a Cisco UCS environment; it is focused on the SmartStack configuration aspects. The next few sections will highlight a few of the considerations in these choices. For more information about general-purpose connectivity, please consult with your Cisco UCS solutions team.
UCS FC operation mode
The UCS FC operation mode determines how the Fabric Interconnects forward FC traffic. Note that in this case we are only interested in the Fibre Channel operation mode. To confirm which mode you are currently in, perform the following steps from the UCSM GUI:
Go to the Equipment tab
Expand the Fabric Interconnects section
Select one of the Fabric Interconnects and observe the details in the General section.
You should see something similar to this:
Figure 1 – Equipment Tab FI Operation Mode
Nimble Storage firmware, Cisco UCS firmware, and host driver requirements
From the Nimble Storage array, verify you are running a current GA release which has been certified to work with Cisco UCS and is supported for SAN boot. Additionally, from the host operating system side, verify the fnic driver is supported. This should be verified before putting the host into production.
From the Cisco UCS and operating system driver side, the authoritative place to verify this information is
in the Cisco UCS Hardware and Software Interoperability Matrix:
http://www.cisco.com/web/techdoc/ucs/interoperability/matrix/matrix.html
Option 1 - Nimble Storage controllers connected directly to the Fabric Interconnects (local zoning)
In the SmartStack UCS configuration, the Nimble Storage array is connected directly to the Fabric Interconnects. The first FC port (FC5.1, for example) from each controller is attached to FI-A. The second FC port (FC6.1) from each controller is attached to FI-B.
In this configuration, even during a Nimble Storage controller failover there should always be two MPIO paths – one through FI-A and one through FI-B. This configuration also tolerates failure of any single networking component (e.g., vHBA, cable, or FI) without data path loss. There should always be at least one active path from the UCS servers to the Nimble Storage array.
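The redundancy argument above can be checked by enumerating the four paths and confirming that no single component failure removes them all. This is an illustrative sketch; the component names are invented for the example.

```python
# Each path is (vHBA, Fabric Interconnect, controller); names are
# hypothetical labels for the direct-attach SmartStack topology.
PATHS = [
    ("fc0", "FI-A", "controller-A"),
    ("fc0", "FI-A", "controller-B"),
    ("fc1", "FI-B", "controller-A"),
    ("fc1", "FI-B", "controller-B"),
]

def surviving_paths(failed):
    """Paths that remain usable when a single component fails."""
    return [p for p in PATHS if failed not in p]

# No single vHBA, FI, or controller failure removes every path.
components = {c for path in PATHS for c in path}
assert all(len(surviving_paths(c)) >= 1 for c in components)
print("single-failure tolerant")
```

Losing FI-A, for example, still leaves both fc1 paths through FI-B.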
Verify physical network topology. Note in this case all zoning and Fabric Login (FLOGI) operations occur at the Fabric Interconnect. Due to the way local zoning configuration behaves, there can be no connectivity to any other FC switch.
Figure 2 – FC direct attach with local zoning enabled
Verify the FC mode type is set to Switch. Log in to UCSM and navigate to the Equipment tab -> Fabric Interconnects -> Fabric Interconnect A.
Figure 3 – FC mode type for FC local zoning
Cisco UCS Fabric Interconnects support either FC or Ethernet connectivity on the same physical port. The port type is selected with the slider shown below. Note: if you change existing ports, a reboot of the FIs is required for the configuration to take effect. Verify the Unified port type by going to the Equipment tab -> Fabric Interconnects -> Configured Unified port type.
Figure 4 – Verify Unified port type (FC or Ethernet)
Set up the VSAN configuration (dual VSANs recommended). Note: local zoning must be enabled.
Figure 5 – VSAN and FCoE VLAN configuration
Verify no FC_Uplink interfaces are present in the “SAN Cloud” section; no upstream switches can be connected while local zoning is enabled. Create a World Wide Node Name (WWNN) pool. Typically only one WWNN pool is needed. Note the assignment order is set to sequential.
Figure 6 – WWN pool sequential setup
Figure 7 – WWN pool block setup
Note: In this example the WWN block starts at 20:00:00:25:b5:11:11:00 and has a pool size of 32 entries. Create WWPN pools: create one WWPN pool for each Fabric Interconnect, for a total of two WWPN pools.
Figure 8 – WWPN pool A sequential setup
Figure 9 – WWPN pool A sequential setup
Note: In this example a single WWPN pool has a size of 128 entries. Optionally you can use a pool range which includes an identifier for pool A (11:AA in this case).
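The sequential pool behavior described above can be sketched numerically: UCSM hands out consecutive WWPNs starting at the block's first address. A minimal Python illustration follows; the starting address reuses the 11:aa pool-A convention from the example and is not a recommendation.

```python
# Expand a sequential WWN block: `size` consecutive values from `start`.
def expand_wwn_block(start, size):
    """Return `size` consecutive WWNs (colon-separated hex) from `start`."""
    base = int(start.replace(":", ""), 16)
    raw = ["{:016x}".format(base + i) for i in range(size)]
    return [":".join(w[i:i + 2] for i in range(0, 16, 2)) for w in raw]

pool_a = expand_wwn_block("20:00:00:25:b5:11:aa:00", 128)
print(pool_a[0])   # 20:00:00:25:b5:11:aa:00
print(pool_a[-1])  # 20:00:00:25:b5:11:aa:7f
```

A 128-entry block therefore spans exactly one low byte (00 through 7f), which makes pool ranges easy to audit.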
Repeat the process for the FI-B_wwpn pool:
Figure 10 – WWPN pool B sequential setup
Figure 11 – WWN pool A sequential setup
When complete it should look something like this:
Figure 12 – WWNN / WWPN pool summary
In an existing service profile select the vHBA section and right click to create. Create a vHBA with the appropriate Fabric ID, VSAN, wwpn pool, and appropriate adapter policy. Note the Adapter Performance Profile defines the SCSI behavior which is specific to the OS you are installing. (See example for vHBA FC0)
Figure 13 – create vHBA for FI-A
Repeat this process for vHBA FC1:
Figure 14 – create vHBA for FI-B
When finished, the vHBAs of the Service Profile should look like this:
Figure 15 – Service Profile vHBA summary
Boot Policy Creation: The first thing to do is to identify the appropriate target WWPNs on each controller. In the Nimble Storage GUI, navigate to: Administration -> Network Config -> Active Settings -> Interfaces -> Fibre Channel. It will look something like this:
Figure 16 – Nimble Storage target port WWPN
The UCS boot policy we will be creating in future steps will require both Primary and Secondary connectivity. Also each Primary and Secondary connection will require a presence into both Nimble Storage controllers. In this example Controller A interface FC5.1 has a WWPN of 56:c9:ce:90:79:20:51:01 and Controller B interface FC5.1 has a WWPN of 56:c9:ce:90:79:20:51:05. These will be the primary connections. The secondary connections will come from Controller A interface FC9.1 with a WWPN of 56:c9:ce:90:79:20:51:03 and Controller B interface FC9.1 with a WWPN of 56:c9:ce:90:79:20:51:07. (This allows for SANboot connectivity even in the event of a controller failover or if a Fabric Interconnect is not present).
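The primary/secondary target layout described above can be summarized as a small table. The WWPNs below are the example values from the text and will differ on your array; the per-port comments are assumptions based on that example.

```python
# SAN boot target layout per vHBA; WWPNs are the example values from
# the text above, not values from a real array.
BOOT_TARGETS = {
    "fc0": {"primary":   "56:c9:ce:90:79:20:51:01",   # Controller A, FC5.1
            "secondary": "56:c9:ce:90:79:20:51:05"},  # Controller B, FC5.1
    "fc1": {"primary":   "56:c9:ce:90:79:20:51:03",   # Controller A, second port
            "secondary": "56:c9:ce:90:79:20:51:07"},  # Controller B, second port
}

# Every vHBA must carry a primary and a secondary target (one per
# controller), and no target WWPN may repeat across entries.
all_targets = [w for t in BOOT_TARGETS.values() for w in t.values()]
assert all(len(t) == 2 for t in BOOT_TARGETS.values())
assert len(all_targets) == len(set(all_targets))
print("boot targets consistent")
```

This is the structure that keeps SAN boot working through a controller failover or the loss of one Fabric Interconnect.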
Create a new boot policy. Add a vHBA with the name of the vHBA you created in the Service Profile (fc0 in this case).
Figure 17 – Creating FC boot policy
Repeat the process for the other vHBA (fc1 in this case). When complete it should look something like this:
Figure 18 – creating boot policy (identify vHBAs)
Select “Add SAN Boot Target” followed by “Add SAN Boot Target to SAN primary”. Select the WWPN for Controller A port FC5.1. Repeat the process and add the WWPN for Controller B port FC5.1. Example:
Figure 19 – creating boot policy (identify target WWPNs for FC5.1)
Repeat the process for vHBA fc1. When finished it should look like this:
Figure 20 – creating boot policy (identify target WWPNs for FC6.1)
Lastly, add a CD-ROM or CIMC boot device for installation purposes. Make sure it is the second device in the boot order.
Figure 21 – creating boot policy (identify installation media)
Navigate to SAN Cloud -> Policies -> Storage Connection Policies. Right-click and select “Create Storage Connection Policy”. Name the policy after the vHBA on the initiator side (fc0 and fc1 in this case). Fill in the WWPN of the first target interface on Nimble Storage controller A and add a description. Select the appropriate VSAN as well. Do this for each target port connected to FI-A.
Figure 22 – Identify target endpoints for local zoning
Select the zoning type “Single Initiator Multiple Targets”. (Note: Nimble Storage controllers can have more than two FC ports per controller. We are adding only the connections attached to one particular Fabric Interconnect.) When finished, the target endpoint connectivity should look like this:
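Conceptually, “Single Initiator Multiple Targets” builds one zone per vHBA initiator, containing that initiator plus every target port on the same fabric. A sketch with placeholder names (the WWPN labels are invented for illustration):

```python
# Build single-initiator-multiple-targets zones: one zone per initiator,
# holding that initiator plus all targets on its fabric. Labels are
# placeholders standing in for WWPNs.
def build_zones(initiators_by_fabric, targets_by_fabric):
    zones = {}
    for fabric, inits in initiators_by_fabric.items():
        for init in inits:
            zones[init] = [init] + targets_by_fabric[fabric]
    return zones

zones = build_zones(
    {"A": ["init_fc0"], "B": ["init_fc1"]},
    {"A": ["ctrlA_FC5.1", "ctrlB_FC5.1"],
     "B": ["ctrlA_FC6.1", "ctrlB_FC6.1"]},
)
print(zones["init_fc0"])  # ['init_fc0', 'ctrlA_FC5.1', 'ctrlB_FC5.1']
```

Each zone stays within one fabric, which matches the per-FI Storage Connection Policies created above.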
Figure 23 – Identify target endpoint connectivity for FI-A (VSAN 100)
Repeat this for fc1 targets as well:
Figure 24 – Identify target endpoint connectivity for FI-B (VSAN 200)
Navigate to the service profile in question. Select Storage -> vHBA Initiator Groups. Click the green “+” button on the right-hand side to create a new Initiator Group. Give it a name like “fc0_target”, then select the “fc0” checkbox to choose the vHBA initiator. Select the Storage Connection Policy you created earlier (“fc0” in this case).
Figure 25 – Create vHBA initiator group for fc0
It should look like this when complete:
Figure 26 – Create vHBA initiator group for fc0 (summary)
Repeat this activity for the “fc1” side.
Figure 27 – Create vHBA initiator group for fc1 (summary)
Reboot the host and observe the login status of the vHBAs. You should see two active logins, each pointing to the associated WWPN target port on the array controller.
Figure 28 – vHBA FC login status
Additionally on the array side we should see the following connections:
Figure 29 – Nimble Storage FC session list
From the UCS KVM, activate Virtual Media and select the ISO file from which you want to install the OS.
Figure 30 – Map installation media for OS install
Proceed with the bare-metal OS installation. Ensure MPIO is set up properly on the host OS. Also note that Microsoft Windows typically needs to be installed with only a single path enabled.
Option 2 - Nimble Storage controllers attached to a standard FC switch with Fabric Interconnects in FC End Host mode.
Verify physical network topology:
Figure 31 – FC End Host mode topology with upstream switches
Verify FC operation mode is in End Host:
Figure 32 – FC End Host mode topology with upstream switches
Cisco UCS Fabric Interconnects support either FC or Ethernet connectivity on the same physical port. You can select the port type with the slider shown below. Note: if you change existing ports, a reboot of the FIs is required for the configuration to take effect. Verify the Unified port type (FC or Ethernet).
Figure 33 – Configuring unified port type
Configure the VSANs (dual VSANs recommended). First, local zoning must be disabled. An FCoE VLAN must also be entered (even if no FCoE uplink ports are used); be sure to use a free VLAN which can be assigned. Also note that the VSAN construct is not recognized by upstream Brocade switches; however, it is recognized by the UCS Fabric Interconnects, so a VSAN selection is still required.
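The free-VLAN requirement above amounts to a simple collision check: each VSAN's FCoE VLAN must not clash with any Ethernet VLAN already in use, nor with another VSAN's FCoE VLAN. An illustrative sketch; the VLAN and VSAN IDs are hypothetical.

```python
# Check that each VSAN's FCoE VLAN is free: not an existing Ethernet
# VLAN and not already claimed by another VSAN. IDs are examples.
def fcoe_vlan_conflicts(vsan_to_fcoe_vlan, ethernet_vlans):
    used = set(ethernet_vlans)
    conflicts = []
    for vsan, vlan in vsan_to_fcoe_vlan.items():
        if vlan in used:
            conflicts.append((vsan, vlan))
        used.add(vlan)
    return conflicts

# Dual-VSAN layout from the guide: VSAN 100 on FI-A, VSAN 200 on FI-B.
print(fcoe_vlan_conflicts({100: 100, 200: 200}, [1, 10, 20]))  # []
print(fcoe_vlan_conflicts({100: 100, 200: 200}, [100]))        # [(100, 100)]
```

An empty result means the chosen FCoE VLANs are safe to assign.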
Figure 34 – Configuring Dual VSAN configuration
Under the SAN tab select the SAN Cloud and then the Uplink FC interfaces. Configure FC_Uplink interfaces with the appropriate VSAN for the individual FC interfaces.
Figure 35 – Configuring FC Uplink interfaces
Create WWNN pool:
Figure 36 – Creating a WWNN sequential pool
Note: If more than one UCS chassis will be connected, you may want to slightly change the suffix of the WWNN. In this case we used the example of 20:00:00:25:b5:11:11:00 and entered a size of 32 nodes.
Figure 37 – Creating a WWNN block
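A sequential pool block is simply a contiguous run of addresses starting at the block's "from" WWN. As a rough illustration of the addresses UCSM derives from the starting WWNN and size used in this example, here is a small Python sketch (the helper name and logic are ours, not a UCSM API):

```python
# Expand a UCS-style sequential WWN block into the individual addresses.
# Illustrative helper only; UCSM performs this expansion internally.
def expand_wwn_block(start: str, size: int) -> list[str]:
    octets = start.split(":")
    base = int("".join(octets), 16)          # treat the WWN as one integer
    width = len(octets) * 2                  # hex digits in the address
    out = []
    for i in range(size):
        hexstr = format(base + i, f"0{width}x")
        out.append(":".join(hexstr[j:j + 2] for j in range(0, width, 2)))
    return out

# The example block from the text: start 20:00:00:25:b5:11:11:00, size 32.
pool = expand_wwn_block("20:00:00:25:b5:11:11:00", 32)
```

The 32-node block therefore spans suffixes 00 through 1f, which matches what UCSM displays in the pool summary.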
Create WWPN pools: create one pool for each Fabric Interconnect so that each initiator WWPN identifies the fabric it belongs to.
Figure 38 – Creating a WWPN sequential pool for FI-A
In this example we entered a size of 128 entries for a single pool.
Figure 39 – Creating WWPN block for FI-A
Repeat the process for the FI-B_wwpn pool.
Figure 40 – Creating a WWPN sequential pool for FI-B
Figure 41 – Creating a WWPN block for FI-B
When complete it should appear something like this:
Figure 42 – WWNN / WWPN summary
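The per-fabric pool layout above can be sketched as follows. This is illustrative only: the distinguishing byte values ("aa" for FI-A, "bb" for FI-B) and pool prefixes are our assumptions, not the actual pool addresses configured in UCSM. The point is that when each FI has its own pool, one byte of the WWPN immediately identifies the fabric, e.g. when reading zoning output:

```python
# Hypothetical per-fabric WWPN pools: one distinguishing byte per FI.
FABRIC_BYTE = {"A": "aa", "B": "bb"}

def make_wwpn_pool(fabric: str, size: int) -> list[str]:
    """Generate a sequential pool whose byte at index 5 marks the fabric."""
    prefix = f"20:00:00:25:b5:{FABRIC_BYTE[fabric]}"
    return [f"{prefix}:{i >> 8:02x}:{i & 0xff:02x}" for i in range(size)]

def fabric_of(wwpn: str) -> str:
    """Recover the fabric from the distinguishing byte of a WWPN."""
    byte = wwpn.split(":")[5]
    return {v: k for k, v in FABRIC_BYTE.items()}[byte]

# 128 entries per pool, matching the block size used in this example.
pool_a = make_wwpn_pool("A", 128)
pool_b = make_wwpn_pool("B", 128)
```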
In an existing service profile, create a vHBA with the appropriate Fabric ID, VSAN, WWPN pool, and adapter policy (see the example for fc0).
Figure 43 – Create vHBA fc0
Repeat this process for vHBA fc1.
Figure 44 – Create vHBA fc1
When finished, the Service Profile should look something like this:
Figure 45 – Service profile summary
Boot Policy Creation: The first step is to identify the appropriate target WWPNs on each controller. In the Nimble Storage GUI, navigate to Administration -> Network Config -> Active Settings -> Interfaces -> Fibre Channel. It will look like this:
Figure 46 – Nimble Storage target WWPN listing
The UCS boot policy created in the following steps requires both primary and secondary connectivity, and each primary and secondary connection requires a presence on both Nimble Storage controllers. In this example the primary connections are Controller A interface fc5.1 (WWPN 56:c9:ce:90:79:20:51:01) and Controller B interface fc5.1 (WWPN 56:c9:ce:90:79:20:51:05). The secondary connections are Controller A interface fc9.1 (WWPN 56:c9:ce:90:79:20:51:03) and Controller B interface fc9.1 (WWPN 56:c9:ce:90:79:20:51:07). (This allows SAN boot connectivity even in the event of a controller failover or if a Fabric Interconnect is unavailable.) Create a new boot policy and add a vHBA with the name of the vHBA created in the Service Profile (fc0 in this case).
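The primary/secondary target layout described above can be captured as a small table, with a sanity check that every role reaches both controllers. This is a Python sketch using the example WWPNs from the text; the data structure and helper are ours, not a UCSM construct:

```python
# Boot-target layout from the example: each role (primary/secondary)
# has a path to both controllers, so SAN boot survives a controller
# failover or a missing Fabric Interconnect.
BOOT_TARGETS = {
    "primary": {
        "controller_a": ("fc5.1", "56:c9:ce:90:79:20:51:01"),
        "controller_b": ("fc5.1", "56:c9:ce:90:79:20:51:05"),
    },
    "secondary": {
        "controller_a": ("fc9.1", "56:c9:ce:90:79:20:51:03"),
        "controller_b": ("fc9.1", "56:c9:ce:90:79:20:51:07"),
    },
}

def covers_both_controllers(targets: dict) -> bool:
    """True only if every role lists a target on both controllers."""
    return all(
        {"controller_a", "controller_b"} <= set(role)
        for role in targets.values()
    )
```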
Figure 47 – Create FC SANboot policy (vHBA fc0)
Repeat the process for the other vHBA (fc1 in this case). When finished it should look something like this:
Figure 48 – Create FC SANboot policy summary
Select “Add SAN Boot Target”, followed by “Add SAN Boot Target to SAN Primary”. Select the WWPN for Controller A port 5.1, then repeat the process and add the WWPN for Controller B port 5.1. Example:
Figure 49 – Create FC SANboot policy (add WWPN target ports for VSAN 100)
Repeat the process for vHBA fc1. When finished it should look like this:
Figure 50 – Create FC SANboot policy (add WWPN target ports for VSAN 200)
Lastly, add a CD-ROM boot device for installation purposes. Make sure the CD-ROM is second in the boot order.
Figure 51 – Create FC SANboot policy – add install media
Boot the Service Profile. On the array side you should see a total of four FC sessions at initial boot; note that two sessions are Active/Optimized and two are Standby.
Figure 52 – Verify FC session connectivity from Nimble Storage array
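The expected session mix can be checked with a small sketch. The session records below are hypothetical, shaped loosely on the array GUI's session list; the helper is ours, not a Nimble API:

```python
from collections import Counter

# Hypothetical FC session records at initial boot: four sessions total,
# two Active/Optimized (to the active controller) and two Standby.
sessions = [
    {"target_port": "fc5.1", "controller": "A", "state": "active/optimized"},
    {"target_port": "fc9.1", "controller": "A", "state": "active/optimized"},
    {"target_port": "fc5.1", "controller": "B", "state": "standby"},
    {"target_port": "fc9.1", "controller": "B", "state": "standby"},
]

def session_health(sessions):
    """Return (active/optimized count, standby count) for a session list."""
    counts = Counter(s["state"] for s in sessions)
    return counts["active/optimized"], counts["standby"]
```

Anything other than a (2, 2) split at this point suggests a cabling, zoning, or boot-policy problem worth revisiting before installing the OS.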
From the UCS KVM, activate Virtual Media and select the ISO file from which you want to install the OS.
Figure 53 – Map installation media
Proceed with the bare-metal OS installation. Ensure MPIO is set up properly. Also note that Microsoft Windows typically needs to be installed with only a single path enabled.
Option 3 - Nimble Storage controllers attached to an access layer switch (e.g., Nexus 5K) with Fabric Interconnects in FC Switch mode.
Verify physical network topology:
Figure 54 – FC Switch mode with upstream zoning
Verify the FC operation mode:
Figure 55 – Verify FC operation mode is Switch
Cisco UCS Fabric Interconnects support either FC or Ethernet connectivity on the same physical port; the port type is selected with the slider shown below. Note that if you change existing port types, the FIs must be rebooted for the changes to take effect. Verify the unified port type under the Equipment tab -> Fabric Interconnects -> Configured Unified port type.
Figure 56 – Verify Unified port type
Set up the VSAN configuration (dual VSANs recommended). Note that local zoning must be enabled. Also note that when you set up a VSAN, you are required to set up an associated VLAN (a UCS requirement to support FCoE connectivity); make certain the VLAN is not already in use.
Figure 57 – Dual VSAN configuration
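The "VLAN not already in use" check can be sketched as follows. The VSAN numbers, FCoE VLAN numbers, and in-use VLAN set are illustrative, not values from this deployment:

```python
# Each VSAN needs an associated FCoE VLAN (a UCS requirement), and that
# VLAN must not collide with any VLAN already carrying Ethernet traffic.
vsan_to_fcoe_vlan = {100: 1100, 200: 1200}   # illustrative dual-VSAN layout
vlans_in_use = {1, 10, 20, 99}               # illustrative existing VLANs

def fcoe_vlan_conflicts(mapping, used):
    """Return {vsan: vlan} for every FCoE VLAN that is already in use.
    An empty dict means the mapping is safe to configure."""
    return {vsan: vlan for vsan, vlan in mapping.items() if vlan in used}
```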
Verify the FC uplink interfaces are present in the “SAN Cloud” section. Note that the upstream connectivity is not needed for data I/O, but it is still required in order to use the upstream FC switch for zoning configuration. Also note that the FC uplink ports are not required to be in the same VSAN as the Storage Cloud VSANs. To enable zoning management from the upstream switch, enable FC uplink trunking on each Fabric Interconnect.
Figure 58 – Enable FC Uplink Trunking
Create UCS initiator WWNN pool:
Figure 59 – WWNN sequential pool configuration
Note: Depending on the number of UCS blades or rack servers we have, we may want to slightly change the size of the WWNN pool. In this case we used 20:00:00:25:b5:11:11:00 and entered a size of 32 nodes.
Figure 60 – Create WWNN block
Create WWPN pools: create one pool for each Fabric Interconnect so that each initiator WWPN identifies the fabric it belongs to.
Figure 61 –WWPN sequential pool configuration for FI-A
In this example we entered a size of 128 entries for a single WWPN pool.
Figure 62 –WWPN block configuration for FI-A
Repeat the process for the FI-B_wwpn pool.
Figure 63–WWPN sequential pool configuration for FI-B
Figure 64 –WWPN block configuration for FI-B
When complete it should look something like this:
Figure 65 –WWNN / WWPN summary
In an existing service profile, create a vHBA with the appropriate Fabric ID, VSAN, WWPN pool, and adapter policy (see the example for fc0).
Figure 66 –vHBA fc0 creation
Repeat this process for vHBA fc1.
Figure 67 – vHBA fc1 creation
When finished, the Service Profile should look something like this:
Figure 68 – vHBA summary
Boot Policy Creation: The first step is to identify the appropriate target WWPNs on each controller. In the Nimble Storage GUI, navigate to Administration -> Network Config -> Active Settings -> Interfaces -> Fibre Channel. It will look something like this:
Figure 69 – Nimble Storage target WWPN
The UCS boot policy created in the following steps requires both primary and secondary connectivity, and each primary and secondary connection requires a presence on both Nimble Storage controllers. In this example the primary connections are Controller A interface fc5.1 (WWPN 56:c9:ce:90:5d:ba:40:01) and Controller B interface fc5.1 (WWPN 56:c9:ce:90:5d:ba:40:03). The secondary connections are Controller A interface fc6.1 (WWPN 56:c9:ce:90:5d:ba:40:02) and Controller B interface fc9.1 (WWPN 56:c9:ce:90:5d:ba:40:04). (This allows SAN boot connectivity even in the event of a controller failover or if a Fabric Interconnect is unavailable.) Create a new boot policy and add a vHBA with the name of the vHBA created in the Service Profile (fc0 in this case).
Figure 70 – Creating vHBA fc0
Repeat the process for the other vHBA (fc1 in this case). When finished it should look something like this:
Figure 71 – vHBA summary
Select “Add SAN Boot Target”, followed by “Add SAN Boot Target to SAN Primary”. Select the WWPN for Controller A port 5.1, then repeat the process and add the WWPN for Controller B port 5.1. Example:
Figure 72 – Create boot policy - Configure VSAN 100 targets
Repeat the process for vHBA fc1. When finished it should look like this:
Figure 73 – Create boot policy - Configure VSAN 200 targets
Lastly, add a CD-ROM boot device for installation purposes. Make sure the CD-ROM is second in the boot order.
Figure 74 – Create boot policy - Add install media
Set up the upstream switch zoning configuration; this must be done for each VSAN. Keep in mind that the VSAN must exist in the upstream zoning configuration (even if there are no interfaces physically attached).
Example config for VSAN 100 on fabric A; repeat the process for VSAN 200 on fabric B.

mds-fc-a# conf t
Enter configuration commands, one per line. End with CNTL/Z.
mds-fc-a(config)# vsan database
mds-fc-a(config-vsan-db)# vsan 100
mds-fc-a(config-vsan-db)# show vsan 100
vsan 100 information
         name:VSAN0100  state:active
         interoperability mode:default
         loadbalancing:src-id/dst-id/oxid
         operational state:up
mds-fc-a# conf t
mds-fc-a(config)# zone name ucs_gsg_fc0 vsan 100
mds-fc-a(config-zone)# member pwwn 20:00:00:25:b5:00:00:ea
mds-fc-a(config-zone)# member pwwn 56:c9:ce:90:5d:ba:40:01
mds-fc-a(config-zone)# member pwwn 56:c9:ce:90:5d:ba:40:03
mds-fc-a(config-zone)# end
mds-fc-a# conf t
mds-fc-a(config)# zoneset name Zoneset1 vsan 100
mds-fc-a(config-zoneset)# member ucs_gsg_fc0
mds-fc-a(config-zoneset)# zoneset activate name Zoneset1 vsan 100
mds-fc-a(config)# zoneset distribute vsan 100

mds-fc-b# conf t
Enter configuration commands, one per line. End with CNTL/Z.
mds-fc-b(config)# vsan database
mds-fc-b(config-vsan-db)# vsan 200
mds-fc-b(config-vsan-db)# show vsan 200
vsan 200 information
         name:VSAN0200  state:active
         interoperability mode:default
         loadbalancing:src-id/dst-id/oxid
         operational state:up
mds-fc-b# conf t
mds-fc-b(config)# zone name ucs_gsg_fc1 vsan 200
mds-fc-b(config-zone)# member pwwn 20:00:00:25:b5:00:00:da
mds-fc-b(config-zone)# member pwwn 56:c9:ce:90:5d:ba:40:02
mds-fc-b(config-zone)# member pwwn 56:c9:ce:90:5d:ba:40:04
mds-fc-b(config-zone)# end
mds-fc-b# conf t
mds-fc-b(config)# zoneset name Zoneset1 vsan 200
mds-fc-b(config-zoneset)# member ucs_gsg_fc1
mds-fc-b(config-zoneset)# zoneset activate name Zoneset1 vsan 200
mds-fc-b(config)# zoneset distribute vsan 200
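Since the same zoning pattern repeats on each fabric, the command list for both VSANs can be generated from the initiator and target WWPNs. This is a Python sketch of the pattern above (the helper is ours, not an official tool); the resulting commands would be entered on the corresponding MDS switch:

```python
# Generate per-fabric MDS zone/zoneset commands for one VSAN, following
# the zoning pattern used in this guide (one zone per vHBA, one zoneset
# per VSAN). WWPN values below are the example values from the listing.
def zone_commands(vsan: int, zone: str, members: list[str],
                  zoneset: str = "Zoneset1") -> list[str]:
    cmds = ["conf t", f"zone name {zone} vsan {vsan}"]
    cmds += [f"  member pwwn {m}" for m in members]
    cmds += [
        f"zoneset name {zoneset} vsan {vsan}",
        f"  member {zone}",
        f"zoneset activate name {zoneset} vsan {vsan}",
        f"zoneset distribute vsan {vsan}",
    ]
    return cmds

fabric_a = zone_commands(100, "ucs_gsg_fc0", [
    "20:00:00:25:b5:00:00:ea",   # UCS initiator (vHBA fc0)
    "56:c9:ce:90:5d:ba:40:01",   # Controller A target
    "56:c9:ce:90:5d:ba:40:03",   # Controller B target
])
```

The VSAN 200 list is produced the same way with zone ucs_gsg_fc1 and the fabric-B initiator and target WWPNs.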
Reboot the host and observe the login status of the HBAs. You should see two active logins for the active Nimble Storage controller.
Figure 75 – vHBA FC login
Additionally, on the array side you will see the following connections; note that two connections are active and the other two are standby.
Figure 76 – FC session login
From the UCS KVM, activate Virtual Media and select the ISO file from which you want to install the OS.
Figure 77 – Map installation media
Proceed with the bare-metal OS installation. Ensure MPIO is set up properly. Also note that Microsoft Windows typically needs to be installed with only a single path enabled.
Summary
This document provides a high-level set of steps to help you get started configuring Nimble Storage and Cisco UCS with FC connectivity, forming the basis of the Nimble Storage SmartStack integrated infrastructure solution suite. With some experience in the base products from Nimble Storage and Cisco, this guide should put you on the right track to supporting many of the SmartStack solutions built on these technologies.
For more information, contact your local integrator or vendor.
Nimble Storage, Inc.
211 River Oaks Parkway, San Jose, CA 95134
Tel: 877-364-6253; 408-432-9600 | www.nimblestorage.com | info@nimblestorage.com
© 2015 Nimble Storage, Inc. Nimble Storage, InfoSight, CASL, SmartStack, and NimbleConnect are trademarks or registered trademarks of Nimble Storage, Inc. All other trademarks are the property of their respective owners. SG-FBC-0715