
Front cover

Redpaper

ibm.com/redbooks

IBM Reference Configuration for Microsoft Private Cloud: Implementation Guide

Scott Smith
Eric Johnson

Understand the components of IBM Reference Configuration

Set up and configure the Microsoft Private Cloud Fast Track solution

Follow preferred practices for implementing the solution


International Technical Support Organization

IBM Reference Configuration for Microsoft Private Cloud: Implementation Guide

December 2011

REDP-4829-00


© Copyright International Business Machines Corporation 2011. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

First Edition (December 2011)

This edition applies to the IBM Reference Configuration for Microsoft Private Cloud.

This document was created or updated on December 28, 2011.

Note: Before using this information and the product it supports, read the information in “Notices” on page v.


Contents

Notices
Trademarks

Preface
The team who wrote this paper
Now you can become a published author, too!
Comments welcome
Stay connected to IBM Redbooks

Chapter 1. Components, Microsoft Hyper-V, and failover clustering
1.1 Overview of the components
1.2 Microsoft Hyper-V and failover clustering

Chapter 2. Components of the IBM Reference Configuration
2.1 Microsoft System Center VMM 2008 R2
2.2 Microsoft System Center Operations Manager 2007 R2
2.3 Microsoft System Center VMM Self-Service Portal 2.0
2.4 Microsoft System Center Opalis Integration 6.3
2.5 Microsoft Data Protection Manager 2010
2.6 IBM System x3550 M3
2.7 IBM System x3650 M3
2.8 IBM XIV Storage System and XIV Storage System Gen3
2.9 IBM Converged Switch B32
2.10 IBM Ethernet Switch B24Y

Chapter 3. Best practices and implementation guidelines
3.1 Racking and power distribution
3.2 Networking and VLANs
  3.2.1 VLAN description
  3.2.2 Switch port layout
  3.2.3 IBM Converged Switch B32 port configuration
  3.2.4 B24Y switch
  3.2.5 Brocade network adapter teaming
  3.2.6 FCoE Storage Network (VLAN 1002)
  3.2.7 iSCSI storage network (VLAN 20)
  3.2.8 Cluster heartbeat and Cluster Shared Volume networks (VLANs 30, 70, and 90)
  3.2.9 Production live migration network (VLAN 40)
  3.2.10 Production virtual machine communication network (VLAN 50)
  3.2.11 Management network (VLAN 60)
  3.2.12 Routing summary
3.3 Active Directory
3.4 Storage
  3.4.1 Microsoft Hyper-V cluster storage considerations
  3.4.2 Cabling
  3.4.3 Management
  3.4.4 Configuration
  3.4.5 Multipath I/O fault-tolerance driver
  3.4.6 IBM XIV storage pool sizing guidelines to support VSS snapshots
3.5 Setting up the IBM System x3550 M3 Active Directory
3.6 Setting up the IBM System x3650 M3 management cluster
  3.6.1 Configuring the network
  3.6.2 Validating the storage area network configuration
  3.6.3 Creating the cluster
  3.6.4 Setting up Windows Server 2008 R2 SP1 for VMs
  3.6.5 Microsoft SQL Server guest cluster
  3.6.6 Storage
  3.6.7 Microsoft System Center Operations Manager
  3.6.8 System Center Virtual Machine Manager
  3.6.9 Microsoft System Center Self-Service Portal 2.0
  3.6.10 Microsoft System Center Opalis Integration Server
3.7 Setting up the IBM System x3650 M3 production Hyper-V cluster
  3.7.1 Configuring the network
  3.7.2 Storage area network
  3.7.3 Creating the cluster
3.8 Setting up IBM System x3550 M3 Data Protection Manager 2010
3.9 Summary

Appendix A. Brocade 2-port 10 GbE CNA for IBM System x
Features
Windows Device Manager
Managing CNAs in the Hyper-V IBM Reference Configuration environment
Configuring the Hyper-V network

Appendix B. Brocade Switch Management
Fast Track switch configurations
IBM Converged Switch B32 Fabric zoning configuration

Appendix C. Networking worksheets

Related publications
IBM Redbooks
Other publications
Online resources
Help from IBM

Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.


Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

IBM®
Redbooks®
Redpaper™
Redbooks (logo)®
System Storage®
System x®
XIV®

The following terms are trademarks of other companies:

Intel Xeon, Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Other company, product, or service names may be trademarks or service marks of others.


Preface

The IBM® Reference Configuration for Microsoft Private Cloud provides businesses an affordable, interoperable, reliable, and industry-leading virtualization solution. Validated by the Microsoft Private Cloud Fast Track program, the IBM Reference Configuration for Microsoft Private Cloud combines Microsoft software, consolidated guidance, and validated configurations for compute, network, storage, and value-added software components.

The Microsoft program requires a minimum level of redundancy and fault tolerance across the servers, storage, and networking for both the management and production virtual machine (VM) clusters. These requirements help to ensure a certain level of fault tolerance while managing private cloud pooled resources.

This IBM Redpaper™ publication explains how to set up and configure the IBM 8-Node Microsoft Private Cloud Fast Track solution used in the actual Microsoft program validation. The solution design consists of Microsoft Windows Server 2008 R2 Hyper-V clusters powered by IBM System x3650 M3 servers with the IBM XIV® Storage System, connected to IBM converged and Ethernet networks. This paper includes a short summary of the Reference Configuration software and hardware components, followed by best practice implementation guidelines.

This paper targets IT engineers in mid-to-large sized organizations who are familiar with the hardware and software that make up the IBM Cloud Reference Architecture. It also benefits the technical sales teams for IBM System x® and XIV and their customers who are evaluating or pursuing Hyper-V virtualization solutions.

Before reading this paper, you should have comprehensive experience with the various IBM Reference Configuration components. However, for more information about the entirety of the solution, the paper provides technical reviews and supplemental references.

This paper is a partner to IBM Reference Configuration for Microsoft Private Cloud: Deployment Guide, REDP-4828.

The team who wrote this paper

This paper was produced by a team of specialists from around the world in collaboration with the International Technical Support Organization (ITSO).

Scott Smith is an IBM System x Systems Engineer working at the IBM Center for Microsoft Technology. Over the past 15 years, Scott has worked to optimize the performance of IBM x86-based servers running the Microsoft Windows Server operating system and Microsoft application software. Recently, his focus has been on Microsoft Hyper-V based solutions with IBM System x servers, storage, and networking. He has extensive experience in helping IBM customers understand the issues that they are facing and developing solutions that address them.

Eric Johnson has over 15 years of experience in the IT industry, specializing in Microsoft high availability and clustering solutions. Recently, his focus has been on IBM XIV solutions in the Microsoft storage ISV space. He has routinely designed IBM and Microsoft business-critical application solutions using XIV storage in physical and virtual environments. In addition to planning, deploying, and testing Microsoft-centric solutions using IBM XIV storage, his core IBM responsibilities include delivering technical sales and marketing collateral.


Thanks to the following people for their contributions to this project:

Karen Lawrence
Linda Robinson
David Watts
IBM Redbooks®

David Hartman
Jim Meyer
William Watson
Michael Wilcox
IBM

Steven Tong
Brocade

Special thanks to Avanade Inc., especially Pat Cimprich, for their contributions to the creation of this paper. Avanade spent many hours evaluating reference configuration components, running performance tests, assessing component sizing, and discovering opportunities for automation and integration within the Microsoft System Center suite. Avanade is an IBM Business Partner and provides integration and customization services for cloud and virtualization projects.

Now you can become a published author, too!

Here’s an opportunity to spotlight your skills, grow your career, and become a published author—all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.

Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us!

We want our papers to be as helpful as possible. Send us your comments about this paper or other IBM Redbooks publications in one of the following ways:

• Use the online Contact us review Redbooks form found at:

ibm.com/redbooks

• Send your comments in an email to:

[email protected]


• Mail your comments to:

IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks

• Find us on Facebook:

http://www.facebook.com/IBMRedbooks

• Follow us on Twitter:

http://twitter.com/ibmredbooks

• Look for us on LinkedIn:

http://www.linkedin.com/groups?home=&gid=2130806

• Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:

https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm

• Stay current on recent Redbooks publications with RSS Feeds:

http://www.redbooks.ibm.com/rss.html


Chapter 1. Components, Microsoft Hyper-V, and failover clustering

This chapter begins by listing the software and hardware components of the IBM Reference Configuration for Microsoft Private Cloud framework. Then it highlights Microsoft Hyper-V technology as a key component of cloud environments and explains the importance of Microsoft failover clustering.

This chapter includes the following sections:

• Overview of the components
• Microsoft Hyper-V and failover clustering


1.1 Overview of the components

The IBM Reference Configuration for Microsoft Private Cloud framework encompasses the following essential software components:

• 2-Node Hyper-V failover cluster running virtual machine (VM) management tools
• Microsoft System Center Virtual Machine Manager (VMM) 2008 R2
• Microsoft System Center Operations Manager 2007 R2
• Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0
• Microsoft System Center Opalis 6.3
• Microsoft Data Protection Manager 2010
• Microsoft SQL Server 2008 SP1 2-Node VM Guest Cluster
• 8-Node Hyper-V failover cluster for the production VM environment

The IBM Reference Configuration foundation is constructed of the following enterprise-class hardware components:

• 1 IBM System x3550 M3 1U server (optional Active Directory server)
• 2 IBM System x3650 M3 2U servers (Hyper-V HA Cluster Management servers)
• 8 IBM System x3650 M3 2U servers (Hyper-V HA Cluster Production servers)
• 1 IBM System x3550 M3 1U server (optional Data Protection Manager server)
• 1 IBM XIV Family Storage System (15 module 2810/2812)
• 2 IBM Converged Switch B32
• 2 IBM B24Y Ethernet switches
• 20 Brocade 10 GbE Dual-Port 1020 Converged Networking Adapters

These software and hardware components form a high-performing, cost-effective solution that supports Microsoft Hyper-V cloud environments for the most popular business-critical applications and many custom third-party solutions. Equally important, these components meet the criteria set by Microsoft for the Private Cloud Fast Track program. This program promotes robust cloud environments to help satisfy the most demanding virtualization requirements.


Figure 1-1 illustrates the overall architecture.

Figure 1-1 Fast track architecture of the IBM Reference Configuration for Microsoft Private Cloud

(Figure 1-1 annotations: FC zones 1 through 4 and VLANs 20 through 100 plus FCoE VLAN 1002, covering FC and iSCSI storage, production cluster private/CSV, live migration, VM communication, management, management cluster private and live migration, SQL cluster private, and out-of-band IMM networks. The XIV system is connected to the FC ports on the B32 switches. The hosts connect to the 10 Gb FCoE ports, using MPIO on the storage side and fault-tolerant teams of 10 GbE devices carved into the required VLANs for data communication. Two stacked B24Y switches with 10 Gb uplink modules provide 1 GbE connections for iSCSI and in-band and out-of-band management; their 10 GbE uplinks connect to the B32 switches. The diagram also shows the x3550 M3 with two 1 GbE management connections; the two x3650 M3 management nodes running Windows Server 2008 R2 SP1 Enterprise with Hyper-V, SQL Server 2008, System Center Operations Manager 2007 R2, System Center VMM 2008 R2, and Self-Service Portal 2.0; and the eight x3650 M3 production nodes running Windows Server 2008 R2 SP1 Datacenter with Hyper-V. Each server uses fault-tolerant pairs of 10 Gb CNAs for FC zones 1 through 4 and its role-specific VLANs, with IMM ports on VLAN 100.)

1.2 Microsoft Hyper-V and failover clustering

Microsoft Hyper-V technology continues to gain competitive traction as a key cloud component in many customer virtualization environments. Hyper-V is included as a role in x64 versions of the Windows Server 2008 R2 Standard, Enterprise, and Datacenter editions. In Windows Server 2008 R2, Hyper-V VMs support up to four virtual processors and 64 GB of memory, depending on the installed guest operating system.

Individual VMs have their own operating system instance and are isolated from the host operating system and other VMs. VM isolation helps promote higher business-critical application availability. The Microsoft failover clustering feature, in Windows Server 2008 R2 Enterprise and Datacenter Editions, can dramatically improve production uptimes. As such, Microsoft clustering plays a pivotal role in the IBM Reference Configuration solution.

Microsoft failover clustering helps eliminate single points of failure so that users have near-continuous access to important server-based, business-productivity resources. If a physical or logical outage occurs that is linked to unplanned failures or scheduled maintenance, VMs can automatically migrate to other cluster member nodes. As a result, clients experience little-to-no downtime. This seamless operation is attractive for organizations that are trying to create new business and maintain healthy service level agreements.


Additionally, Microsoft failover clustering promotes optimal physical resource utilization by load balancing VMs across cluster members in active/active configurations. In the IBM Reference Configuration, after a management cluster is successfully deployed, you can create a VM for System Center VMM to help expedite and propagate highly available VM implementations.
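To make these clustering operations concrete, the following minimal Windows PowerShell sketch uses the FailoverClusters module that is included with Windows Server 2008 R2. The cluster, VM, and node names are hypothetical placeholders.

    # Load the failover clustering module (included with Windows Server 2008 R2).
    Import-Module FailoverClusters

    # List the member nodes of the production cluster.
    Get-ClusterNode -Cluster ProdCluster

    # Live-migrate the clustered VM "SQLVM1" to another node; clients experience
    # little-to-no interruption during the move.
    Move-ClusterVirtualMachineRole -Cluster ProdCluster -Name "SQLVM1" -Node NODE02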


Chapter 2. Components of the IBM Reference Configuration

The IBM Reference Configuration is a highly available private cloud environment. It consists of IBM System x servers, storage, and networking running the Microsoft Windows Server 2008 R2 SP1 operating system and Microsoft System Center management software. Each component provides a key element of the overall solution.

This chapter includes the following sections:

• Microsoft System Center VMM 2008 R2
• Microsoft System Center Operations Manager 2007 R2
• Microsoft System Center VMM Self-Service Portal 2.0
• Microsoft System Center Opalis Integration 6.3
• Microsoft Data Protection Manager 2010
• IBM System x3550 M3
• IBM System x3650 M3
• IBM XIV Storage System and XIV Storage System Gen3
• IBM Converged Switch B32
• IBM Ethernet Switch B24Y


2.1 Microsoft System Center VMM 2008 R2

Microsoft System Center Virtual Machine Manager (VMM) simplifies private cloud management for both physical and virtual systems from a single administrative console. Administrators can create virtual machine (VM) templates to rapidly and intelligently deploy VMs to Microsoft failover clusters and stand-alone hosts. Acting as a VM deployment hub, System Center VMM supports live migrations, quick storage migrations, and straightforward automation capabilities using Windows PowerShell. It enables Performance and Resource Optimization (PRO) tips for VMs triggered by System Center Operations Manager (SCOM) alerts, which contribute to greater VM control and efficiency.

PRO Packs, such as the IBM PRO Pack, allow automated VM cluster migrations in response to defined SCOM triggers. PRO Packs can resolve problematic resource states or balance critical workloads across cluster hosts. Part of the same family, System Center VMM tightly integrates with SCOM and substantially enhances private cloud management and monitoring.
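As a small illustration of the Windows PowerShell automation that System Center VMM 2008 R2 exposes, the following sketch loads the VMM snap-in and inventories the managed hosts and VMs. The VMM server name is a hypothetical placeholder.

    # Load the VMM 2008 R2 snap-in (installed with the VMM Administrator Console).
    Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager

    # Connect to the VMM server, then list the managed hosts and their VMs.
    Get-VMMServer -ComputerName "vmmserver.contoso.local"
    Get-VMHost | Select-Object Name
    Get-VM | Select-Object Name, Status, VMHost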

2.2 Microsoft System Center Operations Manager 2007 R2

Microsoft System Center Operations Manager also provides centralized administration from a single GUI, with multilayer monitoring of the health, performance, and availability of private cloud environments across hardware, hypervisors, operating systems, and applications. Unlike System Center VMM, which focuses solely on VM management and control, SCOM focuses on monitoring and reporting for the entire physical and virtual data center. SCOM monitoring and reporting capabilities are expanded by importing the IBM Hardware and Storage Management Packs, which add IBM-specific functionality.
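As a sketch of the typical SCOM 2007 R2 Command Shell pattern for such an import (the management server name and management pack file path are hypothetical):

    # Load the Operations Manager 2007 R2 snap-in and connect to the management group.
    Add-PSSnapin Microsoft.EnterpriseManagement.OperationsManager.Client
    Set-Location OperationsManagerMonitoring::
    New-ManagementGroupConnection -ConnectionString "scomserver.contoso.local"

    # Import an IBM hardware management pack, then confirm that it is present.
    Install-ManagementPack -FilePath "C:\MPs\IBM.HardwareManagementPack.mp"
    Get-ManagementPack | Where-Object { $_.Name -like "*IBM*" }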

SCOM primarily caters to Windows-based monitoring, but it also supports heterogeneous environments for customers who want to build VMs that are not Windows based. Additionally, SCOM 2007 R2 offers uncomplicated reporting and authoring capabilities to track performance against private cloud service-level agreements. Through the Microsoft System Center VMM Self-Service Portal, administrators define many of the service-level resources that are used to complete the Microsoft Private Cloud Fast Track program management software requirements.

2.3 Microsoft System Center VMM Self-Service Portal 2.0

System Center VMM Self-Service Portal 2.0 is the primary customer web-based interface for dynamic VM management. It is a partner-extensible software package, available at no cost, that allows service providers to dynamically pool, allocate, and manage data center resources. Private cloud administrators sell virtual resource services to customers seeking to reduce IT costs. Budget-oriented organizations can purchase virtual infrastructure resources made up of server, storage, and networking hardware that is managed and hosted by service providers using the IBM Reference Configuration. This accessibility increases IT flexibility for those customers who benefit from an automated web portal.

System Center VMM Self-Service Portal 2.0 easily supports cloud customer solutions by providing metering, billing, and reporting. System Center VMM Self-Service Portal 2.0 is also tightly integrated with the System Center family of products and simplifies service provider billable unit mappings for server, storage, and networking hardware resources.


2.4 Microsoft System Center Opalis Integration 6.3

Microsoft System Center Opalis Server provides a workflow automation framework and improves service response time. Opalis Server integrates with other System Center products, such as Operations Manager and Virtual Machine Manager, and with third-party management tools to share information and initiate task sequences. Improved service delivery is achieved through consistent, documented processes, enforced best practices, and timely automated responses.

2.5 Microsoft Data Protection Manager 2010

With Microsoft System Center Data Protection Manager (DPM) 2010, administrators can safeguard private cloud business-critical applications, including those hosted by Hyper-V VMs. Because DPM 2010 is also part of the Microsoft System Center family, it is optimally designed for integration with the entire management infrastructure of the IBM Reference Configuration. It provides online continuous data protection and monitoring support for both clustered and stand-alone systems.

DPM 2010 also supports crash-consistent and application-consistent backups with item-level recovery capabilities. Customers can take advantage of various disk-to-disk and disk-to-tape solutions. Although Microsoft supports DPM VM implementations, the backup functionality is limited. Therefore, customers must use a separate, dedicated backup server, such as the optional IBM System x3550 M3.
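As a brief sketch, the protection groups on the dedicated backup server can be inspected from the DPM 2010 Management Shell. The DPM server name is a hypothetical placeholder.

    # Run from the DPM 2010 Management Shell on the dedicated backup server.
    $groups = Get-ProtectionGroup -DPMServerName "dpmserver"

    # Show each protection group and the data sources (for example, Hyper-V VMs)
    # that it protects.
    foreach ($group in $groups) {
        $group.FriendlyName
        Get-Datasource -ProtectionGroup $group | Select-Object Name, Computer
    }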

2.6 IBM System x3550 M3

The IBM System x3550 M3 is a 1U rack-mount server that boasts power-optimized performance with its energy-efficient design. The x3550 M3 is a highly scalable, streamlined server. It has up to two 3.46 GHz six-core Intel Xeon 5600 series processors with Quick Path Interconnect (QPI) technology, up to 288 GB of RAM, and up to eight 2.5-inch hot-swappable SAS/SATA hard disks or solid-state drives (SSDs).

The x3550 M3 easily meets the Microsoft Private Cloud program architectural demands for the Microsoft Active Directory role. Additionally for convenience in the test environment, it serves as the repository for the management and production cluster quorum file share witnesses.
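For example, assuming a witness share that is hosted on this server (the cluster and share names are hypothetical), the quorum can be configured with the FailoverClusters module:

    Import-Module FailoverClusters

    # Point the management cluster at a file share witness on the x3550 M3 for a
    # Node and File Share Majority quorum. The cluster computer account needs
    # access to the share.
    Set-ClusterQuorum -Cluster MgmtCluster -NodeAndFileShareMajority "\\AD01\MgmtWitness"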

Figure 2-1 shows a front view of the x3550 M3.

Figure 2-1 IBM System x3550 M3

(Figure 2-1 callouts: four hot-swap 2.5-in. bays; pop-out light path diagnostics panel with power button and status LEDs; video port; light path diagnostics panel eject button; two USB 2.0 ports; four more hot-swap 2.5-in. bays or one DVD bay.)


2.7 IBM System x3650 M3

At the core of the IBM Reference Configuration solution, the 2U IBM System x3650 M3 servers deliver the performance and reliability required for virtualizing business-critical applications in Hyper-V cloud environments. To provide the virtualization performance expected of any Microsoft production environment, IBM System x3650 M3 servers can be equipped as follows:

• With up to two 3.46 GHz six-core (3.60 GHz four-core) Intel Xeon 5600 series processors
• With Quick Path Interconnect technology
• With up to 288 GB of memory

Similar to the x3550 M3, the x3650 M3 is highly scalable, with a storage capacity of up to sixteen 2.5-in. hot-swappable SAS/SATA hard disks or SSDs. It also contains hot-swappable power supplies and fans, plus remote keyboard, video, and mouse access for continuous management capabilities. These key features, among others, help solidify the dependability that IBM customers have grown accustomed to with System x servers.

By virtualizing with Microsoft Hyper-V technology on IBM System x3650 M3 servers, businesses reduce physical server sprawl, power consumption, and total cost of ownership (TCO). Virtualizing the server environment also results in lower server administrative overhead, giving IT administrators the capability to manage more systems than exclusive physical environments. Highly available critical applications on clustered host servers can be managed with greater flexibility and minimal downtime due to Microsoft Hyper-V live and quick migration capabilities.

Figure 2-2 shows a front view of the x3650 M3.

Figure 2-2 IBM System x3650 M3

(Figure 2-2 callouts: hot-swap 2.5-in. HDDs; status lights; pop-out light path diagnostics panel; optical drive; video port; power button with sliding cover; USB 2.0 ports; light path diagnostics panel release.)


2.8 IBM XIV Storage System and XIV Storage System Gen3

IBM XIV Storage System is well-suited for Microsoft virtualized cloud environments. The IBM XIV family of arrays was built to provide easy-to-use, enterprise-class storage, offering a full range of included benefits. The XIV system complements IBM System x3650 M3 servers and the Brocade converged networking infrastructure in an end-to-end Microsoft Hyper-V private cloud solution. The XIV system delivers proven disk storage in flexible, scalable configurations that start as small as six modules, with 27 TB of usable capacity, up to 15 modules, with over 243 TB of usable capacity.

The XIV system eliminates the complexity of managing enterprise storage, offering performance, reliability, and incredibly low TCO. Its grid architecture delivers virtual storage that optimizes performance and integrates seamlessly with cloud technologies, which require agility to handle growth and to ensure high availability and data protection.

The XIV data protection design incorporates active/active N+1 redundancy of all data modules, disk drives, interconnect switches, and uninterruptible power supply (UPS) units. This design also offers multipath Fibre Channel (FC) and iSCSI host connectivity. Three built-in UPS units protect all disks, cache, and electronics with redundant power supplies and fans, further promoting hardware and software reliability with enterprise-class availability.

The XIV system also employs a predetermined data distribution model. This model helps to ensure fast recovery from failures by using prefailure detection and proactive corrective healing before potential problems arise.

From a storage administrator’s perspective, the XIV system has earned a reputation for being one of the easiest storage arrays to use. The XIV storage array incorporates a management architecture that allows XIV storage to grow or change without a need for data rebalancing. Combining such management features with the XIV family’s ability to self-heal and automatically load-balance server workloads results in a dramatic, global reduction in storage administrative overhead.

In addition to the high availability and reliability features, the XIV family offers competitive performance characteristics to meet demanding cloud-based workloads. The 15-module XIV 2810/2812 distributed architecture provides a combined total of up to 240 GB of cache and individual modules powered by quad-core Intel Xeon processors. Similarly, the new XIV Gen3 model provides even higher performance with up to 360 GB of cache among numerous additional hardware improvements.

Six dedicated host interface modules ensure optimal, balanced data distribution among all 180 1 TB or 2 TB disks to eliminate hot spots. This data distribution feature has become increasingly important due to the popularity of using larger logical unit numbers (LUNs) for multiple VMs typically deployed in cloud environments. Because every LUN has access to all operating spindles, all the time, the chance of saturating the storage I/O is greatly reduced compared to traditional architectural approaches using RAID sets and hot spares.

This unique grid architecture also helps to provide the following key cloud environment benefits:

• Continuous, predictable high performance without traditional complex tuning requirements

• 4 Gb FC and 1 Gb iSCSI host connectivity

• No single point of failure

• Industry-leading rebuild times in the event of disk or module failures (less than 60 minutes for 2 TB drives)


• Innovative snapshot functionality that includes snap-of-snap, restore-of-snap, and a nearly unlimited number of snapshots

• Nondisruptive maintenance and upgrades

• For each host or cluster, a quality of service (QoS) capability to prioritize workloads based on business criticality

The newest model in the XIV family, the IBM XIV Storage System Gen3 storage array (model 2810/2812-114), contains all the benefits of the XIV family plus the following performance enhancements:

• Up to 4 times the throughput (10 GBps) of the previous generation, improving performance for business intelligence, archiving, and other I/O-intensive applications

• Up to 3 times the response time improvement over the previous generation, enabling faster transaction processing and greater scalability for online transaction processing, database, and email applications

• Power to serve even more applications from a single system with a substantial hardware upgrade:

– An InfiniBand interconnect, larger cache (up to 360 GB of combined memory), faster SAS disk controllers, and increased processing power

– 8 Gb FC and 1 Gb iSCSI connectivity on each Gen3 interface module

• Option for future upgradeability to SSD caching for breakthrough SSD performance levels at a fraction of typical SSD storage costs (planned availability for Gen3 in the first half of 2012)

With the XIV family’s “all-inclusive” pricing model, there are no hidden costs for multipath software or replication features. Specifically, every XIV system includes the following functions with purchase:

• Snapshot capability
• Thin provisioning
• Asynchronous and synchronous data replication
• Advanced management
• Performance reporting
• Monitoring and alerting
• Full support of Microsoft technologies, including GeoClustering, Volume Shadow Copy Services (VSS), and multipath I/O (MPIO)

2.9 IBM Converged Switch B32

The IBM Converged Switch B32 enables access to local area network (LAN) and storage area network (SAN) environments over a common server connection by using Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE) protocols. The B32 uses full Fibre Channel Forwarding capabilities and connects to servers by using Brocade Converged Network Adapters (CNAs). The switch offers twenty-four 10 Gbps DCB Ethernet ports that can transport both FCoE and regular Ethernet traffic. FCoE can be forwarded into an FC SAN network through the eight 8 Gbps FC ports on the B32.

The consolidation of server SAN and LAN ports and corresponding cables simplifies configuration and cabling in server cabinets and reduces acquisition costs. With fewer components using power and cooling, organizations can also save significant operating costs. For this IBM private cloud solution, 10 Gbps DCB Ethernet ports provide connectivity between hosts and uplinks to the LAN aggregation layer and to the IBM Ethernet Switch B24Y carrying iSCSI traffic. FC ports are used to connect to the IBM XIV Storage System.

Figure 2-3 shows a rear view of the IBM Converged Switch B32.

Figure 2-3 IBM Converged Switch B32

2.10 IBM Ethernet Switch B24Y

The IBM Ethernet Switch B24Y is a high-performance Ethernet/IP switch offering twenty-four 1 GbE ports and four 10 GbE ports. It is versatile enough to be used for stacking, uplinks, or connectivity to end devices. Designed for wire-speed, non-blocking performance, the B24Y with symmetric flow control capabilities is well-suited for connecting servers to storage in an iSCSI environment. Users can manage the device by using an industry-standard command-line interface (CLI) or through a web management GUI.

The IBM private cloud solution uses the B24Y to provide a secure, redundant back-end management network to the attached hosts and Microsoft Active Directory servers. It also uses the B24Y to provide iSCSI connectivity to the XIV system.

Figure 2-4 shows a rear view of the IBM Ethernet Switch B24Y.

Figure 2-4 IBM Ethernet Switch B24Y


Chapter 3. Best practices and implementation guidelines

A successful Microsoft Hyper-V deployment and effective operation of this type of private cloud solution can be attributed to a set of test-proven techniques. Proper planning that uses best practices based on such experience is a key factor in achieving the optimal performance and growth necessary for any solution, whether virtual or physical.

The Microsoft Private Cloud Fast Track program and IBM’s enterprise-class hardware, vast partnerships, and consulting experience help prepare IT administrators to successfully meet their virtualization performance and growth objectives by deploying private clouds efficiently and reliably.

This chapter presents a collection of best practices and implementation guidelines for the IBM Reference Configuration. These guidelines are based on a collaboration between Microsoft, Brocade, and IBM and aid in planning and configuring the solution.

Categorically, the guidelines are divided into the following sections:

• Racking and power distribution
• Networking and VLANs
• Active Directory
• Storage
• Setting up the IBM System x3550 M3 Active Directory
• Setting up the IBM System x3650 M3 management cluster
• Setting up the IBM System x3650 M3 production Hyper-V cluster
• Setting up IBM System x3550 M3 Data Protection Manager 2010


3.1 Racking and power distribution

Before any system is racked, install the power distribution units (PDUs) and their cabling. When cabling the PDUs, keep in mind the following considerations:

• Ensure sufficient, separate electrical circuits and receptacles to support the required PDUs.

• To minimize the chance of a single electrical circuit failure taking down a device, ensure that sufficient PDUs are available to feed redundant power supplies using separate electrical circuits.

• For devices that have redundant power supplies, plan for individual electrical cords from separate PDUs.

• Power each converged and Ethernet switch by a separate PDU.

• Locate switches to promote optimal cabling.

• Maintain appropriate shielding and surge suppression practices; employ appropriate battery back-up techniques.

3.2 Networking and VLANs

A combination of physical and virtual isolated networks is configured at the host, switch, and storage layers to satisfy validation requirements. At the physical host layer, four Converged Network Adapter (CNA) ports are available for each Hyper-V server (two dual-port Brocade 1020 CNAs). The CNAs use 10 GbE full-duplex copper-based Twinax cables to connect to the IBM B32 switches.

A total of four switches are at the physical switch layer. Redundant IBM B32 converged switches are used for storage and host connectivity. Stacked, redundant IBM B24Y switches provide iSCSI and storage management connectivity. They also provide out-of-band Integrated Management Module (IMM) connectivity to the Hyper-V physical servers. In the validation test environment of the IBM Reference Configuration, the B24Y also provides Layer 3 routing services, which might not be required in production environments.

At the physical storage layer, the IBM XIV Storage System uses both Fibre Channel (FC) and iSCSI ports for connectivity. Both second and third generation XIV storage arrays have 24 FC ports, but only half of them are used for host connectivity. The remaining half is reserved for mirror connectivity or data replication, which is not required and is outside the scope of the Microsoft Private Cloud Fast Track program validation.

Twelve FC ports are connected by using fiber optic cables to six dedicated FC ports on each IBM B32 converged network switch. Six iSCSI ports (up to 22 ports for XIV Storage System Gen3 models) are connected by using standard Cat 5 Ethernet cables to three 1 GbE ports on each IBM B24Y switch.

Both storage and network sublayers, which use virtual local area networks (VLANs), are at the host virtual layer. For the storage sublayer, all Hyper-V physical servers use all four CNA paths to connect to the XIV array by using a dedicated Fibre Channel over Ethernet (FCoE) VLAN 1002. With the IBM XIV Host Attachment Kit software, the multipath I/O (MPIO) feature can complete the storage-based fault tolerance and load balanced host storage sublayer requirements.
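The Host Attachment Kit normally performs the MPIO claim automatically. The following sketch only verifies, or manually repeats, that claim with the built-in mpclaim tool; the XIV hardware ID string is illustrative (the vendor field is padded to eight characters).

    # List the disks that Microsoft MPIO has claimed and their load-balance policies.
    mpclaim -s -d

    # Manually claim XIV devices if ever required (the host restarts afterward).
    mpclaim -r -i -d "IBM     2810XIV"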

For the network sublayer, all Hyper-V physical servers also use four CNA paths to connect to the B32 converged switch, but the Ethernet packets do not simultaneously traverse all paths. Instead, two active/passive failover teams handle the network traffic by using Brocade adapter teaming software. Thus, only one port in each failover team is active at a time, resulting in a total of two active ports in two separate failover teams.

Furthermore, the two failover teams split role-based VLAN network traffic. One failover team is dedicated to host network traffic, and the other is dedicated to virtual machine (VM) network traffic. Specifically, one failover team contains host VLANs, and the other failover team contains only the default Passthru VLAN 0. Passthru VLAN 0 allows both untagged and tagged VLAN traffic and is used to create a Hyper-V virtual switch (vSwitch). Thus, the latter failover team is configured to allow only VM network traffic. To complete the role-based network assignments, the network adapter settings of each VM must assign the failover team interface that contains VLAN 0 as the network. The settings must also enable virtual LAN identification with the desired VLAN ID.
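A minimal sketch of that per-VM setting, using the VMM 2008 R2 snap-in (the VMM server and VM names are hypothetical):

    Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
    Get-VMMServer -ComputerName "vmmserver.contoso.local"

    # Enable VLAN identification on the VM network adapter and tag it with
    # VLAN 50, the production VM communication network.
    $vna = Get-VirtualNetworkAdapter -VM (Get-VM -Name "ProdVM01")
    Set-VirtualNetworkAdapter -VirtualNetworkAdapter $vna -VLanEnabled $true -VLanID 50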

At the physical switch layer, all 10 VLANs must be properly configured to allow both storage and Ethernet traffic. This configuration is the cornerstone prerequisite for the virtual network, where everything depends on the switch configurations. Based on individual environment preferences, there is flexibility regarding how many VLANs are created and the type of role-based traffic that they handle. After a final selection is made, ensure that the switch configurations are saved or backed up.

A few VLAN concerns exist at the virtual storage layer. You must assign IP addresses to iSCSI ports that belong to the dedicated iSCSI VLAN 20. As explained in the following sections, for the iSCSI maximum transmission unit (MTU) settings, select 4500 bytes, which is the largest supported and default value for the XIV iSCSI ports.
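For example, on each host, the interface MTU and the XIV portal registration can be handled with the built-in netsh and iscsicli tools; the interface name and portal address are hypothetical.

    # Set the dedicated iSCSI interface (VLAN 20) to the 4500-byte MTU that the
    # XIV iSCSI ports use by default.
    netsh interface ipv4 set subinterface "iSCSI VLAN 20" mtu=4500 store=persistent

    # Register an XIV iSCSI portal, then list the targets that it presents.
    iscsicli QAddTargetPortal 192.168.20.101
    iscsicli ListTargets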

For further assistance with network planning and configuration for IBM Reference Configuration, see Appendix A, “Brocade 2-port 10 GbE CNA for IBM System x” on page 43, through Appendix C, “Networking worksheets” on page 69.

3.2.1 VLAN description

Table 3-1 describes the ten VLANs. For additional information, such as an example of port layouts, configurations, and worksheets to assist in network layout, see Appendix C, “Networking worksheets” on page 69.

Table 3-1 VLAN definitions

FCoE VLAN 1002: FCoE VLAN 1002 does not need to be created or configured on the hosts for either failover team.

VLAN 1002 on B24Y: The B24Y handles only Ethernet network traffic. Therefore, it does not support FCoE VLAN 1002 traffic. You do not need to configure anything for VLAN 1002 on the B24Y.

Network     Name                                         Description
VLAN 20     iSCSI Storage Network                        Used for all iSCSI storage traffic. Physical servers maintain a dedicated physical fault-tolerant connection to this network. VMs use a VLAN created off the communications network.
VLAN 30     Production Cluster Private Network           Used for cluster heartbeat and Cluster Shared Volume (CSV) traffic.
VLAN 40     Production Cluster Private Network           Used for production cluster live migration.
VLAN 50     Production Cluster VM Network                Used as the primary communication network for production cluster VMs.
VLAN 60     Management Network                           Used for Active Directory and all server management communication (also known as the Cluster Public Network).
VLAN 70     Management Cluster Private Network           Used for cluster heartbeat and CSV traffic.
VLAN 80     Management Cluster Live Migration Network    Used for cluster live migration traffic.
VLAN 90     SQL Cluster Private Network                  Used for guest cluster heartbeat traffic.
VLAN 100    Out-of-Band Management Network               Used for IMM out-of-band management access.
VLAN 1002   FCoE Network                                 Used for FCoE traffic. Configure only on the IBM Converged Switch B32.


3.2.2 Switch port layout

The IBM Reference Configuration uses two IBM Converged Switch B32 network switches, each containing twenty-four 10 GbE ports and eight 8 Gbps Fibre Channel ports. The B32 provides primary storage access and data communication services. Additionally, a pair of stacked IBM B24Y switches, each containing twenty-four 1 GbE ports, provides 1 GbE iSCSI storage connectivity. This pair of switches also provides in-band and out-of-band server and device connectivity as needed. With the optional 10 Gb uplink modules installed in the IBM B24Y, inter-switch connectivity can be established with the B32s, allowing host connections to the iSCSI storage, servers, and devices.

3.2.3 IBM Converged Switch B32 port configuration

The IBM Converged Switch B32 supports 8 Gbps Fibre Channel storage connectivity and 10 Gb Ethernet traffic. Each CNA port presents an FC and Ethernet port to the operating system. Twelve IBM XIV FC ports from six interface modules are spread evenly across six dedicated Fibre Channel ports on each B32 converged switch. Each server connects two CNA ports to each B32 switch for FCoE communication and for the creation of two data and network fault-tolerant network interface card (NIC) teams.

Configure the redundant B32 switch ports to allow the VLAN traffic shown in Table 3-2. Connect FC ports 0 - 5 to XIV FC port 1 or 3 (depending on the switch).

Table 3-2 B32 port configuration

Port 0, uplink to B24Y: VLAN 20, 50, 60, and 100 traffic

Ports 1-8, Hyper-V production nodes: VLAN 20, 50, and 1002 (FCoE) traffic

Ports 9-10, management nodes: VLAN 20, 60, 90, and 1002 (FCoE) traffic

Port 11, B32 Inter-Switch Link (ISL) for cross-switch traffic: VLAN 20, 30, 40, 50, 60, 70, 80, 90, 100, and 1002 (FCoE) traffic

Ports 12-13, CorpNet uplink: VLAN 50, 60, and 100 traffic

Ports 14-15, management nodes: VLAN 60, 70, 80, 100, and 1002 (FCoE) traffic

Ports 16-23, production nodes: VLAN 30, 40, 60, and 1002 (FCoE) traffic


Figure 3-1 illustrates this port configuration.

Figure 3-1 Switch port layout for B32

3.2.4 B24Y switch

The B24Y switch provides the additional 1 GbE connections that the environment needs for management access (both in-band and out-of-band) and for iSCSI connectivity to the XIV system.

The B24Y switches use the optional 10 GbE uplink modules to stack the switches into a single virtual configuration and to provide an uplink to the B32 switches. The B24Y can also provide Layer 3 routing services if needed.

Configure the two B24Y switch ports as shown in Table 3-3.

Table 3-3 B24Y port configuration


Uplink 1, stacking port between B24Y switches: not applicable

Uplink 3, ISL or crosslink to B32: not applicable

Ports 1-3, XIV management (spread across both switches): VLAN 60 traffic

Ports 4-6, XIV iSCSI: VLAN 20 traffic

Ports 7-9, unused: not applicable

Ports 10-11, management server IMMs (balanced across both switches): VLAN 100 traffic

Ports 12-19, production server IMMs (balanced across both switches): VLAN 100 traffic


3.2.5 Brocade network adapter teaming

Using the Brocade Host Connectivity Manager, two fault-tolerant teams are created from a single CNA port of each dual-port adapter. Each failover team port is connected to a physically separate switch, providing additional fault tolerance. After the teams are created, you can add individual VLANs and present them to the respective hosts to logically isolate environment traffic.

Figure 3-2 shows the Brocade Host Connectivity Manager during the creation of a fault-tolerant NIC team and its associated Media Access Control (MAC) address. Then add multiple VLANs to the team as shown.

Figure 3-2 Brocade Host Connectivity Manager failover team and host VLANs

The newly added VLAN devices are shown under Network Connections in Windows and can be assigned an appropriate IP address. They can also be configured as virtual network switches using the Hyper-V Virtual Network Manager for VM use. Consider setting a Windows quality of service (QoS) bandwidth limit to an appropriate level for each network. For more information, see the Hyper-V Live Migration and QoS Configuration Guide at:

http://technet.microsoft.com/en-us/library/ff428137(WS.10).aspx

The following sections describe all VLAN networks. For additional step-by-step teaming details, see “Managing CNAs in the Hyper-V IBM Reference Configuration environment” on page 45.

Physical CNA ports: Before the team is created, set all physical CNA ports (represented as host network adapters) to support 9000-byte jumbo frames using the Device Manager in Windows. By defining this setting, all failover teams and their corresponding VLANs support jumbo frames.
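After you set the MTU, you can verify jumbo-frame support end to end from a command prompt. This is a minimal sketch only; the target address is a hypothetical placeholder for a host on one of your jumbo-frame VLANs.

rem Show the MTU that each interface reports (9000 is expected on the CNA ports)
netsh interface ipv4 show subinterfaces

rem Send an 8972-byte payload (9000 bytes minus 20 IP and 8 ICMP header bytes)
rem with the don't-fragment bit set; a reply confirms the jumbo path end to end
ping -f -l 8972 10.10.40.12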


3.2.6 FCoE Storage Network (VLAN 1002)

Most VLAN IDs, except for a few reserved system IDs, can be used for FCoE traffic, but common multivendor network practices recommend reserving VLAN ID 1002. This recommendation promotes greater interoperability between multivendor solutions if required. In the IBM Reference Configuration environment, the IBM B32 switches must be properly configured to support FCoE using VLAN ID 1002. Upon configuring this support, hosts can access primary FC storage on the XIV array. This access becomes possible when the B32 uses its bridging protocols to converge XIV FC traffic into FCoE traffic using VLAN 1002 to the host CNAs.

Keep in mind the following considerations:

• Storage controller access:

– There must be a total of 12 Fibre Channel connections, two from each of the six XIV interface modules to the redundant B32 switches. Balance the FC connections across the two converged switches (six to each), which promotes fault tolerance if a link fails.

– Set the IBM B32 switch to support 9000-byte jumbo frames.

• Physical host storage access:

– Each physical server must use all four port worldwide names (WWNs) presented by the CNA devices to connect to the storage.

– Set the switch ports and CNA ports to use 9000-byte jumbo frames.

– Install the IBM XIV Host Attachment Kit on each host system to provide MPIO and load balancing services.

The default round-robin load balancing policy of the IBM XIV Host Attachment Kit takes advantage of the Microsoft DSM by enabling MPIO in Windows. You can verify this support by using the Windows mpclaim command-line utility. To send the output of the basic MPIO configuration to a file, run the following command:

mpclaim -b mpiocfg.txt

The output file (mpiocfg.txt) contains the information shown in Figure 3-3. Notice the string Round Robin, which confirms the load balancing policy.

Figure 3-3 Partial output of the mpclaim command

MPIO Storage Snapshot on Monday, 24 October 2011, at 17:38:24.560

Registered DSMs: 1
================
+--------------------------------|--------------------|----|----|----|---|-----+
|DSM Name                        | Version            |PRP | RC | RI |PVP| PVE |
|--------------------------------|--------------------|----|----|----|---|-----|
|Microsoft DSM                   |006.0001.07601.21680|0020|0003|0001|030|False|
+--------------------------------|--------------------|----|----|----|---|-----+

Microsoft DSM
=============
MPIO Disk7: 12 Paths, Round Robin, Symmetric Access
    SN: 0173804E60043
    Supported Load Balance Policies: FOO RR RRWS LQD WP LB

mpclaim output: The output shown in Figure 3-3 is an extract from the mpclaim output. For more assistance regarding the command-line utility, type mpclaim /?.


3.2.7 iSCSI storage network (VLAN 20)

This iSCSI storage network is reserved for server access to the iSCSI storage. All iSCSI traffic must be isolated on VLAN 20.

Keep in mind the following considerations:

• Storage controller access:

– To help balance iSCSI workloads, connect iSCSI ports from all XIV interface modules to the IBM B24Y Ethernet switches. Three interface modules support up to six iSCSI ports in XIV Gen2 models, and six interface modules support up to 22 iSCSI ports in XIV Gen3 models. Likewise, balance the iSCSI connections across the redundant Ethernet switches to provide fault tolerance. In the Private Cloud Fast Track configuration, three iSCSI ports were connected to each switch for a total of six connections.

– Enable jumbo frames on the B24Y switch.

• Physical host storage access:

– After adding the VM-dedicated failover team, which uses the default Passthru VLAN 0, you can use the network device to create a vSwitch under Hyper-V to allow VM use. Additionally, configure the VLAN ID in the VM settings for the iSCSI network adapter properties.

– VMs use iSCSI for direct storage access. This requirement is for the SQL guest cluster that hosts all of the Microsoft System Center databases as part of the Microsoft Private Cloud Fast Track validation.

• VM storage access:

– VMs might require direct access to pass-through disks for application data and log files or iSCSI storage for guest clustering. Direct VM storage access is achieved by using the following methods:

  • A pass-through disk is presented to the parent partition and seen under the Disk Management MMC of the host. You must bring the disk online to the host for initialization, and then you must take it back offline. The disk can then be presented or passed through to a VM where it becomes available in the Disk Management MMC of the VM.

  • For VM direct connections to iSCSI storage, such as the SQL guest clustering validation requirement, install the IBM XIV Host Attachment Kit for Windows on the VM, which automatically configures the Windows iSCSI initiator (see the command-line sketch after this list).

  • Set the VM network adapters to use 4088-byte jumbo frames. To support this setting, you must make VLAN 20 network connections available to the VMs.

– Configure the VLAN ID for the VM settings of the virtual network adapter.

– At a minimum, the two physical management servers require this setting to support the guest clustering of two non-HA SQL Server VMs.

– Set QoS limits to cap iSCSI I/O traffic so that it cannot overwhelm network communication for the remaining VMs.
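The Windows iSCSI initiator inside a guest VM can also be driven from the command line. The following is an illustrative sketch only; the portal address and target IQN are hypothetical placeholders, and the IBM XIV Host Attachment Kit normally performs this configuration for you.

rem Make the Microsoft iSCSI initiator service start automatically, then start it
sc config msiscsi start= auto
net start msiscsi

rem Register the XIV iSCSI portal on VLAN 20 (address is a placeholder)
iscsicli QAddTargetPortal 192.168.20.11

rem List the discovered targets, then log in to the XIV target (IQN is a placeholder)
iscsicli ListTargets
iscsicli QLoginTarget iqn.2005-10.com.xivstorage:000035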


Default byte setting: The default is 4500 bytes, which is the maximum supported setting for the XIV iSCSI ports.


3.2.8 Cluster heartbeat and Cluster Shared Volume networks (VLANs 30, 70, and 90)

The cluster heartbeat and CSV networks are reserved for cluster private (heartbeat) communication between clustered servers. The management servers, Hyper-V production servers, and SQL servers have their own logically isolated networks using separate VLANs. Configure switch ports to appropriately limit the scope of each of these VLANs. Ensure that there is no VLAN routing or default gateways for cluster private networks.

The following devices can access this network:

• All eight production Hyper-V servers. VLAN 30 must be unique for this cluster.
• Both management Hyper-V servers. VLAN 70 must be unique for this cluster.
• Both SQL server guest cluster VMs. VLAN 90 must be unique for this cluster.

3.2.9 Production live migration network (VLAN 40)

Create a separate VLAN to support live migration for the production cluster. To accomplish this task, you use the Brocade Host Connectivity Manager to create VLAN 40 off the fault tolerant team. Consider setting a QoS limit to ensure that plenty of bandwidth remains for VM communication during live migration. The live migration VLAN must not have any routing on it.

For more information, see the Hyper-V Live Migration and QoS Configuration Guide, which references QoS considerations:

http://technet.microsoft.com/en-us/library/ff428137(WS.10).aspx

3.2.10 Production virtual machine communication network (VLAN 50)

The production VM communication network supports VM communication for the Hyper-V production cluster. VLAN 50 is created from the fault-tolerant failover team as explained previously. The production Hyper-V servers use this network to create a virtual NIC through Hyper-V to present to production VMs. The production servers themselves must not maintain an IP address on this network.

3.2.11 Management network (VLAN 60)

The management network is reserved for Hyper-V cloud administrative purposes and for providing the management cluster public interface. The Active Directory and System Center management servers communicate with the Hyper-V servers and supporting devices across this network.

The following devices and servers are on this network (VLAN 60):

• IBM XIV storage management

• Two physical management cluster servers and their management VMs (System Center Operations Manager (SCOM), System Center Virtual Machine Manager (VMM), SQL Server, and the Self-Service Portal)


• Virtual NICs using VLAN ID 60 (must be created to support the VMs)

• Eight production Hyper-V servers

This network is reserved for management and must not be exposed to the VMs.

3.2.12 Routing summary

Layer 3 routing is configured on the B24Y switches to allow traffic between VLANs 50, 60, 100, and other corporate network resources as needed. Do not enable routing for VLANs 20, 30, 40, 70, 80, and 90 because they are private networks that are used to support the cluster operations of this configuration.

3.3 Active Directory

The Private Cloud Fast Track configuration must be part of an Active Directory domain, which is required to form the clusters and the System Center management framework. An Active Directory server is included in this configuration (Figure 1-1 on page 3), but it can be replaced by an existing domain structure if desired.

A total of three distinct environments exist in the IBM Reference Configuration that create Active Directory objects and require domain service accounts:

• A two-node Hyper-V management cluster hosts Microsoft System Center VMs, which serve as the primary VM deployment, monitoring, and management engine.

• An eight-node Hyper-V production cluster hosts VMs created by customers using the Microsoft Self-Service Portal.

• A two-node non-high availability (HA) SQL guest cluster provides database services to all of the Microsoft System Center VMs.

When creating these three distinct environments, implement a simple yet useful naming convention for the physical and virtual servers to help quickly identify them at all levels, including the Active Directory level. For example, the production Hyper-V servers might be named Hyper1, Hyper2, ... Hyper8.

3.4 Storage

With a distinctive parallel, distributed architecture, the IBM XIV Storage System is a compelling platform for cloud computing. It delivers virtual storage that features optimal resource sharing, robust flexibility, dynamic self-tuning performance, and industry-leading ease of use. Of the many XIV features listed in this guide, cloud administrators most prefer the robust flexibility and ease of use.

With conventional RAID storage architectures, considerable time and resources are invested in the design, planning, and sizing of VM volumes. With the virtual storage design and intuitive storage management of the XIV system, such considerations and resource investments are reduced because storage sizing adjustments can be made quickly and dynamically, and they typically involve only minor operating system considerations. Most of the time is spent planning for VM compute and memory sizing based on physical host resource availability.

Because the XIV virtual storage design uses all spindles for each volume, it performs best when a smaller number of large volumes (LUNs) is provisioned for hosts. The same principle applies to VM storage sizing. For the IBM Reference Configuration, it is worth reviewing how this recommended practice applies to Microsoft Hyper-V failover clusters.

3.4.1 Microsoft Hyper-V cluster storage considerations

The Microsoft Private Cloud Fast Track program validation includes two separate submissions for testing 15-module IBM XIV Storage System 2810/2812-A14 models and XIV 2810/2812-114 Gen3 models. Although there are notable differences between the two storage arrays, both are highly virtualized solutions that use all 180 disks for each individual volume or CSV. Therefore, both XIV storage family reference architectures are configured almost identically to provide primary storage for the management and production clusters of IBM Reference Configuration.

The only configuration difference between the two XIV siblings is the iSCSI volume presentation to the management VM guest cluster. With the IBM XIV Storage System Gen3 models, iSCSI ports from six interface modules are used. Because the Gen2 predecessor has fewer ports, only three interface modules, which contain a total of six iSCSI ports, are used. For either XIV generation, storage administrators must ensure that iSCSI ports are used from each interface module to balance the iSCSI workloads.

For convenience, and because of switch port limits, only six iSCSI ports are used in both tests. However, storage administrators can configure up to 22 iSCSI ports on XIV Storage System Gen3 models, and they can adjust volume and CSV sizing based on the number of VMs offered for individual service agreements of the IBM Reference Configuration.

For both XIV family storage systems and associated IBM Reference Configuration environments, most IBM XIV Storage System Cloud Reference Architecture volumes must be allocated for VM CSV use. To eliminate storage array-related “hot spots,” XIV volumes (LUNs) are automatically spread across every disk inside the storage array. Each XIV LUN is then mapped one-to-one to a CSV by using the easy-to-use XIV GUI, so each individual XIV LUN represents a single CSV. All CSVs are concurrently visible to all cluster nodes and store the VM configuration and virtual hard disk files.

Creating several large cluster-shared volumes to host VM operating system files helps to balance the load across Microsoft cluster nodes. Optional CSVs can host virtual hard disk (VHD) files for application data, depending on their functional design. Using fixed VHDs promotes maximum performance. Additional storage volumes provide iSCSI or optional VM pass-through disks as desired.

Furthermore, to help maximize hardware resource utilization, spread VMs and CSVs across cluster members to maintain an active/active cluster configuration. Similarly, the automatic spreading of XIV LUNs across every spindle inside the XIV storage array provides consistent I/O response to the cluster without needing administrator planning or intervention. Regardless, system administrators must ensure that enough physical server resources are available to handle all VMs in an N-1 configuration to accommodate maintenance windows or unplanned cluster failovers.


Figure 3-4 illustrates the CSV configuration for the XIV system (an IBM XIV with 15 modules). The storage volumes and LUNs are laid out as follows:

Production CSV volumes:
– Volume1 disk: VM OS/configuration files (4 TB)
– Volume2 disk: database / random I/O (4 TB)
– Volume3 disk: logging / sequential I/O (4 TB)
– Volume4 disk: VM-specific data (4 TB)

Management CSV volume:
– Volume1 disk: VM OS/configuration files (1 TB)

Additional management volumes:
– Disk1: SCVMMLib (500 GB, pass-through)
– Disk2: SQLVM1 OS volume (50 GB)
– Disk3: SQLVM2 OS volume (50 GB)
– Disk4: SQLData (200 GB, iSCSI)
– Disk5: SQLLog (200 GB, iSCSI)

Additional production disks: others as needed

Optional backup volumes: varies by backup architecture

Figure 3-4 Storage layout

3.4.2 Cabling

IBM XIV Storage System Gen3 requires redundant power feeds with input voltages of 180 - 264 V AC at 60 A or 30 A (±10%). An IBM customer engineer connects the power supplies during the installation process and configures phone-home support for added customer value. Usually, the XIV system is shipped assembled and cabled, ready for customers to connect the storage array to their storage area network (SAN) infrastructure by using FC and Ethernet cables.

Six 1 Gb XIV iSCSI connections must be plugged into their assigned ports on the IBM Brocade B24Y switch:

• Configure these ports as untagged ports on the switch for VLAN 20 use by the VMs.
• Enable jumbo frames on the switches, and configure the server CNAs for 9000 bytes.
• Leave the default MTU 4500-byte jumbo frame value for the XIV iSCSI ports.

Connect three 1 GbE XIV management ports to the assigned 1 GbE ports of the stacked B24Y switches. These ports are configured as untagged ports and assigned to VLAN 60 on the switch.

Connect twelve XIV FC ports to the assigned and dedicated FC ports of the IBM Converged Switch B32. The ports must be zoned for three or six XIV interface modules. For conceptual guidelines, see the zoning diagram in Figure B-1 on page 63 and the switch output in Example B-3 on page 63.



3.4.3 Management

IBM XIV storage administrators have several utilities to choose from in their centralized management toolkit. The IBM XIV Management Tools package consists of three industry-leading, intuitive, and user-friendly utilities:

• The IBM XIV Graphical User Interface (XIVGUI)
• The IBM XIV online monitoring tool (XIVTop)
• The IBM XIV Command Line Utility (XCLI)

You can accomplish the following partial list of common storage administrative tasks by using the preferred utility:

• Pool creation
• Volume creation
• iSCSI target assignments

To begin management of the IBM XIV Storage System:

1. Verify that at least one of the three available management connections from the XIV system is connected to a 1 GbE RJ45 port on the B24Y switches.

2. Assign IP addresses from VLAN 60 to all three XIV management ports. If desired, use the XCLI utility to set the IP addresses for each management port.

3. Ensure that each of the 12 XIV FC and six iSCSI connections are connected to the appropriate ports on the switches.

3.4.4 Configuration

The following step-by-step processes include sample window captures to illustrate how quick and easy it is to configure XIV storage.

Creating the XIV pool

To create the XIV pool:

1. Click the Windows Start button, and then expand All Programs.

2. Expand XIV, and then select XIVGUI, which starts the XIV Storage Management application.

3. Click the XIV System to go to the XIV Storage Management view.


4. Move the mouse pointer over the third icon down on the left side of the window, and select Storage Pools from the menu (Figure 3-5).

Figure 3-5 XIV Storage Management (XIVGUI)

5. In the Storage Pools view, click the Add Pool toolbar button at the top of the window.


6. In the Add Pool dialog box (Figure 3-6), select the type of pool, and then enter the pool size, snapshot size, and pool name. Click the Add button.

Figure 3-6 XIVGUI Add Pool dialog box

7. Confirm the successful creation of the storage pool.

Creating a volume

To create a volume:

1. In the Storage Pools view, click the Volumes by Pools toolbar button at the top of the window.

2. Select the newly created storage pool, and then click the Add Volumes toolbar button.

Storage pool snapshot size: When XIV storage pools are created, the default reserve snapshot size is 10%. Adjust the storage pool snapshot size to a preferred reserve size based on backup practices.


3. In the Create Volumes dialog box (Figure 3-7), ensure that the Select Pool field is populated with the newly created pool. Enter the number of volumes, volume size, and volume name. Then click Create.

Figure 3-7 XIVGUI Create Volumes dialog box

4. Confirm the successful creation of the volume.

Before mapping the newly created volume, ensure that all hosts are properly configured for multipath fault tolerance.
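If you prefer scripting to the XIVGUI, the same pool, volume, and mapping tasks can be performed with XCLI. The following sketch is illustrative only: the object names, sizes, and WWPN are hypothetical placeholders, and the lines that begin with # are annotations rather than XCLI syntax.

# Create a storage pool with reserved snapshot space (sizes in GB)
pool_create pool=CloudPool size=17181 snapshot_size=1717

# Create a CSV-sized volume in the new pool
vol_create vol=ProdCSV1 size=4123 pool=CloudPool

# Define a host object and register one of its CNA WWPNs (placeholder value)
host_define host=Hyper1
host_add_port host=Hyper1 fcaddress=100000051E000001

# Map the volume to the host at LUN 1
map_vol host=Hyper1 vol=ProdCSV1 lun=1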

3.4.5 Multipath I/O fault-tolerance driver

MPIO provides balanced and fault-tolerant paths to XIV storage for all hosts and VMs in the IBM Reference Configuration. The Cloud Reference Architecture ensures this balanced and fault-tolerant storage connectivity by using a two-fold approach that consists of hardware and software components.

For the hardware components, the XIV storage array is connected by using FC cables to a converged networking framework. Two 10 Gb dual-port CNA cards are in each Hyper-V host, which provide redundant multipath storage I/O. The CNAs are connected to a pair of IBM B32 converged network switches. The XIV storage interface modules contain six fiber optic connections to dedicated fiber optic ports on each B32. A pair of IBM B24Y 1 GbE switches with 10 GbE uplinks to the B32s is used to provide secondary iSCSI-based storage as needed. The XIV storage maintains six connections to the B24Y switches, which use dedicated iSCSI storage VLANs.

For the software components, each server must install the IBM XIV Host Attachment Kit for Windows software, which is analogous to traditional MPIO device-specific modules (DSMs). The Host Attachment Kit provides both graphical and command-line wizard interfaces, which enable the native Windows MPIO feature to establish communication with the XIV system. The resulting default multipath configuration uses round-robin load balancing, which greatly boosts performance.

In order for Hyper-V VMs to access IBM XIV iSCSI-based storage, you must install the XIV Host Attachment Kit. Upon successful attachment to the XIV system, the host and its FC or iSCSI ports are automatically populated in the XIV host connectivity table. For additional flexibility, you can add hosts (physical or virtual) and corresponding FC or iSCSI ports manually. However, this task is typically unnecessary when using the host attachment wizard method.

Volume configuration details: The actual volume configuration details are illustrated in the storage layout shown in Figure 3-4 on page 24.

3.4.6 IBM XIV storage pool sizing guidelines to support VSS snapshots

Before creating snapshots on the XIV system, size the storage pools to handle the desired number of snapshots. Based on the rate of change of the data, the desired snapshot retention policy, and the number of snapshots taken, you can estimate the size of the snapshot pool. Also, because it is easy to increase or decrease the size of the snapshot space non-disruptively, you can make adjustments over time to avoid under-allocating or over-allocating space.

After a snapshot pool becomes 100% full, the XIV system deletes snapshots (by using their deletion priority and timestamp data) until the snapshot pool is below 100% full. Thus, ensure that adequate snapshot space exists to prevent unwanted snapshot deletion, especially during a backup.

As a simple rule of thumb, you can calculate the minimum snapshot space needed by using the following equation:

(17 GB x number of snapshots desired) + 17 GB
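For example, if a pool must retain 10 snapshots, plan for at least (17 GB x 10) + 17 GB = 187 GB of snapshot space.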

For simple backups with one snapshot for each LUN that is created, backed up, and then deleted, this rule of thumb might be sufficient. However, if there is a high rate of change on the original LUNs during the backup process, or a highly random write component to the original LUN or LUNs being backed up, you might need additional space.

Other considerations include any asynchronous replication, write-enabled snap LUNs mounted for host use, snapshots saved as part of a disaster-recovery rollback strategy, or any other sources of snapshot growth.

For complete information about all the uses for XIV snapshots, which is beyond the scope of this paper, see the IBM XIV Storage System resources website at:

http://ibm.com/systems/storage/disk/xiv/resources.html

3.5 Setting up the IBM System x3550 M3 Active Directory

An Active Directory server is required for the servers to participate in a Microsoft cluster and is needed by several parts of the management solution. The IBM Reference Configuration x3550 Active Directory server is optional, depending on whether existing Active Directory servers are already present.


The setup of the IBM x3550 M3 is straightforward because no optional devices are installed in the system. Complete the following steps before you configure the software:

1. Confirm that both embedded NIC ports are connected to the assigned 1 GbE RJ45 ports of each B24Y switch.

2. Use the Broadcom Advanced Control Suite (BACS) software to create a fault-tolerant team.

3. Confirm that the stacked B24Y switch ports are configured as aggregates and that they allow VLAN 60 traffic.

Configure the two local disks as a RAID 1 set. For more information, see the IBM ServeRAID M1015 User Guide at:

http://download.boulder.ibm.com/ibmdl/pub/systems/support/system_x_pdf/ibm_doc_sraidmr_m1015-2ndedition_user-guide.pdf

Then complete the following actions:

1. Set the IMM external address in the Unified Extensible Firmware Interface (UEFI) during the boot process by pressing F1 when prompted.

2. Install the Windows Server 2008 R2 SP1 Standard Edition.

3. Assign a static TCP/IP address (VLAN 60).

4. Verify network connectivity.

5. Ensure that the system UEFI code, integrated management module (IMM) firmware, and device drivers are updated to the latest supported versions. You can find the x3550 M3 driver matrix at:

http://www.ibm.com/support/fixcentral/systemx/selectFixes?parent=ibm~Systemx3550M3&product=ibm/systemx/7944&&platform=Windows+2008+x64&function=all

6. Install the IBM Director Platform agent for System Center upward integration.

7. Install the Microsoft Forefront client.

8. Promote the system to an Active Directory server.

9. Run Windows Update with all the roles and features that are needed to support the environment already enabled so that all components are brought up to the latest code levels.
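The domain promotion in step 8 can also be run unattended. The following dcpromo sketch creates a new forest; the domain name and password are hypothetical placeholders, so review the full set of answer-file options before using it.

dcpromo /unattend /NewDomain:forest /NewDomainDNSName:cloud.local ^
  /ReplicaOrNewDomain:domain /InstallDns:yes ^
  /SafeModeAdminPassword:Passw0rd! /RebootOnCompletion:yes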

Add system service accounts to the domain to support the System Center components that are installed later. At a minimum, you must create the following service accounts:

• SVC_SCOM
• SVC_SCVMM
• SVC_VMMSSP
• SVC_SQL
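As an illustration, the accounts can be created from the command line with dsadd, assuming a hypothetical cloud.local domain (the -pwd * option prompts for each password):

rem Create one service account; repeat for SVC_SCVMM, SVC_VMMSSP, and SVC_SQL
dsadd user "CN=SVC_SCOM,CN=Users,DC=cloud,DC=local" -samid SVC_SCOM -pwd * -pwdneverexpires yes -disabled no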

Optional: Set the IMM later by using the in-band method of connecting to the web-based management interface (http://169.254.95.118). This interface is accessed by the default IBM USB Remote NDIS Network Device. You must enable this network device for in-band IMM use, but you can temporarily disable it, such as during failover cluster wizard validations to avoid network flags.


3.6 Setting up the IBM System x3650 M3 management cluster

The management cluster consists of two IBM x3650 M3 systems with 24 GB of RAM each. Complete the following steps before configuring the software:

1. Confirm that two dual-port CNA cards are installed in each server.

2. Confirm that the ports from each CNA card are connected to the assigned ports on the IBM B32 switches.

3. Confirm that the IMM management ports are connected to the assigned B24Y switch ports.

The setup involves installing Windows Server 2008 R2 Enterprise Edition SP1 on each server, followed by confirming network and storage connectivity. After the installation is complete, enable and configure Hyper-V and Microsoft Clustering. VMs are then created to perform various Microsoft System Center server roles as required by the Microsoft Private Cloud Fast Track validation.

You must configure the two local disks as a RAID 1 array as explained in the IBM ServeRAID M1015 User Guide, which you can find at:

http://download.boulder.ibm.com/ibmdl/pub/systems/support/system_x_pdf/ibm_doc_sraidmr_m1015-2ndedition_user-guide.pdf

Then complete the following actions:

1. Install Windows Server 2008 R2 SP1 Enterprise.

2. Set the IMM external TCP/IP address in the UEFI during the boot process by pressing F1 when prompted.

3. Join the domain.

4. Enable Hyper-V.

5. Optional: When creating VMs, change the VM Memory allocation from static to dynamic to allow greater system resource flexibility.

6. Enable Windows Failover Clustering.

7. Run Windows Update with all the roles and features that are needed to support the environment already enabled so that all components are brought up to the latest code levels.

8. Ensure that the system UEFI code, integrated management module (IMM) firmware, and device drivers are updated to the latest supported versions. You can find the x3650 M3 driver matrix at:

http://www.ibm.com/support/fixcentral/systemx/selectFixes?parent=ibm~Systemx3650M3&product=ibm/systemx/5454&&platform=Windows+2008+x64&function=all

9. Install the IBM XIV Host Attachment Kit for MPIO and load balancing services. The Host Attachment Kit is available from your registered IBM support link.

10. Install the IBM Director Platform agent for System Center upward integration.

11. Install the Microsoft Forefront client.

12. Install the Brocade Host Connectivity Manager (HCM) to manage the CNA devices.

Optional: Set the IMM later by using the in-band method of connecting to the web-based management interface (http://169.254.95.118). This interface is accessed by the default IBM USB Remote NDIS Network Device. You must enable this network device for in-band IMM use. However, you can temporarily disable it, such as during failover cluster wizard validations, to avoid network flags.

3.6.1 Configuring the network

Configure the Ethernet host networks for IBM Reference Configuration to use two fault-tolerant teams, each consisting of two CNA ports. Each team must contain a single port from each CNA card so that an individual device failure does not compromise the team:

• Use the first team to create multiple host-based VLANs that are presented as Windows network devices. The Windows VLAN devices support parent partition network requirements, such as cluster heartbeat or live migration roles.

• Use the second team for VM-based traffic. This team requires only the default Passthru VLAN 0. It can then be used as a Hyper-V virtual switch.

Complete the following steps:

1. Using the Brocade Host Connectivity Manager in the parent partition, create two failover teams. Before creating the failover teams, modify the TCP/IP MTU size under the device properties to support jumbo frames on each device. This setting must be 9000 to take advantage of the performance benefits of jumbo frames.

2. Using the Brocade Host Connectivity Manager, create the VLANs as explained in the B32 switch configuration in Appendix A, “Brocade 2-port 10 GbE CNA for IBM System x” on page 43. The result is VLAN devices that are presented under Windows Networking.

3. Either make TCP/IP address assignments under Windows networking as specified, or create vSwitches by using the Hyper-V manager:

– Assign static IP addresses to the VLAN 60, 70, and 80 devices.

– Create vSwitches from the VM-only failover team using the default Passthru VLAN 0 device, which allows both tagged and untagged traffic.

4. Deselect the option to allow management traffic on these devices when creating the vSwitch under Hyper-V. This step removes access to this device from the physical host.

5. Validate the network connectivity between the two servers on each respective VLAN. Cluster private networks must not have a default gateway.
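As an illustrative sketch, the static addresses can be assigned with netsh; the interface names and addresses are hypothetical placeholders. Note that only the management network receives a default gateway.

rem Management network (VLAN 60) carries the default gateway
netsh interface ipv4 set address name="VLAN60" static 10.10.60.11 255.255.255.0 10.10.60.1

rem Cluster private (VLAN 70) and live migration (VLAN 80) networks get no gateway
netsh interface ipv4 set address name="VLAN70" static 10.10.70.11 255.255.255.0
netsh interface ipv4 set address name="VLAN80" static 10.10.80.11 255.255.255.0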

3.6.2 Validating the storage area network configuration

Confirm that the physical cabling for each device is connected to the correct target port in each IBM B32 switch. Each CNA port presents an HBA port to the operating system for a total of four HBA ports.

Volume mapping is used to ensure that the XIV storage volumes are accessible only to the specific servers that are assigned to them. Unique WWNs are assigned to each port and can be seen using the XIVGUI or Brocade HCM.

Validate the SAN configuration as follows:

1. Upon installing the IBM XIV Host Attachment Kit, validate that all ports are added for storage access using the XIVGUI.

2. Ensure that the disks presented by the XIV system are visible in the Windows Disk Management MMC.


3. On one of the servers, use the Windows Disk Management MMC to bring the disks online. At a minimum, they must consist of LUNs for the following roles (see the sketch that follows this list):

– Cluster Shared Volume
– System Center VMM library pass-through disk
– Cluster quorum disk (a file share witness quorum model is recommended)
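To confirm the paths and bring a disk online without the GUI, you can combine the Host Attachment Kit xiv_devlist utility with a diskpart script. This is a sketch only; the disk number is a placeholder that varies by system.

rem List the XIV volumes and the number of paths that each one presents
xiv_devlist

rem Build a diskpart script that brings one disk online (disk number varies)
echo select disk 2 > online.txt
echo online disk >> online.txt
echo attributes disk clear readonly >> online.txt
diskpart /s online.txt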

3.6.3 Creating the cluster

To provide a highly available management environment, a cluster is created between the two management servers. The management servers host HA VMs on this cluster. The SQL Server VMs are not HA VMs, but form their own cluster to support the SQL server in a highly available configuration. If you are not familiar with setting up a Microsoft cluster under Windows 2008, see “Hyper-V: Using Hyper-V and Failover Clustering” at:

http://technet.microsoft.com/en-us/library/cc732181(WS.10).aspx

Using the Failover Cluster Manager, run the cluster validation wizard to assess the two physical management servers as potential cluster candidates and then address any errors:

• A two-node cluster needs a quorum to determine cluster ownership. The cluster validation wizard checks for feasible quorum devices. However, after the default quorum model is implemented during the installation process, the quorum model can be changed to the recommended Node and File Share Majority.

For more information, see “Understanding Quorum Configurations in a Failover Cluster” at:

http://technet.microsoft.com/en-us/library/cc731739.aspx

• Make sure that the intended cluster storage is online to only one of the cluster nodes (CSV and other preferred shared storage).

• Make sure that the cluster public network is at the top of the network binding order (VLAN 60).

• Temporarily disable the default IBM USB Remote NDIS Network Device on all cluster nodes because it causes the validation to fail during network detection due to all nodes sharing the same IP address.

Using the Failover Cluster Manager, create a cluster with the two physical management servers. Then complete the following actions (a PowerShell sketch follows the list):

1. Validate the cluster name and IP address.
2. Validate the cluster networking.
3. Validate the storage in the cluster.
4. Enable CSVs, and add the designated volume as a CSV.
5. Using Hyper-V Manager, set the default paths for VM creation to use the CSV.
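The validation, creation, quorum, and CSV steps can also be scripted with the FailoverClusters PowerShell module that ships with Windows Server 2008 R2. This sketch uses hypothetical node names, cluster name, address, and witness share.

# Load the failover clustering cmdlets
Import-Module FailoverClusters

# Validate both management nodes, then create the cluster
Test-Cluster -Node MGMT1, MGMT2
New-Cluster -Name MGMTCLUS -Node MGMT1, MGMT2 -StaticAddress 10.10.60.20

# Switch to the recommended Node and File Share Majority quorum model
Set-ClusterQuorum -Cluster MGMTCLUS -NodeAndFileShareMajority \\FILESRV\MgmtWitness

# Enable Cluster Shared Volumes, then add the designated disk as a CSV
(Get-Cluster -Name MGMTCLUS).EnableSharedVolumes = "Enabled"
Add-ClusterSharedVolume -Cluster MGMTCLUS -Name "Cluster Disk 1"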

3.6.4 Setting up Windows Server 2008 R2 SP1 for VMs

Each of the management servers hosts Windows 2008 R2 SP1 VMs. The operating system can be installed by using various methods. A straightforward approach is to modify the VM DVD drive settings to specify an image file that points to the Windows installation ISO image. Then start the VM to begin the installation. Other deployment methods, such as a VHD file with a sysprep image, Windows Deployment Services server, or System Center Configuration Manager, are also acceptable.


With the operating system installed and the VM running, complete the following actions before installing the application software:

1. Run Windows Update.

2. Review the appropriate Microsoft Systems Center or other application installation documentation. If available, run application prerequisites tools to help determine and meet software prerequisites, for example: required role services, such as Internet Information Services (IIS) for Windows Server, or features, such as the .NET framework.

3. Install the integration services in the VM. Although the latest Windows Server builds have integration services built in, ensure that the Hyper-V child (VM) and the parent partition run the same version.

4. Activate Windows.

5. Using Hyper-V Manager, adjust the VM settings to use dynamic memory with appropriate upper and lower ranges.

3.6.5 Microsoft SQL Server guest cluster

Several Microsoft System Center databases (SCOM, System Center VMM, Self-Service Portal) are hosted on Microsoft SQL Server VMs. To provide application fault tolerance, a two-node SQL Server guest cluster is created by using two VMs. A single non-HA SQL VM runs on each of the two management servers. To reiterate, when these VMs are on clustered management servers, they must not be enabled as HA VMs. Microsoft does not support the use of HA VMs to form guest clusters.

This configuration has the following requirements:

• 2 non-HA VMs (one on each management server)

• 4 virtual CPUs

• 6 GB of RAM

• 3 virtual NICs (1 client connection (VLAN 60), 1 cluster private (VLAN 90), and 1 iSCSI (VLAN 20))

• Windows Server 2008 R2 SP1 Enterprise installed and updated in VMs

• A Windows version that supports failover clustering

• SQL Server 2008 SP1 x64

3.6.6 Storage

Storage used for the SQL Server cluster must be allocated from the XIV system. Volumes for the VM OS VHD are presented and mapped to each individual host. Volumes for database and log volumes must be presented and mapped to both SQL VMs as follows:

• OS VHD on XIV volumes (50 GB fixed)
• LUN1: SQL databases (200 GB physical iSCSI disk)
• LUN2: SQL logging (200 GB physical iSCSI disk)


The following databases are used by Microsoft System Center:

• VMM <VMM_DB>. VMM creates the database on the SQL Server during setup.

• VMM SSP <SCVMMSSP>. The Self-Service Portal creates the database on the SQL Server during setup.

• SCOM <Ops_Mgr_DB>.

• SCOM <Ops_Mgr_DW_DB>. SCOM provides a database creation utility on the installation media to create the required databases.

3.6.7 Microsoft System Center Operations Manager

A Microsoft SCOM system must be available to support the private cloud management environment. To provide fault tolerance, a highly available VM is created for this purpose.

The Microsoft SCOM VM must have the following minimum specifications:

• HA VM
• 2 virtual CPUs
• 4 GB of RAM
• 1 virtual NIC (Management Network connection (VLAN 60))
• OS VHD on a CSV (50 GB fixed)
• SCOM 2007 R2 with Cumulative Update 4

Configure the SCOM VM:

1. Run Windows Update for the Windows Server 2008 R2 SP1 VM.

2. Install the SCOM database on the SQL server.

3. Use the DBCreateWizard tool, which is on the SCOM ISO image in the \Support Tools\AMD64 directory.

4. Assign a fixed IP address on VLAN 60.

5. Run the prerequisite checker. (It does not detect the remote SQL Server.)

6. Install SCOM 2007 R2.

7. Update SCOM to cumulative update 4.

The following components are required by SCOM:

• Root management server
• Reporting server (database on SQL Server)
• Data warehouse (database on SQL Server)
• Operator console
• Command shell

Recommendations:

• Use the Node and File Share Majority cluster quorum model.

• Use the SQL service account created on the Active Directory server when installing SQL Server. For guidance about installing and creating the clustered SQL Server 2008 SP1 systems, see “How to: Create a New SQL Server Failover Cluster (Setup)” at:

http://technet.microsoft.com/en-us/library/ms179530(SQL.100).aspx


Install the following SCOM packs:

• Virtual Machine Manager 2008 R2 SP1 pack
• Windows Server Base Operating System pack
• Windows Failover Clustering pack
• Windows Server 2008 Hyper-V pack
• Microsoft SQL Server pack
• Microsoft Windows Server Internet Information Server pack
• System Center packs (For links, see “Related publications” on page 75.)
• IBM Director Upward Integration module
• IBM Storage Management Pack for Microsoft SCOM

3.6.8 System Center Virtual Machine Manager

A Microsoft System Center VMM system must be available to support the private cloud management environment. To provide fault tolerance, a highly available VM is created for this purpose.

The Microsoft System Center VMM VM must have the following minimum specifications:

• HA VM
• 2 virtual CPUs
• 4 GB of RAM
• 1 virtual NIC (Management Network connection (VLAN 60))
• OS VHD (50 GB fixed disk)
• 1 pass-through disk for the VMM library (500 GB)
• System Center Virtual Machine Manager 2008 R2 SP1

Configure the System Center VMM VM:

1. Run Windows Update for the Windows Server 2008 R2 SP1 VM.
2. Assign a static IP address from VLAN 60.
3. Confirm that all prerequisites are met.
4. Confirm that the VMM library disk is online.
5. Create a directory and file share to accommodate the SCVMM library.
6. Install System Center VMM 2008 R2 SP1.
7. Update System Center VMM.

The following components are required by System Center VMM:

• VMM server
• Administrative Console
• VMM library
• SQL database on the remote SQL cluster
• Command shell

In addition, integrate System Center VMM with Operations Manager. VMM uses SCOM to monitor the health and availability of its managed VMs and parent hosts. (For integration guidance, see “Related publications” on page 75.) To support this integration, install the SCOM Management Console on the VMM system.

3.6.9 Microsoft System Center Self-Service Portal 2.0

A Microsoft System Center Self-Service Portal 2.0 system must be available to support the private cloud management environment and self-service provisioning of VMs. To provide fault tolerance, a highly available VM is created for this purpose.


For Microsoft installation and configuration information, see “Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0 SP1” in the Microsoft Download Center at:

http://www.microsoft.com/download/en/details.aspx?id=26701

The Microsoft System Center Self-Service Portal 2.0 VM must have the following minimum specifications:

• HA VM
• 2 virtual CPUs
• 4 GB of RAM
• 1 virtual NIC (Management Network connection (VLAN 60))
• 1 OS VHD (50 GB fixed disk)

Configure the System Center Self-Service Portal 2.0 VM:

1. Run Windows Update for the Windows Server 2008 R2 SP1 VM.
2. Assign a static IP address from VLAN 60.
3. Install and update IIS with all required components.
4. Install and update Self-Service Portal 2.0.
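As a sketch, the IIS role and common components for step 3 can be staged from PowerShell; take the authoritative feature list from the Self-Service Portal documentation referenced above.

# Load the Server Manager cmdlets, then install the web server role and components
Import-Module ServerManager
Add-WindowsFeature Web-Server, Web-Asp-Net, Web-Windows-Auth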

3.6.10 Microsoft System Center Opalis Integration Server

A Microsoft System Center Opalis Integration Server can provide a datacenter orchestration and integration layer between the System Center components (SCOM, VMM, Data Protection Manager (DPM), and Configuration Manager) and third-party platforms. The Opalis server can automate and sequence tasks to better support your dynamic private cloud environment. To provide fault tolerance, a highly available VM is created for this purpose.

For Microsoft installation and configuration references, see the Opalis Integration Server Administrator Guide at:

http://technet.microsoft.com/en-us/library/gg464955.aspx

The Microsoft System Center Opalis Server must have the following minimum specifications:

• HA VM
• 2 virtual CPUs
• 4 GB of RAM
• 1 virtual NIC (Management Network connection (VLAN 60))
• 1 OS VHD (50 GB fixed disk)
• 1 application disk (500 GB VHD)

3.7 Setting up the IBM System x3650 M3 production Hyper-V cluster

The production cluster consists of eight IBM x3650 M3 systems with up to 288 GB of RAM each. You must complete the following actions before you configure the software:

1. Confirm that two dual-port CNA cards are installed in each server.

2. Confirm that the ports from each CNA card are connected to the assigned ports on the IBM B32 switches.

3. Confirm that the IMM management ports are connected to the assigned B24Y switch ports.


Setup involves installing Windows Server 2008 R2 Datacenter SP1 on each server and then confirming network and storage connectivity. Then, Hyper-V and Microsoft Clustering can be enabled and configured. Use the Datacenter edition of Windows because it includes licenses for an unlimited number of Windows Server 2008 R2 VMs; its cost is based on the number of server sockets.

Configure the two local disks as a RAID 1 array. For more information, see the IBM ServeRAID M1015 User Guide at:

http://download.boulder.ibm.com/ibmdl/pub/systems/support/system_x_pdf/ibm_doc_sraidmr_m1015-2ndedition_user-guide.pdf

Then complete the following actions:

1. Install Windows Server 2008 R2 SP1 Datacenter.

2. Set the IMM external TCP/IP address in the UEFI during the boot process by pressing F1 when prompted.

3. Join the domain.

4. Enable Hyper-V.

5. Enable Windows Failover Clustering.

6. Run Windows Update with all the roles and features that are needed to support the environment already enabled so that all components are brought up to the latest code levels.

7. Ensure that the system UEFI code, integrated management module (IMM) firmware, and device drivers are updated to the latest supported versions. You can find the x3650 M3 driver matrix at:

http://www.ibm.com/support/fixcentral/systemx/selectFixes?parent=ibm~Systemx3650M3&product=ibm/systemx/7945&&platform=Windows+2008+x64&function=all

8. Install the IBM XIV Host Attachment Kit for MPIO and load balancing services.

9. Install the IBM Director platform agent for System Center upward integration.

10. Install the Microsoft Forefront client.

11. Install the Brocade HCM to manage the CNA devices.

Optional: Set the IMM later by using the in-band method of connecting to the web-based management interface (http://169.254.95.118). This interface is accessed by the default IBM USB Remote NDIS Network Device. This network device must be enabled for in-band IMM use, but it can be temporarily disabled, such as during failover cluster wizard validations, to avoid network flags.

3.7.1 Configuring the network

Configure the Ethernet host networks to use two fault-tolerant teams, each consisting of two CNA ports. Each team must contain a single port from each CNA card so that an individual device failure does not compromise the team. Use the first team to create multiple host-based VLANs that are presented as Windows network devices. The Windows VLAN devices support parent partition network requirements, such as cluster heartbeat or live migration roles. Use the second team for VM-based traffic. This team requires only the default Passthru VLAN 0. It can then be used as a Hyper-V virtual switch.

Optional: Set the IMM later by using the in-band method of connecting to the web-based management interface (http://169.254.95.118). This interface is accessed by the default IBM USB Remote NDIS Network Device. This network device must be enabled for in-band IMM use but can be temporarily disabled such as during failover cluster wizard validations to avoid network flags.

38 IBM Reference Configuration for Microsoft Private Cloud: Implementation Guide

Page 51: IBM Reference Configuration for Microsoft Private … · International Technical Support Organization IBM Reference Configuration for Microsoft Private Cloud: Implementation Guide

Use the Brocade Host Connectivity Manager in the parent partition to create two failover teams as follows:

1. In the Brocade Host Connectivity Manager, create the VLANs as detailed in the B32 switch configuration explained in Appendix A, “Brocade 2-port 10 GbE CNA for IBM System x” on page 43. The result is that VLAN devices are presented under Windows Networking.

2. Make TCP/IP address assignments under Windows networking as specified, or create vSwitches by using Hyper-V Manager.

3. Assign static IP addresses to the VLAN 30, 40, and 60 devices.

4. Use the default Passthru VLAN 0 device, which allows both tagged and untagged traffic, to create vSwitches from the VM-only failover team.

5. Clear the option to allow management traffic on these devices when creating the vSwitch under Hyper-V. This action removes access to this device from the physical host.

6. Validate the network connectivity between the two servers on each VLAN. (A scripted check is sketched after this list.)
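
A quick scripted version of step 6 might look like the following sketch; the peer addresses shown are hypothetical placeholders for the static IP addresses assigned in step 3:

# Hypothetical peer addresses on each host VLAN; substitute your own.
$peers = @{
    'VLAN 30 (cluster private)' = '192.168.30.2'
    'VLAN 40 (live migration)'  = '192.168.40.2'
    'VLAN 60 (management)'      = '192.168.60.2'
}

foreach ($entry in $peers.GetEnumerator()) {
    # Test-Connection -Quiet returns $true only if the peer answers.
    $ok = Test-Connection -ComputerName $entry.Value -Count 2 -Quiet
    Write-Host ("{0}: {1}" -f $entry.Key, $(if ($ok) { 'reachable' } else { 'FAILED' }))
}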

3.7.2 Storage area network

Confirm that the physical cabling for each device is connected to the correct target port in each IBM B32 switch. Each CNA port presents a host bus adapter (HBA) port to the operating system for a total of four HBA ports.

Volume mapping is used to ensure that the XIV storage volumes are only accessible to the specific servers assigned to them. Unique WWNs are assigned to each port and are viewable by using the XIVGUI or Brocade HCM.

To ensure XIV host connectivity:

1. After you install the IBM XIV Host Attachment Kit, use the XIVGUI to validate that all ports are added for storage access. The disks that the XIV Storage System presents must also be visible in the Windows Disk Management MMC.

2. On one of the servers, use the Windows Disk Management MMC to bring the disks online. At a minimum, this step must consist of LUNs for the following roles:

– Cluster Shared Volume
– SCVMM library pass-through disk
– Cluster quorum disk (a file share witness is the recommended quorum model)

3.7.3 Creating the cluster

To provide a highly available VM environment, a cluster is created between the eight production servers. The production servers host HA VMs on this cluster. If you are not familiar with setting up a Microsoft cluster under Windows 2008, see “Related publications” on page 75 for a link to more information.

TCP/IP MTU size: Before creating the failover teams, modify the TCP/IP MTU size under device properties to support jumbo frames on each device. Set it to 9000 to take advantage of jumbo frame performance benefits.

Default gateway: Cluster private networks must not use a default gateway.
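
To confirm that the jumbo frame setting described in the preceding note took effect (or, if preferred, to set the interface MTU from the shell), the netsh commands that are built into Windows Server 2008 R2 can be used; the interface name below is a hypothetical example:

# List the current MTU values for all interfaces.
netsh interface ipv4 show subinterfaces

# Persistently set a 9000-byte MTU on one teamed VLAN device.
netsh interface ipv4 set subinterface "Local Area Connection 3" mtu=9000 store=persistent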


Using the Failover Cluster Manager, run the cluster validation wizard to assess the eight physical production servers as potential cluster candidates and address any errors. To avoid cluster validation wizard failures, ensure that the following considerations and prerequisites are addressed:

• An eight-node cluster needs a quorum to determine cluster ownership. The cluster validation wizard checks for feasible quorum devices. However, after the default quorum model is implemented during the installation process, the quorum model can be changed to the recommended Node and File Share Majority.

For more information, see “Understanding Quorum Configurations in a Failover Cluster” at:

http://technet.microsoft.com/en-us/library/cc731739.aspx

• Make sure that the intended cluster storage is online to only one of the cluster nodes (CSV and other preferred shared storage).

• Ensure that the cluster management network (VLAN 60) is at the top of the network binding order.

• Temporarily disable the default IBM USB Remote NDIS Network Device on all cluster nodes because it causes the validation to fail during network detection due to all nodes sharing an IP address.

Using the Failover Cluster Manager, create a cluster with the eight physical production servers. Then complete the following actions, which are also sketched as a PowerShell sequence after this list:

1. Validate the cluster name and IP address.
2. Validate the cluster networking.
3. Validate the storage in the cluster.
4. Enable CSVs, and add the designated volume as a CSV.
5. Using Hyper-V Manager, set the default paths for VM creation to use the CSV.
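
The same validation and creation sequence can be scripted with the FailoverClusters module that is included with Windows Server 2008 R2. This is a minimal sketch; the node names, cluster name, address, and witness share are hypothetical placeholders:

Import-Module FailoverClusters

# Hypothetical node names; substitute the eight production hosts.
$nodes = 'HYPER1','HYPER2','HYPER3','HYPER4','HYPER5','HYPER6','HYPER7','HYPER8'

# Run the validation wizard and review the report before continuing.
Test-Cluster -Node $nodes

# Create the cluster with its management name and static address (VLAN 60).
New-Cluster -Name 'PRODCLUS' -Node $nodes -StaticAddress '192.168.60.50'

# Change to the recommended Node and File Share Majority quorum model.
Set-ClusterQuorum -Cluster 'PRODCLUS' -NodeAndFileShareMajority '\\MGMT1\FSWitness'

# Enable Cluster Shared Volumes, and promote the designated disk to a CSV.
(Get-Cluster -Name 'PRODCLUS').EnableSharedVolumes = 'Enabled'
Add-ClusterSharedVolume -Cluster 'PRODCLUS' -Name 'Cluster Disk 1'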

3.8 Setting up IBM System x3550 M3 Data Protection Manager 2010

Implement a private cloud backup and recovery strategy with Microsoft Data Protection Manager (DPM) 2010 that eliminates the storage frame as a single point of failure. To alleviate such data protection vulnerabilities for IBM Reference Configuration, use a secondary XIV array or a similar enterprise-class storage or tape backup system. Additionally, use a separate, optional physical server, such as the IBM System x3550 M3, to manage and drive the Microsoft DPM 2010 backup and recovery engine.

The private cloud cluster hosts must have the latest IBM XIV VSS Hardware Provider (v2.3.1) to take advantage of the Microsoft Volume Shadow Copy Services (VSS) framework. Full DPM initial replicas (online backups) of the VMs can then be generated by using XIV snapshots. DPM acts as the VSS requestor and communicates with the Hyper-V writer on the IBM Reference Configuration hosts to quiesce the tenant VMs for crash-consistent backups. The XIV VSS hardware provider then generates near-instantaneous snapshots of the host CSV, where all VMs reside, for the full replicas. VM application-consistent backups use the Windows 2008 R2 native VSS system provider for LAN-based incremental replicas (online backups) to the dedicated DPM server. To facilitate private cloud data protection, DPM modules support the most popular business-critical applications from Microsoft, such as Microsoft Exchange, SQL Server, and SharePoint.
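
Before creating DPM protection groups, it is worth confirming on each cluster host that the XIV hardware provider and the Hyper-V writer are registered with VSS. A quick check, using the vssadmin utility that is built into Windows, might look like this sketch:

# List registered VSS providers; the IBM XIV VSS hardware provider
# should appear alongside the default system provider.
vssadmin list providers

# Confirm that the Hyper-V VSS writer is present and stable.
vssadmin list writers | Select-String -Context 0,2 'Hyper-V'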


The setup of the IBM x3550 M3 is straightforward as explained in 3.3, “Active Directory” on page 22. Complete the following actions before you configure the software:

1. Confirm that both embedded NIC ports are connected to the assigned ports of each network switch.

2. Confirm that the network switch ports are configured as aggregates and to allow backup traffic for IBM Reference Configuration on VLAN 60.

The two local disks must be configured as a RAID 1 array. For more information, see the ServeRAID M1015 SAS/SATA Controller User's Guide at:

http://download.boulder.ibm.com/ibmdl/pub/systems/support/system_x_pdf/ibm_doc_sraidmr_m1015-2ndedition_user-guide.pdf

Then complete the following actions:

1. Set the IMM external address in the UEFI during the boot process by selecting F1 when prompted.

2. Install the Windows Server 2008 R2 SP1 Standard Edition.

3. Use the BACS software to create a fault-tolerant team.

4. Assign a static TCP/IP address (VLAN 60).

5. Verify network connectivity.

6. Ensure that the system UEFI code, integrated management module, and latest device drivers are updated to the latest supported version. You can find the x3550 M3 driver matrix at:

http://www.ibm.com/support/fixcentral/systemx/selectFixes?parent=ibm~Systemx3550M3&product=ibm/systemx/7944&&platform=Windows+2008+x64&function=all

7. Install the IBM Director Platform agent for System Center upward integration.

8. Install the Microsoft Forefront client for antivirus protection.

9. Install updated storage HBA firmware and drivers.

10.Install the IBM XIV Host Attachment Kit for MPIO and load balancing services.

11.Run Windows Update with all required roles and features enabled so that all components are brought up to the latest code levels.

12.Create multiple volumes of 4 TB or greater on the XIV system or other storage system for DPM backup use. (The total capacity and number of volumes might vary based on individual IBM Reference Configuration environments.)

13.Install Microsoft DPM 2010.

For more information about Microsoft DPM 2010 prerequisites, implementation, and best practices, see the “How to protect Hyper-V with DPM 2010” white paper on the Microsoft Download Center at:

http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=14575

Optional: Set the IMM later using the in-band method of connecting to the web-based management interface (http://169.254.95.118). This interface is accessed by the default IBM USB Remote NDIS Network Device. This network device must be enabled for in-band IMM use.


3.9 Summary

Upon completing the best practice implementation steps, two operational, highly available Microsoft Hyper-V failover clusters (one for management and one for production VMs) form a high-performing, interoperable, and reliable IBM private cloud solution. Enterprise-class multilevel software and hardware fault tolerance is achieved by configuring a robust collection of industry-leading IBM System x servers, XIV Storage Systems, and Brocade networking components to meet the Microsoft Private Cloud Fast Track program guidelines. The unique framework of the program promotes standardized and highly manageable cloud environments that help to satisfy the most challenging business critical virtualization demands.

The intelligent coupling of IBM, Brocade, and virtualized Microsoft System Center management software creates a capacity-on-demand environment that pools compute, storage, and networking resources.

Microsoft System Center VMM, acting as a VM deployment hub, provides cloud administrators the ability to create VM templates to rapidly deploy VMs to Microsoft failover clusters and to stand-alone hosts. System Center VMM takes advantage of Performance and Resource Optimization (PRO) Packs, such as the IBM PRO Pack. In doing so, System Center VMM allows automated VM cluster migrations in response to defined SCOM triggers that can resolve problematic resource states and balance critical workloads across cluster hosts.

Microsoft SCOM provides centralized administration from a single GUI with multilayer monitoring of health, performance, and availability of private cloud environments, across hardware, hypervisors, operating systems, and applications. SCOM also takes advantage of the IBM Hardware and Storage Management Pack to collect and provide IBM-specific performance monitoring and reporting.

System Center VMM Self-Service Portal 2.0 delivers an intuitive web-based portal that both service providers and customers can use. Service providers use the portal to dynamically pool, allocate, and manage data center resources that can be easily mapped to billable units for server, storage, and networking hardware resources. Using the web-interface of the Self-Service Portal, customers seeking to reduce costs can easily purchase affordable, virtual IT infrastructure, and business-critical application resources built on a dependable and highly available IBM and Brocade proven hardware platform.


Appendix A. Brocade 2-port 10 GbE CNA for IBM System x

The Brocade 2-port 10 GbE Converged Network Adapter for IBM System x is a two-port Converged Network Adapter (CNA) PCIe 2.0 x8 card that mounts inside supported IBM System x servers. This card combines the functions of a 10 GbE network interface card (NIC) and Fibre Channel (FC) host bus adapter (HBA).

The CNA supports full Fibre Channel over Ethernet (FCoE) protocol offload and allows Ethernet and storage traffic to run simultaneously over a converged link. Advanced capabilities, such as support for jumbo frames, virtual local area network (VLAN) tagging, TCP Segmentation Offload (TSO), and Large Send Offload (LSO), also help in iSCSI storage environments.

This appendix includes the following sections:

• Features
• Windows Device Manager
• Configuring the Hyper-V network


Driver v2.3 or later: Ensure that driver v2.3 or later is running to support the configurations required for Fast Track.


Features

The CNA supports the following features:

• PCI Express x8 Gen2 host interface

• Communications module (ASIC with two 400 MHz processors)

• Support for IPv4 and IPv6

• Brocade Host Connectivity Manager (HCM) device management and Brocade Command Line Utility (BCU) tools

• Unified management with IBM System Storage® Data Center Fabric Manager (DCFM) and IBM Systems Director

The Ethernet function supports the following features:

• 10 Gbps throughput for each port, full duplex

• 1.8 million packets per second for each port in each direction (700-byte packets, latency greater than 2 µs)

• Checksum or CRC offloads for FCoE packets, IPv4/IPv6 TCP and UDP packets, and the IPv4 header

• VLAN support (up to 64)

• Windows NIC teaming (up to eight teams for each system, eight ports for each team)

• Jumbo frame support (up to 9600 bytes)

• Header data split (HDS) feature for advanced link layer

• Receive side scaling (RSS) feature for advanced link layer

• TCP segmentation offload (TSO) and large send offload (LSO)

• Link aggregation (NIC teaming)

• Priority-based Flow Control (IEEE 802.1Qbb)

• Enhanced Transmission Selection (IEEE 802.1Qaz)

• Data Center Bridging eXchange protocol (DCBX)

Windows Device Manager

To configure jumbo frames and other options, such as IPv4 Checksum Offload, TSO, LSO, RSS, HDS, VLAN support, and VLAN ID:

1. Go to Windows Device Manager, right-click the Brocade adapter to be configured, and select Properties.

2. In the Brocade 10G Ethernet Adapter Properties window (Figure A-1 on page 45), complete these steps:

a. Click the Advanced tab.
b. From the Property list, select Jumbo Packet Size.
c. In the Value field, type 9000.
d. Click OK.

You must complete this task before building teams off the devices.


Figure A-1 Windows Device Manager Properties window

Managing CNAs in the Hyper-V IBM Reference Configuration environment

You can manage the Brocade CNAs for the System x server by using either the GUI-based Brocade HCM or the BCU that is installed with the drivers. The Brocade HCM uses a client agent-based model. The Brocade HCM agent is also installed with the adapter drivers. The Brocade HCM GUI client can be installed locally or remotely on a separate management system, and context-sensitive menus are used to configure devices within the Brocade HCM.

Brocade CNAs support the virtualization of two or more physical network interfaces into a single logical interface known as a team. In essence, the driver advertises the same IP and MAC address from all team constituents or CNA ports.

Although two types of teaming methods are available, this configuration uses the active/passive method:

Active/active Transmits across all devices in the team in a round-robin method, which provides redundancy and load balancing. You can use this method only if the teamed interfaces have uplinks to the same physical switch (or multiple switches that support Brocade multi-chassis trunking (MCT)).

Active/passive One physical interface in a teamed group, called the primary port, is active, and the other interfaces are on standby for redundancy. If the primary port goes down, a secondary port is chosen to be the next primary. An active/passive configuration can be in failback mode, which means that, if the original port designated to be the primary goes down and comes back up, it becomes the primary port again. Alternatively, the configuration can be in failover mode, which means that the secondary port that became the primary port stays as the primary port until it fails.

VLAN ID configuration: Although you can configure the VLAN ID using this task, do not use Windows Device Manager in the context of configuring the Microsoft Fast Track environment. Perform multiple VLAN configurations by using either Brocade HCM or BCU.



The current Microsoft Private Cloud Fast Track configuration supports only active/passive mode. Typically, all settings, such as maximum transmission unit (MTU) size, link speed, and port VLAN ID, on the interfaces to be teamed must match. Up to eight separate teams can be created on a single system with up to eight ports for each team. All ports must be from Brocade CNAs.

To configure failover teaming and VLANs to mirror the IBM Reference Configuration environment:

1. Click Start → All Programs → Brocade Adapter Software → Host Connectivity Manager to start the application.

2. In the upper left part of the window, select Localhost.

3. From the menu bar, expand Configure, and then select Teaming.

4. In the Teaming Configuration dialog box:

a. Create the left CNA team:

i. Enter the team name.
ii. Set Team Mode to Failover.
iii. Move the upper left and lower left CNA ports to the Selected Ports list box.
iv. Select the upper left port, and click Set Primary.
v. Confirm that the primary port has a check mark.
vi. Click Apply.

FCoE traffic: FCoE traffic is not supported when using active/active NIC teaming or bonding.

Before you configure the failover teams: Make sure that you identify and label all CNA ports in the Network Connections Explorer view of the Windows operating system. This step helps you to select the desired CNA ports for optimal fault tolerance. In the lab configuration, a simple naming convention is used to reflect the physical layout of the CNAs that can be easily identified. All top CNAs are plugged into the top B32 switch, and all bottom CNAs are plugged into the bottom B32 switch.


The failover team is successfully created (Figure A-2).

Figure A-2 Brocade left CNA failover team used for Hyper-V VM traffic

b. Under Teams on the left side, click Add.

c. Create the right CNA team:

i. Enter the team name.
ii. Set Team Mode to Failover.
iii. Move the upper right and lower right CNA ports to the Selected Ports list box.
iv. Select the upper right port, and click Set Primary.
v. Confirm that the primary port has a check mark.
vi. Click Apply.

d. After the failover team is successfully created, click Add to the right of the VLANs field.

e. In the Add VLAN dialog box, enter the VLAN ID and VLAN name, and then click OK.

For the management cluster, repeat this step for VLANs 60, 70, and 80. For the production cluster, repeat it for VLANs 30, 40, and 60.


The VLANs are successfully created (Figure A-3).

Figure A-3 Brocade right CNA failover team used for Hyper-V parent host traffic

5. Confirm that the new network devices are shown in the Network Connections Explorer view of the Windows operating system as shown in Figure A-4.

Figure A-4 Windows Network Connection view of newly created failover teams and VLANs

6. In the Network Connections Explorer view, re-label the new network devices by using a preferred naming convention.

VLAN 0: In the Device Name column, notice TEAM#LeftCNATeam, which uses the default PASSTHRU VLAN 0. As a reminder, VLAN 0 allows both untagged and tagged VLAN traffic.


Configuring the Hyper-V network

Due to the complexity of the network solution, additional step-by-step processes with window captures are included to clarify the necessary Hyper-V networking configuration. Again, examples are shared for the management cluster, but the same principles apply to the production cluster. All host traffic is associated with one failover team (LeftCNATeam), and all VM traffic is associated with the other team (RightCNATeam). The following steps resume after the failover teams and VLANs are created as explained in “Managing CNAs in the Hyper-V IBM Reference Configuration environment” on page 45.

From a Hyper-V networking perspective, there are two critical configuration requirements: the virtual switch configuration and the VM settings. Only the VM-dedicated failover team is used for the virtual switch creation.

To create the Hyper-V virtual network switch (vSwitch):

1. In the Hyper-V Manager with the server highlighted, in the right Actions pane, click Virtual Network Manager.

2. In the Virtual Network Manager dialog box:

a. Select New virtual network.
b. Under the type of virtual network, select External.
c. Click Add.
d. Under Virtual Network Properties on the right side, complete the following steps:

i. Complete the Name and Notes fields.

ii. For the connection type, ensure that External is selected.

iii. In the drop-down field, select the desired default team, which contains only PASSTHRU VLAN 0 (TEAM#LeftCNATeam).

iv. Clear the Allow management operating system to share this network adapter check box.

e. Click Apply (not shown) to add the new virtual network switch.


Figure A-5 shows the settings that you just completed.

Figure A-5 Hyper-V Virtual Network/Switch creation using dedicated VM failover team

3. Repeat step 2 for all cluster nodes, ensuring that identical names are used.

To set up Hyper-V VM settings:

1. Using Failover Cluster Manager on the active node, in the left pane, expand Services and applications. Then select the HA VM.

2. In the top middle pane, right-click the VM and select Settings.

3. In the left pane of the Settings for the VM (name of the VM) dialog box, select the Network Adapter.

4. In the Network Adapter group box, complete these steps:

a. In the Network drop-down field, select the newly created virtual network or vSwitch.
b. Select the Enable virtual LAN identification check box.
c. Enter the VLAN ID.
d. Click Apply.

Standardization of vSwitches: If the vSwitches are not standardized across the cluster, migrations will fail.
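
One way to audit this standardization is through WMI. Windows Server 2008 R2 has no native Hyper-V PowerShell module, but the virtual switch names can be read from the root\virtualization namespace. A minimal sketch, assuming hypothetical node names:

# Hypothetical cluster node names; substitute your own.
$nodes = 'MGMT1','MGMT2'

foreach ($node in $nodes) {
    # Msvm_VirtualSwitch exposes each virtual network; ElementName is the
    # name that must match on every node for migrations to succeed.
    Get-WmiObject -ComputerName $node -Namespace 'root\virtualization' `
        -Class Msvm_VirtualSwitch |
        Select-Object @{Name='Node';Expression={$node}}, ElementName
}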


Figure A-6 shows the network adapter settings for the iSCSI VLAN 20 associated with a SQL guest cluster VM.

Figure A-6 Hyper-V VM settings for the Network Adapter

5. Log on to the VM, and confirm that the network adapter is present. Assign a static IP address that belongs to VLAN 20.

6. Confirm storage connectivity by pinging any of the XIV iSCSI ports.

Connectivity testing: You can repeat this connectivity testing for the other VLANs as necessary; ping tests might suffice.


Appendix B. Brocade Switch Management

Both the IBM B32E and B24Y switches provide command-line interface (CLI) or web-based management interfaces. For CLI preferences, standard Telnet sessions can be established. The Java technology-based web tools of Brocade B32E include support for Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and Converged Enhanced Ethernet. They can be used for common monitoring and administrative tasks. Although the web-management interface for the B24Y is not as flexible as the B32E, it supports common Ethernet switch monitoring and administrative tasks for administrators who prefer web-based GUIs.

For additional details regarding the B24Y switch, see the following B24Y switch documentation from the IBM Support & downloads website:

• IBM y-series of Ethernet Switches Installation and User Guide

http://www.ibm.com/support/docview.wss?uid=isg3T7000191

• Release Notes for the applicable firmware release on the switch

This appendix includes the following sections:

• Fast Track switch configurations
• IBM Converged Switch B32E Fabric zoning configuration


Fast Track switch configurations

The following configurations are the actual tested configurations for IBM Reference Configuration that were used for the IBM-Microsoft Fast Track validation. Redundant B32E switches are used; because both are configured identically, a single configuration is shown rather than duplicating the output. Because the B24Y switches are stacked, only a single configuration is required and listed here.

Example B-1 shows the configuration output for the IBM Converged Switch B32E.

Example B-1 Switch configuration for the B32E

Fabric OS (IBM_3758_32E_Top)
Fabos Version 7.0.0a

IBM_3758_32E_Top#show run
!
protocol spanning-tree rstp
 bridge-priority 4096
!
cee-map default
 priority-group-table 1 weight 40 pfc
 priority-group-table 2 weight 60
 priority-table 2 2 2 1 2 2 2 2
!
fcoe-map default
 fcoe-vlan 1002
!
interface Vlan 1
!
interface Vlan 20
!
interface Vlan 30
!
interface Vlan 40
!
interface Vlan 50
!
interface Vlan 60
!
interface Vlan 70
!
interface Vlan 80
!
interface Vlan 90
!
interface Vlan 100
!
interface TenGigabitEthernet 0/0
 mtu 9000
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 20
 switchport trunk allowed vlan add 50
 switchport trunk allowed vlan add 60
 switchport trunk allowed vlan add 100
 no shutdown
!
interface TenGigabitEthernet 0/1
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 20
 switchport converged allowed vlan add 50
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/2
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 20
 switchport converged allowed vlan add 50
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/3
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 20
 switchport converged allowed vlan add 50
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/4
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 20
 switchport converged allowed vlan add 50
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/5
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 20
 switchport converged allowed vlan add 50
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/6
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 20
 switchport converged allowed vlan add 50
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/7
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 20
 switchport converged allowed vlan add 50
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/8
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 20
 switchport converged allowed vlan add 50
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/9
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 20
 switchport converged allowed vlan add 60
 switchport converged allowed vlan add 90
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/10
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 20
 switchport converged allowed vlan add 60
 switchport converged allowed vlan add 90
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/11
 mtu 9000
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 20
 switchport trunk allowed vlan add 30
 switchport trunk allowed vlan add 40
 switchport trunk allowed vlan add 50
 switchport trunk allowed vlan add 60
 switchport trunk allowed vlan add 70
 switchport trunk allowed vlan add 80
 switchport trunk allowed vlan add 90
 switchport trunk allowed vlan add 100
 no shutdown
!
interface TenGigabitEthernet 0/12
 mtu 9000
 switchport
 switchport mode converged
 switchport converged allowed vlan add 50
 switchport converged allowed vlan add 60
 switchport converged allowed vlan add 100
 no shutdown
!
interface TenGigabitEthernet 0/13
 mtu 9000
 switchport
 switchport mode converged
 switchport converged allowed vlan add 50
 switchport converged allowed vlan add 60
 switchport converged allowed vlan add 100
 no shutdown
!
interface TenGigabitEthernet 0/14
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 60
 switchport converged allowed vlan add 70
 switchport converged allowed vlan add 80
 switchport converged allowed vlan add 100
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/15
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 60
 switchport converged allowed vlan add 70
 switchport converged allowed vlan add 80
 switchport converged allowed vlan add 100
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/16
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 30
 switchport converged allowed vlan add 40
 switchport converged allowed vlan add 60
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/17
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 30
 switchport converged allowed vlan add 40
 switchport converged allowed vlan add 60
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/18
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 30
 switchport converged allowed vlan add 40
 switchport converged allowed vlan add 60
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/19
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 30
 switchport converged allowed vlan add 40
 switchport converged allowed vlan add 60
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/20
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 30
 switchport converged allowed vlan add 40
 switchport converged allowed vlan add 60
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/21
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 30
 switchport converged allowed vlan add 40
 switchport converged allowed vlan add 60
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/22
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 30
 switchport converged allowed vlan add 40
 switchport converged allowed vlan add 60
 no shutdown
 spanning-tree edgeport bpdu-filter
!
interface TenGigabitEthernet 0/23
 mtu 9000
 fcoeport
 switchport
 switchport mode converged
 switchport converged allowed vlan add 30
 switchport converged allowed vlan add 40
 switchport converged allowed vlan add 60
 no shutdown
 spanning-tree edgeport bpdu-filter
!
protocol lldp
 advertise dcbx-fcoe-app-tlv
 advertise dcbx-fcoe-logical-link-tlv
!
line console 0
 login
line vty 0 31
 login
!
end

Example B-2 shows the configuration output for IBM Ethernet Switch B24Y.

Example B-2 IBM Ethernet Switch B24Y configuration (stacked to create a single virtual switch)

IBM_B24Y_BOTTOM>show run
Current configuration:
!
ver 07.2.02aT7f3
!
stack unit 1
 module 1 fcx-24-4x-port-management-module
 module 2 fcx-sfp-plus-4-port-10g-module
 priority 128
 stack-port 1/2/1
stack unit 2
 module 1 fcx-24-4x-port-management-module
 module 2 fcx-sfp-plus-4-port-10g-module
 stack-port 2/2/1
stack enable
!
global-stp
!
!
!
spanning-tree single
!
vlan 1 name DEFAULT-VLAN by port
 spanning-tree
!
vlan 20 name Vlan20 by port
 tagged ethe 1/2/3 ethe 2/2/3
 untagged ethe 1/1/4 to 1/1/6 ethe 2/1/4 to 2/1/6
 spanning-tree
!
vlan 50 by port
 tagged ethe 1/1/23 to 1/1/24 ethe 1/2/3 ethe 2/1/23 to 2/1/24 ethe 2/2/3
 router-interface ve 1
 spanning-tree
!
vlan 60 name Vlan60 by port
 tagged ethe 1/1/23 to 1/1/24 ethe 1/2/3 ethe 2/1/23 to 2/1/24 ethe 2/2/3
 untagged ethe 1/1/1 to 1/1/3 ethe 2/1/1 to 2/1/3
 router-interface ve 2
 spanning-tree
!
vlan 100 by port
 untagged ethe 1/1/10 to 1/1/16 ethe 2/1/10 to 2/1/16
 router-interface ve 3
 spanning-tree
!
!
spanning-tree single 802-1w
spanning-tree single 802-1w ethe 2/1/23 admin-edge-port
!
!
!
aaa authentication login default local
jumbo
enable telnet password .....
enable super-user-password .....
enable aaa console
hostname IBM_B24Y_BOTTOM
no ip dhcp-client auto-update enable
username admin password .....
web-management allow-no-password
interface ethernet 1/1/1
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/2
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/3
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/4
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/5
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/6
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/10
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/11
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/12
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/13
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/14
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/15
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/16
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/17
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/18
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/19
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 1/1/23
 spanning-tree 802-1w admin-edge-port
 link-aggregate configure key 10000
 link-aggregate active
!
interface ethernet 1/2/3
 spanning-tree 802-1w admin-pt2pt-mac
!
interface ethernet 2/1/1
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/2
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/3
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/4
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/5
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/6
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/10
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/11
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/12
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/13
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/14
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/15
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/16
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/17
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/18
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/19
 spanning-tree 802-1w admin-edge-port
!
interface ethernet 2/1/23
 link-aggregate configure key 10000
 link-aggregate active
!
interface ethernet 2/2/3
 spanning-tree 802-1w admin-pt2pt-mac
!
interface ve 1
 ip address 192.168.50.1 255.255.255.0
!
interface ve 2
 ip address 192.168.60.1 255.255.255.0
!
interface ve 3
 ip address 192.168.100.1 255.255.255.0
!
!
!
End

IBM Converged Switch B32E Fabric zoning configuration

The modular architecture of the IBM XIV Storage System performs best when clients enable round-robin multipath I/O (MPIO, the default setting) on each host while installing the IBM XIV Host Attachment Kit for Windows. Host connectivity then benefits from the collective processing and cache power of multiple XIV interface modules.
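
You can verify from each host that MPIO has claimed the XIV volumes and that the round-robin policy is in effect by using the mpclaim utility that the Windows MPIO feature provides. A quick sketch:

# Summarize MPIO-claimed disks and their load-balance policies; the XIV
# volumes should report a round-robin policy after the Host Attachment
# Kit is installed.
mpclaim -s -d

# Show the individual paths for one disk (disk number 0 is hypothetical).
mpclaim -s -d 0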

Figure B-1 and the B32 switch zoning output that follows show the configuration.

Figure B-1 Microsoft failover cluster hosts zoned to XIV interface modules

Example B-3 shows how to configure storage-to-host paths in an IBM XIV SAN environment.

Example B-3 Output from using the zoneshow command for the B32 switch

IBM_3758_32E_Top:admin> zoneshow
Defined configuration:
 cfg:   PCO_Zone_Cfg_Top
                Hyper1_XIV_Zone_01; Hyper1_XIV_Zone_02; Hyper2_XIV_Zone_01;
                Hyper2_XIV_Zone_02; Hyper3_XIV_Zone_01; Hyper3_XIV_Zone_02;
                Hyper4_XIV_Zone_01; Hyper4_XIV_Zone_02; Hyper5_XIV_Zone_01;
                Hyper5_XIV_Zone_02; Hyper6_XIV_Zone_01; Hyper6_XIV_Zone_02;
                Hyper7_XIV_Zone_01; Hyper7_XIV_Zone_02; Hyper8_XIV_Zone_01;
                Hyper8_XIV_Zone_02; MNGHost1_XIV_Zone_01; MNGHost1_XIV_Zone_02;
                MNGHost2_XIV_Zone_01; MNGHost2_XIV_Zone_02
 zone:  Hyper1_XIV_Zone_01
                Hyper1_FCPort_Top_Left; XIV_Mod_4_FCPort_1; XIV_Mod_6_FCPort_1; XIV_Mod_8_FCPort_1
 zone:  Hyper1_XIV_Zone_02
                Hyper1_FCPort_Top_Right; XIV_Mod_4_FCPort_1; XIV_Mod_6_FCPort_1; XIV_Mod_8_FCPort_1
 zone:  Hyper2_XIV_Zone_01
                Hyper2_FCPort_Top_Left; XIV_Mod_5_FCPort_1; XIV_Mod_7_FCPort_1; XIV_Mod_9_FCPort_1
 zone:  Hyper2_XIV_Zone_02
                Hyper2_FCPort_Top_Right; XIV_Mod_5_FCPort_1; XIV_Mod_7_FCPort_1; XIV_Mod_9_FCPort_1
 zone:  Hyper3_XIV_Zone_01
                Hyper3_FCPort_Top_Left; XIV_Mod_4_FCPort_1; XIV_Mod_6_FCPort_1; XIV_Mod_8_FCPort_1
 zone:  Hyper3_XIV_Zone_02
                Hyper3_FCPort_Top_Right; XIV_Mod_4_FCPort_1; XIV_Mod_6_FCPort_1; XIV_Mod_8_FCPort_1
 zone:  Hyper4_XIV_Zone_01
                Hyper4_FCPort_Top_Left; XIV_Mod_5_FCPort_1; XIV_Mod_7_FCPort_1; XIV_Mod_9_FCPort_1
 zone:  Hyper4_XIV_Zone_02
                Hyper4_FCPort_Top_Right; XIV_Mod_5_FCPort_1; XIV_Mod_7_FCPort_1; XIV_Mod_9_FCPort_1
 zone:  Hyper5_XIV_Zone_01
                Hyper5_FCPort_Top_Left; XIV_Mod_4_FCPort_1; XIV_Mod_6_FCPort_1; XIV_Mod_8_FCPort_1
 zone:  Hyper5_XIV_Zone_02
                Hyper5_FCPort_Top_Right; XIV_Mod_4_FCPort_1; XIV_Mod_6_FCPort_1; XIV_Mod_8_FCPort_1
 zone:  Hyper6_XIV_Zone_01
                Hyper6_FCPort_Top_Left; XIV_Mod_5_FCPort_1; XIV_Mod_7_FCPort_1; XIV_Mod_9_FCPort_1
 zone:  Hyper6_XIV_Zone_02
                Hyper6_FCPort_Top_Right; XIV_Mod_5_FCPort_1; XIV_Mod_7_FCPort_1; XIV_Mod_9_FCPort_1
 zone:  Hyper7_XIV_Zone_01
                Hyper7_FCPort_Top_Left; XIV_Mod_4_FCPort_1; XIV_Mod_6_FCPort_1; XIV_Mod_8_FCPort_1
 zone:  Hyper7_XIV_Zone_02
                Hyper7_FCPort_Top_Right; XIV_Mod_4_FCPort_1; XIV_Mod_6_FCPort_1; XIV_Mod_8_FCPort_1
 zone:  Hyper8_XIV_Zone_01
                Hyper8_FCPort_Top_Left; XIV_Mod_5_FCPort_1; XIV_Mod_7_FCPort_1; XIV_Mod_9_FCPort_1
 zone:  Hyper8_XIV_Zone_02
                Hyper8_FCPort_Top_Right; XIV_Mod_5_FCPort_1; XIV_Mod_7_FCPort_1; XIV_Mod_9_FCPort_1
 zone:  MNGHost1_XIV_Zone_01
                MNGHost1_FCPort_Bottom_Right; XIV_Mod_4_FCPort_1; XIV_Mod_6_FCPort_1; XIV_Mod_8_FCPort_1
 zone:  MNGHost1_XIV_Zone_02
                MNGHost1_FCPort_Top_Left; XIV_Mod_4_FCPort_1; XIV_Mod_6_FCPort_1; XIV_Mod_8_FCPort_1
 zone:  MNGHost2_XIV_Zone_01
                MNGHost2_FCPort_Top_Right; XIV_Mod_5_FCPort_1; XIV_Mod_7_FCPort_1; XIV_Mod_9_FCPort_1
 zone:  MNGHost2_XIV_Zone_02
                MNGHost2_FCPort_Top_Left; XIV_Mod_5_FCPort_1; XIV_Mod_7_FCPort_1; XIV_Mod_9_FCPort_1
 alias: Hyper1_FCPort_Top_Left       10:00:00:05:1e:e8:1f:a7
 alias: Hyper1_FCPort_Top_Right      10:00:00:05:1e:e8:1f:a6
 alias: Hyper2_FCPort_Top_Left       10:00:00:05:1e:e9:84:25
 alias: Hyper2_FCPort_Top_Right      10:00:00:05:1e:e9:84:24
 alias: Hyper3_FCPort_Top_Left       10:00:00:05:1e:e8:24:3d
 alias: Hyper3_FCPort_Top_Right      10:00:00:05:1e:e8:24:3c
 alias: Hyper4_FCPort_Top_Left       10:00:00:05:1e:73:e5:30
 alias: Hyper4_FCPort_Top_Right      10:00:00:05:1e:73:e5:2f
 alias: Hyper5_FCPort_Top_Left       10:00:00:05:1e:f4:d4:28
 alias: Hyper5_FCPort_Top_Right      10:00:00:05:1e:f4:d4:27
 alias: Hyper6_FCPort_Top_Left       10:00:00:05:33:26:06:a9
 alias: Hyper6_FCPort_Top_Right      10:00:00:05:33:26:06:a8
 alias: Hyper7_FCPort_Top_Left       10:00:00:05:1e:e8:23:d2
 alias: Hyper7_FCPort_Top_Right      10:00:00:05:1e:e8:23:d1
 alias: Hyper8_FCPort_Top_Left       10:00:00:05:1e:c4:bc:37
 alias: Hyper8_FCPort_Top_Right      10:00:00:05:1e:c4:bc:36
 alias: MNGHost1_FCPort_Bottom_Right 10:00:00:05:1e:73:e6:90
 alias: MNGHost1_FCPort_Top_Left     10:00:00:05:1e:73:e6:8f
 alias: MNGHost2_FCPort_Top_Left     10:00:00:05:1e:f4:cc:40
 alias: MNGHost2_FCPort_Top_Right    10:00:00:05:1e:f4:cc:3f
 alias: XIV_Mod_4_FCPort_1           50:01:73:80:4e:60:01:40
 alias: XIV_Mod_5_FCPort_1           50:01:73:80:4e:60:01:50
 alias: XIV_Mod_6_FCPort_1           50:01:73:80:4e:60:01:60
 alias: XIV_Mod_7_FCPort_1           50:01:73:80:4e:60:01:70
 alias: XIV_Mod_8_FCPort_1           50:01:73:80:4e:60:01:80
 alias: XIV_Mod_9_FCPort_1           50:01:73:80:4e:60:01:90

Effective configuration:
 cfg:   PCO_Zone_Cfg_Top
 zone:  Hyper1_XIV_Zone_01
                10:00:00:05:1e:e8:1f:a7 50:01:73:80:4e:60:01:40 50:01:73:80:4e:60:01:60 50:01:73:80:4e:60:01:80
 zone:  Hyper1_XIV_Zone_02
                10:00:00:05:1e:e8:1f:a6 50:01:73:80:4e:60:01:40 50:01:73:80:4e:60:01:60 50:01:73:80:4e:60:01:80
 zone:  Hyper2_XIV_Zone_01
                10:00:00:05:1e:e9:84:25 50:01:73:80:4e:60:01:50 50:01:73:80:4e:60:01:70 50:01:73:80:4e:60:01:90
 zone:  Hyper2_XIV_Zone_02
                10:00:00:05:1e:e9:84:24 50:01:73:80:4e:60:01:50 50:01:73:80:4e:60:01:70 50:01:73:80:4e:60:01:90
 zone:  Hyper3_XIV_Zone_01
                10:00:00:05:1e:e8:24:3d 50:01:73:80:4e:60:01:40 50:01:73:80:4e:60:01:60 50:01:73:80:4e:60:01:80
 zone:  Hyper3_XIV_Zone_02
                10:00:00:05:1e:e8:24:3c 50:01:73:80:4e:60:01:40 50:01:73:80:4e:60:01:60 50:01:73:80:4e:60:01:80
 zone:  Hyper4_XIV_Zone_01
                10:00:00:05:1e:73:e5:30 50:01:73:80:4e:60:01:50 50:01:73:80:4e:60:01:70 50:01:73:80:4e:60:01:90
 zone:  Hyper4_XIV_Zone_02
                10:00:00:05:1e:73:e5:2f 50:01:73:80:4e:60:01:50 50:01:73:80:4e:60:01:70 50:01:73:80:4e:60:01:90
 zone:  Hyper5_XIV_Zone_01
                10:00:00:05:1e:f4:d4:28 50:01:73:80:4e:60:01:40 50:01:73:80:4e:60:01:60 50:01:73:80:4e:60:01:80
 zone:  Hyper5_XIV_Zone_02
                10:00:00:05:1e:f4:d4:27 50:01:73:80:4e:60:01:40 50:01:73:80:4e:60:01:60 50:01:73:80:4e:60:01:80
 zone:  Hyper6_XIV_Zone_01
                10:00:00:05:33:26:06:a9 50:01:73:80:4e:60:01:50 50:01:73:80:4e:60:01:70 50:01:73:80:4e:60:01:90
 zone:  Hyper6_XIV_Zone_02
                10:00:00:05:33:26:06:a8 50:01:73:80:4e:60:01:50 50:01:73:80:4e:60:01:70 50:01:73:80:4e:60:01:90
 zone:  Hyper7_XIV_Zone_01
                10:00:00:05:1e:e8:23:d2 50:01:73:80:4e:60:01:40 50:01:73:80:4e:60:01:60 50:01:73:80:4e:60:01:80
 zone:  Hyper7_XIV_Zone_02
                10:00:00:05:1e:e8:23:d1 50:01:73:80:4e:60:01:40 50:01:73:80:4e:60:01:60 50:01:73:80:4e:60:01:80
 zone:  Hyper8_XIV_Zone_01
                10:00:00:05:1e:c4:bc:37 50:01:73:80:4e:60:01:50 50:01:73:80:4e:60:01:70 50:01:73:80:4e:60:01:90
 zone:  Hyper8_XIV_Zone_02
                10:00:00:05:1e:c4:bc:36 50:01:73:80:4e:60:01:50 50:01:73:80:4e:60:01:70 50:01:73:80:4e:60:01:90
 zone:  MNGHost1_XIV_Zone_01
                10:00:00:05:1e:73:e6:90 50:01:73:80:4e:60:01:40 50:01:73:80:4e:60:01:60 50:01:73:80:4e:60:01:80
 zone:  MNGHost1_XIV_Zone_02
                10:00:00:05:1e:73:e6:8f 50:01:73:80:4e:60:01:40 50:01:73:80:4e:60:01:60 50:01:73:80:4e:60:01:80
 zone:  MNGHost2_XIV_Zone_01
                10:00:00:05:1e:f4:cc:3f 50:01:73:80:4e:60:01:50 50:01:73:80:4e:60:01:70 50:01:73:80:4e:60:01:90
 zone:  MNGHost2_XIV_Zone_02
                10:00:00:05:1e:f4:cc:40 50:01:73:80:4e:60:01:50 50:01:73:80:4e:60:01:70 50:01:73:80:4e:60:01:90


Although only one of the B32E switch zone configurations is shown, the same logic applies to the zoneshow output of the other switch, which was omitted to avoid redundant data.

Worldwide name and FC ports: The last two numeric characters of the worldwide name (WWN) in the MNGHost2_XIV_Zone_02 zone indicate the XIV interface module and the FC port numbers. The first number represents the XIV interface module number, and the last number represents the FC port number. Each XIV interface module has four FC ports. The WWN convention lists the FC ports as 0 - 3, although the XIVGUI and XIV physical patch panel lists them as FC ports 1 - 4. The first three FC ports are target ports, and the last FC port is an initiator port, which is the factory default. The last two ports are typically reserved for data replication, which requires both target and initiator ports. Use any two of the first three FC target ports for host connectivity, remembering to reserve two for data replication. As suggested by the fabric zoning configuration, take advantage of the XIV distributed architecture by balancing the I/O among the interface modules.
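
As a worked example of this convention, the following sketch decodes a target WWN (taken from the zoning output above) into its interface module and port numbers:

# Decode the XIV interface module and FC port from a target WWN.
$wwn  = '50:01:73:80:4e:60:01:40'
$tail = ($wwn -replace ':', '')[-2, -1]       # last two hex characters

"XIV interface module: $($tail[0])"           # prints 4
# Cast the char through [string] first so '0' becomes 0, not its code point.
"WWN FC port: $($tail[1]) (patch panel port $([int][string]$tail[1] + 1))"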


Appendix C. Networking worksheets

You can use the following tables as worksheets to help you document the layout of your own network and virtual local area networks (VLANs).

Table C-1 can help you with the B32 Switch layout.

Table C-1 B32 switch layout


IBM B32 switch ports Device VLANs

Port 0 B24Y Uplink VLAN 20, 50, 60, 100

Port 1 Hyper-V Production Server1 VLAN 20, 50, 1002

Port 2 Hyper-V Production Server2 VLAN 20, 50, 1002

Port 3 Hyper-V Production Server3 VLAN 20, 50, 1002

Port 4 Hyper-V Production Server4 VLAN 20, 50, 1002

Port 5 Hyper-V Production Server5 VLAN 20, 50, 1002

Port 6 Hyper-V Production Server6 VLAN 20, 50, 1002

Port 7 Hyper-V Production Server7 VLAN 20, 50, 1002

Port 8 Hyper-V Production Server8 VLAN 20, 50, 1002

Port 9 Management Server1 VLAN 20, 60, 90, 1002

Port 10 Management Server2 VLAN 20, 60, 90, 1002

Port 11 Crosslink to other B32 VLAN 20, 30, 40, 50, 60, 70, 80, 90, 100

Port 12 Uplink to CorpNet VLAN 50, 60, 100

Port 13 Uplink to CorpNet VLAN 50, 60, 100

Port 14 Management Server1 VLAN 60, 70, 80, 100, 1002

Port 15 Management Server2 VLAN 60, 70, 80, 100, 1002

Port 16 Hyper-V Production Server1 VLAN 30, 40, 60, 1002



Port 17 Hyper-V Production Server2 VLAN 30, 40, 60, 1002

Port 18 Hyper-V Production Server3 VLAN 30, 40, 60, 1002

Port 19 Hyper-V Production Server4 VLAN 30, 40, 60, 1002

Port 20 Hyper-V Production Server5 VLAN 30, 40, 60, 1002

Port 21 Hyper-V Production Server6 VLAN 30, 40, 60, 1002

Port 22 Hyper-V Production Server7 VLAN 30, 40, 60, 1002

Port 23 Hyper-V Production Server8 VLAN 30, 40, 60, 1002

FC0 XIV4 (Port 1 or 3) Zone 1 or 2 (depending on switch)

FC1 XIV5(Port 1 or 3) Zone 1 or 2 (depending on switch)

FC2 XIV6 (Port 1 or 3) Zone 1 or 2 (depending on switch)

FC3 XIV7 (Port 1 or 3) Zone 1 or 2 (depending on switch)

FC4 XIV8 (Port 1 or 3) Zone 1 or 2 (depending on switch)

FC5 XIV9 (Port 1 or 3) Zone 1 or 2 (depending on switch)

FC6

FC7

Table C-2 can help you with the B24Y switch ports layout.

Table C-2 B24Y switch layout

IBM B24 switch ports Device VLANs

Port 1 XIV Mgmnt1 VLAN 60 (switch 1)

Port 2 XIV Mgmnt2 VLAN 60 (switch 2)

Port 3 XIV Mgmnt3 VLAN 60 (switch 1)

Port 4 XIV iSCSI 1 VLAN 20

Port 5 XIV iSCSI 2 VLAN 20

Port 6 XIV iSCSI 3 VLAN 20

Port 7

Port 8

Port 9

Port 10 Management Server 1 - Integrated Management Module (IMM)

VLAN 100 (switch 1)

Port 11 Management Server 2 - IMM VLAN 100 (switch 2)

Port 12 Hyper-V Production Server 1 - IMM VLAN 100 (switch 1)

Port 13 Hyper-V Production Server 2 - IMM VLAN 100 (switch 2)

Port 14 Hyper-V Production Server 3 - IMM VLAN 100 (switch 1)



Port 15 Hyper-V Production Server 4 - IMM VLAN 100 (switch 2)

Port 16 Hyper-V Production Server 5 - IMM VLAN 100 (switch 1)

Port 17 Hyper-V Production Server 6 - IMM VLAN 100 (switch 2)

Port 18 Hyper-V Production Server 7 - IMM VLAN 100 (switch 1)

Port 19 Hyper-V Production Server 8 - IMM VLAN 100 (switch 2)

Port 20

Port 21

Port 22

Port 23 Active Directory server VLAN 60 (active/active team)

Port 24

Uplink1 Uplink to other B24Y All traffic

Uplink2

Uplink3 Uplink to B32 VLAN 20, 50, 60, 100

Uplink4

Table C-3 can help you with VLAN 20 addresses.

Table C-3 VLAN 20 addresses (iSCSI storage)

VLAN 20 addresses (iSCSI storage) IP addresses

XIV interface module 4 iSCSI

XIV interface module 4 iSCSI

XIV interface module 5 iSCSI

XIV interface module 5 iSCSI

XIV interface module 6 iSCSI

XIV interface module 6 iSCSI

XIV interface module 7 iSCSI

XIV interface module 7 iSCSI

XIV interface module 8 iSCSI

XIV interface module 8 iSCSI

XIV interface module 9 iSCSI

XIV interface module 9 iSCSI

Management Cluster SQL Server VM1

Management Cluster SQL Server VM2

Production Cluster VM1 (if needed)

Production Cluster VM2

Production Cluster VM3

Table C-4 can help you with VLAN 30 addresses.

Table C-4 VLAN 30 addresses (Production Cluster Private)

VLAN 30 addresses (Production Cluster Private) IP addresses

Hyper-V Production Server 1

Hyper-V Production Server 2

Hyper-V Production Server 3

Hyper-V Production Server 4

Hyper-V Production Server 5

Hyper-V Production Server 6

Hyper-V Production Server 7

Hyper-V Production Server 8

Table C-5 can help you with VLAN 40 addresses.

Table C-5 VLAN 40 addresses (Production Live Migration)

VLAN 40 addresses (Production Live Migration) IP addresses

Hyper-V Production Server 1

Hyper-V Production Server 2

Hyper-V Production Server 3

Hyper-V Production Server 4

Hyper-V Production Server 5

Hyper-V Production Server 6

Hyper-V Production Server 7

Hyper-V Production Server 8

Table C-6 can help you with VLAN 50 addresses.

Table C-6 VLAN 50 addresses (VM production)

VLAN 50 addresses (VM production) IP address range

Routing IP address (virtual routing address)

Routing IP address (top switch)

Routing IP address (bottom switch)


Table C-7 can help you with VLAN 70 addresses.

Table C-7 VLAN 70 addresses (Management Cluster Private)

VLAN 70 addresses (Management Cluster Private) IP addresses

Management Server 1

Management Server 2

Table C-8 can help you with VLAN 80 addresses.

Table C-8 VLAN 80 addresses (Management Cluster Live Migration)

VLAN 80 addresses (Management Cluster Live Migration) IP addresses

Management Server 1

Management Server 2

Table C-9 can help you with VLAN 90 addresses.

Table C-9 VLAN 90 addresses (Management Guest SQL Cluster Private)

VLAN 90 addresses (Management Guest SQL Cluster Private) IP addresses

SQL VM1

SQL VM2

Table C-10 can help you with VLAN 100 addresses.

Table C-10 VLAN 100 addresses (IMM)

VLAN 100 addresses (IMM) IP addresses

Management Server 1

Management Server 2

Hyper-V Production Server 1

Hyper-V Production Server 2

Hyper-V Production Server 3

Hyper-V Production Server 4

Hyper-V Production Server 5

Hyper-V Production Server 6

Hyper-V Production Server 7

Hyper-V Production Server 8


Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this paper.

IBM Redbooks

The following IBM Redbooks publications provide additional information about the topic in this document. Note that some publications referenced in this list might be available in softcopy only.

- Brocade 10Gb CNA for IBM System x, TIPS0718

- IBM b-type Data Center Networking: Design and Best Practices Introduction, SG24-7786

- IBM b-type Data Center Networking: Product Introduction and Initial Setup, SG24-7785

- IBM System x3550 M3, TIPS0804

- IBM System x3650 M3, TIPS0805

You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, drafts, and additional materials at the following website:

ibm.com/redbooks

Other publications

These publications are also relevant as further information sources:

- Brocade Adapters Administrator’s Guide

  http://www.brocade.com/forms/getFile?p=documents/downloads/HBA/Documentation/Brocade_Adapters_v3.0.0.0_Admin_Guide.pdf

- How to protect Hyper-V with DPM 2010 white paper (Microsoft Download Center)

  http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=14575

- IBM Storage Management Pack for Microsoft System Center Operations Manager (SCOM) Version 1.1.1 User Guide, GC27-3909-02

  http://publib.boulder.ibm.com/infocenter/ibmxiv/r2/topic/com.ibm.help.xiv.doc/docs/IBM_Storage_MP_for_SCOM_1.1.x_UG.pdf

- IBM System x Firmware Update Best Practices: IMM, UEFI, FPGA, and DSA Preboot guide

  ftp://ftp.software.ibm.com/systems/support/system_x_pdf/firmware_update_best_practices.pdf

- IBM System x Life Without DOS: Transitioning to UEFI and IMM white paper

  ftp://ftp.software.ibm.com/systems/support/system_x/transitioning_to_uefi_and_imm.doc

- IBM y-series of Ethernet Switches Installation and User Guide

  http://www.ibm.com/support/docview.wss?uid=isg3T7000191


- Managing IBM System x Servers, Blades, and BladeCenter chassis using Microsoft’s System Center Operations Manager and the IBM Hardware Management Pack (SCOM integration white paper)

  ftp://ftp.software.ibm.com/systems/support/system_x_pdf/ibm_hw_mp_mgt_guide.pdf

- ServeRAID M1015 SAS/SATA Controller User’s Guide

  http://download.boulder.ibm.com/ibmdl/pub/systems/support/system_x_pdf/ibm_doc_sraidmr_m1015-2ndedition_user-guide.pdf

- ServeRAID M5014/M5015 SAS/SATA Controllers User’s Guide

  http://download.boulder.ibm.com/ibmdl/pub/systems/support/system_x_pdf/ibm_doc_sraidmr_5014-5015-sept2011_userguide.pdf

Online resources

These websites are also relevant as further information sources:

- Brocade 1020 CNA Drivers, Firmware

  http://www.brocade.com/services-support/drivers-downloads/adapters/IBM.page

- Configuring Operations Manager Integration with VMM

  http://technet.microsoft.com/en-us/library/cc956099.aspx

- Hyper-V: Live Migration Network Configuration Guide

  http://technet.microsoft.com/en-us/library/ff428137(WS.10).aspx

- Hyper-V: Using Hyper-V and Failover Clustering

  http://technet.microsoft.com/en-us/library/cc732181(WS.10).aspx

- IBM B24Y Ethernet switches

  http://ibm.com/systems/networking/hardware/ethernet/b-type/b48y/

- IBM ServerProven Network Adapter Compatibility

  http://www.ibm.com/systems/info/x86servers/serverproven/compat/us/xseries/lan/matrix.html

- IBM System x Integration Offerings for Microsoft Systems Management Solutions

  http://ibm.com/support/entry/portal/docdisplay?lndocid=SYST-MANAGE

- IBM Systems Director Download (Partner integration)

  http://ibm.com/systems/software/director/downloads/integration.html

- IBM Systems Director Download (Platform Agent)

  http://ibm.com/systems/software/director/downloads/agents.html

- IBM XIV Storage System: Storage Reinvented

  http://ibm.com/systems/storage/disk/xiv/

- ISV Solutions Resource Library

  http://www.ibm.com/systems/storage/solutions/isv/isv_microsoft.html

- Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0 SP1 Download Center

  http://www.microsoft.com/download/en/details.aspx?id=26701


- Microsoft System Center Operations Manager Deployment Guide: Operations Manager 2007 R2

  http://technet.microsoft.com/en-us/library/bb310604.aspx

- Microsoft System Center Virtual Machine Manager Deployment Guide: Download Virtual Machine Manager Documentation

  http://technet.microsoft.com/en-us/library/ee441285.aspx

- Microsoft System Center Opalis Integration Server Administrator Guide

  http://technet.microsoft.com/en-us/library/gg464955.aspx

- Microsoft SQL Server 2008 Documentation

  http://technet.microsoft.com/en-us/library/bb418470(SQL.10).aspx

- Understanding Quorum Configurations in a Failover Cluster

  http://technet.microsoft.com/en-us/library/cc731739.aspx

Help from IBM

IBM Support and downloads

ibm.com/support

IBM Global Services

ibm.com/services



REDP-4829-00

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks

Redpaper™

IBM Reference Configuration for Microsoft Private Cloud: Implementation Guide


The IBM Reference Configuration for Microsoft Private Cloud provides businesses with an affordable, interoperable, reliable, and industry-leading virtualization solution. Validated by the Microsoft Private Cloud Fast Track program, the IBM Reference Configuration for Microsoft Private Cloud combines Microsoft software, consolidated guidance, and validated configurations for compute, network, storage, and value-added software components.

The Microsoft program requires a minimum level of redundancy and fault tolerance across the servers, storage, and networking for both the management and production virtual machine (VM) clusters. These requirements help to ensure a certain level of fault tolerance while managing private cloud pooled resources.

This IBM Redpaper publication explains how to set up and configure the IBM 8-Node Microsoft Private Cloud Fast Track solution used in the actual Microsoft program validation. The solution design consists of Microsoft Windows Server 2008 R2 Hyper-V clusters powered by IBM System x3650 M3 servers with IBM XIV Storage System connected to IBM converged and Ethernet networks. This paper includes a short summary of the Reference Configuration software and hardware components, followed by best practice implementation guidelines.

This paper targets IT engineers in mid-to-large sized organizations who are familiar with the hardware and software that make up the IBM Cloud Reference Architecture. It also benefits the technical sales teams for IBM System x and XIV, and their customers, who are evaluating or pursuing Hyper-V virtualization solutions.

This paper is a partner to IBM Reference Configuration for Microsoft Private Cloud: Deployment Guide, REDP-4828.

Back cover