

ibm.com/redbooks Redpaper

Front cover

Integrated Virtualization Manager on IBM System p5

Guido Somers

No dedicated Hardware Management Console required

Powerful integration for entry-level servers

Key administration tasks explained


International Technical Support Organization

Integrated Virtualization Manager on IBM System p5

December 2006


© Copyright International Business Machines Corporation 2005, 2006. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Second Edition (December 2006)

This edition applies to IBM Virtual I/O Server Version 1.3 that is part of the Advanced POWER Virtualization hardware feature on IBM System p5 and eServer p5 platforms.

Note: Before using this information and the product it supports, read the information in “Notices” on page v.


Contents

Notices . . . . . v
Trademarks . . . . . vi

Preface . . . . . vii
The team that wrote this Redpaper . . . . . vii
Become a published author . . . . . viii
Comments welcome . . . . . viii

Chapter 1. Overview . . . . . 1
1.1 Hardware management . . . . . 2
1.1.1 Integrated Virtualization Manager . . . . . 2
1.1.2 Hardware Management Console . . . . . 4
1.1.3 Advanced System Management Interface . . . . . 6
1.2 IVM design . . . . . 8
1.2.1 Architecture . . . . . 8
1.2.2 LPAR configuration . . . . . 10
1.2.3 Considerations for partition setup . . . . . 15

Chapter 2. Installation . . . . . 19
2.1 Reset to Manufacturing Default Configuration . . . . . 20
2.2 Microcode update . . . . . 21
2.3 ASMI IP address setup . . . . . 23
2.3.1 Address setting using the ASMI . . . . . 24
2.3.2 Address setting using serial ports . . . . . 25
2.4 Virtualization feature activation . . . . . 26
2.5 VIOS image installation . . . . . 28
2.6 Initial configuration . . . . . 30
2.6.1 Virtualization setup . . . . . 30
2.6.2 Set the date and time . . . . . 31
2.6.3 Initial network setup . . . . . 31
2.6.4 Changing the TCP/IP settings on the Virtual I/O Server . . . . . 31
2.7 VIOS partition configuration . . . . . 32
2.8 Network management . . . . . 33
2.9 Virtual Storage management . . . . . 34
2.10 Installing and managing the Virtual I/O Server on a JS21 . . . . . 35
2.10.1 Virtual I/O Server image installation from DVD . . . . . 35
2.10.2 Virtual I/O Server image installation from a NIM server . . . . . 35

Chapter 3. Logical partition creation . . . . . 37
3.1 Configure and manage partitions . . . . . 38
3.2 IVM graphical user interface . . . . . 38
3.2.1 Connect to the IVM . . . . . 38
3.2.2 Storage pool disk management . . . . . 39
3.2.3 Create logical partitions . . . . . 44
3.2.4 Create an LPAR based on an existing partition . . . . . 49
3.2.5 Shutting down logical partitions . . . . . 51
3.2.6 Monitoring tasks . . . . . 52
3.2.7 Hyperlinks for object properties . . . . . 53
3.3 IVM command line interface . . . . . 54


3.3.1 Update the logical partition’s profile . . . . . 54
3.3.2 Power on a logical partition . . . . . 55
3.3.3 Install an operating system on a logical partition . . . . . 55
3.4 Optical device sharing . . . . . 56
3.5 LPAR configuration changes . . . . . 57
3.5.1 Dynamic LPAR operations on an IVM partition . . . . . 57
3.5.2 LPAR resources management . . . . . 59
3.5.3 Adding a client LPAR to the partition workload group . . . . . 67

Chapter 4. Advanced configuration . . . . . 71
4.1 Network management . . . . . 72
4.1.1 Ethernet bridging . . . . . 72
4.1.2 Ethernet link aggregation . . . . . 74
4.2 Storage management . . . . . 76
4.2.1 Virtual storage assignment to a partition . . . . . 76
4.2.2 Virtual disk extension . . . . . 77
4.2.3 IVM system disk mirroring . . . . . 79
4.2.4 AIX 5L mirroring on the managed system LPARs . . . . . 81
4.2.5 SCSI RAID adapter use . . . . . 83
4.3 Securing the Virtual I/O Server . . . . . 83
4.4 Connecting to the Virtual I/O Server using OpenSSH . . . . . 86

Chapter 5. Maintenance . . . . . 91
5.1 IVM maintenance . . . . . 92
5.1.1 Backup and restore of the logical partition definitions . . . . . 92
5.1.2 Backup and restore of the IVM operating system . . . . . 93
5.1.3 IVM updates . . . . . 94
5.2 The migration between HMC and IVM . . . . . 98
5.2.1 Recovery after an improper HMC connection . . . . . 98
5.2.2 Migration considerations . . . . . 100
5.2.3 Migration from HMC to an IVM environment . . . . . 103
5.2.4 Migration from an IVM environment to HMC . . . . . 107
5.3 System maintenance . . . . . 110
5.3.1 Microcode update . . . . . 110
5.3.2 Capacity on Demand operations . . . . . 113
5.4 Logical partition maintenance . . . . . 113
5.4.1 Backup of the operating system . . . . . 113
5.4.2 Restore of the operating system . . . . . 114
5.5 Command logs . . . . . 114
5.6 Integration with IBM Director . . . . . 115

Appendix A. IVM and HMC feature summary . . . . . 119

Appendix B. System requirements . . . . . 123

Related publications . . . . . 125
IBM Redbooks . . . . . 125
Other publications . . . . . 125
Online resources . . . . . 125
How to get IBM Redbooks . . . . . 127
Help from IBM . . . . . 127


Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing, or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.


Trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

AIX®, AIX 5L™, BladeCenter®, eServer™, HACMP™, i5/OS®, IBM®, Micro-Partitioning™, OpenPower™, POWER™, POWER Hypervisor™, POWER5™, POWER5+™, pSeries®, Redbooks™, Redbooks (logo)™, System p™, System p5™, Virtualization Engine™

The following terms are trademarks of other companies:

Internet Explorer, Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.


Preface

The Virtual I/O Server (VIOS) is part of the Advanced POWER™ Virtualization hardware feature on IBM® System p5™ and IBM eServer™ p5 platforms and part of the POWER Hypervisor™ and VIOS feature on IBM eServer OpenPower™ systems. It is also supported on the IBM BladeCenter® JS21. It is a single-function appliance that resides in an IBM POWER5™ and POWER5+™ processor-based system’s logical partition (LPAR) and facilitates the sharing of physical I/O resources between client partitions (IBM AIX® 5L™ or Linux®) within the server. The VIOS provides virtual SCSI target and Shared Ethernet Adapter (SEA) virtual I/O function to client LPARs.

Starting with Version 1.2, the VIOS provided a hardware management function named the Integrated Virtualization Manager (IVM). The latest version of VIOS, 1.3.0.0, adds a number of new functions, such as support for dynamic logical partitioning for memory (dynamic reconfiguration of memory is not supported on the JS21) and processors in managed systems, task manager monitor for long-running tasks, security additions such as viosecure and firewall, and other improvements.
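A quick way to verify which level is installed is the ioslevel command on the VIOS command line; this is a minimal sketch, and the output shown is only illustrative of the format on a Version 1.3 system:

$ ioslevel
1.3.0.0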

Using IVM, companies can more cost-effectively consolidate multiple partitions onto a single server. With its intuitive, browser-based interface, the IVM is easy to use and significantly reduces the time and effort required to manage virtual devices and partitions.

IVM is available on these IBM systems:

- IBM System p5 505, 51A, 52A, 55A, and 561
- IBM eServer p5 510, 520, and 550
- IBM eServer OpenPower 710 and 720
- IBM BladeCenter JS21

This IBM Redpaper provides an introduction to IVM by describing its architecture and showing how to install and configure a partitioned server using its capabilities. A complete understanding of partitioning is required prior to reading this document.

The team that wrote this Redpaper

This Redpaper was produced by a team of specialists from around the world working at the International Technical Support Organization, Austin Center.

Guido Somers is a Senior Accredited IT Specialist working for IBM Belgium. He has 11 years of experience in the Information Technology field, eight years within IBM. His areas of expertise include AIX 5L, system performance and tuning, logical partitioning, virtualization, HACMP™, SAN, IBM eServer pSeries® and System p5, as well as other IBM hardware offerings. He currently works as an IT Architect for Infrastructure and ISV Solutions in the e-Business Solutions Technical Support (eTS) organization.

The authors of the First Edition were:

Nicolas Guerin
Federico Vagnini


The project that produced this paper was managed by:

Scott Vetter
IBM Austin

Thanks to the following people for their contributions to this project:

Amartey S. Pearson, Vani D. Ramagiri, Bob G. Kovacs, Jim Parumi, Jim Partridge
IBM Austin

Dennis Jurgensen
IBM Raleigh

Jaya Srikrishnan
IBM Poughkeepsie

Craig Wilcox
IBM Rochester

Peter Wuestefeld, Volker Haug
IBM Germany

Morten Vagmo
IBM Norway

Dai Williams, Nigel Griffiths
IBM U.K.

Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You will team with IBM technical professionals, Business Partners, or clients.

Your efforts will help increase product acceptance and client satisfaction. As a bonus, you will develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us!

We want our papers to be as helpful as possible. Send us your comments about this Redpaper or other IBM Redbooks™ in one of the following ways:

- Use the online Contact us review redbook form found at ibm.com/redbooks
- Send your comments in an e-mail to [email protected]
- Mail your comments to:

IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400


Chapter 1. Overview

This chapter describes several available methods for hardware management and virtualization setup on IBM System p5 and eServer p5, OpenPower solutions, and BladeCenter JS21, and introduces the Integrated Virtualization Manager (IVM).

The Integrated Virtualization Manager is a component that has been included with the Virtual I/O Server since Version 1.2, which is part of the Advanced POWER Virtualization hardware feature. It enables companies to consolidate multiple partitions onto a single server in a cost-effective way. With its intuitive, browser-based interface, the IVM is easy to use and significantly reduces the time and effort required to manage virtual devices and partitions.


1.1 Hardware management

With the exploitation of virtualization techniques, hardware management has become more of an independent task. Operating systems have less direct visibility and control over physical server hardware; therefore, system administrators must now focus on the management of resources that have been assigned to them.

In order to be independent from operating system issues, hardware management requires a separate computing environment capable of accessing, configuring, controlling, monitoring, and maintaining the server hardware and firmware. This environment requires advanced platform management applications capable of:

- Server configuration prior to operating system deployment
- Service when operating systems are unavailable
- Coordination of platform-related operations across multiple operating system images, within an independent security model
- Presentation of virtual operating system consoles

IBM has developed several solutions for hardware management that target different environments depending on the complexity of hardware setup.

1.1.1 Integrated Virtualization Manager

The HMC has been designed to be the comprehensive solution for hardware management, usable for either a small configuration or a multiserver environment. Although complexity has been kept low by design and many recent software revisions support this, the HMC solution might not fit small and simple environments where only a few servers are deployed or not all HMC functions are required.

There are many environments where there is the need for small partitioned systems, either for test reasons or for specific requirements, for which the HMC solution is not ideal. A sample situation is where there are small partitioned systems that cannot share a common HMC because they are in multiple locations.

IVM is a simplified hardware management solution that inherits most of the HMC features. It manages a single server, avoiding the need for an independent personal computer. It is designed to provide a solution that enables the administrator to reduce system setup time and to make hardware management easier, at a lower cost.

IVM provides a management model for a single system. Although it does not offer all of the HMC capabilities, it enables the exploitation of IBM Virtualization Engine™ technology. IVM targets the small and medium systems that are best suited for this product. Table 1-1 lists the systems that were supported at the time this paper was written.

Table 1-1 Supported server models for IVM

IBM System p5:           Model 505, Model 51A, Model 52A, Model 55A, Model 561
IBM eServer p5:          Model 510, Model 520, Model 550
IBM eServer OpenPower:   Model 710, Model 720
IBM BladeCenter:         JS21


IVM is an enhancement of the Virtual I/O Server (VIOS), the product that enables I/O virtualization in POWER5 and POWER5+ processor-based systems. It adds management of VIOS functions and uses a Web-based graphical interface that enables the administrator to manage the server remotely with a browser. The HTTPS protocol and server login with password authentication provide the security required by many enterprises.

Because one of the goals of IVM is simplification of management, some implicit rules apply to configuration and setup:

- When a system is designated to be managed by IVM, it must not be partitioned.
- The first operating system to be installed must be the VIOS.

The VIOS is automatically configured to own all of the I/O resources and it can be configured to provide service to other LPARs through its virtualization capabilities. Therefore, all other logical partitions (LPARs) do not own any physical adapters and they must access disk, network, and optical devices only through the VIOS as virtual devices. Otherwise, the LPARs operate as they have previously with respect to processor and memory resources.

Figure 1-1 shows a sample configuration using IVM. The VIOS owns all of the physical adapters, and the other two partitions are configured to use only virtual devices. The administrator can use a browser to connect to IVM to set up the system configuration.

Figure 1-1 Integrated Virtualization Manager configuration

The system Hypervisor has been modified to enable the VIOS to manage the partitioned system without an HMC. The software that is normally running on the HMC has been rewritten to fit inside the VIOS and to provide a simpler user interface. Because the IVM is running using system resources, the design has been developed to have a minimal impact on disk, memory, and processor resources.

The IVM does not interact with the system’s service processor. A specific device named the Virtual Management Channel (VMC) has been developed on the VIOS to enable direct Hypervisor configuration without requiring additional network connections. This device is activated, by default, when the VIOS is installed as the first partition.


The VMC enables IVM to provide basic logical partitioning functions; a command line sketch follows the list:

- Logical partitioning configuration
- Boot, start, and stop actions for individual partitions
- Display of partition status
- Management of virtual Ethernet
- Management of virtual storage
- Basic system management
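As a sketch of how these functions surface on the IVM command line (the partition name lpar1 and the output shown are illustrative assumptions), partitions can be listed, activated, and shut down as follows:

$ lssyscfg -r lpar -F name,state
VIOS,Running
lpar1,Not Activated
$ chsysstate -r lpar -o on -n lpar1
$ chsysstate -r lpar -o shutdown -n lpar1 --immed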

Because IVM executes on an LPAR, it has limited service-based functions, and ASMI must be used. For example, a server power-on must be performed by physically pushing the server power-on button or remotely accessing ASMI, because IVM does not execute while the server power is off. ASMI and IVM together provide a simple but effective solution for a single partitioned server.

LPAR management using IVM is through a common Web interface developed for basic administration tasks. Being integrated within the VIOS code, IVM also handles all virtualization tasks that normally require VIOS commands to be run.

IVM has support for dynamic LPAR, starting with Version 1.3.0.0.

IVM and HMC are two distinct management systems: The IVM is an integrated solution designed to lower your cost of ownership, and the HMC is designed for flexibility and a comprehensive set of functions. This gives you the freedom to select the ideal solution for your production workload requirements.

1.1.2 Hardware Management Console

The primary hardware management solution developed by IBM relies on an appliance server named the Hardware Management Console (HMC), packaged as an external tower or rack-mounted personal computer. It has been deployed on all IBM POWER5 processor-based systems, each with its own specific set of management tools, and great effort is made to improve its functions and ease of use.

Figure 1-2 on page 5 depicts some possible configurations showing select systems that are managed by their own console.

Important: The IVM provides a unique setup and interface with respect to the HMC for managing resources and partition configuration. An HMC expert should study the differences before using the IVM.

Important: The internal design of IVM requires that no HMC should be connected to a working IVM system. If a client wants to migrate an environment from IVM to HMC, the configuration setup has to be rebuilt manually. This includes systems that had previous software levels of VIOS running on them, because they would also have been managed by an HMC.


Figure 1-2 Hardware Management Console configurations

The HMC is a centralized point of hardware control. In a System p5 environment, a single HMC can manage multiple POWER5 processor-based systems, and two HMCs can manage the same set of servers in a dual-active configuration designed for high availability.

Hardware management is performed by an HMC using a standard Ethernet connection to the service processor of each system. Interacting with the service processor, the HMC is capable of modifying the hardware configuration of the managed system, querying for changes, and managing service calls.

A hardware administrator can either log in to the physical HMC and use the native GUI, or download a client application from the HMC. This application can be used to remotely manage the HMC from a remote desktop with the same look and feel of the native GUI.

Because it is a stand-alone personal computer, the HMC does not use any managed system resources and can be maintained without affecting system activity. Reboots and software maintenance on the HMC do not have any impact on the managed systems.

In the unlikely case that the HMC requires manual intervention, the systems continue to be operational, and a new HMC can be plugged into the network and configured to download the current configuration from the managed systems, thus becoming operationally identical to the replaced HMC.

The major HMC functions include:

- Monitoring of system status
- Management of IBM Capacity on Demand
- Creation of logical partitioning with dedicated processors
- Management of LPARs, including power on, power off, and console
- Dynamic reconfiguration of partitions
- Management of virtual Ethernet among partitions
- Clustering
- Concurrent firmware updates
- Hot add/remove of I/O drawers

POWER5 and POWER5+ processor-based systems are capable of Micro-Partitioning™, and the Hypervisor can support multiple LPARs, sharing the processors in the system and enabling I/O sharing. System p™ servers require the Advanced POWER Virtualization feature, while OpenPower systems require the POWER Hypervisor and Virtual I/O Server feature.


On systems with Micro-Partitioning enabled, the HMC provides additional functions:

- Creation of shared processor partitions
- Creation of the Virtual I/O Server (VIOS) partition for physical I/O virtualization
- Creation of virtual devices for VIOS and client partitions

The HMC interacts with the Hypervisor to create virtual devices among partitions, and the VIOS partitions manage physical device sharing. Network, disk, and optical device access can be shared.

Partition configuration can be changed dynamically by issuing commands on the HMC or using the HMC GUI. The allocation of resources, such as CPU, memory, and I/O, can be modified without making applications aware of the change.

In order to enable dynamic reconfiguration, an HMC requires an Ethernet connection with every involved LPAR besides the basic connection with the service processor. Using a Remote Monitoring and Control (RMC) protocol, the HMC is capable of securely interacting with the operating system to free and acquire resources and to coordinate these actions with hardware configuration changes.

The HMC also provides tools to ease problem determination and service support, such as the Service Focal Point feature, call-home, and error log notification through a modem or the Internet.

1.1.3 Advanced System Management Interface

Major hardware management activity is done by interacting with the service processor that is installed on all POWER5 and POWER5+ processor-based systems. The HMC has access to the service processor through Ethernet and uses it to configure the system Hypervisor.

The service processor can be locally accessed through a serial connection using system ports when the system is powered down and remotely accessed in either power standby or powered-on modes using an HTTPS session with a Web browser pointing to the IP address assigned to the service processor’s Ethernet ports.

The Web GUI is called the Advanced System Management Interface (ASMI), as shown in Figure 1-3 on page 7.


Figure 1-3 Advanced System Management Interface

ASMI is the major configuration tool for systems that are not managed by an HMC and it provides basic hardware setup features. It is extremely useful when the system is a stand-alone system. ASMI can be accessed and used when the HMC is connected to the system, but some of its features are disabled.

Using ASMI, the administrator can run the following basic operations:

- Viewing system information
- Controlling system power
- Changing the system configuration
- Setting performance options
- Configuring the service processor’s network services
- Using on demand utilities
- Using concurrent maintenance utilities
- Executing system service aids, such as accessing the service processor’s error log

The scope of every action is restricted to a single server. In the case of multiple systems, the administrator must connect to each of them independently, in turn.

After the initial setup, typical ASMI usage is remote system power-on and power-off. The other functions are related to system configuration changes, such as virtualization feature activation, and troubleshooting, such as access to the service processor’s logs.

The ASMI does not allow LPARs to be managed. In order to deploy LPARs, a higher level of management is required, going beyond basic hardware configuration setup. This can be done either with an HMC or using the Integrated Virtualization Manager (IVM).


1.2 IVM design

All System p servers and the IBM BladeCenter JS21 are capable of being partitioned because they are all preloaded with the necessary firmware support for a partitioned environment. On some systems, Advanced POWER Virtualization (APV) is a priced option. There are three components in APV:

- Micro-Partitioning
- Partition Load Manager
- The Virtual I/O Server (which includes the Integrated Virtualization Manager); on high-end systems, such as the p5-590 and p5-595, it is a standard element

Because the partitioning schema is designed by the client, every system is set up by manufacturing in the same Manufacturing Default Configuration, which can be changed, or reset to, when required.

While configured using the Manufacturing Default Configuration, the system has the following setup from a partitioning point of view:

- There is a single predefined partition.
- All hardware resources are assigned to the single partition.
- The partition has system service authority, so it can update the firmware.
- The partition is auto-started at power-on.
- Standard operating system installation methods apply for the partition (network or media-based).
- The system’s physical control panel is mapped to the partition, displaying its operating system messages and error codes.
- Base platform management functions, such as power control, are provided through integrated system control functions (for example, service processor and control panel).

The Manufacturing Default Configuration enables the system to be used immediately as a stand-alone server with all resources allocated to a single LPAR. If an HMC is attached to a POWER5 processor-based system’s service processor, the system configuration can be changed to make the Hypervisor manage multiple LPARs.

When an HMC is not available and the administrator wants to exploit virtualization features, the IVM can be used.

1.2.1 Architecture

The IVM has been developed to provide a simple environment where a single control program has ownership of the physical hardware and other LPARs use it to access resources.

The VIOS has most of the required features because it can provide virtual SCSI and virtual networking capability. Starting with Version 1.2, the VIOS has been enhanced to provide management features using the IVM. The current version of the Virtual I/O Server, 1.3.0.0, comes with several IVM improvements, such as dynamic LPAR capability for client LPARs, security improvements (firewall, viosecure), and usability additions (TCP/IP GUI configuration, hyperlinks, simple LPAR creation, task monitor, and so on).

In order to set up LPARs, the IVM requires management access to the Hypervisor. It has none of the service processor connections that the HMC uses; instead, it relies on a new virtual I/O device type called the Virtual Management Channel (VMC). This device is activated only when the VIOS installation detects that the environment has to be managed by IVM.

VMC is present on VIOS only when the following conditions are true:

- The virtualization feature has been enabled.
- The system has not been managed by an HMC.
- The system is in Manufacturing Default Configuration.

In order to fulfill these requirements, an administrator has to use the ASMI. Using the ASMI, they can enter the virtualization activation code, reset the system to the Manufacturing Default Configuration, and so on. A system reset removes any previous LPAR configuration and any existing HMC connection configuration.

On a VIOS partition with IVM activated, a new ibmvmc0 virtual device is present, and a management Web server is started, listening on HTTP port 80 and HTTPS port 443. The presence of the virtual device can be detected using the lsdev -virtual command, as shown in Example 1-1.

Example 1-1 Virtual Management Channel device

$ lsdev -virtual | grep ibmvmc0
ibmvmc0          Available  Virtual Management Channel

Because IVM relies on VMC to set up logical partitioning, it can manage only the system on which it is installed. For each IVM managed system, the administrator must open an independent Web browser session.

Figure 1-4 on page 10 provides the schema of the IVM architecture. The primary user interface is a Web browser that connects to port 80 of the VIOS. The Web server provides a simple GUI and runs commands using the same command line interface that can be used for logging in to the VIOS. One set of commands provides LPAR management through the VMC, and a second set controls VIOS virtualization capabilities. VIOS 1.3.0.0 also enables secure (encrypted) shell access (SSH). The figure also shows the integration with IBM Director (Pegasus CIM server).
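For example, assuming the VIOS has been given the hypothetical address 9.3.5.123, the administrator can point a browser at https://9.3.5.123 or open the same command line interface remotely over SSH:

$ ssh padmin@9.3.5.123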


Figure 1-4 IVM high-level design

LPARs in an IVM managed system are isolated exactly as before and cannot interact except using the virtual devices. Only the IVM has been enabled to perform limited actions on the other LPARs such as:

- Activate and deactivate
- Send a power off (EPOW) signal to the operating system
- Create and delete
- View and change configuration

1.2.2 LPAR configuration

The simplification of the user interface of a single partitioned system is one of the primary goals of the IVM. LPAR management has been designed to enable quick deployment of partitions. Compared to HMC managed systems, configuration flexibility has been reduced to provide a basic usage model. A new user with no HMC skills can easily manage the system in an effective way.

LPAR configuration is done by assigning CPU, memory, and virtual I/O using a Web GUI wizard. At each step of the process, the administrator is asked simple questions and is offered the range of possible answers. Most of the parameters related to LPAR setup are hidden at creation time to ease the setup; they can be fine-tuned by changing the partition properties after the initial setup if needed.

Resources that are assigned to an LPAR are immediately allocated and are no longer available to other partitions, regardless of whether the LPAR is activated or powered down.


This behavior makes management more direct, and it is a change compared to HMC managed systems, where resource overcommitment is allowed.

It is important to understand that any unused processor resources do become available to other partitions through the shared pool when any LPAR is not using all of its processor entitlement.
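The same allocation data that the GUI presents can be queried from the IVM command line with lshwres. The following is a minimal sketch for the 4 GB, 2 processing unit system described next; the attribute names follow the HMC-style lshwres syntax and the output is illustrative:

$ lshwres -r mem --level sys -F configurable_sys_mem,curr_avail_sys_mem
4096,128
$ lshwres -r proc --level sys -F configurable_sys_proc_units,curr_avail_sys_proc_units
2.00,0.70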

System configuration is described in the GUI, as shown in Figure 1-5. In this example, an unbalanced system has been manually prepared as a specific scenario. The system has 4 GB of global memory, 2 processing units, and four LPARs defined. In the Partition Details panel, the allocated resources are shown in terms of memory and processing units. Even if the LPAR2 and LPAR3 partitions have not been activated, their resources have been allocated and the available system’s memory and processing units have been updated accordingly.

If a new LPAR is created, it cannot use the resources belonging to a powered-off partition, but it can be defined using the available free resources shown in the System Overview panel.

The processing units for the LPAR named LPAR1 (ID 2) have been changed from the default 0.2 created by the wizard to 0.1. LPAR1 can use up to one processor, because it has one virtual processor, and it is guaranteed 0.1 processing units.

Figure 1-5 System configuration status


Memory

Memory is assigned to an LPAR using available memory on the system, with an allocation unit size that can vary from system to system depending on its memory configuration. The wizard provides this information, as shown in Figure 1-6.

Figure 1-6 Memory allocation to LPAR

The minimum allocation size of memory is related to the system’s logical memory block (LMB) size. It is defined automatically at the boot of the system depending on the size of physical memory, but it can be changed using ASMI on the Performance Setup menu, as shown in Figure 1-7. The default automatic setting can be changed to the following values: 16 MB, 32 MB, 64 MB, 128 MB, or 256 MB.

Figure 1-7 Logical Memory Block Size setup

In order to change the LMB setting, the entire system has to be shut down. If an existing partition has a memory size that does not fit in the new LMB size, the memory size is changed to the nearest value allowed by the new LMB size, without exceeding the original memory size.

A small LMB size provides better granularity in memory assignment to partitions, but requires longer memory allocation and deallocation times because more operations are needed for the same amount of memory. Larger LMB sizes can slightly increase the firmware reserved memory size. We suggest keeping the default automatic setting.

Processors

An LPAR can be defined either with dedicated or with shared processors. The wizard provides available resources in both cases and asks which processor resource type to use.

When shared processors are selected for a partition, the wizard only asks the administrator to choose the number of virtual processors to be activated, with a maximum value equal to the number of system processors. For each virtual processor, 0.1 processing units are implicitly assigned and the LPAR is created in uncapped mode with a weight of 128.

Figure 1-8 shows the wizard panel related to the system configuration described in Figure 1-5 on page 11. Because only 0.7 processing units are available, no dedicated processors can be selected, and a maximum of two virtual processors is allowed. Selecting one virtual processor will allocate 0.1 processing units.

Figure 1-8 Processor allocation to LPAR

The LPAR configuration can be changed after the wizard has finished creating the partition. Available parameters are:

- Processing unit value
- Virtual processor number
- Capped or uncapped property
- Uncapped weight

The default LPAR configuration provided using the partition creation wizard is designed to keep the system balanced. Manual changes to the partition configuration should be made after careful planning of the resource distribution. The configuration described in Figure 1-5 on page 11 shows manually changed processing units, and it is quite unbalanced.
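Such a manual change can also be made from the IVM command line. The following sketch, assuming a partition named LPAR1 and the chsyscfg profile attribute names used on IVM and HMC systems, sets the guaranteed processing units to 0.1 while keeping one virtual processor:

$ chsyscfg -r prof -i "name=LPAR1,desired_proc_units=0.1,desired_procs=1"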


As a general suggestion:

- For the LPAR configuration, select appropriate virtual processors and keep the default processing units when possible.
- Leave some system processing units unallocated; they are available to all LPARs that require them.
- Do not underestimate the processing units assigned to the VIOS. If not needed, they remain available in the shared pool, but during peak system utilization periods they can be important for the VIOS to provide service to highly active partitions.

Virtual Ethernet

Every IVM managed system is configured with four predefined virtual Ethernet devices, each with a virtual Ethernet ID ranging from 1 to 4. Every LPAR can have up to two virtual Ethernet adapters that can be connected to any of the four virtual networks in the system.

Each virtual Ethernet can be bridged by the VIOS to a physical network using only one physical adapter. If higher performance or redundancy is required, a physical adapter aggregation can be made on one of these bridges instead. The same physical adapter or physical adapter aggregation cannot bridge more than one virtual Ethernet. See 4.1, “Network management” on page 72 for more details.

Figure 1-9 shows a Virtual Ethernet wizard panel. All four virtual networks are described with the corresponding bridging physical adapter, if configured. An administrator can decide how to configure the two available virtual adapters. By default, adapter 1 is assigned to virtual Ethernet 1, and the second virtual Ethernet adapter is unassigned.

Figure 1-9 Virtual Ethernet allocation to LPAR

The virtual Ethernet is a bootable device and can be used to install the LPAR’s operating system.
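The bridge itself is implemented on the VIOS as a Shared Ethernet Adapter. As a sketch, assuming the physical adapter ent0 and the VIOS virtual adapter ent2 connected to virtual Ethernet 1, the bridge can be created from the VIOS command line:

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1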

Virtual storage

Every LPAR can be equipped with one or more virtual devices using a single virtual SCSI adapter. A virtual disk device has the following characteristics:

- The size is defined by the administrator.
- It is treated by the operating system as a normal SCSI disk.
- It is bootable.


- It is created using the physical storage owned by the VIOS partition, either internal or external to the physical system (for example, on the storage area network).
- It can be defined either using an entire physical volume (a SCSI disk or a logical unit number of an external storage server) or a portion of a physical volume.
- It can be assigned only to a single partition at a time.

Virtual disk device content is preserved if moved from one LPAR to another or increased in size. Before making changes in the virtual disk device allocation or size, the owning partition should deconfigure the device to prevent data loss.

A virtual disk device that does not require an entire physical volume can be defined using disk space from a storage pool created on the VIOS, which is a set of physical volumes. Virtual disk devices can be created spanning multiple disks in a storage pool, and they can be extended if needed.

The IVM can manage multiple storage pools and change their configurations by adding or removing physical disks to them. In order to simplify management, one pool is defined to be the default storage pool and most virtual storage actions implicitly refer to it.

At the time of writing, we recommend keeping each storage pool on a single physical SCSI adapter.
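As an illustration of the storage pool model, the following VIOS sketch lists the defined pools and then creates a 10 GB virtual disk in the default pool, mapping it to a partition’s virtual SCSI adapter; the backing device name vd_lpar1 and the adapter vhost0 are assumptions for this example:

$ lssp
$ mkbdsp 10G -bd vd_lpar1 -vadapter vhost0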

Virtual optical devices

Any optical device that is assigned to the VIOS partition (either CD-ROM, DVD-ROM, or DVD-RAM) can be virtualized and assigned to any LPAR, one at a time, using the same virtual SCSI adapter provided to virtual disks. Virtual optical devices can be used to install the operating system and, when a DVD-RAM is available, to make backups.

Virtual TTY

In order to allow LPAR installation and management, the IVM provides a virtual terminal environment for LPAR console handling. When a new LPAR is defined, two matching virtual serial adapters are created for console access, one on the LPAR and one on the IVM. This provides a connection from the IVM to the LPAR through the Hypervisor.

The IVM does not provide a Web-based terminal session to partitions. In order to connect to an LPAR’s console, the administrator has to log in to the VIOS and use the command line interface. Only one session for each partition is allowed, because there is only one virtual serial connection.

The following commands are provided:

mkvt   Connect to a console.
rmvt   Remove an existing console connection.

The virtual terminal is provided for initial installation and setup of the operating system and for maintenance reasons. Normal access to the partition is made through the network using services such as telnet and ssh. Each LPAR can be configured with one or two virtual networks that can be bridged by VIOS into physical networks connected to the system.
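For example, to open the console of the LPAR with ID 2 from the VIOS command line, and to remove the session from another login if it was left open:

$ mkvt -id 2
$ rmvt -id 2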

1.2.3 Considerations for partition setup

When using the IVM, it is easy to create and manage a partitioned system, because most of the complexity of the LPAR setup is hidden. A new user can quickly learn an effective methodology to manage the system. However, it is important to understand how configurations are applied and can be changed.


The VIOS is the only LPAR that is capable of management interaction with the Hypervisor and is able to react to hardware configuration changes. Its configuration can be changed dynamically while it is running. The other LPARs do not have access to the Hypervisor and have no interaction with IVM to be aware of possible system changes.

Starting with IVM 1.3.0.0, it is possible to change any resource allocation for the client LPARs through the IVM Web interface. This enables the user to change processing unit configuration, memory allocation, and virtual adapter setup while the LPAR is activated. This is possible with the introduction of a new concept called DLPAR Manager (with an RMC daemon).

The IVM command line interface enables an experienced administrator to make modifications to a partition configuration. Changes using the command line are shown in the Web GUI, and a warning message is displayed to highlight the fact that the resources of an affected LPAR are not yet synchronized.

Figure 1-10 shows a case where the memory has been changed manually on the command line. In order to see the actual values, the administrator must select the partition in the GUI and click the Properties link, or simply click the hyperlink for more details about the synchronization of the current and pending values.

Figure 1-10 Manual LPAR configuration
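A change like the one shown in Figure 1-10 can be produced with a single command. This sketch assumes a partition named lpar2 whose pending memory is set to 512 MB; the GUI then flags the partition until the current and pending values are synchronized:

$ chsyscfg -r prof -i "name=lpar2,desired_mem=512"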


Figure 1-11 shows a generic LPAR schema from an I/O point of view. Every LPAR is created with one virtual serial and one virtual SCSI connection. There are four predefined virtual networks, and the VIOS is already equipped with one virtual adapter connected to each of them.

Figure 1-11 General I/O schema on an IVM managed system

Because there is only one virtual SCSI adapter for each LPAR, the Web GUI hides its presence and shows virtual disks and optical devices as assigned directly to the partition. When the command line interface is used, the virtual SCSI adapter must be taken into account.

For virtual I/O adapter configuration, the administrator only has to define whether to create one or two virtual Ethernet adapters on each LPAR and the virtual network to which it has to be connected. Only virtual adapter addition and removal and virtual network assignment require the partition to be shut down.

All remaining I/O configurations are done dynamically, as the sketch after this list shows:

- An optical device can be assigned to any virtual SCSI channel.
- A virtual disk device can be created, deleted, or assigned to any virtual SCSI channel.
- Ethernet bridging between a virtual network and a physical adapter can be created, deleted, or changed at any time.
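As an example of such a dynamic change, the following VIOS sketch assigns the physical optical device cd0 to a partition’s virtual SCSI adapter and then lists the resulting mappings; the adapter name vhost1 is an assumption:

$ mkvdev -vdev cd0 -vadapter vhost1
$ lsmap -all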


Chapter 2. Installation

Starting with Version 1.2, IVM is shipped with the VIOS media. It is activated during the VIOS installation only if all of the following conditions are true:

- The system is in the Manufacturing Default Configuration.
- The system has never been managed by an HMC.
- The virtualization feature has been enabled.

A new system from manufacturing that has been ordered with the virtualization feature will be ready for the IVM. If the system has ever been managed by an HMC, the administrator is required to reset it to the Manufacturing Default Configuration. If virtualization has not been activated, the system cannot manage micropartitions, and an IBM sales representative should be contacted to order the activation code.

If a system supports the IVM, it can be ordered with IVM preinstalled.

The IVM installation requires the following items:

� A serial ASCII console and cross-over cable (a physical ASCII terminal or a suitable terminal emulator) connected to one of the two system ports for initial setup

� An IP address for IVM

� An optional, but recommended, IP address for the Advanced System Management Interface (ASMI)

This chapter describes how to install the IVM on a supported system. The procedure is valid for any system as long as the IVM requirements are satisfied; however, we start with a complete reset of the server. If the system is in Manufacturing Default Configuration and the Advanced POWER Virtualization feature is enabled, the IVM can be activated. In this case, skip the first steps and start with the IVM media installation in 2.5, “VIOS image installation” on page 28.


2.1 Reset to Manufacturing Default Configuration

This operation is needed only if the system has been previously managed by the HMC. It resets the system, removing all partition configuration and any personalization that has been made to the service processor.

The following steps describe how to reset the system:

1. Power off the system.

2. Connect a serial ASCII console to a system port using a null-modem (cross-over) cable. The port settings are:

– 19200 bits per second

– 8 data bits

– No parity

– 1 stop bit

– Xon/Xoff flow control

3. Press any key on the TTY’s serial connection to receive the service processor prompt.

4. Log in as the user admin and answer the questions about the number of lines and columns for the output. The default password is admin.

5. Enter the System Service Aids menu and select the Factory Configuration option. A warning message similar to what is shown in Example 2-1 describes the effect of the reset and asks for confirmation. Enter 1 to confirm.

Example 2-1 Factory configuration reset

Continuing will result in the loss of all configured system settings (such as
the HMC access and ASMI passwords, time of day, network configuration, hardware
deconfiguration policies, etc.) that you may have set via user interfaces.
Also, you will lose the platform error logs and partition-related information.
Additionally, the service processor will be reset. Before continuing with this
operation make sure you have manually recorded all settings that need to be
preserved.

Make sure that the interface HMC1 or HMC2 not being used by ASMI or HMC is
disconnected from the network. Follow the instructions in the system service
publications to configure the network interfaces after the reset.

Enter 1 to confirm or 2 to cancel: 1
The service processor will reboot in a few seconds.

Note: After a factory configuration reset, the system activates the microcode version present in the permanent firmware image. Check the firmware levels in the permanent and temporary images before resetting the system.

Note: More information about migration between the HMC and IVM can be found in 5.2, “The migration between HMC and IVM” on page 98.


2.2 Microcode update

In order to install the IVM, microcode level SF235 or later is required. If the update is not needed, skip this section.

The active microcode level is reported by the service processor. If the system is powered off, connect to a system port as described in 2.1, “Reset to Manufacturing Default Configuration” on page 20, and log in as the admin user. The first menu shows the system’s microcode level in the Version line (Example 2-2).

Example 2-2 Current microcode level display using system port

System name: Server-9111-520-SN10DDEDC
Version: SF235_160
User: admin
Copyright (c) 2002-2005 IBM Corporation. All rights reserved.

 1. Power/Restart Control
 2. System Service Aids
 3. System Information
 4. System Configuration
 5. Network Services
 6. Performance Setup
 7. On Demand Utilities
 8. Concurrent Maintenance
 9. Login Profile
99. Log out

S1>

If the service processor’s IP address is known, the same information is provided using the ASMI in the upper panel of the Web interface, as shown in Figure 2-1. For a description of the default IP configuration, see 2.3, “ASMI IP address setup” on page 23.

Figure 2-1 Current microcode level display using the ASMI


If the system microcode must be updated, the code and installation instructions are available from the following Web site:

http://www14.software.ibm.com/webapp/set2/firmware

Microcode can be installed through one of the following methods:

� HMC
� Running operating system
� Running IVM
� Diagnostic CD

The HMC and running operating system methods require the system to be reset to the Manufacturing Default Configuration before installing the IVM. If the system is already running the IVM, refer to 5.3.1, “Microcode update” on page 110 for instructions.

In order to use a diagnostic CD, a serial connection to the system port is required with the setup described in 2.1, “Reset to Manufacturing Default Configuration” on page 20.

The following steps describe how to update the microcode using a diagnostic CD:

1. Download the microcode as an ISO image and burn it onto a CD-ROM. The latest image is available at:

http://techsupport.services.ibm.com/server/mdownload/p5andi5.iso

2. Insert the diagnostic CD in the system drive and boot the system from it, following steps 1 to 7 described in 2.5, “VIOS image installation” on page 28.

3. Follow the instructions on the screen until the main menu screen (Example 2-3) opens.

Example 2-3 Main diagnostic CD menu

FUNCTION SELECTION

1 Diagnostic Routines
    This selection will test the machine hardware. Wrap plugs and other
    advanced functions will not be used.
2 Advanced Diagnostics Routines
    This selection will test the machine hardware. Wrap plugs and other
    advanced functions will be used.
3 Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.)
    This selection will list the tasks supported by these procedures. Once a
    task is selected, a resource menu may be presented showing all resources
    supported by the task.
4 Resource Selection
    This selection will list the resources in the system that are supported
    by these procedures. Once a resource is selected, a task menu will be
    presented showing all tasks that can be run on the resource(s).
99 Exit Diagnostics

NOTE: The terminal is not properly initialized. You will be prompted to initialize the terminal after selecting one of the above options.

To make a selection, type the number and press Enter. [ ]

4. Remove the diagnostic CD from the drive and insert the microcode CD.

5. Select Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.) → Update and Manage System Flash → Validate and Update System Firmware.


6. Select the CD drive from the menu.

7. When prompted for the flash update image file, press the F7 key to commit. If the console does not support it, use the Esc-7 sequence.

8. On the final screen, shown in Example 2-4, select YES and wait for the firmware update to be completed and for the subsequent system reboot to be executed.

Example 2-4 Confirmation screen for microcode update

UPDATE AND MANAGE FLASH 802816

The image is valid and would update the temporary image to SF235_137.
The new firmware level for the permanent image would be SF220_051.

The current permanent system firmware image is SF220_051.
The current temporary system firmware image is SF220_051.

***** WARNING: Continuing will reboot the system! *****

Do you wish to continue?

Make selection, use 'Enter' to continue.

NO YES

2.3 ASMI IP address setup

The service processor is equipped with two standard Ethernet ports, labeled HMC1 and HMC2, for network access. In an IVM environment, they are used to access ASMI menus using a Web browser. ASMI enables remote hardware administration and service agent setup and relies on the HTTPS protocol. Both Ethernet ports can be used if a valid network address is given.

By default, when the system is connected to a power source and the service processor boots, a Dynamic Host Configuration Protocol (DHCP) request is sent in the network through both HMC ports. If a DHCP server is available, it provides an IP address to the port; otherwise, the following default values are used:

� Port HMC1: 192.168.2.147, netmask 255.255.255.0

� Port HMC2: 192.168.3.147, netmask 255.255.255.0

The DHCP-managed addresses are mainly intended to be used in an HMC environment. IVM is capable of showing the IP addresses of both HMC ports, but when the system is powered off and IVM is not running, it might become difficult to contact ASMI because the addresses might change when the service processor reboots.

The IP configuration of the ports can be changed using the ASMI menu or connecting to the system serial ports. ASMI can be reached only if the current IP configuration is known or if the default addresses are in use. Serial ports are available for service processor access only if the system is powered off.


2.3.1 Address setting using the ASMI

The following procedure uses the default address assigned to port HMC1. That address is in use if no other address has been manually configured and if no DHCP server gave an IP address to the port when the system was connected to a power source. If you are not sure about DHCP, you can disconnect the Ethernet cable from the HMC1 port, remove all power to the system, reconnect the power, and then wait for the service processor to boot.

You need a system equipped with a Web browser (Netscape 7.1, Microsoft® Internet Explorer® 6.0, Opera 7.23, or later versions) and the following network settings:

� IP 192.168.2.148

� Netmask 255.255.255.0

Use the following steps to set up the addressing:

1. Use an Ethernet cable to connect the HMC1 port with the Ethernet port of your system.

2. Connect the Web browser using the following URL:

https://192.168.2.147

3. Log in as the user admin with the password admin.

4. Expand the Network Services menu and click Network Configuration. Figure 2-2 shows the corresponding menu.

5. Complete the fields with the desired network settings and click Continue. The Network interface eth0 corresponds to port HMC1; eth1 corresponds to HMC2.

Figure 2-2 HMC1 port setup using the ASMI

6. Review your configuration and click Save settings to apply the change.


2.3.2 Address setting using serial ports

When the HMC ports’ IP addresses are not known and the ASMI cannot be used, it is possible to access the service processor by attaching an ASCII console to one of the system serial ports.

The following steps describe how to assign a fixed IP address to an HMC port:

1. Power off the system.

2. Connect to the system port as described in 2.1, “Reset to Manufacturing Default Configuration” on page 20.

3. Expand the Network Services menu and click Network Configuration.

Example 2-5 shows the steps to configure the port HMC1. The menu enables you to configure the interfaces Eth0 and Eth1 that correspond to system ports HMC1 and HMC2. To define a fixed IP address, provide the IP address, netmask, and, possibly, the default gateway.

Example 2-5 HMC1 port configuration

Network Configuration

 1. Configure interface Eth0
 2. Configure interface Eth1
98. Return to previous menu
99. Log out

S1> 1

Configure interface Eth0
MAC address: 00:02:55:2F:BD:E0
Type of IP address
Currently: Dynamic

 1. Dynamic
    Currently: 192.168.2.147
 2. Static
98. Return to previous menu
99. Log out

S1> 2

Configure interface Eth0
MAC address: 00:02:55:2F:BD:E0
Type of IP address: Static

 1. Host name
 2. Domain name
 3. IP address (Currently: 192.168.2.147)
 4. Subnet mask
 5. Default gateway
 6. IP address of first DNS server
 7. IP address of second DNS server
 8. IP address of third DNS server
 9. Save settings and reset the service processor


98. Return to previous menu
99. Log out

S1>

2.4 Virtualization feature activation

This step is needed only if the system has not yet been enabled with virtualization. Normally, new systems ordered with this feature come from manufacturing with virtualization active.

Virtualization is enabled using a specific code that is shipped with the system, or can be retrieved from the following address after providing the system type and serial number:

http://www-912.ibm.com/pod/pod

The ASMI is used to activate the virtualization feature with the following steps:

1. Connect to the ASMI with a Web browser using the HTTPS protocol to the IP address of one of HMC ports and log in as the user admin. The default password is admin.

2. Set the system in standby state. Expand the Power/Restart Control menu and click Power On/Off System. Figure 2-3 shows the corresponding ASMI menu. In the Boot to system server firmware field, select Standby and click Save settings and power on.

Figure 2-3 ASMI menu to boot system in standby mode


3. Enter the activation code as soon as the system has finished booting. Expand the On Demand Utilities menu and click CoD Activation. Figure 2-4 shows the corresponding menu. Enter the code provided to activate the feature in the specific system and click Continue. A confirmation message appears.

Figure 2-4 ASMI virtualization code activation


4. Set the system in running mode and shut it off. Again, select the Power On/Off System menu, select Running for the Boot to system server firmware field, and click Save settings and power off, as shown in Figure 2-5.

Figure 2-5 ASMI menu to bring system in running mode and power off

2.5 VIOS image installation

The Virtual I/O Server is shipped on a single medium that contains a bootable image of the software, including the IVM component. Installing it requires a serial connection to the system port with the setup described in 2.1, “Reset to Manufacturing Default Configuration” on page 20.

The following steps describe how to install the VIOS:

1. Power on the system, using either the ASMI or the power-on (white) button at the front of the system.

2. When connecting using a TTY to the serial connection, you might be prompted to define it as an active console. If so, press the key that is indicated on the screen.

3. Wait for the System Management Services (SMS) menu shown in Example 2-6 on page 29 and enter 1 after the word keyboard appears on the screen.


Example 2-6 SMS menu selection

IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM

1 = SMS Menu                          5 = Default Boot List
8 = Open Firmware Prompt              6 = Stored Boot List

memory keyboard network scsi speaker

4. When requested, provide the password for the service processor’s admin user. The default password is admin.

5. Insert the VIOS installation media in the drive.

6. Use the SMS menus to select the CD or DVD device to boot. Select Select Boot Options → Select Install/Boot Device → CD/DVD → IDE and choose the right device from a list similar to the one shown in Example 2-7.

Example 2-7 Choose optical device from which to boot

Version: SF240_261
SMS 1.5 (c) Copyright IBM Corp. 2000,2003 All rights reserved.
-------------------------------------------------------------------------------
Select Device
Device  Current  Device
Number  Position Name
 1.        1     IDE CD-ROM ( loc=U787B.001.DNW108F-P4-D2 )

-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen    X = eXit System Management Services
-------------------------------------------------------------------------------
Type the number of the menu item and press Enter or select Navigation Key: 1

7. Select Normal Mode Boot and exit from the SMS menu.

8. Select the console number and press Enter.

9. Select the preferred installation language from the menu.

10.Select the installation preferences. Choose the default settings, as shown in Example 2-8.

Example 2-8 VIOS installation setup

Welcome to Base Operating System Installation and Maintenance

Type the number of your choice and press Enter. Choice is indicated by >>>.

>>> 1 Start Install Now with Default Settings

2 Change/Show Installation Settings and Install

3 Start Maintenance Mode for System Recovery


   88 Help ?
   99 Previous Menu

>>> Choice [1]: 1

11.Wait for the VIOS restore. A progress status is shown, as in Example 2-9. At the end, VIOS reboots.

Example 2-9 VIOS installation progress status

Installing Base Operating System

Please wait...

 Approximate       Elapsed time
 % tasks complete  (in minutes)
       28               7
29% of mksysb data restored.

12.Log in to the VIOS using the user padmin and the default password padmin. When prompted, change the login password to something secure.

13.Accept the VIOS license by issuing the license -accept command.

2.6 Initial configuration

The new VIOS requires a brief initial configuration using the command line interface. After that, all management is performed using the Web interface.

2.6.1 Virtualization setup

The four virtual Ethernet interfaces that the IVM manages are not created during the VIOS installation. The administrator must execute the mkgencfg command to create them, with the following syntax:

mkgencfg -o init [-i "configuration data"]

The optional configuration data can be used to define the prefix of the MAC addresses of the four VIOS virtual Ethernet adapters and to define the maximum number of partitions supported by the IVM after the next restart. Although the maximum number of partitions can be changed later using the IVM Web GUI, the MAC address prefix cannot be modified afterward.
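As a hedged sketch, an initialization with configuration data might look like the following; the attribute names max_lpars and mac_prefix are assumptions based on the published mkgencfg usage, so verify them against your VIOS level:

$ mkgencfg -o init -i "max_lpars=16,mac_prefix=0622aa"   # assumed attributes: 16 partitions maximum, fixed MAC prefix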

Example 2-10 shows the effect of the command.

Example 2-10 The mkgencfg command

$ lsdev | grep ^ent
ent0 Available 10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent1 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent3 Available 10/100 Mbps Ethernet PCI Adapter II (1410ff01)
$ mkgencfg -o init


$ lsdev | grep ^ent
ent0 Available 10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent1 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent3 Available 10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent4 Available Virtual I/O Ethernet Adapter (l-lan)
ent5 Available Virtual I/O Ethernet Adapter (l-lan)
ent6 Available Virtual I/O Ethernet Adapter (l-lan)
ent7 Available Virtual I/O Ethernet Adapter (l-lan)

2.6.2 Set the date and time

Use the chdate command to set the VIOS date and time, using the following syntax:

chdate [-year YYyy] [-month mm] [-day dd] [-hour HH] [-minute MM] [-timezone TZ]
chdate mmddHHMM[YYyy | yy] [-timezone TZ]
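For example, using the flags from the syntax above (the date and time values are illustrative), the following sets the clock to December 21, 2006, 15:30:

$ chdate -day 21 -month 12 -year 2006 -hour 15 -minute 30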

2.6.3 Initial network setup

The IVM Web interface requires a valid network configuration to work. Configure the IP by choosing a physical network adapter and issuing the mktcpip command from the command line, using the following syntax:

mktcpip -hostname HostName -inetaddr Address -interface Interface [-start]
        [-netmask SubnetMask] [-cabletype CableType] [-gateway Gateway]
        [-nsrvaddr NameServerAddress [-nsrvdomain Domain]]

Example 2-11 shows how to set the host name, IP address, netmask, and default gateway for the IVM.

Example 2-11 IVM network setup at the command line

$ mktcpip -hostname ivm -inetaddr 9.3.5.123 -interface en0 -start -netmask 255.255.255.000 -gateway 9.3.5.41

After the IVM Web server has access to the network, it is possible to use the Web GUI with the HTTP or the HTTPS protocol pointing to the IP address of the IVM server application. Authentication requires the use of the padmin user, unless other users have been created.

2.6.4 Changing the TCP/IP settings on the Virtual I/O Server

Using the Virtual I/O Server 1.3.0.0 with the Integrated Virtualization Manager enables you to change the TCP/IP settings on the Virtual I/O Server through the graphical user interface.

Use any role other than the View Only role to perform this task. Users with the View Only role can view the TCP/IP settings but cannot change them.

Before you can view or modify the TCP/IP settings, you must have an active network interface.

Important: The IVM, as a Web server, requires valid name resolution to work correctly. If DNS is involved, check that both the name and IP address resolution of the IVM host name are correct.


To view or modify the TCP/IP settings, perform the following steps:

1. From the IVM Management menu, click View/Modify TCP/IP Settings. The View/Modify TCP/IP Settings panel opens (Figure 2-6).

Figure 2-6 View/Modify TCP/IP settings

2. Depending on which setting you want to view or modify, click one of the following tabs:

– General to view or modify the host name and the partition communication IP address

– Network Interfaces to view or modify the network interface properties, such as the IP address, subnet mask, and the state of the network interface

– Name Services to view or modify the domain name, name server search order, and domain server search order

– Routing to view or modify the default gateway

3. Click Apply to activate the new settings.

2.7 VIOS partition configuration

After you complete the network configuration of the VIOS, the IVM interface is available and can be accessed using a Web browser. Connect using HTTP or HTTPS to the IP address assigned to the VIOS and log in as the user padmin.

Important: Modifying your TCP/IP settings remotely might result in the loss of access to the current session. Ensure that you have physical console access to the Integrated Virtualization Manager partition prior to making changes to the TCP/IP settings.


The first panel that opens after the login process is the partition configuration, as shown in Figure 2-7. After the initial installation of the IVM, there is only the VIOS partition on the system with the following characteristics:

� The ID is 1.

� The name is equal to the system’s serial number.

� The state is Running.

� The allocated memory is the larger of 512 MB and one-eighth of the installed system memory.

� The number of virtual processors is equal to or greater than the number of processing units, and the processing units are equal to at least 0.1 times the total number of virtual processors in the LPAR.

Figure 2-7 Initial partition configuration

The default configuration for the partition has been designed to be appropriate for most IVM installations. If the administrator wants to change the memory or processing unit allocation of the VIOS partition, a dynamic reconfiguration action can be made either using the Web GUI or the command line, as described in 3.5, “LPAR configuration changes” on page 57. With VIOS/IVM 1.3.0.0, dynamic reconfiguration of memory and processors (AIX 5L) or processors only (Linux) is also supported for the client partitions.

2.8 Network management

When installed, the VIOS configures one network device for each physical Ethernet adapter present on the system and creates four virtual Ethernet adapters, each belonging to a separate virtual network.

Any partition can be created with its own virtual adapters connected to any of the four available virtual networks. No bridging is provided with physical adapters at installation time.

The IVM enables any virtual network to be bridged using any physical adapter, provided that the same physical adapter is not used to bridge more than one virtual network.


In 4.1, “Network management” on page 72, we describe the network bridging setup.

2.9 Virtual Storage management

The IVM uses the following virtual storage management concepts:

Physical volume A physical disk or a logical unit number (LUN) on a storage area network (SAN). They are all owned by the IVM. A physical volume not belonging to any storage pool can be assigned whole to a single partition as a virtual device.

Storage pool A set of physical volumes treated as a single entity. There can be multiple pools and they cannot share physical disks. One pool is defined as the default storage pool.

Virtual disk Logical volume that the IVM assigns to a single partition as a virtual device.

Both physical volumes and virtual disks can be assigned to an LPAR to provide disk space. Each of them is represented by the LPAR operating system as a single disk. For example, assigning a 73.4 GB physical disk and a 3 GB virtual disk to an LPAR running AIX 5L makes the operating system create two hdisk devices.
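A hedged illustration of that view from the AIX 5L client (the output lines are representative, not captured from a real system): both devices appear as virtual SCSI disks, and only their sizes and the mapping on the VIOS side reveal which is the physical volume and which is the virtual disk.

# lsdev -Cc disk
hdisk0 Available  Virtual SCSI Disk Drive
hdisk1 Available  Virtual SCSI Disk Drive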

At installation time, there is only one storage pool named rootvg, normally containing only one physical volume. All remaining physical volumes are available but not assigned to any pool.

The rootvg pool is used for IVM management, and we do not recommend using it to provide disk space to LPARs. Because it is the only pool available at installation time, it is also defined as the default pool. Create another pool and set it as the default before creating other partitions.
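A minimal CLI sketch of that recommendation, assuming the VIOS mksp and chsp commands behave as documented and that hdisk2 and hdisk3 are free physical volumes (both names are illustrative):

$ mksp datapool hdisk2 hdisk3   # create a new storage pool (erases any data on the disks)
$ chsp -default datapool        # make it the default pool for new virtual disks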

You can use rootvg as a storage pool on a system equipped with a SCSI RAID adapter when all of the physical disks are configured as a single RAID array. In this case, the administrator must first boot the server using the Standalone Diagnostics CD-ROM provided with the system and create the array. During the VIOS image installation, only one disk will be available, representing the array itself.

From any storage pool, virtual disks can be defined and configured. They can be created in several ways, depending on the IVM menu that is used:

� During LPAR creation. A virtual disk is created in the default storage pool and assigned to the partition.

� Using the Create Virtual Storage link. A virtual disk is not assigned to any partition and it is created in the default storage pool. The storage pool can then be selected/assigned by selecting the Storage Pool tab on the View/Modify Virtual Storage view.

We discuss basic storage management in 3.2.2, “Storage pool disk management” on page 39 and in 4.2, “Storage management” on page 76.

Important: Create at least one additional storage pool so that the rootvg pool is not the default storage pool.


2.10 Installing and managing the Virtual I/O Server on a JS21

This section discusses the Virtual I/O Server with respect to the JS21 platform.

IVM functions in the same way as on a System p5 server. LPARs are managed identically. IBM Director is the management choice for a BladeCenter JS21.

2.10.1 Virtual I/O Server image installation from DVD

Virtual I/O Server 1.3.0.0 is shipped on a single DVD that contains a bootable image of the software. When installing VIOS/IVM from DVD, you must assign the media tray to the desired blade and then mount the VIOS installation media. The remaining steps are similar to a normal AIX 5L operating system installation.

2.10.2 Virtual I/O Server image installation from a NIM server

It is also possible to install the Virtual I/O Server from a NIM server. To perform the VIOS installation via NIM, follow these steps:

1. Install or define an existing server running AIX 5L that can be configured as a NIM server.

2. If your NIM server does not have a DVD drive, get access to a computer with a DVD drive and a network connection. This computer may run an operating system other than the AIX 5L operating system, for example Linux or Windows®.

3. Configure the NIM server.

4. Mount the VIOS installation DVD in the computer and transfer the mksysb and bosinst.data files from the /nimol/ioserver_res directory on the DVD to the NIM server (Example 2-12).

Example 2-12 NIM installation

# mount -oro -vcdrfs /dev/cd0 /mnt
# cp /mnt/nimol/ioserver_res/mksysb /export/vios
# cp /mnt/nimol/ioserver_res/bosinst.data /export/vios

For more information, see Chapter 7 of the IBM BladeCenter JS21: The POWER of Blade Innovation, SG24-7273.

Note: You can also use the installios command or NIM to install the IVM without the HMC. The command will set up the resources and services for the installation. All that is needed is to point the installing machine from the SMS network boot menu (in this case, the IVM) to the server that ran the installios or nim command. The network installation will then proceed as usual.

Note: When using the JS21 in a BladeCenter chassis that does not have a DVD drive in the media tray, the VIOS can be installed through the network from a NIM server or a Linux server.


Chapter 3. Logical partition creation

The Integrated Virtualization Manager (IVM) provides a unique environment to administer LPAR-capable servers.

This chapter discusses the following topics related to LPAR management using the IVM:

� LPAR creation, deletion, and update

� Graphical and command line interfaces

� Dynamic operations on LPARs


3.1 Configure and manage partitions

The IVM provides several ways to configure and manage LPARs:

� A graphical user interface, designed to be as simple and intuitive as possible, incorporating partition management, storage management, serviceability, and monitoring capabilities. See 3.2, “IVM graphical user interface” on page 38.

� A command line interface, to enable scripting capabilities. See 3.3, “IVM command line interface” on page 54.

� Starting with IVM Version 1.3.0.0, there is also a simplified partition creation method that uses the Create Based On option in the task area. See 3.2.4, “Create an LPAR based on an existing partition” on page 49.

The following sections explain these methods.

3.2 IVM graphical user interface

The new graphical user interface (GUI) is an HTML-based interface. It enables you to create LPARs on a single managed system, manage the virtual storage and virtual Ethernet on the managed system, and view service information related to the managed system.

3.2.1 Connect to the IVM

Open a Web browser window and connect using the HTTP or HTTPS protocol to the IP address that was assigned to the IVM during the installation process, as described in 2.6.3, “Initial network setup” on page 31. A Welcome window that contains the login and password prompts opens, as shown in Figure 3-1. The default user ID is padmin; the password is the one you defined at IVM installation time.

Figure 3-1 IVM login page

After the authentication process completes, the default IVM console window opens, as shown in Figure 3-2 on page 39. The IVM graphical user interface is composed of several elements.


The following elements are the most important:

Navigation area The navigation area displays the tasks that you can access in the work area.

Work area The work area contains information related to the management tasks that you perform using the IVM and to the objects on which you can perform management tasks.

Task area The task area lists the tasks that you can perform for items displayed in the work area. The tasks listed in the task area can change depending on the page that is displayed in the work area, or even depending on the tab that is selected in the work area.

Figure 3-2 IVM console: View/Modify Partitions

3.2.2 Storage pool disk management

During the installation of the VIOS, a default storage pool is created and named rootvg.

During the process of creating the LPAR, the IVM automatically creates virtual disks in the default storage pool. We recommend that you create another storage pool and add virtual disks to it for the LPARs. For advanced configuration of the storage pool, refer to 4.2, “Storage management” on page 76.

Storage pool creation

A storage pool consists of a set of physical disks that can be of different types and sizes. You can create multiple storage pools; however, a disk can only be a member of a single storage pool.



The following steps describe how to create a storage pool:

1. Under the Virtual Storage Management menu in the navigation area, click the Create Virtual Storage link.

2. Click Create Storage Pool in the work area, as shown in Figure 3-3.

Figure 3-3 Create Virtual Storage

Important: All data of a physical volume is erased when you add this volume to a storage pool.


3. Type a name in the Storage pool name field and select the needed disks, as shown in Figure 3-4.

Figure 3-4 Create Virtual Storage: Storage pool name

4. Click OK to create the storage pool. A new storage pool called datapoolvg2 with hdisk2 and hdisk3 has been created.

Default storage pool

The default storage pool created during the IVM installation is rootvg. This is because rootvg is the only volume group created at that time.

Because the IVM is installed in rootvg, the rootvg storage pool is overwritten when the IVM is reinstalled. Change the default storage pool to another one to avoid creating virtual disks within rootvg by default, thus preventing the loss of user data during an IVM update.

The following steps describe how to change the default storage pool:

1. Under the Virtual Storage Management menu in the navigation area, click View/Modify Virtual Storage.

Important: Create at least one additional storage pool. The rootvg storage pool should not be the default storage pool; this would result in IVM and user data being merged on the same storage devices.


2. Select the storage pool you want as the default, as shown in Figure 3-5.

Figure 3-5 View/Modify Virtual Storage - Storage Pools list

3. Click Assign as default storage pool in the task area.

4. A summary with the current and the next default storage pool opens, as shown in Figure 3-6.

5. Click OK to validate the change. In this example datapoolvg2 will be the new default storage pool.

Figure 3-6 Assign as Default Storage Pool


Virtual disk/logical volume creation

Logical volumes belong to a storage pool and are also known as virtual disks. Logical volumes are used to provide disk space to LPARs but are not assigned to LPARs when you create them.

They can be created in several ways, depending on the menu that is in use:

� During LPAR creation: A logical volume is created in the default storage pool and assigned to the partition.

� After or before LPAR creation: A virtual disk is not assigned to any partition and is created in the default storage pool.

The following steps describe how to create a new logical volume:

1. Under the Virtual Storage Management menu in the navigation area, click Create Virtual Storage.

2. Click Create Virtual Disk in the work area.

Figure 3-7 Create Virtual Storage


3. Enter a name for the virtual disk, select a storage pool name from the drop-down list, and add a size for the virtual disk, as shown in Figure 3-8.

4. Click OK to create the virtual disk.

Figure 3-8 Create Virtual Disk: name and size

In order to view your new virtual disk/logical volume and use it, select the View/Modify Virtual Storage link under the Virtual Storage Management menu in the navigation area. The list of available virtual disks is displayed in the work area.
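The same result can presumably be obtained from the CLI; the following sketch assumes the VIOS mkbdsp command, which creates a backing logical volume in a pool and maps it to a client's virtual SCSI server adapter (the pool, size, and device names are illustrative and should be verified on your VIOS level):

$ mkbdsp -sp datapoolvg2 10g -bd vdisk1 -vadapter vhost0   # 10 GB virtual disk mapped to one client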

3.2.3 Create logical partitions

A logical partition is a set of resources: processors, memory, and I/O devices. Each resource assigned to an LPAR is allocated regardless of whether the LPAR is running or not. The IVM does not allow the overcommitment of resources.

The following steps describe how to create an LPAR:

1. Under the Partition Management menu in the navigation area, click Create Partitions, and then click Start Wizard in the work area.


2. Type a name for the new partition, as shown in Figure 3-9. Click Next.

Figure 3-9 Create Partition: Name

3. Enter the amount of memory needed, as shown in Figure 3-10. Click Next.

Figure 3-10 Create Partition: (assigned) Memory


4. Select the number of processors needed and choose a processing mode, as shown in Figure 3-11. In shared mode, each virtual processor uses 0.1 processing units. Click Next.

Figure 3-11 Create Partition: Processors (and Processing Mode)

5. Each partition has two virtual Ethernet adapters that can be configured to one of the four available virtual Ethernets. In Figure 3-12, adapter 1 uses virtual Ethernet ID 1.

The Virtual Ethernet Bridge Overview section of the panel shows on which physical network interface every virtual network is bridged. In the figure, a virtual Ethernet bridge has been created. This procedure is described in 4.1.1, “Ethernet bridging” on page 72. The bridge enables the partition to connect to the physical network. Click Next.

Figure 3-12 Create Partition: Virtual Ethernet


6. Select Assign existing virtual disks and physical volumes, as shown in Figure 3-13.

You can also let the IVM create a virtual disk for you by selecting Create virtual disk, but be aware that the virtual disk will be created in the default storage pool. To create storage pool and virtual disks or change the default storage pool, refer to 3.2.2, “Storage pool disk management” on page 39. Click Next.

Figure 3-13 Create Partition: Storage Type

7. Select needed virtual disks from the list, as shown in Figure 3-14. Click Next.

Figure 3-14 Create Partition: Storage


8. Select needed optical devices, as shown in Figure 3-15. Click Next.

Figure 3-15 Create Partition: Optical (Devices)

9. A summary of the partition to be created appears, as shown in Figure 3-16. Click Finish to create the LPAR.

Figure 3-16 Create Partition: Summary

To view the new LPAR and use it, under the Partition Management menu in the navigation area, click the View/Modify Partitions link. A list opens in the work area.


3.2.4 Create an LPAR based on an existing partition

The Integrated Virtualization Manager can be used to create a new LPAR that is based on an existing partition on your managed system. Any role other than View Only can be used to perform this task. This task enables you to create a new LPAR with the same properties as the selected existing partition, with the exception of ID, name, physical volumes, and optical devices.

To create an LPAR based on an existing partition, perform the following steps:

1. Under Partition Management, click View/Modify Partitions. The View/Modify Partitions panel opens.

2. Select the LPAR that you want to use as a basis for the new partition.

3. In the Tasks section, click Create based on as shown in Figure 3-17.

Figure 3-17 Create based on selection from the Tasks menu


4. The Create Based On panel opens (Figure 3-18). Enter the name of the new partition, and click OK.

Figure 3-18 Create based on - name of the new LPAR

5. The View/Modify Partitions panel opens, showing the new partition (Figure 3-19).

Figure 3-19 Create based on - New logical partition has been created

The virtual disks that are created have the same size and are in the same storage pool as the selected partition. However, the data in these disks is not cloned.


3.2.5 Shutting down logical partitions

The Integrated Virtualization Manager provides the following types of shutdown options for LPARs:

� Operating System (recommended)
� Delayed
� Immediate

The recommended shutdown method is to use the client operating system shutdown command. The delayed shutdown is handled gracefully by AIX 5L (for a Linux LPAR, however, you have to load a special RPM: the Linux on POWER Service and Productivity toolkit). The immediate shutdown method should be used as a last resort because it causes an abnormal shutdown that might result in data loss. (It is equivalent to pulling the power cord.)

If you choose not to use the operating system shutdown method, be aware of these considerations:

� Shutting down the LPARs is equivalent to pressing and holding the white control-panel power button on a server that is not partitioned.

� Use this procedure only if you cannot successfully shut down the LPARs through operating system commands. When you use this procedure to shut down the selected LPARs, the LPARs wait a predetermined amount of time to shut down. This gives the LPARs time to end jobs and write data to disks. If the LPAR is unable to shut down within the predetermined amount of time, it ends abnormally, and the next restart might take a long time.

To shut down an LPAR:

1. From the Partition Management menu, click View/Modify Partitions. The View/Modify Partitions panel opens.

2. Select the LPAR that you want to shut down.

3. From the Tasks menu, click Shutdown. The Shutdown Partitions panel opens (Figure 3-20).

Figure 3-20 Shutdown Partitions: new options


4. Select the shutdown type.

5. Optionally, select Restart after the shutdown completes if you want the LPAR to start immediately after it shuts down.

6. Click OK to shut down the partition. The View/Modify Partitions panel is displayed, and the partition is shut down.
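For scripted environments, a hedged CLI equivalent uses the chsysstate command shown in 3.3.2, “Power on a logical partition” on page 55; the --immed and --restart flags follow the common chsysstate syntax and should be verified on your VIOS level (LPAR2 is a hypothetical partition name):

$ chsysstate -r lpar -o shutdown -n LPAR2             # delayed shutdown
$ chsysstate -r lpar -o shutdown -n LPAR2 --immed     # immediate shutdown (last resort)
$ chsysstate -r lpar -o shutdown -n LPAR2 --restart   # restart after the shutdown completes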

3.2.6 Monitoring tasks

In Virtual I/O Server Version 1.2, the GUI had no support for long-running tasks. If the user navigated away from a page after starting a task, no status was received on the completion of the task. As of Virtual I/O Server 1.3.0.0, you can view and monitor the most recent 40 tasks that have run on the Integrated Virtualization Manager.

All actions that a user performs in the GUI become tasks. All tasks are audited at the task level. Each task can have subtasks, and the status of each subtask is managed. When performing a task, the user gets a Busy dialog indicating that the task is currently running. You can navigate away from the page and perform other tasks.

To view the properties of the tasks, do the following:

1. In the Service Management menu, click Monitor Tasks. The Monitor Tasks panel opens.

2. Select the task for which you want to view the properties (1 in Figure 3-21 on page 53).

3. In the Tasks menu, click Properties (2 in Figure 3-21 on page 53). The Task Properties window opens.

Note: If the LPAR does not have an RMC connection, the Operating System shutdown type is disabled, and the Delayed type is the default selection. When the IVM/VIOS logical partition is selected, the only available option is OS shutdown. In addition, a warning at the top of the panel indicates that shutting down the IVM/VIOS LPAR will affect the other running LPARs.


Figure 3-21 Monitor Tasks: the last 40 tasks

4. Click Cancel to close the Task Properties window. The Monitor Tasks panel appears.

You can also just click the hyperlink of the task from which you want to view the properties (arrow without number in Figure 3-21). This eliminates steps 2 and 3. See more about hyperlinks in the following section.

3.2.7 Hyperlinks for object properties

Starting with Virtual I/O Server Version 1.3.0.0, there are two ways to access the properties of an object. Previously, you had to select the object and then select the Properties task. Because this is a frequent operation, a new method to quickly access the properties of an object with one click has been introduced: hyperlinks. In a list view, the object in question has a hyperlink (typically on the Name) that, when selected, displays the properties sheet for the object. It behaves exactly as the Select → Properties method, but it requires only one click. Even if another object is selected, clicking the hyperlink of an object always brings up that object's properties, as shown in Figure 3-22 on page 54.


Figure 3-22 Hyperlinks

3.3 IVM command line interface

The text-based console with the command line interface (CLI) is accessible through an ASCII terminal attached to a system port or through network connectivity using the telnet command. The IP address is the same as the one used to connect to the GUI, and it was defined during the installation process. The CLI requires more experience to master than the GUI, but it offers more possibilities to tune the partition definitions, and it can be automated using scripts.

3.3.1 Update the logical partition’s profile

Example 3-1 shows how to change the name of an LPAR with the chsyscfg command.

Example 3-1 Profile update

$ lssyscfg -r prof --filter "lpar_names=LPAR2" -F lpar_name
LPAR2

$ chsyscfg -r prof -i "lpar_name=LPAR2,new_name=LPAR2_new_name"

$ lssyscfg -r prof --filter "lpar_names=LPAR2_new_name" -F lpar_name
LPAR2_new_name


3.3.2 Power on a logical partition

Example 3-2 shows how to start an LPAR using the chsysstate command. To follow the boot process, use the lsrefcode command.

Example 3-2 Power on a partition

$ chsysstate -o on -r lpar -n LPAR2
$ lsrefcode -r lpar --filter "lpar_names=LPAR2" -F refcode
CA00E1F1
$ lsrefcode -r lpar --filter "lpar_names=LPAR2" -F refcode
CA00E14D

3.3.3 Install an operating system on a logical partition

The operating system installation process is similar to the process for stand-alone systems. The main steps are:

1. Log in to the IVM partition.

2. Open a virtual terminal for the LPAR to be installed with the mkvt command. You have to specify the ID of the LPAR, as shown in Example 3-3.

Example 3-3 Open a virtual terminal

$ mkvt -id 3

AIX Version 5
(C) Copyrights by IBM and by others 1982, 2005.
Console login:

3. Start the LPAR in SMS mode. You can change the boot mode in the properties of the partition’s profile before starting it, or enter 1 on the virtual terminal at the very beginning of the boot process, as shown in Example 3-4.

Example 3-4 Boot display

IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM

1 = SMS Menu                          5 = Default Boot List
8 = Open Firmware Prompt              6 = Stored Boot List

Memory Keyboard Network SCSI Speaker

4. Select a boot device, such as virtual optical device, or a network for the Network Installation Management (NIM) installation.

5. Boot the LPAR.

6. Select the system console and the language.


7. Select the disks to be installed.

The installation of the operating system starts. Proceed as directed by your operating system installation instructions.

3.4 Optical device sharing

You can dynamically add, move, or remove optical devices to or from any LPAR, regardless of whether the LPAR is running. With VIOS 1.3.0.0, the Storage Management navigation area has changed and is now called Virtual Storage Management.

The following steps describe how to change the assignment of an optical device:

1. Under the Virtual Storage Management menu in the navigation area, click View/Modify Virtual Storage, and select the Optical Devices tab in the work area.

2. Select the optical device you want to modify, as shown in Figure 3-23.

3. Click Modify partition assignment in the tasks area.

Figure 3-23 Optical Devices selection


4. Select the name of the LPAR to which you want to assign the optical device, as shown in Figure 3-24. You can also remove the optical device from the current LPAR by selecting None.

Figure 3-24 Optical Device Partition Assignment

5. Click OK.

6. If you move or remove an optical device from a running LPAR, you are prompted to confirm the forced removal before the optical device is removed. Because the optical device will become unavailable, log in to the LPAR and remove the optical device before going further. On AIX 5L, use the rmdev command. Then press the Eject button; if the drawer opens, this is an indication that the device is no longer mounted.

7. Click OK.

8. The new list of optical devices is displayed with the changes you made.

9. Log in to the related LPAR and use the appropriate command to discover the new optical device. On AIX 5L, use the cfgmgr command.

3.5 LPAR configuration changes

As needed, you might want to modify the properties of the IVM or the LPARs. Prior to Virtual I/O Server Version 1.3.0.0, some updates could be done dynamically (in the case of the VIOS LPAR) or only statically (in the case of the client LPARs). As of Version 1.3.0.0, all LPARs support dynamic reconfiguration.

3.5.1 Dynamic LPAR operations on an IVM partition

Resources such as processors and memory can be dynamically allocated or released on the IVM partition. You can run those operations using either the GUI or the CLI.


Dynamic LPAR operation on memory using the GUI

The following steps describe how to increase memory size dynamically for the IVM partition:

1. Under Partition Management in the navigation area, click View/Modify Partitions.

2. Select the IVM partition, as shown in Figure 3-25.

Figure 3-25 View/Modify Partitions: Dynamic LPAR memory operation

3. Click Properties in the task area (or use the one-click hyperlink method explained in 3.2.7, “Hyperlinks for object properties” on page 53).


4. Modify the pending values as needed. In Figure 3-26, the assigned memory is increased by 512 MB. Click OK.

Figure 3-26 Partition Properties: VIOS - Memory - Increase memory size

Memory is not added or removed in a single operation, but in 16 MB blocks. You can monitor the status by looking at partition properties.
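You can also watch the convergence from the CLI. This sketch assumes the curr_mem and pend_mem attribute names (values in MB), modeled on the lshwres attributes used elsewhere in this chapter, and the output line is illustrative:

$ lshwres -r mem --level lpar --filter "lpar_names=VIOS" -F curr_mem,pend_mem
1024,1536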

Dynamic LPAR operation on virtual processors using the CLI

Log in to the IVM using the CLI and run your dynamic LPAR operation. Example 3-5 shows how to add 0.1 processing units dynamically to the IVM partition using the chsyscfg command.

Example 3-5 Dynamic LPAR virtual processor operation

$ lshwres -r proc --level lpar --filter "lpar_names=VIOS" -F curr_proc_units
0.20
$ chsyscfg -r prof -i lpar_name=VIOS,desired_proc_units=0.3
$ lshwres -r proc --level lpar --filter "lpar_names=VIOS" -F curr_proc_units
0.30

3.5.2 LPAR resources management

Virtual I/O Server Version 1.3.0.0 also allows dynamic operations on resources such as processors and memory on a client partition. To accomplish this goal, the concept of a dynamic LPAR Manager is introduced. This is a daemon task that runs in the Virtual I/O Server LPAR, monitors the pending and runtime values for processing and memory resources, and drives the runtime and pending values into sync.

To perform a dynamic LPAR operation, the user simply has to modify the pending value, and the DLPAR Manager will do the appropriate operations on the LPAR to complete the runtime change. If the runtime and pending values are in sync, then the DLPAR Manager will block until another configuration change is made. It will not poll when in this state. The DLPAR Manager (dlparmgr) manages the Virtual I/O Server LPAR directly, and the client LPARs are managed through the RMC daemon. Dynamic operations on the disks, optical devices, partition name, and boot mode were already allowed in the previous version.


When the dlparmgr encounters an error, the error is written to the dlparmgr status log, which can be read with the lssvcevents -t dlpar command. This log contains the last drmgr command run for each object type, for each LPAR, and includes any responses from the drmgr command. The user is not notified directly of these errors; their indication is that the pending values are out of sync. The GUI enables you to see the state and gives you more information about the result of the operation. (See Figure 3-31 on page 64.)

All chsyscfg command functions continue to work as they do today, even if the partition does not have dynamic LPAR support. The GUI, however, selectively enables or disables functions based on the capabilities.

The dynamic LPAR capabilities for each logical partition will be returned as an attribute on the lssyscfg -r lpar command. This allows the GUI to selectively enable/disable dynamic LPAR based on the current capabilities of the logical partition.
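A hedged example of querying those capabilities from the CLI; the attribute names dlpar_mem_capable and dlpar_proc_capable are assumptions modeled on the HMC version of lssyscfg, so confirm them against your IVM level:

$ lssyscfg -r lpar -F name,dlpar_mem_capable,dlpar_proc_capable   # one line per LPAR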

Setup of dynamic LPAR

To enable dynamic LPAR operations, you have to enable RMC communication between the IVM LPAR and the client LPARs:

1. TCP/IP must be configured in the IVM LPAR; otherwise, the IVM browser interface cannot be used.

2. The client LPARs use virtual Ethernet only (there are no physical adapters in the client LPARs under IVM). For the IVM LPAR, select which physical interface will be used to provide the bridge between the internal VLAN and the real external network, in this case VLAN-1 and en0. The client LPARs can have a TCP/IP configuration on that same subnet or on another one, depending on the external network or switch.

3. If multiple interfaces are configured in the Virtual I/O Server (and thus multiple TCP/IP addresses), the LPARs must be on the same network so that they can ping each other. In the example, only one Ethernet interface is configured, so it is automatically the default, and the IP address is grayed-out because there is no need to make any change. If there are multiple interfaces, the one shown here as the default is the one listed first when running the lstcpip -interfaces command, which gives an overview of all available interfaces. Otherwise, uncheck Default and enter the IP address that should be used. This address will be used by the client LPARs to communicate with the IVM/VIOS partition.

4. The client LPAR must have TCP/IP configured (on the same subnet that we selected for IVM-to-Client communication). We then need to wait 2 or 3 minutes while the RMC subsystem completes the handshake between the client LPAR and the IVM LPAR. Viewing client LPAR Properties should then show the IP address, and communication status equal to Active. Clicking Retrieve Capabilities verifies that dynamic LPAR operations are possible.

Note: If a change is made to a pending value of an LPAR in a workload management group with another LPAR, the workload management software must be aware of this change and dynamically adapt to it; otherwise, manual intervention is required. This only applies to processors and memory.

Note: In Version 1.3.0.0, the IVM interface now provides a menu for configuring additional TCP/IP interfaces in the IVM LPAR.
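If dynamic LPAR still does not become available, a quick check from the VIOS command line is to query the RMC state of each partition. This is a sketch only; the rmc_state and rmc_ipaddr attribute names are assumptions that may vary by release, and the partition name and address shown are hypothetical:

$ lssyscfg -r lpar -F name,rmc_state,rmc_ipaddr
LPAR1,active,9.3.5.124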


GUI changes for dynamic LPAR

The graphical user interface for performing dynamic LPAR operations is the same as the interface for performing static operations. Specifically, the user simply changes the pending value. Because the chsyscfg command can be used for dynamic LPAR, the GUI can use the same commands for both static and dynamic reconfiguration. The GUI disables changes for LPARs that do not support dynamic LPAR. For LPARs that support only certain dynamic LPAR operations (for example, processing, but not memory), the unsupported operation is grayed-out appropriately, and an inline message is displayed indicating why the value cannot be changed.

The GUI will also display a details link next to the warning exclamation point when resources are out of sync. This will yield a pop-up window with the last run status of the dlparmgr for the specified LPAR and resource.

Partition Properties changes for dynamic LPAR

A new section was added to the General panel of the properties sheet that indicates the RMC connection state, the IP address of the LPAR, and the dynamic LPAR capabilities of the logical (client) partition. Because retrieving these dynamic LPAR capabilities can be a time-consuming operation (generally less than one second, but up to 90 seconds with a failed network connection), the capabilities initially show up as Unknown (if the partition communication state is Active). Clicking Retrieve Capabilities retrieves them and updates the fields. (See Figure 3-27 on page 62.) Table 3-1 gives you more information about the different fields of Figure 3-27 on page 62.

Table 3-1 Field name values

Partition host name or IP address
  The IP address (or DNS host name) of the LPAR. This may be blank if RMC is not configured.

Partition communication state
  Inactive, Active, or Not Configured. The name of this field matches the "Partition communication" field in the TCP/IP settings.

Memory dynamic LPAR capable
  Yes, No, or Unknown. This always defaults to No if the partition could not be queried successfully (RMC state is not Active). No Linux LPARs are currently memory dynamic LPAR capable. Unknown is the default state if communication is Active but the user has not selected Retrieve Capabilities.

Processing dynamic LPAR capable
  Yes, No, or Unknown. If RMC is active, this is nearly always Yes, but there is no guarantee. Unknown is the default state if communication is Active but the user has not selected Retrieve Capabilities.

Retrieve Capabilities
  Button that is visible if the partition communication state is Active and the user has not previously selected the button on this properties sheet.

Note: The following site contains the RMC and RSCT requirements for dynamic LPAR, including the additional filesets that have to be installed on Linux clients:

https://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html


Figure 3-27 Dynamic LPAR properties

Memory Tab

If the LPAR is powered on and memory is dynamic LPAR capable (see the capabilities on the General tab), then the Pending assigned value will be enabled. (The minimum and maximum values are still disabled.) The user may change this value and select OK. The change will take effect immediately for the pending value. The dlparmgr daemon will then work to bring the pending and current (runtime) values into sync. If these values are not in sync, the user will see the Warning icon as in Figure 3-30 on page 64. Figure 3-28 on page 63 and Figure 3-29 on page 63 show a change.


Figure 3-28 Partition Properties: Memory tab

Figure 3-29 Dynamic LPAR of memory: removal of 256 MB of memory


Figure 3-30 Warning in work area because pending and current values are not in sync

Click the details hyperlink for more information about the resource synchronization, shown in Figure 3-31.

Figure 3-31 Resource synchronization details


Table 3-2 provides possible memory field values.

Table 3-2 Possible field modifications: memory

Capability setting: Yes
  Enabled fields: Assigned Memory
  Introduction text: Modify the settings by changing the pending values. Changes will be applied immediately, but synchronizing the current and pending values might take some time.

Capability setting: No
  Enabled fields: None
  Introduction text: Modify the settings by changing the pending values. This LPAR does not currently support modifying these values while running, so pending values can be edited only when the LPAR is powered off.

Capability setting: Unknown
  Enabled fields: Assigned Memory
  Introduction text: Modify the settings by changing the pending values. This LPAR does not currently support modifying these values while running, so pending values can be edited only when the LPAR is powered off.

Note: The minimum and maximum memory values are enabled for the VIOS/IVM LPAR at all times.

Processing tab

If the LPAR is powered on and processor dynamic LPAR capable (see the capabilities on the General tab), then the Pending assigned values will be enabled. The minimum and maximum values (processing units as well as virtual processors) are still disabled. The user may change these values and select OK. The change will take effect immediately for the pending value. The dlparmgr daemon will then work to bring the pending and current (runtime) values into sync. If these values are not in sync, the user will see the Warning icon as in the memory panel. As with the Memory panel, the same rules apply with respect to the enabled fields and introductory text for the various capability options.

Dynamic LPAR status

When a resource is not in sync, the warning icon appears on the View/Modify Partitions page with a Details link. Selecting the Details link yields the synchronization details pop-up window. (See Figure 3-31 on page 64.) In the first IVM release, when a dynamic resource is not in sync, a warning icon appears in the main partition list view next to the Memory or Processors resource. This icon also appears with text in the partition properties sheet.

A details link is now added next to the icon. This yields a pop-up window showing the current dynamic LPAR status of the logical partition. All resources are shown in this window. If a resource is out of sync, the reason will be provided. In addition, details about the previous two drmgr commands that were run against the LPAR in an attempt to synchronize the pending and current (runtime) values will be shown.

The Reason field generally matches the reason from the latest command run. However, if the user modifies minimum or maximum values without changing the assigned value, the dlparmgr considers the assigned values in sync, but the warning icon is still present.

Dynamic LPAR operation on virtual disks using the GUI

The following steps describe how to assign virtual disks to a partition using the GUI:

1. Under the Virtual Storage Management menu in the navigation area, click View/Modify Virtual Storage, then click the Virtual Disks tab in the work area. Select the needed virtual disks, as shown in Figure 3-32 on page 66.



Figure 3-32 View/Modify Virtual Storage: Virtual disks selection

2. Click Modify partition assignment in the task area.

3. Select the partition to which you want to assign the virtual disks, as shown in Figure 3-33, and click OK to validate the virtual disk partition assignment.

Figure 3-33 Modify Virtual Disk Partition Assignment


4. Log in to the related LPAR and discover the new disks. On AIX 5L, use the cfgmgr command. Example 3-6 shows how the partition discovers two new virtual disks on AIX 5L.

Example 3-6 Virtual disk discovery

# lsdev -Cc disk
hdisk0 Available  Virtual SCSI Disk Drive

# cfgmgr

# lsdev -Cc disk
hdisk0 Available  Virtual SCSI Disk Drive
hdisk1 Available  Virtual SCSI Disk Drive

You can also assign virtual disks by editing the properties of the LPAR.

Operation on partition definitions using the CLI

The command line interface for performing dynamic LPAR operations is the same as on the HMC. The DLPAR Manager keys off of differences between the runtime and pending values. The chsyscfg command is used for dynamic configuration changes because it updates the pending values.

You can perform the same operations with the CLI as with the GUI. Example 3-7 shows how to decrease processing units for an LPAR using the chsyscfg command.

Example 3-7 Decrease processing units of LPAR1

$ lssyscfg -r prof --filter "lpar_names=LPAR1" -F desired_proc_units
0.40

$ chsyscfg -r prof -i "lpar_name=LPAR1,desired_proc_units=0.3"

$ lssyscfg -r prof --filter "lpar_names=LPAR1" -F desired_proc_units
0.30

A warning icon with an exclamation point inside it is displayed in the View/Modify Partitions screen if current and pending values are not synchronized.

Example 3-8 shows an increase of memory operation.

Example 3-8 Increase memory of LPAR1 by 256 MB

$ lssyscfg -r prof --filter "lpar_names=LPAR1" -F desired_mem
512
$ chsyscfg -r prof -i "lpar_name=LPAR1,desired_mem+=256"

$ lssyscfg -r prof --filter "lpar_names=LPAR1" -F desired_mem
768

3.5.3 Adding a client LPAR to the partition workload group

If you want to manage logical partition resources using a workload management tool, you must add the client LPAR to the partition workload group.

A partition workload group identifies a set of LPARs that reside on the same physical system. Some workload management tools require that additional software be installed on the LPARs to monitor their workload, manage their resources, or both. Workload management tools use partition workload groups to identify which LPARs they can manage. For example, Enterprise Workload Manager (EWLM) can dynamically and automatically redistribute processing capacity within a partition workload group to satisfy workload performance goals. EWLM adjusts processing capacity based on calculations that compare the actual performance of work processed by the partition workload group to the business goals defined for the work.

Workload management tools use dynamic LPAR to make resource adjustments based on performance goals. Therefore, each LPAR in the partition workload group must support dynamic LPAR. Verify that the LPAR that you want to add to the partition workload group supports dynamic LPAR for the resource type that your workload management tool adjusts, as shown in Table 3-3.

It is not required that all LPARs on a system participate in a partition workload group. Workload management tools manage the resources of only those LPARs that are assigned to a partition workload group. Workload management tools can monitor the work of an LPAR that is not assigned to a partition workload group, but they cannot manage the LPAR’s resources.

Table 3-3 Dynamic LPAR support

Logical partition type: AIX
  Supports processor dynamic LPAR: Yes
  Supports memory dynamic LPAR: Yes

Logical partition type: Linux
  Supports processor dynamic LPAR: Yes
  Supports memory dynamic LPAR: Yes/No (SLES 10 and RHEL 5 support memory add, but not memory removal, at this time)

Note: Systems managed by the Integrated Virtualization Manager can have only one partition workload group per physical server.

For example, the partition management function of EWLM adjusts processor resources based on workload performance goals. Thus, EWLM can adjust the processing capacity for AIX and Linux LPARs.

The following recommendations are for workload management:

- Do not add the management partition to the partition workload group. To manage LPAR resources, workload management tools often require that you install some type of management or agent software on the LPARs. To avoid creating an unsupported environment, do not install additional software on the management partition.

- The dynamic LPAR support listed in the previous table is not the same as the dynamic LPAR capabilities that are in the partition properties for an LPAR. The dynamic LPAR support listed in the previous table reflects what each operating system supports in regard to dynamic LPAR functions. The dynamic LPAR capabilities that are shown in the partition properties for an LPAR reflect a combination of:

– A Resource Monitoring and Control (RMC) connection between the management partition and the client LPAR

– The operating system’s support of dynamic LPAR (see Table 3-3)

For example, an AIX client LPAR does not have an RMC connection to the management partition, but AIX supports both processor and memory dynamic LPAR. In this situation, the dynamic LPAR capabilities shown in the partition properties for the AIX LPAR indicate that the AIX LPAR is not capable of processor or memory dynamic LPAR. However, because AIX supports processor and memory dynamic LPAR, a workload management tool can dynamically manage its processor and memory resources. Workload management tools are not dependent on RMC connections to dynamically manage LPAR resources.

- If an LPAR is part of the partition workload group, you cannot dynamically manage its resources from the Integrated Virtualization Manager because the workload management tool is in control of dynamic resource management. Not all workload management tools dynamically manage both processor and memory resources. When you implement a workload management tool that manages only one resource type, you limit your ability to dynamically manage the other resource type. For example, EWLM dynamically manages processor resources, but not memory. AIX supports both processor and memory dynamic LPAR. EWLM controls dynamic resource management of both processor resources and memory for the AIX LPAR, but EWLM does not dynamically manage memory. Because EWLM has control of dynamic resource management, you cannot dynamically manage memory for the AIX LPAR from the Integrated Virtualization Manager.

To add an LPAR to the partition workload group, complete the following steps:

1. Select the logical partition that you want to include in the partition workload group and click Properties. The Partition Properties window opens (Figure 3-34).

2. In the Settings section, select Partition workload group participant. Click OK.

Figure 3-34 Partition Properties: Selecting Partition Workload Group


Chapter 4. Advanced configuration

Logical partitions require an available connection to the network and storage. The Integrated Virtualization Manager (IVM) provides several solutions using either the Web graphical interface or the command line interface.

This chapter describes the following advanced configurations on networking, storage management, and security:

- Virtual Ethernet bridging
- Ethernet link aggregation
- Disk space management
- Disk data protection
- Virtual I/O Server firewall
- SSH support


4.1 Network management

All physical Ethernet adapters installed in the system are managed by the IVM. Logical partitions can have at most two virtual Ethernet adapters, each connected to one of the four virtual networks that are present in the system.

In order to allow partitions to access an external corporate network, every virtual network can be bridged to a physical adapter. A separate physical adapter is required for each bridged network. IVM provides a Web interface to configure bridging.

When higher throughput and better link availability is required, Ethernet link aggregation is also available using the VIOS capabilities.

4.1.1 Ethernet bridging

Under Virtual Ethernet Management in the navigation area, click View/Modify Virtual Ethernet. In the work area, the virtual Ethernet panel shows which partitions are connected to the four available networks. Go to the Virtual Ethernet Bridge tab to configure bridging, as shown in Figure 4-1. For each virtual Ethernet, you can select one physical device. Use the drop-down menu to select the physical Ethernet and click Apply to create the bridging device.

Figure 4-1 View/Modify Virtual Ethernet: Virtual Ethernet Bridge creation

The Web GUI hides the details of the network configuration. Example 4-1 on page 73 describes the VIOS configuration before the creation of the bridge. For each physical and virtual network adapter, an Ethernet device is configured. The IVM is connected to a physical network and four virtual network adapters are available.

Note: If the physical Ethernet that is selected for bridging is already configured with an IP address using the command line interface, all connections to that address will be reset.


Example 4-1 VIOS Ethernet adapters with no bridging

$ lsdev | grep ^en
en0  Available  Standard Ethernet Network Interface
en1  Defined    Standard Ethernet Network Interface
en2  Defined    Standard Ethernet Network Interface
en3  Defined    Standard Ethernet Network Interface
en4  Defined    Standard Ethernet Network Interface
en5  Defined    Standard Ethernet Network Interface
en6  Defined    Standard Ethernet Network Interface
en7  Defined    Standard Ethernet Network Interface
ent0 Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent1 Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2 Available  10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent3 Available  10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent4 Available  Virtual I/O Ethernet Adapter (l-lan)
ent5 Available  Virtual I/O Ethernet Adapter (l-lan)
ent6 Available  Virtual I/O Ethernet Adapter (l-lan)
ent7 Available  Virtual I/O Ethernet Adapter (l-lan)

$ lstcpip

Name  Mtu    Network  Address          Ipkts  Ierrs  Opkts  Oerrs  Coll
en0   1500   link#2   0.2.55.2f.eb.36    269      0    136      4     0
en0   1500   9.3.5    ivmopenp           269      0    136      4     0
lo0   16896  link#1                       50      0     72      0     0
lo0   16896  127      loopback            50      0     72      0     0
lo0   16896  ::1                          50      0     72      0     0

When a virtual Ethernet bridge is created, a new shared Ethernet adapter (SEA) is defined, binding the physical device with the virtual device. If a network interface was configured on the physical adapter, the IP address is migrated to the new SEA.

Example 4-2 shows the result of bridging virtual network 1 with the physical adapter ent0 when the IVM is using the network interface en0. A new ent8 SEA device is created, and the IP address of the IVM is migrated to the en8 interface. Because of the migration, all active network connections on en0 are reset.

Example 4-2 Shared Ethernet adapter configuration

$ lsdev | grep ^en
en0  Available  Standard Ethernet Network Interface
en1  Defined    Standard Ethernet Network Interface
en2  Defined    Standard Ethernet Network Interface
en3  Defined    Standard Ethernet Network Interface
en4  Defined    Standard Ethernet Network Interface
en5  Defined    Standard Ethernet Network Interface
en6  Defined    Standard Ethernet Network Interface
en7  Defined    Standard Ethernet Network Interface
en8  Available  Standard Ethernet Network Interface
ent0 Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent1 Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2 Available  10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent3 Available  10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent4 Available  Virtual I/O Ethernet Adapter (l-lan)
ent5 Available  Virtual I/O Ethernet Adapter (l-lan)
ent6 Available  Virtual I/O Ethernet Adapter (l-lan)
ent7 Available  Virtual I/O Ethernet Adapter (l-lan)
ent8 Available  Shared Ethernet Adapter

$ lstcpip

Name  Mtu    Network  Address          Ipkts  Ierrs  Opkts  Oerrs  Coll
en8   1500   link#3   0.2.55.2f.eb.36    336      0    212      0     0
en8   1500   9.3.5    ivmopenp           336      0    212      0     0
et8*  1492   link#4   0.2.55.2f.eb.36      0      0      0      0     0
et8*  1492   0        0.0.0.0              0      0      0      0     0
lo0   16896  link#1                       50      0     75      0     0
lo0   16896  127      loopback            50      0     75      0     0
lo0   16896  ::1                          50      0     75      0     0
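The same bridge can also be created manually from the VIOS command line. The following is a minimal sketch, assuming (as in Example 4-1) that ent4 is the virtual Ethernet adapter connected to virtual network 1, and using the -migrate flag from the mkvdev syntax shown later in this chapter to move the en0 IP configuration onto the new SEA interface:

$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -migrate
ent8 Available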

4.1.2 Ethernet link aggregation

Link aggregation is a network technology that enables several Ethernet adapters to be joined together to form a single virtual Ethernet device. This solution can be used to overcome the bandwidth limitation of a single network adapter and to avoid bottlenecks when sharing one network adapter among many client partitions.

The aggregated device also provides high-availability capabilities. If a physical adapter fails, the packets are automatically sent on the other available adapters without disruption to existing user connections. The adapter is automatically returned to service on the link aggregation when it recovers.

Link aggregation is an expert-level configuration and it is not managed by the IVM GUI. It is defined using the VIOS functions with the command line, but the IVM is capable of using the link aggregation for network configuration after it is defined.

To create the link aggregation, use the mkvdev command with the following syntax:

mkvdev -lnagg TargetAdapter ... [-attr Attribute=Value ...]

In the environment shown in Example 4-3, it is possible to aggregate the two physical Ethernet adapters ent2 and ent3. A new virtual adapter ent9 is created, as described in Example 4-3.

Example 4-3 Ethernet aggregation creation

$ mkvdev -lnagg ent2 ent3
ent9 Available
en9
et9

$ lsdev -dev ent9
name  status     description
ent9  Available  EtherChannel / IEEE 802.3ad Link Aggregation

$ lsdev -dev en9
name  status   description
en9   Defined  Standard Ethernet Network Interface


Aggregated devices can be used to define an SEA. The SEA must be created using the mkvdev command with the following syntax:

mkvdev -sea TargetDevice -vadapter VirtualEthernetAdapter ...
       -default DefaultVirtualEthernetAdapter
       -defaultid SEADefaultPVID [-attr Attributes=Value ...]
       [-migrate]

Figure 4-2 shows the bridging of virtual network 4 with SEA ent9. The mkvdev command requires the identification of the virtual Ethernet adapter that is connected to virtual network 4.

The lssyscfg command with the parameter lpar_names set to the VIOS partition’s name provides the list of virtual adapters defined for the VIOS. The adapters are separated by commas, and their parameters are separated by slashes. The third parameter is the network number (4 in the example) and the first is the slot identifier (6 in the example).

The lsdev command with the -vpd flag provides the physical location of virtual Ethernet adapters that contains the letter C followed by its slot number. In the example, ent7 is the virtual Ethernet adapter connected to network 4.

The created ent10 adapter is the new SEA.

Figure 4-2 Manual creation of SEA using an Ethernet link aggregation
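Because Figure 4-2 is a screenshot, the command it depicts is not reproduced in the text. The following is a hedged reconstruction based on the description above, assuming ent7 is the virtual Ethernet adapter connected to network 4 and that the new SEA is reported as ent10:

$ mkvdev -sea ent9 -vadapter ent7 -default ent7 -defaultid 4
ent10 Available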

After the SEA is created using the command line, it is available from the IVM panels. It is displayed as a device with no location codes inside the parentheses because it uses a virtual device.


Figure 4-3 shows how IVM represents an SEA created using an Ethernet link aggregation.

Figure 4-3 Virtual Ethernet bridge with link aggregation device

The SEA can be removed using the IVM by selecting None as the physical adapter for the virtual network. When you click Apply, the IVM removes all devices that are related to the SEA, but the link aggregation remains active.

4.2 Storage management

Virtual disks and physical volumes can be assigned to any LPAR, but only to one LPAR at a time. Storage allocation can be changed over time, and the content of the virtual storage is kept. When a virtual disk is created using a logical volume, its size can also be increased.

Data protection against single disk failure is available using software mirroring:

- On the IVM itself, to protect the IVM but not the managed system's data
- Using two virtual disks for each of the managed system's LPARs, to protect their data

4.2.1 Virtual storage assignment to a partition

Unassigned virtual disks and physical volumes can be associated to a running partition. After the operation completes, the LPAR's operating system must issue its device discovery procedure to detect the newly added disk. In an AIX 5L environment, do this by issuing the cfgmgr command.

Before removing a physical disk or a virtual disk from a running partition, the operating system should remove the corresponding disk device because it will become unavailable. In an AIX 5L environment, this is done using the rmdev command.
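For example, a minimal sketch on AIX 5L, assuming hdisk1 is the virtual disk being removed and that it is no longer in use by any volume group:

# rmdev -dl hdisk1
hdisk1 deleted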

On the Web GUI, it is possible to remove a virtual disk or a physical volume from a running LPAR, but a warning sign always appears requiring an additional confirmation. Figure 4-4 on page 77 shows an example of this message.



Figure 4-4 Forced removal of a physical volume

4.2.2 Virtual disk extension

Several options are available to provide additional disk space to an LPAR. The primary solution is to create a new virtual disk or select an entire physical disk and dynamically assign it to a partition. Because this operation can be done while the partition is running, it is preferred. After the partition's operating system issues its own device reconfiguration process, a new virtual SCSI disk is available for use. This disk can be used to extend existing data structures when using AIX 5L or Linux with a logical volume manager.

When disk space is provided to a partition using a virtual disk, a secondary solution is to extend it using the IVM. This operation can be executed when the partition is running, but the virtual disk must be taken offline to activate the change. Disk outages should be scheduled carefully so that they do not affect overall application availability. Consider using this solution when an existing operating system’s volume has to be increased in size and a new virtual SCSI disk cannot be added for this purpose, that is, when using Linux without a logical volume manager.

The following steps describe how to extend a virtual disk:

1. On the operating system, halt any activity on the disk to be extended. If this is not possible, shut down the partition. On AIX 5L, issue the varyoffvg command on the volume group to which the disk belongs.

2. From the Virtual Storage Management menu in the IVM navigation area, click View/Modify Virtual Storage. From the work area, select the virtual disk and click Extend.

Important: We do not recommend virtual disk extension when using AIX 5L, because the same result is achieved by adding a new virtual disk. If the virtual disk is used by a rootvg volume group, it cannot be extended and a new virtual disk must be created.


3. Enter the disk space to be added and click OK. If the virtual disk is owned by a running partition, a warning message opens, as shown in Figure 4-5, and you must select a check box to force the expansion. The additional disk space is allocated to the virtual disk, but it is not available to the operating system.

Figure 4-5 Forced expansion of a virtual disk


4. Under Virtual Storage Management in the IVM navigation area, click View/Modify Virtual Storage. From the work area, select the virtual disk and click Modify partition assignment. Unassign the virtual disk by selecting None in the New partition field. If the disk is owned by a running partition, a warning message opens, as shown in Figure 4-6, and you must select a check box to force the unassignment.

Figure 4-6 Forced unassignment of a virtual disk

5. Execute the same action as in step 4, but assign the virtual disk back to the partition.

6. On the operating system, issue the appropriate procedure to recognize the new disk size. On AIX 5L, issue the varyonvg command on the volume group to which the disk belongs and, as suggested by a warning message, issue the chvg -g command on the volume group to recompute the volume group size.
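As a minimal sketch of the AIX 5L side of this procedure, assuming the virtual disk belongs to a hypothetical volume group named datavg:

# varyoffvg datavg        (before the IVM extend and reassignment steps)
# cfgmgr                  (after the disk is assigned back to the partition)
# varyonvg datavg
# chvg -g datavg          (recompute the volume group size)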

4.2.3 IVM system disk mirroring

In order to prevent an IVM outage due to a system disk failure, make the rootvg storage pool of the VIOS redundant. The default installation of IVM uses only one physical disk.

Disk mirroring on the IVM is an advanced feature that, at the time of writing, is not available on the Web GUI. It can be configured using VIOS capabilities on the command line interface, and only system logical volumes can be mirrored.


The following steps describe how to provide a mirrored configuration for the rootvg storage pool:

1. Use the IVM to add a second disk of a similar size to rootvg. Under Virtual Storage Management in the navigation area, click View/Modify Virtual Storage, then go to the Physical Volumes tab. Select a disk of a similar size that is not assigned to any storage pool. Click Add to storage pool, as shown in Figure 4-7.

Figure 4-7 Add second disk to rootvg

Important: Mirrored logical volumes are not supported as virtual disks. This procedure mirrors all logical volumes defined in rootvg and must not be run if rootvg contains virtual disks.


2. In the Storage Pool field, select rootvg and click OK.

Figure 4-8 Specify addition to storage pool

3. The actual mirroring is done using the VIOS command line. Log in as the padmin user ID and issue the mirrorios command, as shown in Example 4-4. The command asks for confirmation and causes a VIOS reboot to activate the configuration after performing data mirroring.

Example 4-4 rootvg mirroring at command line

$ mirrorios
This command causes a reboot. Continue [y|n]?

y

SHUTDOWN PROGRAM
Fri Oct 06 10:20:20 CDT 2006

Wait for 'Rebooting...' before stopping.

4.2.4 AIX 5L mirroring on the managed system LPARs

The AIX 5L logical volume manager is capable of data mirroring, and this feature can also be used when the partition is provided with twice the number of virtual disks (one for each mirror copy).

An IVM administrator should create the virtual storage that AIX 5L will use for mirroring with careful attention to data placement. The two pieces of virtual storage should not have any physical disks in common, so that a single disk failure cannot affect both mirror copies.


On the IVM, virtual disks are created out of storage pools. They are created using the minimum number of physical disks in the pool. If there is not enough space on a single disk, they can span multiple disks. If the virtual disks are expanded, the same allocation algorithm is applied.

In order to guarantee mirror copy separation, we recommend that you create two storage pools and create one virtual disk from each of them.

After virtual storage is created and made available as an hdisk to AIX 5L, it becomes important to correctly map it. On the IVM, the command line interface is required.

On the IVM, the lsmap command provides all the mapping between each physical and virtual device. For each partition, there is a separate stanza, as shown in Example 4-5. Each logical or physical volume displayed in the IVM GUI is defined as a backing device, and the command provides the virtual storage’s assigned logical unit number (LUN) value.

Example 4-5 IVM command line mapping of virtual storage

$ lsmap -all
...
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost1          U9111.520.10DDEDC-V1-C13                     0x00000003

VTD                   vtscsi1
LUN                   0x8100000000000000
Backing device        aixboot1
Physloc

VTD                   vtscsi2
LUN                   0x8200000000000000
Backing device        extlv
Physloc

VTD                   vtscsi3
LUN                   0x8300000000000000
Backing device        hdisk6
Physloc               U787B.001.DNW108F-P1-T14-L5-L0

VTD                   vtscsi4
LUN                   0x8400000000000000
Backing device        hdisk7
Physloc               U787B.001.DNW108F-P1-T14-L8-L0
...

On AIX 5L, the lscfg command can be used to identify the hdisk using the same LUN used by the IVM. Example 4-6 shows the command output with the 12-digit hexadecimal number representing the virtual disk’s LUN number.

Example 4-6 Identification of AIX 5L virtual SCSI disk’s logical unit number

# lscfg -vpl hdisk0
  hdisk0  U9111.520.10DDEDC-V3-C2-T1-L810000000000  Virtual SCSI Disk Drive

PLATFORM SPECIFIC


Name:  disk
Node:  disk
Device Type:  block
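With both virtual disks mapped to hdisks, the mirror itself is created with standard AIX 5L commands. The following is a minimal sketch for a hypothetical volume group named datavg, assuming hdisk1 is the second virtual disk (for rootvg, a bosboot and a bootlist update would also be required):

# extendvg datavg hdisk1
# mirrorvg datavg hdisk1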

4.2.5 SCSI RAID adapter use

On a system equipped with a SCSI RAID adapter, you can protect data using the adapter's capabilities, avoiding any software mirroring. All physical disks managed by each adapter's SCSI chain can be used to create a single RAID5 array, and IVM can be installed on it.

The adapter must be configured to create the array before installing the IVM. To do this operation, boot the system with the stand-alone diagnostic CD and enter the adapter’s setup menu. After the array is created and has finished formatting, the IVM can be installed.

During the installation, the IVM partition’s rootvg is created on the array. Disk space for LPARs can be provided using logical volumes created on the rootvg storage pool.

Perform adapter maintenance using the IVM command line with the diagmenu command to access diagnostic routines. Example 4-7 shows the menu related to the SCSI RAID adapter. It enables you to modify the array configuration and to handle events such as the replacement of a failing physical disk.

Example 4-7 The diagmenu menu for SCSI RAID adapter

PCI-X SCSI Disk Array Manager

Move cursor to desired item and press Enter.

List PCI-X SCSI Disk Array Configuration
Create an Array Candidate pdisk and Format to 522 Byte Sectors
Create a PCI-X SCSI Disk Array
Delete a PCI-X SCSI Disk Array
Add Disks to an Existing PCI-X SCSI Disk Array
Configure a Defined PCI-X SCSI Disk Array
Change/Show Characteristics of a PCI-X SCSI Disk Array
Reconstruct a PCI-X SCSI Disk Array
Change/Show PCI-X SCSI pdisk Status
Diagnostics and Recovery Options

F1=Help     F2=Refresh   F3=Cancel   F8=Image
F9=Shell    F10=Exit     Enter=Do

4.3 Securing the Virtual I/O Server

The Virtual I/O Server provides extra security features that enable you to control access to the virtual environment and ensure the security of your system. These features are available with Virtual I/O Server Version 1.3.0.0 or later. The following topics discuss the available security features and provide tips for ensuring a secure environment for your Virtual I/O Server setup.


Introduction to Virtual I/O Server security

Beginning with Version 1.3.0.0 of the Virtual I/O Server, you can set security options that provide tighter security controls over your Virtual I/O Server environment. These options enable you to select a level of system security hardening and specify settings that are allowable within that level. The Virtual I/O Server security feature also enables you to control network traffic by enabling the Virtual I/O Server firewall. You can configure these options using the viosecure command.

The viosecure command activates, deactivates, and displays security hardening rules. By default, none of the security hardening features is activated after installation. When you run the viosecure command, it guides you through the security settings, which range from High to Medium to Low. After this initial selection, a menu is displayed itemizing the security configuration options associated with the selected security level, in sets of 10. These options can be accepted in whole, individually toggled off or on, or ignored. After any changes, viosecure continues to apply the security settings to the system.

The viosecure command also configures, unconfigures, and displays network firewall settings. Using the viosecure command, you can activate and deactivate specific ports and specify the interface and IP address from which connections will be allowed.

For more information about this command, see the viosecure command in the Virtual I/O Server Commands Reference.

The following sections provide an overview of these features.

System security hardening

The system security hardening feature protects all elements of a system by tightening security or implementing a higher level of security. Although hundreds of security configurations are possible with the VIOS security settings, you can easily implement security controls by specifying a high, medium, or low security level. Configuring Virtual I/O Server system security hardening is discussed later in this chapter.

The system security hardening features provided by Virtual I/O Server enable you to specify values such as:

- Password policy settings
- The usrck, pwdck, grpck, and sysck actions
- Default file creation settings
- System crontab settings

Configuring a system at too high a security level might deny services that are needed. For example, the telnet and rlogin commands are disabled for high-level security because the login password is sent over the network unencrypted. If a system is configured at too low a security level, the system might be vulnerable to security threats. Because each enterprise has its own unique set of security requirements, the predefined High, Medium, and Low security configuration settings are best suited as a starting point for security configuration rather than an exact match for security requirements. As you become more familiar with the security settings, you can make adjustments by choosing the hardening rules you want to apply. You can get information about the hardening rules by running the man command.

Virtual I/O Server firewall

The Virtual I/O Server firewall enables you to enforce limitations on IP activity in your virtual environment. With this feature, you can specify which ports and network services are allowed access to the Virtual I/O Server system. For example, if you need to restrict login activity from an unauthorized port, you can specify the port name or number and specify deny to remove it from the Allow list. You can also restrict a specific IP address.


Before configuring firewall settings, you must first enable the Virtual I/O Server firewall. The following topic describes this action.

Configuring firewall settings

Enable the Virtual I/O Server (VIOS) firewall to control IP activity.

The VIOS firewall is not enabled by default. To enable the VIOS firewall, you must turn it on by using the viosecure command with the -firewall option. When you enable it, the default setting is activated, which allows access for the following IP services:

- ftp
- ftp-data
- ssh
- web
- https
- rmc
- cimon

You can use the default setting or configure the firewall settings to meet the needs of your environment by specifying which ports or port services to allow. You can also turn off the firewall to deactivate the settings.

Use the following tasks at the VIOS command line to configure the VIOS firewall settings:

1. Enable the VIOS firewall by issuing the following command:

viosecure -firewall on

2. Specify the ports to allow or deny, by using the following command:

viosecure -firewall allow | deny -port number

3. View the current firewall settings by issuing the following command:

viosecure -firewall view

4. If you want to disable the firewall configuration, issue the following command:

viosecure -firewall off
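For example, a minimal sequence that enables the firewall, keeps SSH reachable, and reviews the result (allowing port 22 for SSH is an assumption about your environment):

$ viosecure -firewall on
$ viosecure -firewall allow -port 22
$ viosecure -firewall view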

For more about any viosecure command option, see the viosecure command description.

Configuring Virtual I/O Server system security hardening

Set the security level to specify security hardening rules for your Virtual I/O Server (VIOS) system.

To implement system security hardening rules, you can use the viosecure command to specify a security level of High, Medium, or Low. A default set of rules is defined for each level. You can also set a level of default, which returns the system to the system standard settings and removes any level settings that have been applied.

Note: The telnet command is disabled when the firewall is turned on. So if you are using Telnet to set the security settings, you will lose your connection or session.

Note: The firewall settings are in the viosecure.ctl file in the /home/ios/security directory. You can use the -force option to enable the standard firewall default ports. For more about the force option, see the viosecure command description and Appendix 3.


The low-level security settings are a subset of the medium-level security settings, which are a subset of the high-level security settings. Therefore, the High level is the most restrictive and provides the greatest level of control. You can apply all of the rules for a specified level or select which rules to activate for your environment. By default, no VIOS security levels are set; you must run the viosecure command to enable the settings.

Use the following tasks to configure the system security settings:

Setting a security level

To set a VIOS security level of High, Medium, or Low, use the viosecure -level command, as in the following example:

viosecure -level low -apply

Changing the settings in a security level

To set a VIOS security level in which you specify which hardening rules to apply for the setting, run the viosecure command interactively, as in the following example:

1. At the VIOS command line, type viosecure -level high. All security level options (hardening rules) at that level are displayed, 10 at a time. (Pressing Enter displays the next set in the sequence.)

2. Review the displayed options and make your selection by entering the numbers that you want to apply, separated by a comma; type ALL to apply all of the options; or type NONE to apply none of the options.

3. Press Enter to display the next set of options, and continue entering your selections.

To exit the command without making any changes, enter q.

Viewing the current security setting

To display the current VIOS security level setting, use the viosecure command with the -view flag, as in the following example:

viosecure -view

Removing security level settings

To unset any previously set system security levels and return the system to the standard system settings, issue the following command:

viosecure -level default

For more information about using the viosecure command, see the viosecure command description.

4.4 Connecting to the Virtual I/O Server using OpenSSH

This topic describes how to set up remote connections to the Virtual I/O Server using secure connections.

Starting with IVM/VIOS 1.3.0.0, OpenSSH and OpenSSL are already installed by default.

Setting up SSH authorization for non-prompted connection

1. If the id_dsa files do not exist on your workstation, create them using the ssh-keygen command (press Enter for passphrases), as shown in Example 4-8 on page 87.


Example 4-8 Create the id_dsa files on your workstation

nim-ROOT[1156]/root/.ssh
># ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
d2:30:06:6b:68:e2:e7:fd:3c:77:b7:f6:14:b1:ce:35 root@nim
nim-ROOT[1160]/root/.ssh

2. Verify that the keys are generated on your workstation (Example 4-9).

Example 4-9 Verify successful creation of id_dsa files

nim-ROOT[1161]/root/.ssh
># ls -l
total 16
-rw-------   1 root     system          668 Oct 13 15:31 id_dsa
-rw-r--r--   1 root     system          598 Oct 13 15:31 id_dsa.pub
nim-ROOT[1162]/root/.ssh

3. Now log in to the IVM through SSH. A known_hosts file does not yet exist; it is created during the first SSH login (Example 4-10).

Example 4-10 First SSH login toward IVM - known_hosts file creation

># ssh padmin@9.3.5.123
The authenticity of host '9.3.5.123 (9.3.5.123)' can't be established.
RSA key fingerprint is 1b:36:9b:93:87:c2:3e:97:48:eb:09:80:e3:b6:ee:2d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '9.3.5.123' (RSA) to the list of known hosts.
padmin@9.3.5.123's password:
Last unsuccessful login: Fri Oct 13 15:23:50 CDT 2006 on ftp from ::ffff:9.3.5.111
Last login: Fri Oct 13 15:25:21 CDT 2006 on /dev/pts/1 from 9.3.5.111
$
Connection to 9.3.5.123 closed.
nim-ROOT[1163]/root/.ssh
># ls -l
total 24
-rw-------   1 root     system          668 Oct 13 15:31 id_dsa
-rw-r--r--   1 root     system          598 Oct 13 15:31 id_dsa.pub
-rw-r--r--   1 root     system          391 Oct 13 15:33 known_hosts

The known_hosts file has been created.

4. The next step is to retrieve the authorized_keys2 file from the IVM with FTP (get), as shown in Example 4-11.

Example 4-11 Transfer of authorized_keys2 file

nim-ROOT[1168]/root/.ssh
># ftp 9.3.5.123
Connected to 9.3.5.123.
220 IVM FTP server (Version 4.2 Fri Feb 3 22:13:23 CST 2006) ready.
Name (9.3.5.123:root): padmin
331 Password required for padmin.
Password:
230-Last unsuccessful login: Fri Oct 13 15:23:50 CDT 2006 on ftp from ::ffff:9.3.5.111
230-Last login: Fri Oct 13 15:32:03 CDT 2006 on /dev/pts/1 from 9.3.5.111
230 User padmin logged in.
ftp> cd .ssh
250 CWD command successful.
ftp> ls
200 PORT command successful.
150 Opening data connection for ..
environment
authorized_keys2
226 Transfer complete.
ftp> get authorized_keys2
200 PORT command successful.
150 Opening data connection for authorized_keys2 (598 bytes).
226 Transfer complete.
599 bytes received in 7.5e-05 seconds (7799 Kbytes/s)
local: authorized_keys2 remote: authorized_keys2
ftp> by
221 Goodbye.

5. Add the contents of your local SSH public key (id_dsa.pub) to the authorized_keys2 file (Example 4-12).

Example 4-12 Add contents of local SSH public key to authorized_keys2 file

nim-ROOT[1169]/root/.ssh
># cat id_dsa.pub >> auth*

6. Verify the successful addition of the public key by comparing the size of the authorized keys file to the id_dsa.pub file (Example 4-13).

Example 4-13 Compare addition of public key

nim-ROOT[1209]/root/.ssh
># ls -l
total 32
-rw-r--r--   1 root     system          598 Oct 13 15:38 authorized_keys2
-rw-------   1 root     system          668 Oct 13 15:31 id_dsa
-rw-r--r--   1 root     system          598 Oct 13 15:31 id_dsa.pub
-rw-r--r--   1 root     system          391 Oct 13 15:33 known_hosts

7. Transfer the authorized_keys2 file back to the IVM into the directory /home/padmin/.ssh (Example 4-14).

Example 4-14 FTP of authorized key back to IVM

nim-ROOT[1171]/root/.ssh
># ftp 9.3.5.123
Connected to 9.3.5.123.
220 IVM FTP server (Version 4.2 Fri Feb 3 22:13:23 CST 2006) ready.
Name (9.3.5.123:root): padmin
331 Password required for padmin.
Password:
230-Last unsuccessful login: Fri Oct 13 15:23:50 CDT 2006 on ftp from ::ffff:9.3.5.111
230-Last login: Fri Oct 13 15:35:44 CDT 2006 on ftp from ::ffff:9.3.5.111
230 User padmin logged in.
ftp> cd .ssh
250 CWD command successful.
ftp> put authorized_keys2
200 PORT command successful.
150 Opening data connection for authorized_keys2.
226 Transfer complete.
599 bytes sent in 0.000624 seconds (937.4 Kbytes/s)
local: authorized_keys2 remote: authorized_keys2
ftp> by
221 Goodbye.

8. Verify that the key can be read by the SSH daemon on the IVM and test the connection by typing the ioslevel command (Example 4-15).

Example 4-15 Test the configuration

nim-ROOT[1173]/root/.ssh
># ssh padmin@9.3.5.123
Last unsuccessful login: Fri Oct 13 15:23:50 2006 on ftp from ::ffff:9.3.5.111
Last login: Fri Oct 13 15:37:33 2006 on ftp from ::ffff:9.3.5.111
$ ioslevel
1.3.0.0

After establishing these secure remote connections, we can execute several commands. For example:

- ssh padmin@9.3.5.123

  This gives us an interactive login (a host name is also possible).

- ssh -t padmin@9.3.5.123 ioscli mkvt -id 2

  This enables us to get a console directly to the client LPAR with ID 2.

- ssh padmin@9.3.5.123 lssyscfg -r sys

  Example 4-16 shows the output of this last command.

Example 4-16 Output of the remote lssyscfg command

nim-ROOT[1217]/root/.ssh
># ssh padmin@9.3.5.123 lssyscfg -r sys
name=p520-ITSO,type_model=9111-520,serial_num=10DDEEC,ipaddr=9.3.5.127,state=Operating,
sys_time=10/13/06 17:39:22,power_off_policy=0,cod_mem_capable=0,cod_proc_capable=1,
os400_capable=1,micro_lpar_capable=1,dlpar_mem_capable=1,assign_phys_io_capable=0,
max_lpars=20,max_power_ctrl_lpars=1,service_lpar_id=1,service_lpar_name=VIOS,
mfg_default_config=0,curr_configured_max_lpars=11,pend_configured_max_lpars=11,
config_version=0100010000000000,pend_lpar_config_state=enabled
nim-ROOT[1218]/root/.ssh


Chapter 5. Maintenance

This chapter provides information about maintenance operations on the Integrated Virtualization Manager (IVM).

This chapter discusses the following topics:

- IVM backup and restore
- Logical partition backup and restore
- IVM upgrade
- Managed system firmware update
- IVM migration
- Command logging
- Integration with IBM Director


5.1 IVM maintenance

You can use the IVM to perform operations such as backup, restore, or upgrade. Some operations are available using the GUI, the CLI, or the ASMI menus.

5.1.1 Backup and restore of the logical partition definitions

LPAR configuration information can be backed up to a file. This file will be used to restore information if required and can also be exported to another system.

The following steps describe how to back up the LPAR configuration:

1. Under the Service Management menu in the navigation area, click Backup/Restore.

2. Select Generate Backup in the work area, as shown in Figure 5-1.

Figure 5-1 Partition Configuration Backup/Restore

A file named profile.bak is generated and stored under the user’s home directory. In the work area, you can select this file name and save it to a disk. There is only one unique backup file at a time, and a new backup file replaces an existing one.

The backup file contains the LPAR’s configuration, such as processors, memory, and network. Information about virtual disks is not included in the backup file.

In order to perform a restore operation, the system must not have any LPAR configuration defined. Click Restore Partition Configuration to restore the last backed-up file. If you want to restore a backup file stored on your disk, follow these steps:

1. Click Browse and select the file.

2. Click Upload Backup File. The uploaded file replaces the existing backup file.

3. Click Restore Partition Configuration to restore the uploaded backup file.


You can also back up and restore LPAR configuration information from the CLI. Use the bkprofdata command to back up the configuration information and the rstprofdata command to restore it. See the VIO Server and PLM command descriptions in the Information Center at the following Web page for more information:

http://publib.boulder.ibm.com/infocenter/eserver/v1r3s/index.jsp?topic=/iphb1/iphb1_vios_commandslist.htm
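As a hedged sketch of the CLI equivalent (the file name is hypothetical, and the exact flags follow the HMC-style command set and may vary by release):

$ bkprofdata -o backup -f /home/padmin/profile.bak
$ rstprofdata -l 1 -f /home/padmin/profile.bak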

5.1.2 Backup and restore of the IVM operating system

The only way to back up the IVM operating system is with the backupios command. No operating system backup operation is available within the GUI. This command creates a bootable image that includes the IVM partition's rootvg. It can also contain the storage pool structure, depending on the flags used.

Figure 5-2 Backup/Restore of the Management Partition

The backup can use one of the following media types:

- File
- Tape
- CD-R
- DVD-RAM
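For example, a hedged sketch of a file-based backup (the directory and file name are hypothetical; the -tape, -cd, and -dvd flags select the other media types):

$ mkdir /home/padmin/backup
$ backupios -file /home/padmin/backup/IVMbackup.mksysb -mksysb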

To restore the management partition, install the operating system using the bootable media created by the backup process.

Important: The backup operation does not save the data contained in virtual disks or physical volumes assigned to the LPARs.


5.1.3 IVM updates

IBM regularly releases updates (or fix packs) for the Virtual I/O Server. These can be downloaded from:

http://techsupport.services.ibm.com/server/vios/

Updates are necessary whenever new functionalities or fixes are introduced.

Determining the current VIOS level

By executing the ioslevel command from the VIOS command line, the padmin user can determine the currently installed level of the VIOS software (Example 5-1).

Example 5-1 Using the ioslevel command

$ ioslevel
1.2.1.4-FP-7.4
$

In the example, the level of the VIOS software is 1.2.1.4 with Fix Pack 7.4. If we now go back to the mentioned Web site (Figure 5-3), we notice that a newer fix pack is available: FP 8.0.

Figure 5-3 IBM Virtual I/O Server Support page

Fix Pack 8.0 provides a migration path for existing Virtual I/O Server installations. Applying this package will upgrade the VIOS to the latest level, V1.3.0.0. All VIOS fix packs are cumulative and contain all fixes from previous fix packs.

Note: Applying a fix pack can cause the restart of the IVM. That means that all LPARs must be stopped during this reboot.


To take full advantage of all of the available functions in the VIOS, it is necessary to be at a system firmware level of SF235 or later. SF230_120 is the minimum level of SF230 firmware supported by the Virtual I/O Server V1.3. If a system firmware update is necessary, it is recommended that the firmware be updated before upgrading the VIOS to Version 1.3.0.0. (See 5.3.1, “Microcode update” on page 110.) The VIOS Web site has a direct link to the microcode download site:

http://www14.software.ibm.com/webapp/set2/firmware/gjsn

All interim fixes applied to the VIOS must be manually removed before applying Fix Pack 8.0. VIOS customers who applied interim fixes to the VIOS should use the following procedure to remove them prior to applying Fix Pack 8.0. Example 5-2 shows how to list fixes.

Example 5-2 Listing fixes

$ oem_setup_env      /* from the VIOS command line
$ emgr -P            /* gives a list of the installed efixes (by label)
$ emgr -r -L         /* for each efix listed, run this command to remove it
$ exit

Important: Be sure to have the right level of firmware before updating the IVM.

Note: It is recommended that the AIX 5L client partitions using VSCSI devices should upgrade to AIX 5L maintenance Level 5300-03 or later.


Downloading the fix packs

Figure 5-4 shows four options for retrieving the latest fix packs.

Figure 5-4 IBM Virtual I/O Server download options

For the first download option, which retrieves the latest fix pack using the Download Director, all filesets are downloaded into a user-specified directory. When the download has completed, the updates can be applied from a directory on your local hard disk:

1. Log in to the Virtual I/O Server as the user padmin.

2. Create a directory on the Virtual I/O Server.

$ mkdir directory_name

3. Using the ftp command, transfer the update file (or files) to the directory you created.

4. Apply the update by running the updateios command:

$ updateios -dev directory_name -install -accept

Accept to continue the installation after the preview update is run.

5. Reboot.

Verify a successful update by checking the results of the updateios command and running the ioslevel command. The result of the ioslevel command should equal the level of the downloaded package.

$ ioslevel
1.3.0.0-FP-8.0


Uncompressing and extracting a tar file

If you downloaded the latest fix pack using FTP as a single, compressed tar file (option 2 in Figure 5-4 on page 96), named fixpack<nn>.tar.gz (in our case, fixpack80.tar.gz), you must uncompress the tar file and extract the contents before you can install the update. Follow these steps:

1. Enter the following command to escape to a shell:

$ oem_setup_env

2. Copy the compressed tar file, fixpack<nn>.tar.gz, to the current directory.

3. Create a new directory for the files you extract from the tar file:

$ mkdir <directory>

4. Change directories to the new directory:

$ cd <directory>

5. Unzip and extract the tar file contents with the following command:

$ gzip -d -c ../fixpack<nn>.tar.gz | tar -xvf -

6. Quit from the shell.

The next step is to follow the installation instructions in the next section.

Applying updates from a local hard disk

Follow these steps to apply the updates from a directory on your local hard disk:

1. Log in to the Virtual I/O Server as the user padmin.

2. Create a directory on the Virtual I/O Server:

$ mkdir directory_name

3. Using the ftp command, transfer the update file (or files) to the directory you created.

4. Apply the update by running the updateios command:

$ updateios -dev directory_name -install -accept

5. Accept to continue the installation after the preview update is run.

Verify a successful update by checking the results of the updateios command and running the ioslevel command. The result of the ioslevel command should equal the level of the downloaded package.

$ ioslevel
1.3.0.0-FP-8.0

Applying updates from a remotely mounted file system

If the remote file system is to be mounted read-only, you must first rename the fix pack file tableofcontents.txt to .toc; otherwise, you will be prevented from installing this fix pack.

1. Log in to the Virtual I/O Server as user padmin.

2. Mount the remote directory onto the Virtual I/O Server:

$ mount remote_machine_name:directory /mnt

3. Apply the update by running the updateios command:

$ updateios -dev /mnt -install -accept

4. If prompted to remove the .toc file, enter no.


Verify a successful update by checking the results of the updateios command and running the ioslevel command. The result of the ioslevel command should equal the level of the downloaded package:

$ ioslevel
1.3.0.0-FP-8.0
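As an illustration, if the fix pack files are exported read-only from a hypothetical NFS server named nimserver under /export/fixpack80, the rename and the update would look like this:

# mv /export/fixpack80/tableofcontents.txt /export/fixpack80/.toc   (on the NFS server)

$ mount nimserver:/export/fixpack80 /mnt                            (on the VIOS)
$ updateios -dev /mnt -install -accept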

Applying updates from the ROM drive

This fix pack can be burned onto a CD by using the ISO image files (the third option in Figure 5-4 on page 96), or the CD can be ordered directly from the IBM Delivery Service Center (option 4). After the CD has been created, perform the following steps to apply the update:

1. Log in to the Virtual I/O Server as user padmin.

2. Place the update CD into the drive.

3. Apply the update by running the updateios command:

$ updateios -dev /dev/cdX -install -accept

(where X is a device number between 0 and N)

4. Verify a successful update by checking the results of the updateios command and running the ioslevel command. The result of ioslevel command should equal the level of the downloaded package:

$ ioslevel
1.3.0.0-FP-8.0
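If you are unsure of the optical device name, you can list the devices first; the device name and description below are illustrative:

$ lsdev | grep cd
cd0              Available   IDE DVD-ROM Drive
$ updateios -dev /dev/cd0 -install -accept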

5.2 The migration between HMC and IVM

It is important to note that moving between the HMC and IVM environments requires a certain amount of reconfiguration.

Always make a backup of your environment prior to migrating between the IVM and HMC environments.

5.2.1 Recovery after an improper HMC connection

An HMC must not be connected to a system running the IVM; otherwise, the IVM interface is disabled and you can no longer perform operations through it.

Note: If updating from an ioslevel prior to 1.3.0.0, the updateios command might indicate several failures (such as missing requisites) while installing the fix pack. This is expected. Proceed with the update if you are prompted to Continue with the installation [y/n].

Attention: There is no guarantee as to which disk the VIOS will be installed on. If the installation takes place on a disk containing your client volume group, you will lose the data and will not be able to import it again.

You should have a backup of the Virtual I/O Server, the virtual I/O clients, and the profiles for system recovery before attempting any migration.

A simple precaution is to physically remove any disks that you want to preserve while performing the installation, and to reinstall them afterward.


If an HMC was connected to a system using the IVM, the following steps explain how to re-enable the IVM capabilities:

1. Power off the system.

2. Remove the system definition from the HMC.

3. Unplug the HMC network cable from the system if directly connected.

4. Connect a TTY console emulator with a serial cross-over cable to one of the system’s serial ports.

5. Press any key on the console to open the service processor prompt.

6. Log in as the user admin and answer the questions about the number of lines and columns.

7. Reset the service processor.

Type 2 to select 2. System Service Aids, type 10 to select 10. Reset Service Processor, and then type 1 to confirm your selection. Wait for the system to reboot.

8. Reset it to the factory configuration (Manufacturing Default Configuration).

Type 2 to select 2. System Service Aids, type 11 to select 11. Factory Configuration, and then type 1 to confirm. Wait for the system to reboot.

9. Configure the ASMI IP addresses if needed.

Type 5 to select 5. Network Services, type 1 to select 1. Network Configuration, and then configure each Ethernet adapter. For more information, refer to 2.3, “ASMI IP address setup” on page 23.

10. Start the system.

Type 1 to select 1. Power/Restart Control, type 1 to select 1. Power On/Off System, type 8 to select 8. Power on, and press Enter to confirm your selection.

11. Go to the SMS menu.

12. Update the boot list.

Type 5 to select 5. Select Boot Options, type 2 to select 2. Configure Boot Device Order, and select the IVM boot disk.

13. Boot the system.

14. Wait for the IVM to start.

15. Connect to the IVM with the GUI.

16. Restore the partition configuration using the last backup file.

From the Service Management menu in the navigation area, click Backup/Restore, and then click Restore Partition Configuration in the work area. For more information, refer to 5.1.1, “Backup and restore of the logical partition definitions” on page 92.

This operation only updates the IVM partition configuration; it does not restore the LPARs hosted by the IVM.

17. Reboot the IVM. (If the restored changes do not require a reboot, the recovery of the IVM configuration takes effect immediately.)

18. Restore the partition configuration again, using the same backup file.

This time, each LPAR definition is restored.

19. Reboot the IVM.

This reboot is needed to make each virtual device available to the LPARs. (This is also possible by issuing the cfgdev command.)


20. Restart each LPAR.

5.2.2 Migration considerations

The following are the minimum items to consider when migrating between an HMC and the IVM environment. For a production redeployment, the exact list depends on the configuration of the system.

• VIOS version
• System firmware level
• VIOS I/O device configuration
• Backup of VIOS, virtual I/O client profiles, and virtual I/O devices
• The mapping information between physical and virtual I/O devices
• VIOS and VIO client backups

VIOS version

The ioslevel command displays the VIOS version; you will see output similar to this:

$ ioslevel
1.3.0.0

System firmware level

You can display the system firmware level by using the lsfware command. You will see output similar to this:

$ lsfware
system:SF240_219 (t) SF240_219 (p) SF240_219 (t)

VIOS I/O device configuration

To display I/O devices such as adapters, disks, or slots, use the lsdev command (Example 5-3).

Example 5-3 VIOS device information

$ lsdev -type adapter
name       status     description
ent0       Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent1       Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2       Available  10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent3       Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent4       Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent5       Available  Virtual I/O Ethernet Adapter (l-lan)
ent6       Available  Shared Ethernet Adapter
ide0       Available  ATA/IDE Controller Device
sisscsia0  Available  PCI-X Dual Channel Ultra320 SCSI Adapter
sisscsia1  Available  PCI-X Dual Channel Ultra320 SCSI Adapter
usbhc0     Available  USB Host Controller (33103500)
usbhc1     Available  USB Host Controller (33103500)
vhost0     Available  Virtual SCSI Server Adapter
vhost1     Available  Virtual SCSI Server Adapter
vsa0       Available  LPAR Virtual Serial Adapter

$ lsvg -lv db_sp
db_sp:
LV NAME  TYPE  LPs  PPs  PVs  LV STATE    MOUNT POINT
db_lv    jfs   800  800  1    open/syncd  N/A


To display the attributes of a device, use the lsdev -dev Devicename -attr command. You can use the lsdev -slots command for slot information, and the lsdev -dev Devicename -child command for the child devices associated with a device.

Also, you can use the lsvg -lv volume_group_name command to discover the system disk configuration and volume group information.
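A brief sketch of these inspection commands, reusing the adapter sisscsia0 and the volume group db_sp from the earlier examples:

$ lsdev -dev sisscsia0 -attr     /* attributes of one device
$ lsdev -dev sisscsia0 -child    /* child devices behind the adapter
$ lsdev -slots                   /* physical slot information
$ lsvg -lv db_sp                 /* logical volumes in a volume group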

To migrate from an HMC to an IVM environment, the VIOS must own all of the physical devices. You must check the profile of the VIOS as shown in Figure 5-5.

Figure 5-5 Virtual I/O Server Physical I/O Devices

Backup of VIOS, virtual I/O client profiles, and virtual I/O devices

You should document the virtual I/O client information that has a dependency on the virtual SCSI server and virtual SCSI client adapters, as shown in Figure 5-6 on page 102.

Tip: Note the physical location code of the disk unit that you are using to boot the VIOS. To display this, use the lsdev -dev Devicename -vpd command.
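For example, for a hypothetical boot disk named hdisk0:

$ lsdev -dev hdisk0 -vpd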


Figure 5-6 Virtual SCSI — Client Adapter properties

The mapping information between physical and virtual I/O devices

In order to display the mapping information between physical I/O devices and virtual I/O devices such as disk, network, and optical media, use the lsmap -vadapter vhost# command, as shown in Example 5-4.

Example 5-4 Mapping information between physical I/O devices and virtual I/O devices

$ lsmap -vadapter vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U9111.520.10DDEEC-V1-C3                      0x00000002

VTD              vopt0
LUN              0x8300000000000000
Backing device   cd0
Physloc          U787A.001.DNZ00XK-P4-D3

VTD              vscsi0
LUN              0x8100000000000000
Backing device   dbroot_lv
Physloc

VTD              vscsi1
LUN              0x8200000000000000
Backing device   db_lv
Physloc

VIOS and VIO client backups

Before the migration from an HMC to an IVM environment, it is necessary to back up the VIOS and VIOC. For more information about backup, refer to 5.1.2, “Backup and restore of the IVM operating system” on page 93, as well as Section 2.1 in IBM System p Advanced POWER Virtualization Best Practices, REDP-4194.
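As a sketch of what such a backup can look like from the VIOS command line, assuming a hypothetical NFS-mounted target directory /mnt/backup and a tape device rmt0; backupios with the -mksysb flag writes a mksysb image to a file, while -tape writes a bootable tape:

$ backupios -file /mnt/backup/vios_backup.mksysb -mksysb
$ backupios -tape /dev/rmt0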


5.2.3 Migration from HMC to an IVM environment

For redeployment from HMC to IVM, the managed system must be reset to the Manufacturing Default Configuration using the ASMI menu function.

This migration has the following requirements:

• The VIOS of the HMC-managed environment owns all physical I/O devices
• Backup of the VIOS and VIOC
• VIOS Version 1.2 or above
• System firmware level SF230_120 or above

Figure 5-7 shows the general migration procedure from HMC to an IVM environment. There is some dependency on system configuration.

Figure 5-7 General migration procedure from HMC to an IVM environment:
1. Reset to Manufacturing Default Configuration
2. Change the serial console connection
3. Connect to the IVM Web interface using the VIOS IP address
4. Re-create virtual devices and Ethernet bridging
5. Re-create virtual I/O clients
6. Boot each virtual I/O client

1. Reset to manufacturing default configuration

If you decide to perform this migration, it is necessary to restore the firmware settings, network configuration, and passwords to their factory defaults. Resetting the firmware removes all partition configuration and any personalization that has been made to the service processor. A default full system partition is created to handle all hardware resources. Without an HMC, the system console is provided through the internal serial ports, and connections are made using a serial ASCII console and a cross-over cable connected to the serial port.

Tip: The recommended method is a complete reinstallation. The only reason to save the Virtual I/O Server installation is if there is client data on the rootvg (not recommended; use separate storage pools for client data). If the client data is on other volume groups, you should export those volume groups and remove the disks to make sure they do not get installed over. You will have to reconfigure all of your device mappings and so on. In the end, this is more complex and time-consuming than starting with a “fresh” install.

If you perform the firmware reset after detaching the HMC, the HMC will retain information about the server as a managed system. You can remove this using the HMC GUI.

When a console session is opened to the reset server, at the first menu, select 1. Power/Restart Control → 1. Power On/Off System, as shown in Example 5-5.

Example 5-5 Power On/Off System

Power On/Off System
Current system power state: Off
Current firmware boot side: Temporary
Current system server firmware state: Not running

 1. System boot speed
    Currently: Fast
 2. Firmware boot side for the next boot
    Currently: Temporary
 3. System operating mode
    Currently: Normal
 4. Boot to system server firmware
    Currently: Standby
 5. System power off policy
    Currently: Automatic
 6. Power on
98. Return to previous menu
99. Log out

Example 5-5 shows that the Power on menu item is 6. This means that the firmware reset has not been performed and the system is still managed by an HMC. If the firmware reset has been performed and the system is no longer managed by an HMC, the Power on menu item is 8. You can reset the service processor or return the server to the factory configuration through the System Service Aids menu in the ASMI.

2. Change the serial connection for IVM

When you change the management system from HMC to IVM, you can no longer use the default console connection through vty0. Change the console connection as shown in Example 5-6; the change takes effect after the VIOS reboots. Also move the physical serial connection from port SPC1 to SPC2 so that you can use the vty1 console connection.

Example 5-6 Serial connection change for IVM

# lscons
NULL

# lsdev -Cc tty
vty0 Defined   Asynchronous Terminal
vty1 Available Asynchronous Terminal
vty2 Available Asynchronous Terminal

# lsdev -Cl vty0 -F parent
vsa0

# lsdev -Cl vty1 -F parent
vsa1


# lsdev -Cl vsa1
vsa1 Available LPAR Virtual Serial Adapter

# chcons /dev/vty1
chcons: console assigned to: /dev/vty1, effective on next system boot

3. Connect IVM Web-interface using the VIOS IP address

The first Web-interface pane that opens after the login process is View/Modify Partitions, as shown in Figure 5-8. You can see only the VIOS partition. The IVM does not have any information about other virtual I/O clients because the service processor was reset to the Manufacturing Default Configuration.

Figure 5-8 View/Modify Partitions

4. Re-create virtual devices and Ethernet bridging

When changed to an IVM environment, the VIOS (now the Management Partition) still has virtual device information left over from the HMC environment. The virtual SCSI, virtual Ethernet, shared Ethernet, and virtual target device information is still present, but the status of these devices changes to Defined after migrating to an IVM environment.

Because these virtual devices no longer exist, you should remove them before creating the virtual I/O clients in IVM. You can remove the virtual devices as shown in Example 5-7.

If you define virtual disks for clients from the Management Partition, the virtual SCSI server and client devices are created automatically for you.

Example 5-7 Remove the virtual device

$ rmdev -dev vhost0 -recursive
vtopt0 deleted
dbrootvg deleted
vtscsi0 deleted
vhost0 deleted
$ rmdev -dev ent4
ent4 deleted
$ rmdev -dev en4
en4 deleted
$ rmdev -dev et4
et4 deleted

After removing the virtual devices, you can re-create the virtual devices by using the cfgdev command or through the IVM GUI, and configure the virtual Ethernet bridge for the virtual I/O clients in the View/Modify Virtual Ethernet pane, as shown in Figure 5-9.

Figure 5-9 Virtual Ethernet Bridge

5. Re-create virtual I/O clients

Because the IVM does not have any virtual I/O client information, you will have to re-create the virtual I/O clients using the IVM Web interface. For more information about creating LPARs, refer to 3.2, “IVM graphical user interface” on page 38.

When you choose Storage type, select Assign existing virtual disks and physical volumes as shown in Figure 5-10 on page 107.

You can also let the IVM create a virtual disk for you by selecting Create virtual disk when needed.

Tip: You should export any volume group containing client data using the exportvg command. After migrating, import the volume groups using the importvg command. This is a more efficient method to migrate the client data without loss.
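A minimal sketch of this approach for the db_sp volume group used earlier; the disk name hdisk2 is hypothetical:

$ deactivatevg db_sp          /* before the migration
$ exportvg db_sp

$ importvg -vg db_sp hdisk2   /* after the migration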


Figure 5-10 Create LPAR: Storage Type

5.2.4 Migration from an IVM environment to HMC

There is no officially announced procedure to migrate from an IVM environment to the HMC.

If an HMC is connected to the system, the IVM interface will be disabled immediately, effectively making it just a Virtual I/O Server partition. The managed system goes into recovery mode. After recovery completes, the HMC shows all of the LPARs without a profile. You have to create one profile for each LPAR.

This migration is easier than the reverse direction. The IVM environment must own all of the physical devices, and there can be only one Management Partition per server, so there are no restrictions on the server configuration that could affect a possible migration. Figure 5-11 on page 108 shows the general migration procedure from an IVM environment to the HMC.

Tip: Carefully record all configuration information before performing this migration.


Figure 5-11 General migration procedure from an IVM environment to HMC:
1. Connect the System p server to an HMC
2. Recover the server configuration data to the HMC
3. Re-create the partition profiles
4. Re-create virtual devices and Ethernet bridging on the VIOS
5. Boot each virtual I/O client

1. Connect System p server to an HMC

The server is connected and recognized by the HMC, and the IVM interface will be disabled immediately, effectively making it just a VIOS partition as shown in Figure 5-12.

Figure 5-12 IVM management after connecting HMC



2. Recover server configuration data to HMC

Add the managed system to the HMC, then the managed system will go into recovery mode as shown in Figure 5-13. Right-click on the managed system, then select Recover Partition Data → Restore profile data from HMC backup data. Make sure that at least one of the LPARs is up and running; otherwise the HMC might delete all of the LPARs.

Figure 5-13 HMC recovery mode

3. Re-create the partition profile

After the recovery completes, the HMC displays all partitions without a profile in the managed system. VIOS should be able to use the virtual Ethernet adapter created in an IVM environment when it is rebooted. The IVM devices will appear in the defined state, as shown in Example 5-8. More information about creating partitions and profiles can be found on the following Web site:

http://publib.boulder.ibm.com/infocenter/eserver/v1r3s/topic/iphbl/iphblcreatelpar.htm

4. Re-create virtual devices and Ethernet bridging

Because everything is identical from the PHYP (POWER Hypervisor) side, you normally should not need to re-create virtual devices or bridging.

However, if this is not the case, after removing the previous virtual devices, you can create the VIOS profile including the virtual server SCSI adapter and virtual Ethernet adapter. Then re-create the virtual devices to bridge between VIOS and virtual I/O clients as shown in Example 5-8.

Example 5-8 Re-create bridge between VIOS and virtual I/O clients

<< SEA creation >>
$ mkvdev -sea ent0 -vadapter ent5 -default ent5 -defaultid 1
ent6 Available
en6
et6

<< Virtual Disk Mapping >>
$ mkvdev -vdev dbroot_lv -vadapter vhost0 -dev vscsi0
vscsi0 Available
$ mkvdev -vdev cd0 -vadapter vhost0 -dev vtopt0
vtopt0 Available
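To confirm that the mappings are back in place, you can list them again; the output format resembles Example 5-4:

$ lsmap -all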

Also, create the virtual I/O clients’ profiles, including the virtual SCSI client adapters and virtual Ethernet adapters.

For more information about the creation of virtual devices on the VIOS refer to the IBM Redbook Advanced POWER Virtualization on IBM System p5, SG24-7940.

5.3 System maintenance

Operations such as microcode updates and Capacity on Demand are available for the system hosting the IVM.

5.3.1 Microcode update

The IVM provides a convenient interface to generate a microcode survey of the managed system and to download and upgrade microcode.

The following steps describe how to update the device firmware:

1. From the Service Management menu in the navigation area, click Updates, then click the Microcode Updates tab in the work area.

Important: Before migrating from an IVM environment to an HMC, it is necessary to back up the VIOS and VIOC. For more information about backup, refer to Section 2.1 in IBM System p Advanced POWER Virtualization Best Practices, REDP-4194.

Note: If you are using an IBM BladeCenter JS21, then you should follow the specific directions for this platform. See IBM BladeCenter JS21: The POWER of Blade Innovation, SG24-7273.


2. Click Generate New Survey. This generates a list of devices, as shown in Figure 5-14.

Figure 5-14 Microcode Survey Results

3. From the Microcode Survey Results list, select one or more items to upgrade. Click the Download link in the task area.


4. Information appears about the selected devices such as the available microcode level and the commands you need in order to install the microcode update, as shown in Figure 5-15. Select the Accept license check box in the work area, and click OK to download the selected microcode and store it on the disk.

Figure 5-15 Download Microcode Updates

5. Log in to the IVM using a terminal session.

6. Run the install commands provided by the GUI in step 3 on page 111.

If you are not able to connect to the GUI of the IVM and a system firmware update is needed, refer to 2.2, “Microcode update” on page 21 for the update procedure with a diagnostic CD.


5.3.2 Capacity on Demand operations

Operations for Capacity on Demand (CoD) are available only through the ASMI menu, as shown in Figure 5-16.

Figure 5-16 CoD menu using ASMI

For more information, refer to 2.4, “Virtualization feature activation” on page 26.

5.4 Logical partition maintenance

Each LPAR hosted by the IVM works like a stand-alone system. For example, you can back up and restore, boot in maintenance mode, and perform an operating system update or a migration.

5.4.1 Backup of the operating system

There are many ways to back up LPARs hosted by the IVM, depending on which operating system is installed.

The main possibilities for the AIX operating system are:

• In general, the mksysb command creates a bootable image of the rootvg volume group, either in a file or onto a tape. If your machine does not have sufficient space, you can use NFS to mount some space from another server system in order to create a system backup to a file (see the sketch after this list); the file systems must be writable, however. Because there is no virtual tape device, a tape backup cannot be done locally for the client partitions, but only by a remotely operated tape device. Such a backup could also be done by using additional software, such as Tivoli Storage Manager.


• The mkcd command creates a system backup image (mksysb) to CD-Recordable (CD-R) or DVD-Recordable (DVD-RAM) media from the system rootvg or from a previously created mksysb image. Multiple volumes are possible for backups over 4 GB. You can create a /mkcd file system that is very large (1.5 GB for CD or 9 GB for DVDs). The /mkcd file system can then be mounted onto the clients when they want to create a backup CD or DVD for their systems.

• Network Installation Management (NIM) creates a system backup image from a logical partition rootvg using the network.

For more information, see:

http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.install/doc/insgdrf/create_sys_backup.htm
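A minimal sketch of the NFS-based backup approach, run as root inside the AIX client partition; the server name nimserver and the paths are hypothetical:

# mount nimserver:/export/mksysb /backup
# mksysb -i /backup/lpar01.mksysb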

5.4.2 Restore of the operating system

The restoration process is exactly the same as on stand-alone systems. The main steps are:

1. Log in to the IVM.

2. Open a virtual terminal for the LPAR to be installed with the mkvt command, providing the ID of the LPAR to be restored (see the sketch after this procedure).

3. Start the LPAR in SMS mode.

4. Select the boot device that was used for the backup such as CD, DVD-RAM, or network.

5. Boot the LPAR.

6. Follow the specific operating system’s restore procedures.
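For step 2, a minimal sketch, assuming the LPAR to be restored has partition ID 2; mkvt opens the virtual terminal and rmvt can force it closed again:

$ mkvt -id 2
$ rmvt -id 2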

5.5 Command logs

All IVM actions are logged into the system. The log contains all the commands that the IVM Web GUI runs and all IVM-specific commands issued by an administrator on the command line.

The log contains the following information for each action:

• User name
• Date and time
• The command, including all of its parameters

The following steps describe how to access the log:

1. Under the Service Management menu in the navigation area, click Application Logs.

2. In the work area, use the provided filters to restrict the log search and then click Apply. This generates the selected log entries, as shown in Figure 5-17.

Note: When creating very large backups (DVD-sized backups larger than 2 GB) with the mkcd command, the file systems must be large-file enabled, and this requires that the ulimit values are set to unlimited.


Figure 5-17 Application command logs

5.6 Integration with IBM Director

The current version of VIOS/IVM, 1.3.0.0, includes an agent that allows full integration and management through the IBM Director console. From this centralized console, you can monitor critical resources and events, with automated alerts or responses to predefined conditions. You also have control over the hardware, to remotely start, stop, and reset machines and LPARs. In addition, you can examine the software and hardware inventory, deploy new applications or updates across the environment, and monitor processes. All of this can now be integrated into a heterogeneous environment. Figure 5-18 shows the Platform Manager and Members view of IBM Director for our IVM server, together with some of the parameters and options of IBM Director.


Figure 5-18 Platform Manager and Members view

The support for IVM directly leverages support for the Hardware Management Console (HMC) that was available in IBM Director 5.10. IVM contains a running CIMOM that has information about the physical system it is managing and all of the LPARs. The CIMOM also forwards event information to IBM Director (see Figure 1-4 on page 10). Because the IVM provides a Web GUI for creating, deleting, powering on, and powering off LPARs, it also enables the client to manage events that have occurred on the system.

How does it work, and how is it integrated? Before IBM Director can manage an IVM system, the system must be added to IBM Director using one of two methods:

• The client can choose to create a new system, in which case the IP address is provided and IBM Director validates it; if validated, IBM Director creates a managed object for the IVM. This managed object appears on the IBM Director console with a padlock icon next to it, indicating that the managed object is locked and needs authentication information to unlock it. The user has to Request Access to the managed object, giving it the User ID and Password.

• The other way is to Discover a Level 0: Agentless System. This causes IBM Director to interrogate systems that are reachable, based on Director’s Discovery Preferences for Level 0. In this case, zero or more managed objects will be created and locked as above; some may be IVM systems, and some might not be. This is determined after access has been granted. This time, the user has to Request Access to the managed object so that IBM Director can determine which ones are IVM managed systems.

Attention: The figure shows an early code level, current at the time of writing. This is subject to change.


After a user Requests Access to a Level 0 managed object and access is granted, an attribute is set to identify it as belonging to an IVM system. When this happens, IBM Director creates a Logical Platform managed object for it and passes to it the authentication details. It also indicates that this managed object is a Platform Manager. After this is done, Director connects to the CIMOM on the IVM system and begins discovering the resources that are being managed by IVM, such as the physical system and each of its LPARs. Each of these resources will also have a managed object representation on the Director Console.

All discovery of the resources starts from the IBM_HwCtrlPoint CIM object. When we have that object, we use the IBM_TaggedCollection, which is an association between the Hardware Control Point and the objects that represent the physical system. This will be an instance of IBMP_CEC_CS class. Before we discover the LPARs, we must provide the Power Status, which we get from the association IBM_AssociatedPowerManagementService. This gives us an object that contains the PowerState property that we use to set the Power State attribute on the CEC and the subsequent LPARs. We then use the association between IBMP_CEC_CS object and IBMP_LPAR_CS objects to get all objects for all LPARs. This gives us the whole topology. Finally, we subscribe to the CIMOM for event notification.

IBM Director has a presence check facility. This is enabled by default, and has a default interval of 15 minutes. Basically, every 15 minutes (or whatever the user chooses), a presence check will be attempted on the managed object for IVM and for all of the managed objects that it is managing. This is done by attempting to connect to the CIMOM on the IVM system. These presence checks could happen either before or after a request access has been completed successfully.

The presence check uses the credentials that the managed object has, so if the presence check is done before the request access, IBM Director will either get a fail to connect or an invalid authentication. If IBM Director gets a fail to connect indication, then the managed object will indicate “offline” and will remain that way until a presence check gets an invalid authentication indication. While the managed object is in the offline state, the user will not be able to request access to it. If IBM Director was receiving a fail to connect indication because of a networking problem or because the hardware was turned off, fixing those problems will cause the managed object to go back to online and locked.

At this point the user can request access. After access is granted, subsequent presence checks will use those validated credentials to connect to the CIMOM. Now the possibilities for presence check should be “fail to connect” or “connect successful.” If the connection is successful, then presence check does a topology scan and verifies that all resources have managed objects and all managed objects represent existing resources. If that is not the case, managed objects are created or deleted to make the two lists agree. Normally events will be created when an LPAR is deleted, for example. When this happens, IBM Director will delete the managed object for this LPAR. Because an LPAR could be deleted when Director server is down for some reason, this validation that is done by presence check would keep things in sync.

IBM Director subscribes to events with the CIMOM on IVM. Some events require action from IBM Director such as power-on or power-off events or creation or deletion of an LPAR, and some require no action. All events that IBM Director receives are recorded in the Director Event Log and those that require action are acted on. For example, if an LPAR was deleted, then Director’s action would be to remove the managed object from the console. If an LPAR was powered on, then the managed object for the LPAR would show the new power state.

IBM Director also provides a means of doing inventory collection. For IVM, we collect physical and virtual information for processors and memory.


Appendix A. IVM and HMC feature summary

Table 5-1 provides a comparison between IVM and the HMC.

Table 5-1 IVM and HMC comparison at a glance

Physical footprint
• IVM: Integrated into the server
• HMC: A desktop or rack-mounted appliance

Installation
• IVM: Installed with the VIOS (optical or network); preinstall option available on some systems
• HMC: Appliance is preinstalled; reinstall using optical media or network is supported

Managed operating systems supported
• IVM: AIX 5L and Linux
• HMC: AIX 5L, Linux, and i5/OS

Virtual console support
• IVM: AIX 5L and Linux virtual console support
• HMC: AIX 5L, Linux, and i5/OS virtual console support

User security
• IVM: Password authentication with support for either full or read-only authorities
• HMC: Password authentication with granular control of task-based authorities and object-based authorities

Network security
• IVM: Firewall support via command line; Web server SSL support
• HMC: Integrated firewall; SSL support for clients and for communications with managed systems

Servers supported
• IVM: System p5 505 and 505Q Express; System p5 510 and 510Q Express; System p5 520 and 520Q Express; System p5 550 and 550Q Express; System p5 560Q Express; eServer p5 510 and 510 Express; eServer p5 520 and 520 Express; eServer p5 550 and 550 Express; OpenPower 710 and 720; BladeCenter JS21
• HMC: All POWER5 and POWER5+ processor-based servers: System p5 and System p5 Express, eServer p5 and eServer p5 Express, OpenPower, and eServer i5

Multiple system support
• IVM: One IVM per server
• HMC: One HMC can manage multiple servers

Redundancy
• IVM: One IVM per server
• HMC: Multiple HMCs can manage the same system for HMC redundancy

Maximum number of partitions supported
• IVM: Firmware maximum
• HMC: Firmware maximum

Uncapped partition support
• IVM: Yes
• HMC: Yes

Dynamic Resource Movement (dynamic LPAR)
• IVM: System p5 support for processing and memory; BladeCenter JS21 support for processing only
• HMC: Yes, full support

I/O support for AIX 5L and Linux
• IVM: Virtual optical, disk, Ethernet, and console
• HMC: Virtual and direct

I/O support for i5/OS
• IVM: None
• HMC: Virtual and direct

Maximum number of virtual LANs
• IVM: Four
• HMC: 4096

Fix/update process for the manager
• IVM: VIOS fixes and updates
• HMC: HMC e-fixes and release updates

Adapter microcode updates
• IVM: Inventory Scout
• HMC: Inventory Scout

Firmware updates
• IVM: VIOS firmware update tools (not concurrent)
• HMC: Service Focal Point with concurrent firmware updates

I/O concurrent maintenance
• IVM: VIOS support for slot and device level concurrent maintenance via the diag hot plug support
• HMC: Guided support in the Repair and Verify function on the HMC

Scripting and automation
• IVM: VIOS command line interface (CLI) and HMC-compatible CLI
• HMC: HMC command line interface

Capacity on Demand
• IVM: No support
• HMC: Full support

User interface
• IVM: Web browser (no local graphical display)
• HMC: WebSM (local or remote)

Workload Management (WLM) groups supported
• IVM: One
• HMC: 254

LPAR configuration data backup and restore
• IVM: Yes
• HMC: Yes

Support for multiple profiles per partition
• IVM: No
• HMC: Yes

Serviceable event management
• IVM: Service Focal Point Light: consolidated management of firmware and management of partition-detected errors
• HMC: Service Focal Point support for consolidated management of operating system and firmware-detected errors

Hypervisor and service processor dump support
• IVM: Dump collection with support for manual dump downloads
• HMC: Dump collection and call home support

Remote support
• IVM: No remote support connectivity
• HMC: Full remote support for the HMC and connectivity for firmware remote support


Appendix B. System requirements

The following are the currently supported systems:

• System p5 505 and 505Q Express
• System p5 510 and 510Q Express
• System p5 520 and 520Q Express
• System p5 550 and 550Q Express
• System p5 560Q Express
• eServer p5 510 and 510 Express
• eServer p5 520 and 520 Express
• eServer p5 550 and 550 Express
• OpenPower 710 and 720
• BladeCenter JS21

The required firmware level is as follows:

• SF235 or later (not applicable to BladeCenter JS21)

The software minimum supported levels are:

• AIX 5L V5.3 or later
• SUSE Linux Enterprise Server 9 for POWER (SLES 9) or later
• Red Hat Enterprise Linux AS 3 for POWER, Update 2 (RHEL AS 3) or later


Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this Redpaper.

IBM Redbooks

For information about ordering these publications, see “How to get IBM Redbooks” on page 127. Note that some of the documents referenced here may be available in softcopy only.

• Advanced POWER Virtualization on IBM System p5, SG24-7940, draft available, expected publication date December 2005
• IBM System p5 505 and 505Q Technical Overview and Introduction, REDP-4079
• IBM eServer p5 510 Technical Overview and Introduction, REDP-4001
• IBM eServer p5 520 Technical Overview and Introduction, REDP-9111
• IBM eServer p5 550 Technical Overview and Introduction, REDP-9113
• Managing AIX Server Farms, SG24-6606
• Partitioning Implementations for IBM eServer p5 Servers, SG24-7039
• Practical Guide for SAN with pSeries, SG24-6050
• Problem Solving and Troubleshooting in AIX 5L, SG24-5496
• Understanding IBM eServer pSeries Performance and Sizing, SG24-4810

Other publications

These publications are also relevant as further information sources:

• RS/6000 and eServer pSeries Adapters, Devices, and Cable Information for Multiple Bus Systems, SA38-0516, contains information about adapters, devices, and cables for your system.
• RS/6000 and eServer pSeries PCI Adapter Placement Reference for AIX, SA38-0538, contains information regarding slot restrictions for adapters that can be used in this system.
• System Unit Safety Information, SA23-2652, contains translations of safety information used throughout the system documentation.
• IBM eServer Planning, SA38-0508, contains site and planning information, including power and environment specifications.

Online resources

These Web sites and URLs are also relevant as further information sources:

• AIX 5L operating system maintenance packages downloads
http://www.ibm.com/servers/eserver/support/pseries/aixfixes.html
• IBM eServer p5, pSeries, OpenPower and IBM RS/6000 Performance Report
http://www.ibm.com/servers/eserver/pseries/hardware/system_perf.html
• IBM TotalStorage Expandable Storage Plus
http://www.ibm.com/servers/storage/disk/expplus/index.html
• IBM TotalStorage Mid-range Disk Systems
http://www.ibm.com/servers/storage/disk/ds4000/index.html
• IBM TotalStorage Enterprise disk storage
http://www.ibm.com/servers/storage/disk/enterprise/ds_family.html
• IBM Virtualization Engine
http://www.ibm.com/servers/eserver/about/virtualization/
• Advanced POWER Virtualization on IBM eServer p5
http://www.ibm.com/servers/eserver/pseries/ondemand/ve/resources.html
• Virtual I/O Server supported environments
http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html
• Hardware Management Console support information
http://techsupport.services.ibm.com/server/hmc
• IBM LPAR Validation Tool (LVT), a PC-based tool intended to assist you in logical partitioning
http://www.ibm.com/servers/eserver/iseries/lpar/systemdesign.htm
• Customer Specified Placement and LPAR Delivery
http://www.ibm.com/servers/eserver/power/csp/index.html
• SUMA on AIX 5L
http://techsupport.services.ibm.com/server/suma/home.html
• Linux on IBM eServer p5 and pSeries
http://www.ibm.com/servers/eserver/pseries/linux/
• SUSE Linux Enterprise Server 9
http://www.novell.com/products/linuxenterpriseserver/
• Red Hat Enterprise Linux details
http://www.redhat.com/software/rhel/details/
• IBM eServer Linux on POWER Overview
http://www.ibm.com/servers/eserver/linux/power/whitepapers/linux_overview.html
• Autonomic computing on IBM eServer pSeries servers
http://www.ibm.com/autonomic/index.shtml
• IBM eServer p5 AIX 5L Support for Micro-Partitioning and Simultaneous Multithreading white paper
http://www.ibm.com/servers/aix/whitepapers/aix_support.pdf
• Hardware documentation
http://publib16.boulder.ibm.com/pseries/en_US/infocenter/base/
• IBM eServer Information Center
http://publib.boulder.ibm.com/eserver/
• IBM eServer pSeries support
http://www.ibm.com/servers/eserver/support/pseries/index.html
• IBM eServer support: Tips for AIX 5L administrators
http://techsupport.services.ibm.com/server/aix.srchBroker
• Linux for IBM eServer pSeries
http://www.ibm.com/servers/eserver/pseries/linux/
• Microcode Discovery Service
http://techsupport.services.ibm.com/server/aix.invscoutMDS
• POWER4 system microarchitecture, comprehensively described in the IBM Journal of Research and Development, Vol 46, No. 1, January 2002
http://www.research.ibm.com/journal/rd46-1.html
• SCSI T10 Technical Committee
http://www.t10.org
• Microcode Downloads for IBM eServer i5, OpenPower, p5, pSeries, and RS/6000 systems
http://techsupport.services.ibm.com/server/mdownload
• VIO Server and PLM command descriptions
http://publib.boulder.ibm.com/infocenter/eserver/v1r3s/index.jsp?topic=/iphb1/iphb1_vios_commandslist.htm

How to get IBM Redbooks

You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications, and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:

ibm.com/redbooks

Help from IBM

IBM Support and downloads

ibm.com/support

IBM Global Services

ibm.com/services

Back cover

Integrated Virtualization Manager on IBM System p5

The IBM Virtual I/O Server Version 1.2 provided a hardware management function called the Integrated Virtualization Manager (IVM). It handled the partition configuration on selected IBM System p5, IBM eServer p5, and IBM OpenPower systems without the need for dedicated hardware, such as a Hardware Management Console. The latest version of VIOS, 1.3.0.0, adds a number of new functions, such as support for dynamic logical partitioning for memory and processors in managed systems, task manager monitor for long-running tasks, security additions such as viosecure and firewall, and other improvements.

The Integrated Virtualization Manager enables a more cost-effective solution for consolidation of multiple partitions onto a single server. With its intuitive, browser-based interface, the Integrated Virtualization Manager is easy to use and significantly reduces the time and effort required to manage virtual devices and partitions.

This IBM Redpaper provides an introduction to the Integrated Virtualization Manager, describing its architecture and showing how to install and configure a partitioned server using its capabilities.
