Redbooks
Front cover
IBM PowerVC Version 1.2.3
Introduction and Configuration
Marco Barboni
Guillermo Corti
Benoit Creau
Liang Hou Xu
International Technical Support Organization
IBM PowerVC Version 1.2.3: Introduction and Configuration
October 2015
SG24-8199-02
© Copyright International Business Machines Corporation 2014, 2015. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Third Edition (October 2015)
This edition applies to version 1, release 2, modification 3 of IBM® Power Virtualization Center
Standard Edition (5765-VCS).
Note: Before using this information and the product it supports, read the information in “Notices” on
page xv.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xv
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
IBM Redbooks promotions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Authors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xx
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Chapter 1. PowerVC introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 PowerVC overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 PowerVC functions and advantages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 OpenStack overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 The OpenStack Foundation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 OpenStack framework and projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.3 PowerVC high-level architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 PowerVC Standard Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 PowerVC adoption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Chapter 2. PowerVC versions and releases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1 Previous versions and milestones. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.1 PowerVC release to OpenStack edition cross-reference . . . . . . . . . . . . . . . . . . . 10
2.1.2 IBM PowerVC first release (R1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.3 IBM PowerVC version 1.2.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.4 IBM PowerVC version 1.2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 IBM PowerVC version 1.2.2 enhancements and new features. . . . . . . . . . . . . . . . . . . 11
2.2.1 Image management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.2 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.3 Host maintenance mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.4 Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.5 Cisco Fibre Channel support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.6 XIV storage support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.7 EMC storage support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.8 Virtual SCSI support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.9 Network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.10 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.3 New in IBM PowerVC version 1.2.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.3.1 Major software changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3.2 Significant scaling improvement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.3 Redundant HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.4 Error scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3.5 Host groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3.6 Advanced placement policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.7 Multiple disk capture and deployment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.8 PowerVC remote restart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3.9 Cloud-init for the latest service pack of AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Chapter 3. PowerVC installation planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.1 IBM PowerVC requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.1 Hardware and software requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.2 PowerVC Standard Edition requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.3 Other hardware compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2 Host and partition management planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2.1 Physical server configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2.2 HMC or PowerKVM planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2.3 Virtual I/O Server planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3 Placement policies and templates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3.1 Host groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3.2 Placement policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.3.3 Template types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3.4 Information that is required for compute template planning . . . . . . . . . . . . . . . . . 42
3.4 PowerVC storage access SAN planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.4.1 vSCSI storage access. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.4.2 NPIV storage access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.4.3 Shared storage pool: vSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.4.4 Storage access in PowerVC Standard Edition managing PowerKVM . . . . . . . . . 50
3.5 Storage management planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.5.1 PowerVC terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.5.2 Storage templates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.5.3 Storage connectivity groups and tags. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.6 Network management planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.6.1 Multiple network planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.6.2 Shared Ethernet adapter planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.7 Planning users and groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.7.1 User management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.7.2 Group management planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.8 Security management planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.8.1 Ports that are used by IBM Power Virtualization Center. . . . . . . . . . . . . . . . . . . . 74
3.8.2 Providing a certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.9 Product information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Chapter 4. PowerVC installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.1 Setting up the PowerVC environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.1.1 Create the virtual machine to host PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.1.2 Download and install Red Hat Enterprise Linux . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.1.3 Customize Red Hat Enterprise Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.2 Installing PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.3 Uninstalling PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.4 Upgrading PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.4.1 Before you begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.4.2 Upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.5 Updating PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.6 PowerVC backup and recovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.6.1 Backing up PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.6.2 Recovering PowerVC data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.6.3 Status messages during backup and recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.6.4 Considerations about backup and recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.7 PowerVC command-line interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.7.1 Exporting audit data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.8 Virtual machines that are managed by PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.8.1 Linux on Power virtual machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.8.2 IBM AIX virtual machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.8.3 IBM i virtual machines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Chapter 5. PowerVC Standard Edition for managing PowerVM . . . . . . . . . . . . . . . . . . 97
5.1 PowerVC graphical user interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.2 Introduction to PowerVC setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.3 Connecting to PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
5.4 Host setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.5 Host Groups setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
5.6 Hardware Management Console management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5.6.1 Add an HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.6.2 Changing HMC credentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5.6.3 Change the HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.7 Storage and SAN fabric setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
5.7.1 Add a storage controller to PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5.7.2 Add SAN fabric to PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.8 Storage port tags setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5.9 Storage connectivity group setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
5.10 Storage template setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5.11 Storage volume setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
5.12 Network setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
5.13 Compute template setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
5.14 Environment verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5.14.1 Verification report validation categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.15 Management of virtual machines and images. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
5.15.1 Virtual machine onboarding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
5.15.2 Refresh the virtual machine view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
5.15.3 Start the virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
5.15.4 Stop the virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
5.15.5 Capture a virtual machine image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.15.6 Deploy a new virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
5.15.7 Add virtual Ethernet adapters for virtual machines . . . . . . . . . . . . . . . . . . . . . . 165
5.15.8 Add collocation rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.15.9 Resize the virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
5.15.10 Migration of virtual machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
5.15.11 Host maintenance mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5.15.12 Restart virtual machines remotely from a failed host . . . . . . . . . . . . . . . . . . . 175
5.15.13 Attach a volume to the virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
5.15.14 Detach a volume from the virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
5.15.15 Reset the state of a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5.15.16 Delete images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5.15.17 Unmanage a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5.15.18 Delete a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Chapter 6. PowerVC Standard Edition for managing PowerKVM . . . . . . . . . . . . . . . . 187
6.1 Install PowerVC Standard to manage PowerKVM . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.2 Set up PowerVC Standard managing PowerKVM . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.2.1 Add the PowerKVM host. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
6.2.2 Add storage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
6.2.3 Add a network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
6.3 Host group setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
6.4 Import ISO images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
6.4.1 Importing ISO images by using the command-line interface. . . . . . . . . . . . . . . . 202
6.4.2 Importing ISO images by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
6.4.3 Deploying an RHEL ISO image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
6.5 Capture a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
6.5.1 Install cloud-init on the virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
6.5.2 Change devices to be mounted by name or UUID . . . . . . . . . . . . . . . . . . . . . . . 215
6.5.3 Capture the virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
6.6 Deploy images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
6.7 Resize virtual machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
6.8 Suspend and resume virtual machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
6.9 Restart a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
6.10 Migrate virtual machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
6.11 Restarting virtual machines remotely . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
6.12 Delete virtual machines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
6.13 Create and attach volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
6.14 Attach volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Chapter 7. PowerVC lab environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
7.1 PowerVC Standard Edition lab environment for managing PowerVM . . . . . . . . . . . . 234
7.1.1 Hardware Management Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
7.1.2 Power Systems hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
7.1.3 Storage infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
7.1.4 Storage configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
7.1.5 Storage connectivity groups and port tagging. . . . . . . . . . . . . . . . . . . . . . . . . . . 239
7.1.6 Software stack for PowerVC lab environment. . . . . . . . . . . . . . . . . . . . . . . . . . . 242
7.2 PowerVC Standard managing PowerKVM lab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Figures
1-1 OpenStack framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1-2 OpenStack main components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1-3 PowerVC implementation on top of OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3-1 VIOS settings that need to be managed by PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . 37
3-2 Modifying maximum virtual adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3-3 Host group sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3-4 Migration of a partition by using a placement policy . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3-5 Memory region size view on the HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3-6 PowerVC Standard Edition storage access by using vSCSI. . . . . . . . . . . . . . . . . . . . . 48
3-7 PowerVC Standard Edition storage access by using NPIV . . . . . . . . . . . . . . . . . . . . . 49
3-8 PowerVC Standard Edition storage access by using an SSP . . . . . . . . . . . . . . . . . . . 50
3-9 PowerVC Standard Edition managing PowerKVM storage access . . . . . . . . . . . . . . . 51
3-10 PowerVC storage providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3-11 Fabrics window that lists a switch with a switch GUI . . . . . . . . . . . . . . . . . . . . . . . . . 53
3-12 Storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3-13 Storage templates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3-14 Storage template definition: Advanced settings, thin-provisioned . . . . . . . . . . . . . . . 57
3-15 Volume creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3-16 List of storage connectivity groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3-17 Storage connectivity groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3-18 Content of a storage connectivity group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3-19 Storage connectivity groups and tags. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3-20 Examples of storage connectivity group deployments . . . . . . . . . . . . . . . . . . . . . . . . 63
3-21 Users information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3-22 Detailed user account information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3-23 Groups tab view under Users on the PowerVC management host. . . . . . . . . . . . . . . 72
3-24 Detailed view of viewer user group on the management host . . . . . . . . . . . . . . . . . . 73
4-1 Maintenance message for logged-in users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4-2 Maintenance message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5-1 Home page access to a group of functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5-2 PowerVC Login window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
5-3 Initial system check. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5-4 HMC connection information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5-5 PowerVC Add Hosts dialog window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5-6 Managed hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
5-7 PowerVC shows the managed hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
5-8 Host information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
5-9 Host Groups page. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
5-10 Create Host Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5-11 Add HMC Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5-12 Changing HMC credentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5-13 Change HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5-14 Select the new HMC for hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5-15 Adding extra storage providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5-16 Add Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5-17 PowerVC Standard Edition window to select a storage pool . . . . . . . . . . . . . . . . . . 113
5-18 Add Fabric window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5-19 PowerVC Standard Edition Add Fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
5-20 PowerVC Storage providers tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
5-21 PowerVC Fibre Channel port configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
5-22 PowerVC Storage Connectivity Groups dialog window . . . . . . . . . . . . . . . . . . . . . . 117
5-23 PowerVC Add Member to storage connectivity group window . . . . . . . . . . . . . . . . . 118
5-24 Disabling a storage connectivity group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5-25 IBM XIV storage template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5-26 PowerVC Create Storage Template window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
5-27 PowerVC Create Storage Template Advanced Settings . . . . . . . . . . . . . . . . . . . . . 122
5-28 PowerVC Storage Templates page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
5-29 PowerVC Create Volume window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
5-30 List of PowerVC storage volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
5-31 PowerVC network definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
5-32 IP Pool tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
5-33 PowerVC Create Compute Template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
5-34 PowerVC Compute Templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5-35 PowerVC interface while environment verification is in progress . . . . . . . . . . . . . . 129
5-36 Verification Results view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5-37 Example of a validation message for an error status . . . . . . . . . . . . . . . . . . . . . . . . 132
5-38 Example of a validation message for an informational message status . . . . . . . . . . 133
5-39 Operations icons on the Virtual Machines view . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
5-40 Selecting a host window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
5-41 Selected hosts window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5-42 Collapse and expand sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5-43 Adding existing VMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5-44 Example of an informational pop-up message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5-45 Virtual machine detailed view with collapsed sections . . . . . . . . . . . . . . . . . . . . . . . 138
5-46 Virtual machine detailed view of expanded Information section . . . . . . . . . . . . . . . . 139
5-47 Virtual machine detailed view of expanded Specifications section . . . . . . . . . . . . . . 140
5-48 Virtual machine detailed view of expanded Network Interfaces section . . . . . . . . . . 141
5-49 Detailed Network Overview tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
5-50 Virtual machine Refresh icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
5-51 Virtual machine fully started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
5-52 Virtual machine powered off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5-53 Capture window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
5-54 Capture boot and data volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
5-55 Capture window confirmation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
5-56 Image snapshot in progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
5-57 Image creation in progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5-58 Storage volumes view. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5-59 Expanded information for a captured image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5-60 Volumes section and Virtual Machines section. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
5-61 Image capture that is selected for deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
5-62 Information to deploy an image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5-63 Newly deployed virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5-64 Add an Ethernet adapter for a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5-65 Create Collocation Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5-66 Virtual Machine resize. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
5-67 VM Resize dialog window to select a compute template . . . . . . . . . . . . . . . . . . . . . 167
5-68 Exceeded value for resizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
5-69 Migrate a selected virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
5-70 Select target server before the migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
5-71 Virtual machine migration in progress. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5-72 Virtual machine migration finished . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5-73 Enter Maintenance Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
5-74 Migrate virtual machines to other hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5-75 Exit Maintenance Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5-76 Create a compute template with enabled remote restart capability . . . . . . . . . . . . . 176
5-77 Correct remote restart state under the Specifications section . . . . . . . . . . . . . . . . . 177
5-78 Remotely Restart Virtual Machines option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
5-79 Remotely Restart Virtual Machines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
5-80 Destination host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
5-81 Attaching a new volume to a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
5-82 Attached Volumes tab view. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
5-83 Detach a volume from a virtual machine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5-84 Confirmation window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5-85 Resetting the virtual machine’s state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5-86 State reset confirmation window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5-87 Image selected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5-88 Delete an image confirmation window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5-89 Unmanage an existing virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5-90 Delete a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
5-91 Confirmation window to delete a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6-1 PowerVC Login window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
6-2 PowerVC Home page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
6-3 PowerVC Add Host window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
6-4 Informational messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
6-5 Host added successfully . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
6-6 PowerVC managing PowerKVM hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
6-7 Detailed Hosts view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
6-8 PowerKVM host information and capacity section . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
6-9 PowerKVM Virtual Switches and Virtual Machines sections. . . . . . . . . . . . . . . . . . . . 193
6-10 Add a storage device to PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
6-11 SVC storage pool choice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
6-12 The new SVC storage provider. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
6-13 Add a network to the PowerVC configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
6-14 Network is configured now . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
6-15 List of virtual switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
6-16 Edit virtual switch window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
6-17 Message about conflicts with the updated virtual switch selections . . . . . . . . . . . . . 200
6-18 Details of the virtual switch components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
6-19 Create a host group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
6-20 Upload Image window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
6-21 ISO images that were imported to PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
6-22 Status of the imported ISO image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
6-23 RHEL ISO image details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
6-24 Select the image for deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
6-25 Virtual machine deployment parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
6-26 Deployment in-progress message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
6-27 Successful deployment verification message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
6-28 Virtual Machines view with highlighted State and Health columns . . . . . . . . . . . . . . 208
6-29 Detailed information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
6-30 Detailed information with expanded or collapsed sections . . . . . . . . . . . . . . . . . . . . 210
6-31 Stopping the virtual machine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
6-32 Virtual machine started and active . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
6-33 Warning message before you capture the VM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
6-34 Capture window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
6-35 Snapshot in-progress message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
6-36 Status from the Virtual Machines view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
6-37 Snapshot status from the Images view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
6-38 General and network sections of the window to deploy a VM . . . . . . . . . . . . . . . . . 220
6-39 Activation Input section of the window to deploy a virtual machine . . . . . . . . . . . . . 221
6-40 Deployment is started message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
6-41 Virtual Machines view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
6-42 Resize virtual machine window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
6-43 Suspend or pause a virtual machine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
6-44 Restart a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
6-45 Migrate a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
6-46 Migrating a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
6-47 Remotely Restart Virtual Machines option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
6-48 Select virtual hosts to restart remotely . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
6-49 Virtual machines that were restarted remotely . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
6-50 Delete a virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
6-51 Create Volume window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
6-52 Attaching new volume to a virtual machine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
6-53 Attach an existing volume to this virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
7-1 PowerVC Standard Edition hardware lab for managing PowerVM. . . . . . . . . . . . . . . 234
7-2 Physical to logical management layers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
7-3 Shared storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
7-4 Storage configuration that was set for this publication . . . . . . . . . . . . . . . . . . . . . . . . 239
7-5 Storage groups and tagged ports configuration lab . . . . . . . . . . . . . . . . . . . . . . . . . . 240
7-6 Storage connectivity groups in the lab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
7-7 Fibre Channel port tags that are used in the lab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
7-8 PowerVC Standard managing PowerKVM lab setup . . . . . . . . . . . . . . . . . . . . . . . . . 243
Tables
2-1 PowerVC releases cross-referenced to OpenStack versions . . . . . . . . . . . . . . . . . . . . 10
2-2 Updated support matrix for SSP, NPIV, and vSCSI storage paths in PowerVC version 1.2.2 . . . 13
2-3 New functions that are introduced in PowerVC 1.2.3 . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2-4 Scaling capabilities for PowerKVM and PowerVM in PowerVC . . . . . . . . . . . . . . . . . . 23
2-5 List of supported and unsupported multiple disk combinations . . . . . . . . . . . . . . . . . . 26
3-1 Hardware and OS requirements for PowerVC Standard Edition . . . . . . . . . . . . . . . . . 31
3-2 Minimum resource requirements for the PowerVC VM. . . . . . . . . . . . . . . . . . . . . . . . . 31
3-3 Supported activation methods for managed hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3-4 HMC requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3-5 Supported virtualization platforms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3-6 Supported network hardware and software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3-7 Supported storage hardware for PowerVM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3-8 Supported storage hardware for PowerKVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3-9 Supported security software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3-10 Processor compatibility modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3-11 Preferred practices for shared Ethernet adapter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4-1 RHEL packages that relate to PowerVC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4-2 Options for the PowerVC install command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4-3 Available options for the powervc-uninstall command . . . . . . . . . . . . . . . . . . . . . . . . . 85
4-4 Options for the powervc-backup command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4-5 Options for the powervc-restore command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4-6 PowerVC available commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4-7 Commands for PowerVC Standard for managing PowerKVM . . . . . . . . . . . . . . . . . . . 93
4-8 Options for the powervc-audit-export command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5-1 Information section fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
5-2 Specifications section’s fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
5-3 Details section’s fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
5-4 Modules and descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
5-5 Description of the fields in the Information section . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5-6 Description of the fields in the Specifications section . . . . . . . . . . . . . . . . . . . . . . . . . 159
5-7 Host states during the transition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
7-1 HMC that was used. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
7-2 Hardware test environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
7-3 Storage switch specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
7-4 IBM SAN Volume Controller specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
7-5 Software versions and releases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Examples
2-1 The chdef commands to set the reserve policy and algorithm on new disks . . . . . . . . 17
2-2 How to check whether a host can use remote restart from PowerVC. . . . . . . . . . . . . . 26
2-3 Example of clouddev and ghostdev output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2-4 Obtain the values that are set on the ghostdev and clouddev attributes . . . . . . . . . . . 27
3-1 Adding an admin user account with the useradd command . . . . . . . . . . . . . . . . . . . . . 68
3-2 Verify users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3-3 Updating the admin user account with the usermod command . . . . . . . . . . . . . . . . . . 70
4-1 Installing the gettext package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4-2 Installing PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4-3 Installation completed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4-4 Uninstallation successful. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4-5 Update successfully completed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4-6 Example of PowerVC backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4-7 Mismatch between backup and recovery environments . . . . . . . . . . . . . . . . . . . . . . . . 89
4-8 Example of PowerVC recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4-9 powervc-audit command use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4-10 IBM Installation Toolkit sample output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4-11 RMC status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5-1 scratchpad.txt file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
5-2 scratchpad.txt file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
5-3 Specific device names for the /etc/fstab file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
5-4 /etc/lilo.conf file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
5-5 Specific devices names for the /etc/lilo.conf file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
5-6 Commands to enable the activation engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
5-7 Output from the /opt/ibm/ae/AE.sh -R command . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
6-1 Importing a Red Hat ISO image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
6-2 ISO image location and naming in PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
6-3 virsh list --all output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
6-4 Virtual console that shows Disc Found message . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
6-5 Symbolic links mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
6-6 Sample device names before the change. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
6-7 Sample device names after the change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
6-8 lilo.conf file before change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
6-9 lilo.conf file after change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®
DB2®
Enterprise Storage Server®
FlashCopy®
GDPS®
Geographically Dispersed Parallel Sysplex™
GPFS™
HACMP™
IBM®
IBM SmartCloud®
IBM Spectrum™
Parallel Sysplex®
POWER®
Power Systems™
POWER6®
POWER6+™
POWER7®
POWER7 Systems™
POWER7+™
POWER8®
PowerHA®
PowerVM®
Redbooks®
Redbooks (logo) ®
Storwize®
SystemMirror®
XIV®
The following terms are trademarks of other companies:
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft and the Windows logo are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
Other company, product, or service names may be trademarks or service marks of others.
Preface
IBM® Power Virtualization Center (PowerVC™) is an advanced enterprise virtualization
management offering for IBM® Power Systems™, which is based on the OpenStack
framework. This IBM Redbooks® publication introduces PowerVC and helps you understand
its functions, planning, installation, and setup.
Starting with PowerVC version 1.2.2, the Express Edition offering is no longer available and
the Standard Edition is the only offering. PowerVC supports both large and small
deployments, either by managing IBM PowerVM® that is controlled with the Hardware
Management Console (HMC) or by managing PowerKVM directly. PowerVC can manage IBM
AIX®, IBM i, and Linux workloads that run on POWER® hardware, including IBM PurePower
systems.
PowerVC editions include the following features and benefits:
Virtual image capture, deployment, and management
Policy-based virtual machine (VM) placement to improve resource utilization
Management of real-time optimization and VM resilience to increase productivity
VM Mobility with placement policies to reduce the burden on IT staff in a simple-to-install
and easy-to-use graphical user interface (GUI)
An open and extensible PowerVM management system that you can adapt as you need
and that runs in parallel with your existing infrastructure, preserving your investment
A management system for existing PowerVM deployments
You will also find all the details about how we set up the lab environment that is used in this
book.
This book is for experienced users of IBM PowerVM and other virtualization solutions who
want to understand and implement the next generation of enterprise virtualization
management for Power Systems.
Unless stated otherwise, the content of this book refers to versions 1.2.2 and 1.2.3 of
IBM PowerVC.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, Poughkeepsie Center.
Marco Barboni is an IT Specialist at the IBM Rome Software Lab in Italy. He has 4 years of
experience in cloud virtualization and management in the IBM Power infrastructures field. He
holds a degree in Information Technology from “Roma Tre” University. His areas of expertise
include AIX administration, virtualization on Power, HMC, IBM Power Systems, IBM Linux on
Power, and also IBM Systems Director and IBM PowerVC infrastructure management.
Guillermo Corti is an IT Architect at IBM Argentina. He has been with IBM since 2004 and
has 20 years of experience with Power Systems and AIX. He has a degree in Systems from
Moron University and 11 years of experience working in service delivery for North American
accounts. His areas of expertise include Power Systems, AIX, IBM Linux on Power, and IBM
PowerVM solutions.
Benoit Creau is an AIX Systems Engineer who works in large French banks (currently BNP
Paribas). He has six years of experience managing client production environments with IBM
Power Systems. His areas of expertise include AIX, Virtual I/O Servers, Power Systems, and
PowerVC. He currently focuses on integrating new technology (IBM POWER8® and
PowerVC) in client environments. He has participated in the community by writing a blog
about Power Systems and related subjects for more than 5 years (chmod666.org).
Liang Hou Xu, PMP, is an IT Architect at IBM China. He has 16 years of experience in Power
Systems and four years of experience in the cloud field. He holds a degree in Engineering
from Tsinghua University. His areas of expertise include Power Systems, AIX, Linux, cloud,
IBM DB2®, C programming, and Project Management.
The project that created this book was managed by:
Scott Vetter, PMP
Thanks to the following people for their contributions to this project:
Dave Archer, Senthil Bakthavachalam, David Bennin, Eric Brown, Ella Buslovich,
Chun Shi Chang, Rich Conway, Joe Cropper, Rebecca Dimock, William Edmonds,
Edward Fink, Nigel Griffiths, Nicolas Guérin, Kyle Henderson, Philippe Hermes, Amy Hieter,
Greg Hintermeister, Bhrugubanda Jayasankar, Liang Jiang, Rishika Kedia,
Sailaja Keshireddy, Yan Koyfman, Jay Kruemcke, Samuel D. Matzek, John R. Niemi,
Geraint North, Sujeet Pai, Atul Patel, Carl Pecinovski, Taylor Peoples, Antoni Pioli,
Jeremy Salsman, Douglas Sanchez, Edward Shvartsman, Anna Sortland, Jeff Tenner,
Drew Thorstensen, Ramesh Veeramala, Christine Wang, and Michael Williams
Thanks to the authors of the previous editions of this book. The authors of the first edition,
IBM PowerVC Version 1.2.0 and 1.2.1 Introduction and Configuration, which was published in
October 2014, were Bruno Blanchard, Guillermo Corti, Sylvain Delabarre, Ho Jin Kim, Ondrej
Plachy, Marcos Quezada, and Gustavo Santos.
Now you can become a published author, too!
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Chapter 1. PowerVC introduction
IBM® Power Virtualization Center Standard Edition (PowerVC) is the next generation of
enterprise virtualization management tools for IBM Power Systems. PowerVC incorporates a
powerful yet simple and intuitive GUI and deep integration with IBM PowerVM virtualization
technologies. PowerVC simplifies the management of the virtualization for Power Systems
servers that run the IBM AIX and Linux operating systems. It now also supports the IBM i
operating system, which brings the PowerVC virtualization management functions to IBM i
clients.
This publication provides introductory and configuration information for PowerVC. After we
present an overview of PowerVC in this first chapter, we cover the following topics in
subsequent chapters:
Release reviews in Chapter 2, “PowerVC versions and releases” on page 9
Planning information in Chapter 3, “PowerVC installation planning” on page 29
Installation guidelines in Chapter 4, “PowerVC installation” on page 77
General configuration and setup that are common to all variants of PowerVC in Chapter 5,
“PowerVC Standard Edition for managing PowerVM” on page 97
Information that is specific to using PowerVC Standard for managing PowerKVM in
Chapter 6, “PowerVC Standard Edition for managing PowerKVM” on page 187
A description of the test environment that was used for the examples in Chapter 7,
“PowerVC lab environment” on page 233
1.1 PowerVC overview
This publication is for system administrators who are familiar with the concepts included in
these IBM Redbooks publications:
IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
IBM PowerVM Virtualization Managing and Monitoring, SG24-7590
PowerVC simplifies the management of virtual resources in your Power Systems
environment.
After the product code is installed, the PowerVC interface guides the system
administrator through three simple configuration steps to register physical hosts, storage
providers, and network resources and to start capturing and intelligently deploying AIX, IBM i,
and Linux virtual machines (VMs). PowerVC also helps the system administrator perform the
following activities:
Create VMs and resize their CPU and memory.
Attach disk volumes to those VMs.
Import existing VMs and volumes so that they can be managed by PowerVC.
Monitor the use of resources in your environment.
Migrate VMs while they are running (live migration between physical servers).
Deploy images quickly to create new VMs that meet the demands of ever-changing business
needs. At the time of writing this publication, PowerVC can deploy VMs that use AIX, IBM i, or
Linux operating systems. PowerVC is built on OpenStack, which is open source software that
controls large pools of server, storage, and networking resources throughout a data center.
PowerVC uses IBM Platform Resource Scheduler (PRS) to extend the OpenStack set of
technologies to Power Systems environments with enhanced security, intelligent placement of
VMs, and other advanced policy-based features that are required on enterprise clouds.
PRS is a proven technology that is used in grid and scaled-out computing environments by
more than 2,000 clients. Its open and extensible architecture supports reservations,
over-subscription policies, and user-defined policies. PRS is also energy-aware. For more
information about PRS, see this website:
http://www.ibm.com/systems/platformcomputing/products/rs/
1.1.1 PowerVC functions and advantages
Why PowerVC? Why do we need another virtualization management offering? When more
than 70% of IT budgets are spent on operations and maintenance, IT clients legitimately
expect vendors to focus their new development efforts on reducing this cost and fostering
innovation within IT departments.
PowerVC gives IBM Power Systems clients advantages:
It is deeply integrated with Power Systems.
It provides virtualization management tools.
It eases the integration of servers that are managed by PowerVM or PowerKVM in
automated IT environments, such as clouds.
It is a building block of IBM Infrastructure as a Service (IaaS), based on Power Systems.
PowerVC is an addition to the existing PowerVM set of enterprise virtualization technologies
that provide virtualization management. It is based on open standards and integrates server
management with storage and network management.
Because PowerVC is based on the OpenStack initiative, Power Systems can be managed by
tools that are compatible with OpenStack standards. When a system is controlled by
PowerVC, it can be managed in either of two ways:
By a system administrator by using the PowerVC GUI
By higher-level tools that call PowerVC by using standard OpenStack application
programming interfaces (APIs)
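As a minimal sketch of that second path, the following shell commands request a token from
Keystone and then list the managed VMs through the Nova API. This sketch assumes the
standard OpenStack endpoints of that era (Keystone on port 5000, Nova on port 8774) and
the PowerVC default project name ibm-default; the host name, credentials, and tenant ID are
placeholders:

   # Request a token from Keystone (identity v3); the token is returned
   # in the X-Subject-Token response header.
   TOKEN=$(curl -ks -i https://powervc.example.com:5000/v3/auth/tokens \
     -H "Content-Type: application/json" \
     -d '{"auth": {"identity": {"methods": ["password"],
           "password": {"user": {"name": "root",
             "domain": {"name": "Default"}, "password": "passw0rd"}}},
         "scope": {"project": {"name": "ibm-default",
             "domain": {"name": "Default"}}}}' \
     | awk '/^X-Subject-Token:/ {print $2}' | tr -d '\r')

   # List the virtual machines that PowerVC manages through the Nova API.
   curl -ks https://powervc.example.com:8774/v2/<tenant-id>/servers \
     -H "X-Auth-Token: $TOKEN"

Because these are standard OpenStack calls, the same requests work unchanged against any
OpenStack cloud of the same release; only the endpoint and credentials differ.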
PowerVC is an option that is between the Hardware Management Console (HMC) and IBM
SmartCloud® IaaS offerings. It provides a systems management product that enterprise
clients require to effectively manage the advanced features that are offered by IBM premium
hardware. It reduces resource use and manages workloads for performance and availability.
In the following sections, we introduce the concepts of OpenStack to help you understand the
terminology that is used in this book.
1.2 OpenStack overview
PowerVC is based on the OpenStack initiative. The following sections provide an overview of
OpenStack.
1.2.1 The OpenStack Foundation
OpenStack is an IaaS solution for the cloud computing domain, and the project is led by
the OpenStack Foundation. The foundation is a non-commercial organization that promotes
the OpenStack project and helps the developers within the OpenStack community. Many
major IT companies contribute to the OpenStack Foundation. Check their website for more
information:
http://www.openstack.org/foundation/
IBM is an active member of the OpenStack community. Multiple IBM divisions have key roles
as members. IBM contributes through code contributions, governance, and support within its
products.
OpenStack is no-charge, open source software that is released under the terms of the
Apache license.
1.2.2 OpenStack framework and projects
The goal of OpenStack is to provide an open source cloud computing platform for public and
private clouds.
OpenStack has a modular architecture. Several projects are underway in parallel to develop
these components:
Nova Nova manages the lifecycle and operations of hosts and compute resources.
Swift Swift covers object-oriented storage. It is meant for distributed high availability
in virtual containers.
Cinder This project covers the management of block storage, such as IBM Storwize®
or IBM SAN Volume Controller.
Glance Glance is the image service that provides discovery, registration, and delivery
services for virtual disk images.
Horizon This dashboard project is the web service management and user interface to
integrate various OpenStack services.
Neutron Neutron is the network management service for OpenStack. Formerly named
Quantum, Neutron covers various aspects, such as IP address management.
Keystone The Keystone focus is on security, identity, and authentication services.
Ceilometer The Ceilometer project is for metering. Ceilometer provides measurement
and billing across all OpenStack components.
You can find complete descriptions of the main OpenStack projects on the Wiki page of their
website:
https://wiki.openstack.org/wiki/Main_Page
Figure 1-1 shows a high-level view of the OpenStack framework and main components and
how they can be accessed by applications that use the OpenStack computing platform APIs.
Figure 1-1 OpenStack framework
Figure 1-2 provides details about the main components of the OpenStack framework. It also
contains a few explanations of the roles of these components. The illustration shows that one
of the main benefits of OpenStack is that it provides a standard interface for hardware.
Hardware vendors provide OpenStack compatible drivers for their devices. These drivers can
then be used by the other OpenStack components to act on the hardware devices.
Figure 1-2 OpenStack main components
The figure depicts the OpenStack API layer beneath a higher-level management ecosystem
(cloud, enterprise, and other management software): the Horizon dashboard; Keystone
security; the scheduler, projects, images, flavors, and quotas services; an AMQP message
broker and a database for persistence as the foundation; and vendor-led drivers that connect
Nova (compute), Cinder (block storage), and Neutron (network) to the server, storage, and
network hardware.
1.2.3 PowerVC high-level architecture
Figure 1-3 shows how PowerVC is implemented on top of the OpenStack framework and how
additional components are inserted within the OpenStack framework to add functions to the
standard set of OpenStack features. It also illustrates that IBM is providing drivers to support
IBM devices by using the OpenStack APIs.
Figure 1-3 PowerVC implementation on top of OpenStack
PowerVC is available in Standard Edition, which is described in the following section.
1.3 PowerVC Standard Edition
PowerVC Standard Edition manages PowerVM systems that run IBM POWER6®,
IBM POWER7®, or POWER8 processors and that are controlled by an HMC. In addition,
PowerVC can manage PowerKVM Linux scale-out servers.
During installation, PowerVC Standard Edition can be configured to manage VMs that are
virtualized on top of either PowerVM or PowerKVM.
On PowerVM, dual Virtual I/O Servers for each host are supported to access storage and the
network. VMs can use N_Port ID Virtualization (NPIV)-attached storage, shared storage pool
(SSP) back-end storage, or virtual SCSI (vSCSI)-attached storage; vSCSI support was
introduced in PowerVC 1.2.2. The following hardware products are supported for NPIV:
EMC (VNX and VMAX)
IBM XIV® Storage System
IBM Storwize V3700 system
IBM Storwize V7000 system
IBM SAN Volume Controller
For storage on an SSP, any SSP-supported storage device is supported by PowerVC.
On PowerKVM, storage is backed by iSCSI devices.
For more information, see 3.1, “IBM PowerVC requirements” on page 30.
For the latest list of requirements, see this website:
http://ibm.co/1jC4Xx0
1.4 PowerVC adoption
Two features are useful for a smooth adoption of PowerVC in an existing environment:
When PowerVC manages a physical server, it can manage the full set or only a subset of
the partitions that are hosted on that server.
When PowerVC is adopted in an environment where partitions are already in production,
PowerVC can discover the existing partitions and selectively start to manage them.
Therefore, the adoption of PowerVC in an existing environment does not require a major
change. It can be a smooth transition that is planned over several days or more.
Chapter 2. PowerVC versions and releases
This chapter describes the evolution of IBM® Power Virtualization Center Standard Edition
(PowerVC) through its versions with special focus on version 1.2.2 and version 1.2.3.
The following topics are covered in this chapter:
Previous versions and milestones
IBM PowerVC version 1.2.2 enhancements and new features
New in IBM PowerVC version 1.2.3
2.1 Previous versions and milestones
IBM Systems and Technology Group Cloud System Software developed a virtualization
management solution for PowerVM and PowerKVM, which is called the Power Virtualization
Center (PowerVC). The objective is to manage virtualization on the Power platform by
providing a robust, easy-to-use tool to enable its users to take advantage of the Power
platform differentiation.
This list shows the previous versions:
IBM PowerVC first release (R1)
IBM PowerVC version 1.2.0
IBM PowerVC version 1.2.1
2.1.1 PowerVC release to OpenStack edition cross-reference
Table 2-1 cross-references the PowerVC releases to editions of OpenStack.
Table 2-1 PowerVC releases cross-referenced to OpenStack versions

PowerVC release   Availability    OpenStack edition
V1.2              October 2013    Havana
V1.2.1            April 2014      Icehouse
V1.2.2            October 2014    Juno
V1.2.3            April 2015      Kilo
2.1.2 IBM PowerVC first release (R1)
The first PowerVC release was available in certain markets in 2013. The primary objective of this
release was to simplify the task of deploying a single logical partition (LPAR) with operating
system software for new IBM Power System hardware clients. This release presented several
restrictions, requiring virtualization management of the hosts and supporting only limited
resource configurations.
2.1.3 IBM PowerVC version 1.2.0
The second release, PowerVC version 1.2.0, was also available worldwide in 2013. The
primary objective was to simplify the virtualization management experience of IBM Power
Systems servers through the Hardware Management Console (HMC) and build a foundation
for enterprise-level virtualization management.
2.1.4 IBM PowerVC version 1.2.1
The third release of PowerVC, version 1.2.1, was available worldwide in 2014 with the
addition of PowerKVM support that was built on IBM POWER8 servers and shared storage
pool (SSP) support for the PowerVM edition.
2.2 IBM PowerVC version 1.2.2 enhancements and new features
The fourth release of PowerVC, version 1.2.2, was also available worldwide in 2014. This
version focused on adding new features and support to the following components:
Image management
Monitoring
Host maintenance mode
Storage
Network
Security
2.2.1 Image management
This version supports new levels of the Linux distributions (previously supported distribution,
new release):
Red Hat Enterprise Linux (RHEL) 6.6
RHEL 7 (which is supported on IBM PowerKVM only in version 1.2.1)
SUSE Linux Enterprise Server (SLES 12)
New Linux distribution support exists for Ubuntu 14.
Currency support for the Linux operating systems is delivered through cloud-init. Also, for any new
Linux OS distribution support, only cloud-init is supported, not Virtual Solutions Activation
Engine (VSAE). Any changes that are needed in cloud-init to support the new distribution are
coordinated with the IBM Linux Technology Center (LTC) to distribute the changes to the
cloud-init open source community.
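For illustration only, a minimal cloud-init user-data file uses the generic #cloud-config
syntax shown below; the host name and the command are placeholders, not values that
PowerVC supplies:

   #cloud-config
   hostname: lnxvm01
   fqdn: lnxvm01.example.com
   runcmd:
     - echo "first-boot customization complete" >> /var/log/firstboot.log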
2.2.2 Monitoring
Enhancements and new capabilities are included in PowerVC 1.2.2:
Use the Ceilometer framework to monitor the memory and I/O metrics for instances
Provide the hosts with metrics for CPU utilization and I/O
Provide out-of-band lifecycle operation-related checks
With the new set of health checks and metrics, PowerVC version 1.2.2 monitoring
enhancements include the improved scale and stability of the monitoring functions.
The following major capabilities are available in this version:
Reduce the steady-state CPU utilization of the monitor function
Reduce the redundant health and metric event publication to help improve performance
Use the asynchronous update events and reduce the resource polling
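As an assumption-laden sketch, if the community python-ceilometerclient CLI of that era is
configured against the PowerVC endpoints, per-VM samples could be queried as follows; the
meter name cpu_util is a community Ceilometer meter, and the resource ID is a placeholder:

   # Show recent CPU utilization samples for one VM.
   ceilometer sample-list -m cpu_util -q resource_id=<vm-uuid>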
Important: IBM PowerVC Express Edition is no longer supported in this release.
Note: Because Ubuntu is a new distribution, you must update the distribution list that is
used by the image import command-line interface (CLI) and graphical user interface (GUI)
to include Ubuntu.
2.2.3 Host maintenance mode
Virtualization administrators often need to prepare a host system for maintenance, for
example, replace a faulty hardware component or update critical software components. This
act is widely known in the industry as putting a host into maintenance mode. Consider the
following points from a virtualization management perspective:
The host will be prevented from entering maintenance mode if any one (or more) of the
following conditions are true and the user requested automated mobility upon entering
maintenance:
– The host’s hypervisor state is anything other than operating. (For example, the
administrator must address any issues in advance; otherwise, live migrations are
unlikely to succeed.)
– The host has at least one virtual machine (VM) in the error state, and migration cannot
be performed until the administrator resolves the issue.
– The host has at least one VM in the paused state. (A paused VM resides in memory,
so the administrator must address it before the host can be powered down.)
– The host is based on PowerVM and not licensed for active partition mobility.
No additional virtual machines can be placed on the host while its maintenance state is
either entering, error, or on.
If mobility was requested when the host was entering maintenance mode and an active
VM existed, this VM must be relocated automatically to other hosts within the relocation
domain.
While virtual machines are migrated to other hosts, the host’s Platform Resource
Scheduler (PRS) hypervisor state is entering maintenance. The PRS hypervisor state
automatically transitions to in maintenance when the migrations complete, and Nova
notifications are generated as the state transitions.
After the administrator completes the maintenance, the administrator removes the host
from maintenance mode. At that point, the PRS hypervisor state transitions back to ok,
and virtual machines can be scheduled to the host again. VMs that were previously on
the host must be migrated back to it manually.
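In the community Nova API of the Juno era, host maintenance was toggled through the
os-hosts extension, with PowerVC layering the migration orchestration described above on
top of it. Treat the following as a sketch under that assumption, not as the documented
PowerVC interface; the endpoint, tenant ID, and host name are placeholders:

   # Request maintenance mode for a host (community Nova os-hosts call).
   curl -ks -X PUT \
     https://powervc.example.com:8774/v2/<tenant-id>/os-hosts/<host-name> \
     -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
     -d '{"maintenance_mode": "enable"}'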
2.2.4 Storage
Two additional volume drivers and one fabric driver were added in PowerVC version 1.2.2.
The volume drivers are IBM XIV Storage System and EMC, and the fabric driver is Cisco.
Volume attachment now includes virtual SCSI (vSCSI) connectors. The following use
cases apply to these new devices:
Registration of storage arrays and Fibre Channel (FC) switches with the storage template
and storage connectivity groups (SCGs)
Deployment of VMs
Attachment and detachment of volumes in existing VMs
Note: The administrator can take the host out of maintenance mode at any point. PRS
finishes any in-progress migrations and halts afterward.
Image management
Onboarding of VMs and volumes
The new storage and fabric drivers require new registration application programming
interfaces (APIs) to register the new devices.
New storage templates are required for XIV and EMC. Both drivers support additional storage
templates. API and user interface (UI) changes are associated with the storage templates.
Table 2-2 represents how clients are using volumes within PowerVM. For example, when an
N_Port ID Virtualization (NPIV) connection exists to boot a VM, it is not necessary to attach a
vSCSI-connected volume.
When the client sets their connection type for boot and data volumes within an SCG, a client
is limited to two connector types within a single SCG. On deployment or attachment, the SCG
determines the connection type between NPIV and vSCSI for a storage area network (SAN)
device.
Table 2-2 Updated support matrix for SSP, NPIV, and vSCSI storage paths in PowerVC version 1.2.2

Boot volume \ Data volume   SSPs            NPIV        vSCSI
SSPs                        Supported       Supported   Not supported
NPIV                        Not supported   Supported   Not supported
vSCSI                       Not supported   Supported   Supported
The SCG changes allow the creation of a vSCSI on SCG. PowerVC version 1.2.2 provides
the option on the SCG configuration so that the client can specify whether they want dual
Virtual I/O Servers to be guaranteed during deployment and migration. API and UI changes
are associated with these SCG changes.
2.2.5 Cisco Fibre Channel support
This newly added support is for Cisco MDS (Multilayer Director Switch) Fibre Channel
(FC) switches. This support was developed in collaboration with IBM to ensure compatibility
with PowerVC.
Next, we describe how to enable Cisco support within the PowerVC FC zoning architecture,
which differs significantly from the community architecture.
The relevant components to support Cisco FC are contained within the Cinder-volume
service. One of these services runs for every registered storage provider. The volume manager
invokes the zone manager whenever connections are added or removed. The zone manager
has a pluggable driver model that separates generic code from hardware-specific code. The
following steps describe the flow during the volume attachment or detachment:
1. After the volume driver is invoked, the zone manager flow is invoked.
2. The volume driver returns the wanted initiator-to-target mapping from the
initialize_connection or terminate_connection method.
3. The returned structure feeds into the zone manager operation.
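For reference, the structure that a volume driver returns from initialize_connection, and that
feeds the zone manager, has the community Cinder Fibre Channel shape sketched below; all
WWPN and LUN values are placeholders:

   {
     "driver_volume_type": "fibre_channel",
     "data": {
       "target_lun": 1,
       "target_wwn": ["5005076802132ade"],
       "initiator_target_map": {
         "c05076012345678a": ["5005076802132ade"]
       }
     }
   }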
PowerVC version 1.2.2 supports a maximum of two fabrics. The fabrics can be mixed.
Function
The Cisco driver has configuration file options that, for each fabric, specify the user name,
password, IP address, and virtual SAN (VSAN) to use for zoning operations. The VSAN is
interesting. Cisco and Brocade switches allow the physical ports on the switch to be divided
into separate fabrics. Cisco calls them VSANs, and Brocade calls them Virtual Fabrics.
Therefore, every zoning operation on a switch is performed in the context of a VSAN or Virtual
Fabric. However, the two drivers work differently:
For Cisco, a user does not have a default VSAN, so the VSAN to use is specified in the
configuration file. This method is not ideal; ideally, the VSAN would be determined
automatically by looking at where the initiator and target ports are logged in.
For Brocade, every user has a default Virtual Fabric, and the driver creates zones on that
default fabric.
Integration
To extend PowerVC integration, the zone manager class supports an fc_fabric_type option,
which allows the user to select Brocade and Cisco switches.
Zone manager also tolerates slight variations in the behavior of the two drivers. It delivers an
extended Cisco CLI module that is called powervc_cisco_fc_zone_client_cli.py. This
module adds a get_active_zone_map function that is needed by the PowerVC zoning driver.
The Cisco driver is enabled by editing the /etc/cinder/fabrics.conf file.
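The stanza layout of /etc/cinder/fabrics.conf is PowerVC-specific; the following sketch is
modeled on the community Cinder Cisco zone driver options, so the exact key names are an
assumption and all values are placeholders:

   [fabric-a]
   cisco_fc_fabric_address = 10.1.1.10
   cisco_fc_fabric_user = admin
   cisco_fc_fabric_password = passw0rd
   cisco_fc_fabric_port = 22
   cisco_zoning_vsan = 100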
The fabric registration UI allows the user to register Brocade and Cisco FC switches.
Mixed fabrics are supported for PowerVC, Brocade, and Cisco Tier1 drivers. Third-party fabric
drivers can be provided and mixed by vendors. However, third-party fabric drivers cannot be
mixed with PowerVC fabric drivers because Cinder supports a single zone manager only and
Tier1 drivers are managed from the PowerVC zone manager.
For Cisco fabrics, the following properties are required for registration:
Display name
IP address
Port
User name
Password
VSAN
The registration API performs a test connection to ensure that the credentials are correct and
the specified VSAN exists.
2.2.6 XIV storage support
Support for IBM XIV Storage System storage arrays is added to PowerVC. The functionality
that is offered by this interface is similar to the functions that are offered through the IBM SAN
Volume Controller (SVC).
Note: IBM PowerVC version 1.2.2 continues to support a maximum of two fabrics that can
be registered.
This interface requires the XIV driver, which is downloaded and included in the build and
installed in the PowerVC environment. The downloaded XIV driver also contains helper
methods to derive a list of volumes in a certain XIV array and its unique identifier. These
methods are used by the corresponding PowerVC registration and extended driver code.
Function
All functions that relate to storage arrays are supported:
Registration by using the default storage template
Storage connectivity group setup
Configuration of the FC port
Onboarding of VMs with XIV volumes that are attached to them
Onboarding of volumes that are already in XIV storage
Creation and deletion of volumes on XIV storage
Deployment of VMs by using volumes from XIV storage
Integration
A new XIV registration code is integrated into PowerVC. As part of the storage registration UI,
this new registration code collects the IP address, user friendly name, user name, and
password to register the XIV Storage System to PowerVC.
The registration API performs a test connection and retrieves a list of available storage pools
from the XIV system. The list is displayed to the user, so that the user can choose the pool to
use for default provisioning operations.
This approach is similar to how the IBM Storwize registration UI looks today, except that the
Secure Shell (SSH) keys are not supported. Currently, no UI is available for the user to select
the type of storage controller that the user is registering. Storwize is the only option.
A user can use the UI to select between Storwize and Network File System (NFS), and that
selection can be reused to provide the PowerVC user with a Storwize/XIV option.
2.2.7, “EMC storage support” on page 16 shows a choice of SAN Volume Controller, EMC, or
XIV storage during storage registration.
The storage template UI for XIV is similar to Storwize support. The UI needs to recognize the
type of storage provider and display the correct UI.
The storage metadata API is used by the storage template UI to get a list of storage pools and
related information, but first, the XIV driver needs to be enhanced. PowerVC has an extended
XIV driver with the get_storage_metadata function implemented in it. This extended driver is
used by the XIV registration code.
Like the SAN Volume Controller, the XIV has a limit on the number of hosts that can be
defined. During initialize_connection, the host creation fails with a return code of
REMOTE_MAX_VIRTUAL_HOSTS_REACHED. This limit is not determined yet.
The attach operation fails with an appropriate message. However, the TTV validation tool
might expose the total number or percentage of slots that are used. The same or similar
naming scheme that is used with the SAN Volume Controller applies for images and volumes:
images start with Image and volumes start with volume.
Note: The /etc/cinder/cinder.conf file needs to be updated to include xiv as a
supported storage type.
2.2.7 EMC storage support
The EMC storage array is now included in PowerVC version 1.2.2. The support includes EMC
VNX and VMAX storage devices. VNX and VMAX are in two different EMC drivers.
This support essentially covers how PowerVC enables the EMC Storage Management
Initiative Specification (SMI-S) driver. The SMI-S provider proxy applies to the EMC VMAX
driver only, not to the VNX driver. The EMC VNX driver uses a remote command tool set that
is located with the Cinder driver to communicate with the VNX device rather than going
through an SMI-S proxy.
The EMC VMAX driver requires that you download the EMC SMI-S provider proxy software
from the EMC website. The EMC VMAX driver also requires that the proxy runs on an x86
Linux system and is at version V4.5.1 or higher. The OpenStack EMC driver communicates
with this proxy by using Web-Based Enterprise Management (WBEM). The OpenStack EMC
driver also depends on the Python pywbem package.
The EMC driver supports both iSCSI and FC connectivity. Although the EMC driver has iSCSI
support, only NPIV connectivity is supported in this release.
The configuration of the EMC driver is in two locations. The cinder.conf file contains general
settings that reference the driver and also a link to an external XML file that contains the
detailed settings. The following configuration file settings are valid:
volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
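A hedged sketch of the referenced cinder_emc_config.xml file follows, modeled on the
community OpenStack EMC SMI-S driver of that era; the element names come from the
community driver, and all values are placeholders:

   <?xml version="1.0" encoding="UTF-8"?>
   <EMC>
     <!-- Address and credentials of the SMI-S (ECOM) provider proxy -->
     <EcomServerIp>10.1.1.20</EcomServerIp>
     <EcomServerPort>5988</EcomServerPort>
     <EcomUserName>admin</EcomUserName>
     <EcomPassword>passw0rd</EcomPassword>
     <!-- Target array serial number and storage pool -->
     <Array>000198700000</Array>
     <Pool>FC_GOLD</Pool>
   </EMC>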
Integration
New EMC registration code is available and enabled in PowerVC version 1.2.2. For
similarities, see “Integration” on page 14.
Like the SAN Volume Controller, the EMC limits the number of hosts that can be defined.
During initialize_connection, the host creation returns a failure. This limit for VNX is 1,024
maximum hosts. The attach operation fails with an appropriate message. TTV might expose
the total number of used slots or the percent of used slots.
The same or similar naming scheme is used with the SAN Volume Controller for images and
volumes. Images start with Image and volumes start with volume.
The EMC low-level design determines any new attributes to be exposed in the default
storage template.
2.2.8 Virtual SCSI support
Current cinder code supports NPIV connectivity from SAN Storage to a VM in the PowerVC
Standard Edition. In this model, the storage volume is mapped directly to the virtual FC
adapter in the VM. PowerVC 1.2.2 adds the support in Standard Edition for mapping the
storage volume to the Virtual I/O Server (VIOS) and for establishing a vSCSI connection from
the VIOS to the VM.
The vSCSI classic model is needed for PureApp where the VM boots from a vSCSI-attached
volume and data volumes are also vSCSI-attached.
Important: This command toolset runs on x86 only, which limits the PowerVC
management server to x86 installations.
Use the updated support matrix, Table 2-2 on page 13, as input to the necessary design
changes to the SCGs. The SCG determines the connection type to the VM during the
attachment and detachment of a volume to a VM. During deployment, the SCG includes
only hosts that are compatible with the SCG.
The SCG has two connectivity types:
One connectivity type for the OS disk
One connectivity type for data volumes
The selection of an NPIV or vSCSI SCG determines the connectivity type for the OS disk.
When a volume is attached to a VM, the connectivity type for volumes determines whether
the volume is connected through NPIV or vSCSI.
vSCSI is supported for all PowerVC tier-1 cinder drivers, which include PowerVC, SAN
Volume Controller, EMC, and XIV drivers. No support is available initially for non-tier-1 volume
drivers.
Two methods exist to establish SAN zoning and storage controller hosts. The first method is
outside of the scope of this section. The administrator establishes all of the zoning and
storage controller hosts before anyone uses the vSCSI connectivity. Most clients already use
this method when they use vSCSI connections from the VIOS. Clients create a zone on the
switch that contains all the VIOS and storage controllers. Live Partition Mobility (LPM)
operations are supported without additional zoning requirements.
Typically, clients also run the rootvg of the VIOS from SAN so an existing host entry is
available on the storage controller. The second method also includes the management of the
SCG and the creation of zones and hosts on the storage controller. This duality is evaluated
as part of the design changes that are needed for SCG to support vSCSI.
To enable multiple paths and LPM operations with vSCSI connections, disk reservations must
be turned off for all of the hdisks that are discovered on the VIOS. Use the AIX chdef
command to overwrite configuration attributes when a device is discovered.
For the SAN Volume Controller, the following chdef commands that are shown in Example 2-1
must be executed on the target VIOS before you assign the disks to the vSCSI adapters.
Example 2-1 The chdef commands to set the reserve policy and algorithm on new disks
chdef -a reserve_policy=no_reserve -c disk -s fcp -t mpioosdisk
chdef -a algorithm=round_robin -c PCM -s friend -t fcpother
These chdef commands need to be executed only one time on the Virtual I/O Servers before
you attempt to use vSCSI connections.
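To confirm that the new defaults took effect on a disk that is discovered afterward, a check
along the following lines can be used on the VIOS (as root); the hdisk number is a
placeholder:

   # Verify the reserve policy and path selection algorithm on a discovered disk.
   lsattr -El hdisk4 -a reserve_policy -a algorithm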
Storage controller registration, volume creation, volume deletion, and volume onboard are
unaffected by the addition of the vSCSI connectivity type.
Note: You are required to overwrite the reserve_policy and set the algorithm for the disks
that are discovered. The default algorithm is a failover algorithm.
Note: Consider changing the reserve policy if it was not set to no_reserve. If this setting is
not executed before you allocate the disks to the vSCSI adapter, you are required to
change these settings for each disk.
The major changes outside of the SCG for vSCSI connections are in the areas of volume
attachment and detachment. For volume attachment, the new vSCSI connection type causes
the discovery of the new hdisk on the targeted VIOS and the establishment of a vSCSI
mapping from that hdisk to the vhost adapter that serves the targeted VM.
For volume detachment, the new vSCSI connection type causes the removal of the vSCSI
mapping between the hdisk and vhost adapter on the targeted Virtual I/O Servers, and then
removes the hdisk from the VIOS.
During VM migration, the VM’s volumes must be mapped to the targeted VIOS and the hdisk
that was discovered before you call PowerVM to migrate the VM. This process is covered in
5.15.10, “Migration of virtual machines” on page 169.
2.2.9 Network
The following key additional characteristics were introduced in PowerVC version 1.2.2 for
networking:
IPv6 management node support.
IPv6 deployment to targets (API level only). Significant restrictions apply.
Add/remove virtual network interface controller (vNIC) adapters.
IP pool support.
User updates to network ports/IP addresses.
A brief introduction of each new characteristic follows.
IPv6 management node support
IPv6 support (a homogeneous IPv6 static-based network environment) is added to support
the PureApp solution. In future releases, this function is expanded to support mixed-mode
environments. Mixed-mode environments were not tested in this release.
You can install and operate PowerVC on an IPv6 network. The network that was tested is a
stateless address autoconfiguration (SLAAC)-based IPv6 network where each node has an
IPv6 endpoint. PowerVC itself communicates by using IPv6-based addresses
rather than IPv4-based addresses. If host names are specified, the host operating system
resolves the appropriate address type to use on its own.
The installer has a silent installation option to support the detection of IPv4 or IPv6 options.
The user can choose the IPv4 or IPv6 installation options. If IPv6 is selected, but no IPv6
(non-link local) address is detected, an error is displayed. If IPv4 is selected, but no IPv4
address is detected, an error is displayed.
The user can register compute nodes or SAN switches, storage controllers, and more with
either IPv4, IPv6, or host names. The management system must be able to resolve the
address, however. The compute nodes must also be able to communicate back to the
management system through the IPv6 address (if the management node is configured
correctly).
For this release, the following components are tested with IPv6:
HMC
VIOS
PowerVC management server
Brocade SAN switch (FVT only)
V7000 storage device
PowerKVM host
No other devices are tested, and their testing is with SLAAC addresses only. API changes are
not needed. For Neutron, replacing IPv4 addresses with IPv6 addresses is generally sufficient
for static addressing.
IPv6 deployment to targets
This function is supported only at the API level. No UI work is required to support this function.
Users of IPv6 must not expect any UI support in this release of PowerVC.
The scope of the support is listed:
A single network can be either IPv6 or IPv4 (cannot be both).
Only a single static IPv6 address and IPv6 link local address for each adapter are
supported.
Multiple adapters can be applied to the VM (part IPv6 and part IPv4).
Cloud-init support exists for RHEL, SLES, Ubuntu, and AIX. Cloud-init is the primary
activation strategy for the future.
Existing Virtual Solutions Activation Engine (VSAE) images are supported. See the
following list of items that are not in scope.
To send configuration data to the activation engine, you are required to know the activation
strategy that is used for each VSAE image or cloud-init.
You can determine the IP address for a specific VM and network.
The following items are not in scope:
You cannot have two networks on the same VLAN, where one network is IPv4 and another
network is IPv6. This restriction is a PowerKVM restriction.
No IBM i support exists.
VSAE is not enhanced to support new RHEL or SLES versions, for example, RHEL 7. In
particular, VSAE is not supported on the following operating systems:
– Ubuntu
– RHEL 6.6 and higher
– RHEL 7 and higher
– SLES 12 and higher
You cannot configure manual network routes. If the user requires this function, the user
needs to write an activation engine extension. You also cannot set Media Access Control
(MAC) addresses on an adapter; users must accept the MAC addresses that are defined
on the adapter by the system.
For SLAAC addresses and SLAAC-like addresses, a scheme is used by PureApp to set
the MAC addresses/IP addresses to look like SLAAC, but the addresses are not true
SLAAC addresses and do not act like SLAAC addresses.
GUI support is not available.
Addition and removal of vNIC adapters
PowerVC supports the addition of vNICs to a specific system. However, because PowerVC
does not use the local Dynamic Host Configuration Protocol (DHCP) server, IP addresses
cannot be assigned to a VM dynamically.
Note: All new images, even images that use older versions of these operating
systems, must use cloud-init because cloud-init is the IBM Common Cloud stack’s
strategic activation technology.
PowerVC can dynamically add a NIC to a VM and also prompt the user for the IP address to
use for that NIC.
To update an IP address within PowerVC, when the IP address is already assigned to the VM
(or to remove the IP address), you must use the “User Update to Network Ports/IP
Addresses” function. A single network interface supports a single IP address only. However,
multiple network interfaces can be added to support additional IP addresses.
PowerVC also offers an option to remove a vNIC. When you remove a vNIC, PowerVC
immediately removes the NIC from the VM and releases the IP address.
IP address pool support
In PowerVC Version 1.2.2, the user can choose between two types of IP addresses:
Static
DHCP
If the user decides to use Static, the user is required to specify the IP address on every
deployment. This choice is not a preferred practice because the user does not know which IP
address is available for use, but this version introduced the option to let PowerVC show a
predefined pool of IP addresses.
To enable this function, PowerVC provides a pool, which is a capability that is built on top of
the existing Neutron “port” API. PowerVC recognizes that to maintain a pool, the user must be
able to “lock” certain IP addresses. These locked IP addresses can be used by VMs outside
of the PowerVC management domain, such as a Domain Name Server (DNS) or gateway.
PowerVC provides a function to lock an IP address. This function works by creating a lock at
the Neutron port and by specifying a device owner. The device owner is named
PowerVC:<user input>. The user can specify the reason why the IP address is locked in user
input.
The IP addresses must be presented to the user. If the IP address is “In Use”, which means
that it is attached to a VM, that VM must be identifiable to the user. Due to API restrictions, IP
addresses must be locked one element at a time. The API does not support batch
processing.
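As a sketch of the underlying mechanism, a lock of this kind maps to a Neutron port that
carries the PowerVC device owner. The endpoint, token, network ID, and address below are
placeholders, and the default Neutron port 9696 is assumed:

   # Reserve (lock) an IP address by creating a Neutron port with a
   # PowerVC:<reason> device owner; one port is created per locked address.
   curl -ks -X POST https://powervc.example.com:9696/v2.0/ports \
     -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
     -d '{"port": {"network_id": "<network-uuid>",
           "fixed_ips": [{"ip_address": "192.168.10.50"}],
           "device_owner": "PowerVC:reserved-for-dns"}}'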
User updates to network ports/IP addresses
In enterprise virtualization, the lifecycle of the VM might be longer than the lifecycle in a
standard cloud.
Because PowerVC does not manage a DHCP server, its IP address logic is mainly for
bookkeeping. Therefore, clients often want to modify the IP address that is assigned to a VM,
for example:
They imported a VM and want to put the real IP address on the VM.
They changed the networking on the console of the system.
Note: Even though PowerVC attaches or detaches the adapter, you need to configure the
adapter within the VM.
Note: Neutron APIs only allow the modification of a single port at a time.
To support modification, a new function was added to the VM panel so that the user can
modify the port that is assigned to the VM. This function is supported by the existing Neutron
APIs, but the design needs to account for the user experience, edge cases, and other
considerations.
No hard limit exists, beyond the hypervisor limits, on the number of NICs that can be added.
The UI limits this operation to eight NICs, as with deployment. Eight NICs is a reasonable
upper limit, which provides the opportunity to display a message to the user before the user
hits esoteric hypervisor messages.
2.2.10 Security
Many configuration properties throughout PowerVC affect security. For example, Glance (the
OpenStack image repository service) has properties to configure the
maximum size of an image and the storage quota for each user. An administrator might want
to configure these properties specifically for their environment as part of a defense against
denial of service through disk space exhaustion. PowerVC provides a supported mechanism
for the customer to configure these settings through the CLI.
Also, the default values for settings that relate to National Institute of Standards and
Technology (NIST) 800-131a changed to comply with that standard. This change offers better
security for customers and prepares the way for future compliance.
2.3 New in IBM PowerVC version 1.2.3
The first part of this section is an overview of the new PowerVC 1.2.3 features. Then,
complete detailed descriptions are provided of the most relevant changes that are introduced
in this release, as shown in Table 2-3.
Table 2-3 New functions that are introduced in PowerVC 1.2.3
Collocation rules (affinity/anti-affinity): Rules can be created to keep VMs on the same or
different hosts. These rules are called affinity and anti-affinity rules.
Host group: Hosts can be logically separated and controlled separately with placement
policies.
Multi-volume capture: Additional volumes can now be captured in addition to the boot volume.
Placement policy: VMs can now be placed by choosing among different placement policies.
In addition to striping and packing, CPU usage and memory balance are added.
Remote VM restart: If a host fails, the user can now restart its VMs on another host by using
the simplified remote restart feature (POWER8 only).
Redundant HMC: PowerVC now supports redundant HMCs. Switching from one HMC to
another HMC is a user-initiated action.
Storage mirror (Storwize): The storage templates are enhanced to allow the creation of
mirrored volumes on a Storwize family storage provider, for example, a SAN Volume
Controller stretched cluster.
Volume sharing: Volumes can now be added to multiple VMs. This capability is essential for
high availability VMs, such as IBM Spectrum™ Scale (formerly General Parallel File System
(IBM GPFS™))/IBM PowerHA® SystemMirror® for AIX Enterprise Edition or PowerHA
SystemMirror for AIX Standard Edition.
Activation support: Cloud-init is now supported on AIX starting with AIX 7.1 TL3 SP5 and
AIX 6.1 TL9 SP5. Cloud-init is preferred over the old activation method (VSAE).
SDDPCM in Virtual I/O Servers: The use of Subsystem Device Driver Path Control Module
(SDDPCM) for vSCSI logical unit number (LUN) management in VIOS is now supported.
Scaling improvement: To use PowerVC in large environments, scaling is improved; PowerVC
can now manage 30 hosts and 3,000 VMs.
Maximum transmission unit (MTU) support: Definitions of networks are enhanced so that the
administrator can set the MTU size that is used by a VM, for example, jumbo frames
(MTU 9000).
Import images that consist of multiple volumes: You can import an image that is made of
multiple volumes and create a single deployable image from them.
Set the host name from DNS: Cloud-init can now be used to set the VM host name by
resolving a DNS record.
The following list describes the most important features that are introduced in IBM PowerVC
Standard Edition Version 1.2.3:
Major software changes
Significant scaling improvement
Redundant HMC
Error scenarios
Host groups
Advance placement policies (affinity/anti-affinity)
Multi-disk capture/deploy
PowerVM and PowerKVM remote restart
Cloud-init for the latest service pack (SP) of AIX
2.3.1 Major software changes
PowerVC presents several major software changes in this new version:
PowerVC follows the lifecycle of OpenStack. IBM PowerVC version 1.2.3 is based on the
OpenStack Kilo version.
PowerVC host management must be installed on RHEL 7.1 for IBM Power or x86_64.
New client operating systems are supported now:
– RHEL 7.1 (Little Endian)
– SLES 12 (Little Endian)
– Ubuntu V15.04 (Little Endian)
2.3.2 Significant scaling improvement
To fit well in a cloud environment, PowerVC can manage 30 hosts and 3,000 VMs. The
improved PowerVC scaling capabilities for PowerKVM and PowerVM are shown in Table 2-4.
Table 2-4 Scaling capabilities for PowerKVM and PowerVM in PowerVC

PowerKVM:
- Scales up to 160 vCPUs.
- PowerVC supports a maximum of 50 concurrent deployments. We recommend that you do
not exceed eight concurrent deployments for each host.
- Each host supports a maximum of 225 VMs.
- Ten concurrent migrations or remote restart operations are supported.

PowerVM:
- Five hundred VMs for each HMC are supported. If you plan to reach 3,000 VMs, you need
six HMCs.
- The maximum number of deployments depends on the number of migrations that are
supported by the VIOS and firmware version that are associated with each host.
- Each host supports a maximum of 500 VMs.
- Ten concurrent remote restart operations for each source host are supported, and four
concurrent remote restart operations for each destination host are supported.
2.3.3 Redundant HMC
To avoid a single point of failure, PowerVC 1.2.3 now supports redundant HMCs. Switching
from one HMC to another HMC is a user-initiated action. The PowerVC administrator can
switch between HMCs on a host basis.
A Change HMC button is now available on the host pane so that the administrator can select
a single host or multiple hosts and change its HMC connection.
Note: If PowerVC 1.2.3 is installed on a Power System server, you can choose a big
endian or little endian version for installation because both versions are supported.
2.3.4 Error scenarios
Consider several important error scenarios for host maintenance mode and the related
orchestration.
When a host is put into maintenance mode, the operation generates notifications for all possible
error scenarios. In addition to normal notifications, it generates errors for these state transitions:
When you perform host evacuation, the scheduler starts to receive invalid exceptions,
which can happen in single host environments or multiple host environments where the
alternative hosts cannot satisfy the VMs’ demand.
Platform Resource Scheduler (PRS) puts the host into maintenance; however, the error
state appears as soon as this situation occurs.
When you perform host evacuation, one or more of the VMs enters the error state.
PRS puts the host into maintenance; however, the error state appears as soon as this
situation occurs.
When you perform host evacuation, one or more of the VMs never transition out of the
migrating state even after the configured time period.
PRS puts the host into maintenance; however, the error state appears as soon as this
situation occurs.
When you perform host evacuation, one or more of the VMs never start to migrate, for
example, due to an exception that is thrown.
PRS puts the host into maintenance; however, the error state appears as soon as this
situation occurs.
When the host is in maintenance mode and all VMs are migrated, the administrator starts
an inactive VM out of band from the HMC or virsh interface.
PRS will not detect this situation and the host will remain in maintenance mode. It is
assumed that host administrators will not perform out-of-band operations on any of the
VMs during this sensitive period.
2.3.5 Host groups
PowerVC 1.2.3 can group hosts so you can manage them as a unit with policies. Host groups
can be used to separate the production environment from the test environment, for instance.
Hosts can be moved dynamically and placed between different host groups.
To control the placement of VMs within a host group, a placement policy is selected.
The following placement policies are available for host groups:
Packing
Striping
CPU balance
Memory balance
CPU usage
VMs can be migrated between hosts within the same group. At any time, host groups can be
modified by the user so that the user can move a host from one host group to another host
group. Hosts that are not a member of any user-defined host group are placed in the default
host group, which cannot be deleted.
2.3.6 Advanced placement policies
In addition to the previous packing and striping placements, new policies are defined to
automate the placement of VMs. These policies are more sophisticated than before: the
user can place VMs on hosts according to free-capacity criteria.
Memory and CPU balance
With the CPU balance policy, new VMs are placed on the host with the largest amount of free
CPU capacity. With the memory balance policy, new VMs are placed on the host with the
largest amount of free memory capacity.
CPU usage
New VMs are placed on the host with the lowest historical CPU usage. CPU usage is
calculated by taking the current usage every minute and then averaging the last 15 minutes
worth of data.
Affinity and anti-affinity
To complete these new placement policies, affinity and anti-affinity collocation rules are
added to PowerVC 1.2.3. The goal of collocation rules is to create a VM-to-VM
relationship that restricts where the VMs can reside.
These rules can be used to force a list of VMs to be kept together or on separate hosts. For
instance, use an anti-affinity collocation rule to ensure that two nodes of a PowerHA
SystemMirror cluster are always on different hosts even if an LPM operation occurs (for high
availability). Or, use an affinity collocation rule to always regroup a database VM and an
application VM on the same host or host group (to increase performance and reduce network
latency).
VMs that are part of affinity or anti-affinity collocation rules cannot be remote restarted or
migrated, which ensures that the rules are not violated. To migrate or remote restart a VM
that is a member of a collocation rule, the VM must first be removed from the rule. All
collocation rule operations are dynamic, which means that the rules can be modified at any
time.
If a collocation rule is violated, the user is warned that the rule is broken and must correct
the issue.
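PowerVC manages collocation rules through its user interface. Because PowerVC 1.2.3 is
built on OpenStack Kilo, the equivalent concept in the underlying Nova layer is a server
group with an affinity or anti-affinity policy. The following sketch uses the standard
OpenStack Kilo CLI with placeholder names (ha_pair, the image, and the flavor); it
illustrates the concept and is not the documented PowerVC workflow:

# Create an anti-affinity server group; the scheduler keeps its members on separate hosts
nova server-group-create ha_pair anti-affinity
# Deploy a VM into the group by passing the group UUID as a scheduler hint
nova boot --image aix71_base --flavor powervm_small --hint group=<server-group-uuid> node1

With an affinity policy instead, the scheduler places all members of the group on the
same host.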
2.3.7 Multiple disk capture and deployment
VMs often have one or more data volumes in addition to the boot volume. When you capture
a VM, you can capture both the boot and data volumes.
Data volumes can be captured separately and deployed in combination with any image. Boot
and data volumes can reside on different storage providers. For example, you can capture a
boot volume on an SSP and capture data volumes that are created on a VMAX array and that
are accessed through NPIV. Table 2-5 on page 26 indicates the combinations that are
allowed to support multiple disks.
Table 2-5 List of supported and unsupported multiple disk combinations

Boot volumes   Data volumes   Support
SSP            SSP            Supported
SSP            NPIV           Supported
SSP            vSCSI          Not supported
NPIV           NPIV           Supported
NPIV           SSP            Not supported
NPIV           vSCSI          Not supported
vSCSI          vSCSI          Supported
vSCSI          NPIV           Supported
vSCSI          SSP            Not supported

2.3.8 PowerVC remote restart
PowerVC can now use the simplified remote restart capability that is available on IBM
POWER8 systems to accelerate the recovery time for a server.
The minimum version of firmware that is required for the PowerVM simplified remote restart
capability is FW820 for high-end servers and FW830 for any Linux scale-out PowerKVM
system that supports remote restart.
The version of remote restart that is available on IBM POWER7 Systems™ cannot be
managed by PowerVC. Only the simplified remote restart is supported.
Example 2-2 shows how to check from the HMC command line whether hosts support
simplified remote restart if you plan to use PowerVC to restart your VMs remotely.
Example 2-2 How to check whether a host can use remote restart from PowerVC
# lssyscfg -r sys -F name,simplified_remote_restart_capable
p814-1,1
p814-2,1
If one of the hosts that is controlled by PowerVC fails, for example, its status is different
from Operating, Power Off, or Power Off in progress, the PowerVC administrator can
manually initiate a remote restart operation to restart the VMs on a healthy host.
At VM creation, the user can toggle an attribute to enable the simplified remote restart
capability. A specific compute template can be created to enable this capability at VM
creation.
Remote restart supports PowerVM and PowerKVM, and AIX, IBM i, and Linux VMs.
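The remote restart operation is initiated from the PowerVC user interface. For reference, a
comparable manual operation exists on the HMC command line through the rrstartlpar
command; the following sketch assumes placeholder system and partition names and the
simplified remote restart syntax that was current at the time of writing:

# Restart the partition vm_name from the failed source host onto a healthy target host
rrstartlpar -o restart -m failed_source_system -t healthy_target_system -p vm_name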
2.3.9 Cloud-init for the latest service pack of AIX
Cloud-init is the most common activation tool that is used by cloud providers. It is the
industry standard for bootstrapping cloud servers and is now the strategic image activation
technology of IBM. In addition to Activation Engine (VSAE), cloud-init is fully supported on
AIX. Only the latest service packs of the AIX 6.1 and 7.1 releases support cloud-init as
an activation method.
These current versions of AIX are supported for cloud-init:
AIX 7.1 TL3 SP5 (7100-03-05)
AIX 6.1 TL9 SP5 (6100-09-05)
For more information about the cloud-init configuration, see the official documentation:
https://cloudinit.readthedocs.org/en/latest/
For additional information, see this website:
ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/cloudinit/
These latest service packs of AIX introduce a new device attribute on the sys0 device that is
called clouddev. The role of the clouddev attribute is to replace the ghostdev attribute, which
is used to reset Object Data Manager (ODM) customization when a VM is booted on another
host or with a different LPAR ID, for example, after a remote restart operation or an inactive
LPM operation.
Example 2-3 shows clouddev and ghostdev attributes on AIX.
Example 2-3 Example of clouddev and ghostdev output
# lsattr -D -l sys0 -a clouddev
clouddev 0 N/A True
# lsattr -D -l sys0 -a ghostdev
ghostdev 0 Recreate ODM on system change / modify PVD True
On a supported version of AIX with cloud-init, clouddev is set to 1 and ghostdev is set to
0. These values can be verified by running the commands that are shown in Example 2-4.
Example 2-4 Obtain the values that are set on the ghostdev and clouddev attributes
# lsattr -El sys0 -a ghostdev
ghostdev 0 recreate OD devices on system change / modify PVID True
# lsattr -El sys0 -a clouddev
clouddev 1 N/A True
Note: If you use cloud-init on an unsupported version of AIX, ghostdev is set to 1 after
activation. Change this value to 0 if you plan to use remote restart or inactive LPM.
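If you need to make that change, a minimal sketch follows, assuming a standard AIX shell
with root authority (the change takes effect after a reboot):

# Reset ghostdev so that ODM customization is preserved across remote restart or inactive LPM
chdev -l sys0 -a ghostdev=0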
Chapter 3. PowerVC installation planning
This chapter describes the key aspects of IBM® Power Virtualization Center Standard Edition
(PowerVC) installation planning:
Section 3.1, “IBM PowerVC requirements” on page 30 presents the hardware and
software requirements for the various components of a PowerVC environment:
management station, managed hosts, network, storage area network (SAN), and storage
devices.
Sections 3.2, “Host and partition management planning” on page 35 through 3.9, “Product
information” on page 75 provide detailed planning information for various aspects of the
environment’s setup:
– Hosts
– Partitions
– Placement policies
– Templates
– Storage and SAN
– Storage connectivity groups and tags
– Networks
– User and group management
– Security
3.1 IBM PowerVC requirements
In this section, we describe the necessary hardware and software to implement IBM
PowerVC to manage AIX, Linux, and IBM i platforms.
Beginning with PowerVC version 1.2.2, only PowerVC Standard Edition is included in the
PowerVC installation media. If you want to use PowerVC Express Edition, you need to install
PowerVC version 1.2.1. PowerVC Standard Edition supports the management of virtual
machines (VMs) that are hosted on PowerVM and managed by a Hardware Management
Console (HMC), or VMs that are hosted on PowerKVM.
For information about available releases, see this website:
http://www.ibm.com/software/support/lifecycle/
IBM PowerVC Standard Edition can manage Linux, AIX, and IBM i VMs that run on Power
Hardware.
PowerVC does not support the management of VMs that are hosted on PowerVM and
PowerKVM from the same management server.
3.1.1 Hardware and software requirements
The following sections describe the minimum hardware, software, and resource requirements,
at the time of publication of this book, for versions 1.2.2 and 1.2.3 of PowerVC Standard
Edition.
See the IBM Knowledge Center for the complete requirements:
PowerVC managing PowerVM
Select PowerVC Standard Edition 1.2.3 → Managing PowerVM → Planning for
PowerVC standard Managing PowerVM.
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_planning_hmc.html
PowerVC managing PowerKVM
Select PowerVC Standard Edition 1.2.3 → Managing PowerKVM → Planning for IBM
Virtualization Center.
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.help.doc/powervc_planning_kvm.html
3.1.2 PowerVC Standard Edition requirements
The following information provides a consolidated view of the hardware and software
requirements for PowerVC Standard Edition.
PowerVC management and managed hosts
The PowerVC architecture supports a single management host for each managed domain. It
is not possible to configure redundant PowerVC management hosts that control the same
objects.
The VM that hosts the PowerVC management software must be dedicated to this function. No
other software or application can be installed on this VM. However, you can install software for
the management of this VM, such as monitoring agents and data collection tools for audit or
security. Table 3-1 lists the PowerVC Standard Edition hardware and software requirements.
Table 3-1 Hardware and OS requirements for PowerVC Standard Edition

PowerVC management host
  Supported hardware: IBM POWER7, POWER7+™, or POWER8 processor-based server
  models, or any x86 server.
  Supported operating systems: Red Hat Enterprise Linux (RHEL) Server, version 7.1 for
  IBM Power (ppc64 and ppc64le) and version 7.1 for x86_64.

Managed hosts
  Supported hardware:
    PowerVM: IBM Power processor-based servers: IBM POWER6, POWER7, POWER7+,
    and POWER8 servers.
    PowerKVM: POWER8 servers with IBM PowerKVM 2.1.1.2 or later.
  Guest operating systems that are supported for deployment:
    PowerVM and PowerKVM:
      – RHEL 5.9, 5.10, 6.4, 6.5, 6.6, 7.0, and 7.1 (little endian)
      – SUSE Linux Enterprise Server (SLES), version 11 SP3 and SP4, and SLES
        version 12 (little endian)
      – Ubuntu 15.04 (little endian)
    PowerVM only:
      – IBM AIX 6.1 and 7.1
      – IBM i 7.1 and 7.2

Table 3-2 describes the minimum and recommended resources that are required for
PowerVC VMs. In the table, the meaning of the processor capacity row depends on the type
of host that is used as the PowerVC management host:
– If the PowerVC management host is PowerVM, processor capacity refers to either the
number of processor units of entitled capacity or the number of dedicated processors.
– If the PowerVC management host is PowerKVM or x86, processor capacity refers to the
number of physical cores.

Table 3-2 Minimum resource requirements for the PowerVC VM

                     Minimum     Recommended
Number of VMs        Up to 400   Up to 400   401 - 1000   1001 - 2000   2001 - 3000
Processor capacity   1           2           4            8             8
Virtual CPUs         2           2           4            8             8
Memory (GB)          10          10          12           20            28
Swap space (GB)      10          10          12           20            28
Disk space (GB)      40          40          60           80            100
The installer has the following space requirements:
/tmp: 250 MB
/usr: 250 MB
/opt: 2.5 GB
/home: 3 GB (minimum). We recommend that 20% of the space is assigned to /home. For
example, for 400 VMs, 8 GB are recommended. For 1,000 VMs, 20 GB are recommended.
For 2,000 VMs, 30 GB are recommended.
The remaining space is used for /var and swap space.
Supported activation methods
Table 3-3 lists the supported activation methods for VMs on managed hosts.
Virtual Solutions Activation Engine (VSAE) is deprecated, and it might be withdrawn from
support in subsequent releases. We strongly recommend that new images are constructed
with cloud-init. Cloud-init is the strategic image activation technology of IBM. It offers a rich
set of system initialization features and a high degree of interoperability.
Table 3-3 Supported activation methods for managed hosts

Operating system   LE or BE   Version                   Initialization
AIX                BE         6.1 TL0 SP0 or later,     Virtual Solutions Activation
                              7.1 TL0 SP0 or later      Engine (VSAE)
AIX                BE         6.1 TL9 SP5 or later,     cloud-init
                              7.1 TL3 SP5 or later
IBM i              BE         7.1 TR10 or later,        IBM i AE
                              7.2 TR2 or later
RHEL               BE         5.9 or later              VSAE
RHEL               BE         6.4 or later              VSAE, cloud-init
RHEL               BE         7.0 or later              cloud-init
RHEL               LE         7.1 or later              cloud-init
SLES               BE         11 SP3 or later           VSAE and cloud-init
SLES               LE         12 SP0 or later           cloud-init
Ubuntu             LE         15.04.0 or later          cloud-init
Hardware Management Console
Table 3-4 shows the HMC version and release requirements to support PowerVC Standard
Edition managing PowerVM. This section does not apply for managing systems that are
controlled by PowerKVM.
Table 3-4 HMC requirements

Item                          Requirement
Software level                8.2.0 or 8.3.0
Hardware-level requirements   Requirements:
                              – Up to 300 VMs: CR5 with 4 GB memory
                              – More than 300 VMs: CR6, CR7, or CR8 with 8 GB memory
                              Recommendations:
                              – Up to 300 VMs: CR6, CR7, or CR8 with 8 GB memory
                              – More than 300 VMs: CR6, CR7, or CR8 with 16 GB memory

We recommend that you update to the latest HMC fix pack for the specific HMC release. You
can check the recommended fixes for the HMC from the IBM Fix Level Recommendation Tool:
http://ibm.co/1MbXlIA
You can get the latest fix packages from IBM Fix Central:
http://www.ibm.com/support/fixcentral/

Virtualization platform
Table 3-5 includes the VIOS version requirements for PowerVC Standard Edition managing
PowerVM.

Table 3-5 Supported virtualization platforms

Platform                             Requirement
VIOS for POWER7 hosts and earlier    Version 2.2.3.52 or later
VIOS for POWER8 hosts                Version 2.2.3.52 or later
Tip: Set the Maximum Virtual Adapters value to at least 200 on the Virtual I/O Servers.
Virtual I/O Servers that are managed by PowerVC can serve more than 100 VMs, and
each VM can require four or more virtual I/O devices from the VIOS. When you plan the
VIOS configuration, base the Maximum Virtual Adapters value on real workload
requirements.
Network resources
Table 3-6 lists the network infrastructure that is supported by PowerVC Standard Edition.
Table 3-6 Supported network hardware and software

Item               Requirement
Network switches   PowerVC does not manage network switches, but it supports network
                   configurations that use virtual LAN (VLAN)-capable switches.
Virtual networks   PowerVM: Shared Ethernet Adapters for VM networking.
                   PowerKVM: Supports Open vSwitch 2.0. The backing adapters for the
                   virtual switch can be physical Ethernet adapters, bonded adapters
                   (Open vSwitch also supports bonding), or Linux bridges (not
                   recommended).

Storage providers
Table 3-7 lists the hardware that is supported by PowerVC Standard Edition managing
PowerVM.

Table 3-7 Supported storage hardware for PowerVM

Item                   Requirement
Storage systems        IBM Storwize family of controllers.
                       IBM XIV Storage System.
                       EMC VNX. (EMC VNX Series is supported on RHEL Server for
                       x86_64 management hosts only, due to EMC limitations.)
                       EMC VMAX.
SAN switches           Brocade Fibre Channel (FC) switches are supported by the Brocade
                       OpenStack Cinder zone manager driver.
                       Cisco SAN FC switches are supported by the Cisco Cinder zone
                       manager driver.
Storage connectivity   FC attachment through at least one N_Port ID Virtualization
                       (NPIV)-capable host bus adapter (HBA) on each host.

Note: IBM i hosts on IBM XIV Storage Systems must be attached by virtual SCSI (vSCSI)
due to IBM i and IBM XIV storage limitations. Similarly, IBM i hosts on EMC VNX and
VMAX storage systems must be attached by vSCSI due to IBM i and EMC storage
limitations.

Table 3-8 lists the hardware that is supported by PowerVC Standard Edition managing
PowerKVM.

Table 3-8 Supported storage hardware for PowerKVM

Item                   Requirement
Storage systems        File-level storage. Network File System (NFS) V3 or V4 is required
                       for migration. It must be manually configured on the kernel-based
                       VM (KVM) host before it is registered on PowerVC.
Storage connectivity   Internet Small Computer System Interface (iSCSI): Data volumes on
                       the IBM Storwize family of controllers only.
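As a hedged illustration of the NFS requirement for PowerKVM migration, each KVM host
can mount a common NFS export before it is registered with PowerVC. The server name,
export path, and mount point below are placeholders, not values that PowerVC prescribes:

# /etc/fstab entry on each PowerKVM host
nfsserver.example.com:/export/vmstore  /var/lib/libvirt/images  nfs  defaults  0 0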
Security
Table 3-9 includes the supported security features.
Table 3-9 Supported security software

Item                                           Requirement
Lightweight Directory Access Protocol (LDAP)   OpenLDAP version 2.0 or later.
server (optional)                              Microsoft Active Directory 2003 or later.
3.1.3 Other hardware compatibility
PowerVC is based on OpenStack, so rather than being compatible with specific hardware
devices, PowerVC is compatible with drivers that conform to OpenStack standards. These
drivers are called pluggable devices in PowerVC. Therefore, PowerVC can take advantage of
hardware devices from vendors that provide OpenStack-compatible drivers for their
products. IBM cannot state which specific devices and drivers from other hardware vendors
are supported by PowerVC, so check with the vendors to learn about their drivers. For more
information about pluggable devices, see the IBM Knowledge Center:
http://ibm.co/1Q2QtRe
3.2 Host and partition management planning
When you plan for the hosts in your PowerVC Standard Edition environment that manages
PowerVM, you need to consider the limitations on the number of hosts and VMs that can be
managed by PowerVC, and the benefits of using multiple Virtual I/O Servers.
3.2.1 Physical server configuration
If you plan to use partition mobility, you must ensure that all servers are configured with the
same logical-memory block size. This logical-memory block size can be changed from the
Advanced System Management Interface (ASMI) interface.
3.2.2 HMC or PowerKVM planning
Data centers can contain hundreds of hosts and thousands of VMs. For PowerVC version
1.2.3, the following maximums are suggested:
PowerVC Standard Edition 1.2.3 managing PowerVM:
– A maximum of 30 managed hosts is supported.
– Each host can have a maximum of 500 VMs on it.
– A maximum of 3,000 VMs can be on all of the combined hosts.
– Each HMC can have a maximum of 500 VMs on it.
PowerVC Standard Edition 1.2.3 managing PowerKVM:
– A maximum of 30 managed hosts is supported.
– Each host can have a maximum of 225 VMs on it.
– A maximum of 3,000 VMs can be on all of the combined hosts.
Therefore, you need to consider how to partition your HMCs and kernel-based VM (KVM)
hosts into subsets, where each subset is managed by a PowerVC management host.

Note: No hard limitations exist in PowerVC. These maximums are suggested from a
performance perspective only.

Advanced installations typically use redundant HMCs to manage the hosts. With version 1.2.3
or later, PowerVC can support hosts that are managed by redundant HMCs. If the HMC that
you selected for PowerVC becomes unavailable, change to the working HMC through the
PowerVC GUI.

Note: PowerVC uses only one HMC at a time to manage hosts, even with redundant HMCs
defined. You need to change to another HMC manually if the original HMC fails.

3.2.3 Virtual I/O Server planning
Plan to use more than one VIOS if you want a failover VIOS or expanded VIOS functions.
PowerVC provides the option to use more than one VIOS. Consider a second VIOS to provide
redundancy and I/O connectivity resilience to the hosts. Use two Virtual I/O Servers to avoid
outages to the hosts when you need to perform maintenance, updates, or changes in the
VIOS configuration.
If you plan to make partitions mobile, define the VIOS that provides the mover service on all
hosts, and ensure that the Mover service partition option is enabled in the profile of these
Virtual I/O Servers.
The VIOS must be configured with “Sync current configuration Capability” turned ON. On the
HMC, verify the settings of the Virtual I/O Servers, as shown in Figure 3-1.
Figure 3-1 VIOS settings that need to be managed by PowerVC
Changing maximum virtual adapters in a VIOS
From the HMC, on the left panel, click Server Management → Servers → managed_server,
select the VIOS, and then click Configuration → Manage Profiles from the drop-down
menu.
Select the profile that you want to use, and click Actions → Edit. Then, select the Virtual
Adapters tab.
Important: Configure the maximum number of virtual resources (virtual adapters) for the
VIOS to at least 200. This setting provides sufficient resources on your hosts while you
create and migrate VMs throughout your environment. Otherwise, PowerVC indicates a
warning during the verification process.
Replace the value in the Maximum virtual adapters field with a new value. See Figure 3-2.
Figure 3-2 Modifying maximum virtual adapters
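If you prefer the HMC command line, the following sketch makes the same change; the
managed system, profile, and VIOS names are placeholders, and the VIOS must be
reactivated with the modified profile for the new value to take effect:

# Raise the maximum virtual adapters in a VIOS profile from the HMC CLI
chsyscfg -r prof -m managed_server -i "name=vios1_profile,lpar_name=vios1,max_virtual_slots=200"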
3.3 Placement policies and templates
One goal of PowerVC is to simplify the management of VMs and storage by providing the
automated creation of partitions and virtual storage disks and the automated placement of
partitions on physical hosts. This automation replaces the manual steps that are needed
when you use PowerVM directly. In the manual steps, you need to create disks, select all
parameters that define each partition to deploy, and configure the mapping between the
storage units and the partitions in the Virtual I/O Servers.
This automation is performed by using deployment templates and placement policies.
3.3.1 Host groups
Use host groups to group hosts logically regardless of any features that they might share. For
example, the hosts do not need the same architecture, network configuration, or storage.
Host groups have these important features:
Every host must be in a host group
Any hosts that do not belong to a user-defined host group are members of the default host
group. The default host group cannot be deleted.
VMs are kept within the host group
A VM can be deployed to a specific host or to a host group. After deployment, if that VM is
migrated, it must always be migrated within the host group.
Placement policies are associated with host groups
Every host within a host group is subject to the host group’s placement policy. The default
placement policy is striping.
An enterprise client can group its hosts to meet different business needs, for example, for
test, development, and production, as shown in Figure 3-3. With different placement
policies, even with different hardware, the client can achieve different service levels.
Figure 3-3 Host group sample
3.3.2 Placement policies
When you want to deploy a new partition, you can indicate to PowerVC the host on which you
want to create this partition. You can also ask PowerVC to identify the hosts on which the
partitions will best fit in a host group, based on a policy that matches your business needs. If
you ask PowerVC to identify the hosts on which the partitions will best fit in a host group,
PowerVC compares the requirements of the partitions with the availability of resources on the
possible set of target hosts. PowerVC considers the selected placement policy to make a
choice.
PowerVC version 1.2.3 offers five policies to deploy VMs:
Striping placement policy
The striping placement policy distributes your VMs evenly across all of your hosts. For
each deployment, PowerVC determines the hosts with enough processing units and
memory to meet the requirements of the VM. Other factors for determining eligible hosts
include the storage and network connectivity that are required by the VM. From the group
of eligible hosts, PowerVC chooses the host that contains the fewest number of VMs and
places the VM on that host.
Packing placement policy
The packing placement policy places VMs on a single host until its resources are fully
used, and then it moves on to the next host. For each deployment, PowerVC determines
the hosts with enough processing units and memory to meet the requirements of the VM.
Other factors for determining eligible hosts include the storage and network connectivity
that are required by the VM. From the group of eligible hosts, PowerVC chooses the host
that contains the most VMs and places the VM on that host. After the resources on this
host are fully used, PowerVC moves on to the next eligible host that contains the most
VMs.
This policy can be useful when you deploy large partitions on small servers. For example,
you need to deploy four partitions that require eight, eight, nine, and seven cores on two
servers, each with 16 cores. If you use the striping policy, the first two partitions are
deployed on the two servers, which leaves only eight free cores on each. PowerVC cannot
deploy the 9-core partition, because a Live Partition Migration (LPM) operation must be
performed before the 9-core partition can be deployed.
By using the packing policy, the first two 8-core partitions are deployed on the first hosts,
and PowerVC can then deploy the 9-core and 7-core partitions on the second host. This
example is simplistic, but it illustrates the difference between the two policies: The striping
policy optimizes performance, and the packing policy optimizes human operations.
CPU utilization balance placement policy
This placement policy places VMs on the host with the lowest CPU utilization in the host
group. The CPU utilization is computed as a running average over the last 15 minutes.
CPU allocation balance placement policy
This placement policy places VMs on the host with the lowest percentage of its CPU that
is allocated post-deployment or after relocation.
For example, consider an environment with two hosts:
– Host 1 has 16 total processors, four of which are assigned to VMs.
– Host 2 has four total processors, two of which are assigned to VMs.
Assume that the user deploys a VM that requires one processor. Host 1 has (4+1)/16, or
5/16 of its processors that are allocated. Host 2 has (2+1)/4, or 3/4 of its processors that
are allocated. Therefore, the VM is scheduled to Host 1.
Memory allocation balance placement policy
This placement policy places VMs on the host with the lowest percentage of its memory
that is allocated post-deployment or after relocation.
For example, consider an environment with two hosts:
– Host 1 has 16 GB total memory, 4 GB of which is assigned to VMs.
– Host 2 has 4 GB total memory, 2 GB of which is assigned to VMs.
Assume that the user deploys a VM that requires 1 GB of total memory. Host 1 has
(4+1)/16, or 5/16 of its memory that is allocated. Host 2 has (2+1)/4, or 3/4 of its memory
that is allocated. Therefore, the VM is scheduled to Host 1.
When a new host is added to a host group that is managed by PowerVC, if the placement
policy is set to striping mode, new VMs are deployed on the new host until it catches up
with the existing hosts. PowerVC allocates partitions only on this new host until its
resource use is about the same as on the previously installed hosts.
Note: A default placement policy change does not affect existing VMs. It affects only new
VMs that are deployed after the policy setting is changed. Therefore, changing the
placement policy for an existing environment does not result in moving existing partitions.
Tip: The following settings might increase the throughput and decrease the duration of
deployments:
Use the striping policy rather than the packing policy.
Limit the number of concurrent deployments to match the number of hosts.
When a new partition is deployed, the placement algorithm uses several criteria to select the
target server for the deployment, such as availability of resources and access to the storage
that is needed by the new partitions. By design, the PowerVC placement policy is
deterministic. Therefore, the considered resources are the amounts of processing power and
memory that are needed by the partition, as defined in the partition profile (virtual processors,
entitlement, and memory). Dynamic resources, such as I/O bandwidth, are not considered,
because they would result in a non-deterministic placement algorithm.
The placement policy can also be used when you migrate a VM. Figure 3-4 shows the
PowerVC user interface for migrating a partition. Use this interface to select between
specifying a specific target or letting PowerVC select a target according to the current
placement policy.
Figure 3-4 Migration of a partition by using a placement policy
3.3.3 Template types
Rather than define all characteristics for each partition or each storage unit that must be
created, the usual way to create them in PowerVC is to instantiate these objects from a
template that was previously defined. The amount of effort that is needed to define a template
is similar to the effort that is needed to define a partition or storage unit. Therefore, reusing
templates saves significant effort for the system administrator, who needs to deploy many
objects.
PowerVC provides a GUI to help you create or customize templates. Templates can be easily
defined to accommodate your business needs and your IT environment.
Two types of templates are available:
Compute templates These templates are used to define processing units, memory, and
disk space that are needed by a partition. They are described in 3.3.4,
“Information that is required for compute template planning” on
page 42.
Storage templates These templates are used to define storage settings, such as a
specific volume type, storage pool, and storage provider. They are
described in 3.5.2, “Storage templates” on page 56.
Use the templates to deploy new VMs. This approach propagates the values for all of the
resources into the VMs. The templates accelerate the deployment process and create a
baseline for standardization.
Note: The placement policies are predefined. You cannot create your own policies.
Templates can be defined by using the Standard view or, for more detailed and specific
configuration, you can use the Advanced view, as described in the next section.
3.3.4 Information that is required for compute template planning
The PowerVC 1.2.3 management host provides 11 predefined compute templates. The
predefined templates can be edited and removed, and you can also create your own templates.
Before you start to create templates, plan for the amount of resources that you need for the
classes of partitions that you will need. For example, different templates can be used for
partitions that are used for development, test, and production, or you can have different
templates for database servers, application servers, and web servers.
PowerVC offers two template options:
Basic Create micropartitions (shared partitions) by specifying the minimum
amount of information.
Advanced Create dedicated partitions or micropartitions, with the level of detail
that is available on the HMC.
Basic templates
You need the following information to plan a basic template:
Template name The name to use for the template.
Virtual processors Number of virtual processors. A VM usually performs best if the
number of virtual processors is close to the number of processing
units that is available to the VM.
Memory (MB) Amount of memory, in MB. The value for memory must be a multiple of
the memory region size that is configured on your host. To see the
region size for your host, open the Properties panel for the selected
host in the HMC, and then open the Memory tab and record the
“memory region size” value. Figure 3-5 on page 44 shows an example.
Processing units Number of entitled processing units. A processing unit is the minimum
amount of processing resource that the VM can use. For example, a
value of 1 (one) processing unit corresponds to 100% use of a single
physical processor. Processing units are split between virtual
processors, so a VM with two virtual processors and one processing
unit appears to the VM user as a system with two processors, each
running at 50% speed.
Disk (GB) Disk space that is needed, in GB.
Compatibility mode Select the processor compatibility that you need for your VM.
Table 3-10 on page 43 describes each compatibility mode and the
servers on which the VMs that use each mode can operate.
Table 3-10 Processor compatibility modes

POWER6
  Description: Use the POWER6 processor compatibility mode to run operating system
  versions that use all of the standard features of the POWER6 processor.
  Supported servers: VMs that use the POWER6 processor compatibility mode can run on
  servers that are based on POWER6, IBM POWER6+™, POWER7, or POWER8 processors.

POWER6+
  Description: Use the POWER6+ processor compatibility mode to run operating system
  versions that use all of the standard features of the POWER6+ processor.
  Supported servers: VMs that use the POWER6+ processor compatibility mode can run on
  servers that are based on POWER6+, POWER7, or POWER8 processors.

POWER7 (including POWER7+)
  Description: Use the POWER7 processor compatibility mode to run operating system
  versions that use all of the standard features of the POWER7 processor.
  Supported servers: VMs that use the POWER7 processor compatibility mode can run on
  servers that are based on POWER7 or POWER8 processors.

POWER8
  Description: Use the POWER8 processor compatibility mode to run operating system
  versions that use all of the standard features of the POWER8 processor.
  Supported servers: VMs that use the POWER8 processor compatibility mode can run on
  servers that are based on POWER8 processors.

Default
  Description: The default processor compatibility mode is a preferred mode that enables
  the hypervisor to determine the current mode for the VM. When the preferred mode is set
  to Default, the hypervisor sets the current mode to the most fully featured mode that is
  supported by the operating environment. In most cases, this mode is the processor type
  of the server on which the VM is activated. For example, assume that the preferred mode
  is set to Default and the VM is running on a POWER8 processor-based server. The
  operating environment supports the POWER8 processor capabilities, so the hypervisor
  sets the current processor compatibility mode to POWER8.
  Supported servers: The servers on which a VM with the Default preferred mode can run
  depend on the VM's current processor compatibility mode. For example, if the hypervisor
  determines that the current mode is POWER8, the VM can run on servers that are based
  on POWER8 processors.
Note: For a detailed explanation of processor compatibility modes, see IBM PowerVM
Virtualization Introduction and Configuration, SG24-7940.
Advanced templates
You need the following information to plan advanced templates:
Template name The name for the template.
Virtual processors The number of virtual processors. A VM usually performs best if the
number of virtual processors is close to the number of processing
units that is available to the VM. You can specify the following values:
Minimum The smallest number of virtual processors that you will accept for
deploying a VM.
Desired The number of virtual processors that you want for deploying a VM.
Maximum The largest number of virtual processors that you will allow when
you resize a VM. This value is the upper limit to resize a VM
dynamically. When it is reached, you need to power off the VM, edit
the profile, change the maximum to a new value, and restart the
VM.
Memory (MB) Amount of memory, expressed in MB. The value for memory must be a
multiple of the memory region size that is configured on your host. The
minimum value is 16 MB. To see the region size for your host, open the
Properties panel for the selected host on the HMC, and then open the
Memory tab to view the memory region size. Figure 3-5 shows an
example. You can specify the following values:
Minimum The smallest amount of memory that you want for deploying a VM.
If the value is not available, the deployment will not occur.
Desired The total memory that you want in the VM. The deployment occurs
with an amount of memory less than or equal to the desired
amount and greater than or equal to the minimum amount that is
specified.
Maximum The largest amount of memory that you will allow when you resize
a VM. This value is the upper limit to resize a VM dynamically.
When it is reached, you need to power off the VM, edit the profile,
change the maximum to a new value, and restart the VM.
Figure 3-5 Memory region size view on the HMC
Processing units Number of entitled processing units. A processing unit is the minimum
amount of processing resource that the VM can use. For example, a
value of 1 (one) processing unit corresponds to 100% use of a single
physical processor. The setting of processing units is available only for
shared partitions, not for dedicated partitions. You can specify the
following values:
Minimum The smallest number of processing units that you will accept for
deploying a VM. If this value is not available, the deployment will
not occur.
Desired The number of processing units that you want for deploying a VM.
The deployment will occur with a number of processing units that is
less than or equal to the desired value and greater than or equal to
the minimum value.
Maximum The largest number of processing units that you will allow when you
resize a VM. This value is the upper limit to which you can resize
dynamically. When it is reached, you need to power off the VM, edit
the profile, change the maximum value to a new value, and restart
the VM.
Disk (GB) Disk space that is needed in GB.
Compatibility mode Select the compatibility that is needed for your VM. Table 3-10 on
page 43 lists each processor compatibility mode and the servers on
which the VMs that use each processor compatibility mode can
successfully operate.
Enable virtual machine remote restart
With PowerVC version 1.2.3 or later, users can easily remote restart a
VM on another host if the current host fails. This feature enhances the
availability of applications, in addition to solutions that are based on
PowerHA and Live Partition Mobility (LPM).
Shared processors or dedicated processor
Decide whether the VM will use processing resources from a shared
processor pool or dedicated processor resources.
Important: Processing units and virtual processors are values that work closely
together and must be calculated carefully. For more information about virtual
processors and processing units, see IBM PowerVM Virtualization Managing and
Monitoring, SG24-7590.
Note: Use the advanced template to define only the amount of storage that you need. You
cannot use the advanced template to specify a number of volumes to create.
Note: This function is based on the PowerVM simplified remote restart function, which was
supported only by POWER8 servers at the time that this book was written. For the
requirements of remote restart, see the IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_recovery_reqs_hmc.html
Option A: Shared processors settings
The following values are available for option A:
Uncapped Uncapped VMs can use processing units that are not being used by
other VMs, up to the number of virtual processors that is assigned to
the uncapped VM.
Capped Capped VMs can use only the number of processing units that are
assigned to them.
Weight (0 - 255) If multiple uncapped VMs require unused processing units, the
uncapped weights of the uncapped VMs determine the ratio of unused
processing units that are assigned to each VM. For example, an
uncapped VM with an uncapped weight of 200 receives two processing
units for every processing unit that is received by an uncapped VM
with an uncapped weight of 100.
Option B: Dedicated processor settings
The following values are available for option B:
Idle sharing This setting enables this VM to share its dedicated processors with
other VMs when this VM is powered on and idle (also known as a
dedicated donating partition).
Availability priority To avoid shutting down mission-critical workloads when your server
firmware unconfigures a failing processor, set availability priorities for
the VMs (0 - 255). A VM with a failing processor can acquire a
replacement processor from a VM with a lower availability priority. The
acquisition of a replacement processor allows the VM with the higher
availability priority to continue running after a processor failure.
3.4 PowerVC storage access SAN planning
In PowerVC Standard Edition, VMs can access their storage by using any of three
protocols:
Classical vSCSI, as described in “vSCSI storage access” on page 47
NPIV, as described in “NPIV storage access” on page 49
vSCSI to shared storage pool (SSP), as described in “Shared storage pool: vSCSI” on
page 50
A minimum configuration of the SAN and storage is necessary before PowerVC can use
them. For example, PowerVC will create virtual disks on storage devices, but these devices
must be set up first. You must perform the following actions before you use PowerVC:
Configuration of the FC fabric for the PowerVC environment must be planned first: cable
attachments, SAN fabrics, and redundancy. It is common to create at least two
independent fabrics to provide SAN redundancy.
Note: PowerVC assumes that all hosts can access all registered storage controllers.
The cabling must be performed in a way so that all hosts can access the same set of
storage devices.
PowerVC provides storage for VMs through the VIOS.
With PowerVC Standard Edition, the storage is accessed by using NPIV, vSCSI, or an
SSP that uses vSCSI.
The VIOS and SSP must be configured before PowerVC can manage them.
The SAN switch administrator user ID and password must be set up. They will be used by
PowerVC.
The storage controller administrator user ID and passwords must be set up so that SAN
logical unit numbers (LUNs) can be created.
For vSCSI, turn off SCSI reserves for volumes that are being discovered on all the Virtual
I/O Servers that are used for vSCSI connections. This action is required for LPM
operations and for dual Virtual I/O Servers.
For vSCSI and SSP, initial zoning must be established to provide access from Virtual I/O
Servers to storage controllers.
In PowerVC Standard Edition, you need to create a VM manually to capture your first
image. Prepare by performing these tasks:
– VIOS must be set up for NPIV or vSCSI to provide access from the VM to the SAN.
– For NPIV, SAN zoning must be configured to provide access from virtual FC ports in
VM to storage controllers.
– The OS must be installed in the first VM, and the activation engine or cloud-init must be
installed and used.
After PowerVC Standard Edition can access storage controllers and switches, it can perform
these tasks:
Collect inventory on the FC fabric
Collect inventory on storage devices (pools and volumes)
Monitor health
Detect misconfigurations
Manage zoning
Manage LUNs on storage devices
3.4.1 vSCSI storage access
With PowerVC version 1.2.2 or later, you can use vSCSI to access SAN storage in the
PowerVC environment.
Before you use vSCSI-attached storage in PowerVC, you need to perform the following steps.
1. Turn off SCSI reserves for volumes that are being discovered on all the Virtual I/O Servers
that are used for vSCSI connections. This step is required for LPM operations and for dual
Virtual I/O Servers.
For the IBM Storwize family, XIV, and EMC devices that use the AIX path control module
(PCM) model, you must run the following command on every VIOS where vSCSI
operations will be run (a verification sketch follows this list):
chdef -a reserve_policy=no_reserve -c disk -s fcp -t mpioosdisk
Note: You must use the chdef command, not the chdev command.
Important: This step is mandatory. Different commands exist for other multipath I/O
drivers. See the documentation of the drivers to learn how to turn off SCSI reserves.
Important: If you connect a VM to several FC adapters (and therefore several
worldwide port names (WWPNs)) to storage devices with several WWPNs, you
need to create one zone for each pair of source and target WWPNs. You must not
create a single zone with all source and target WWPNs.
2. You must configure all zoning between the VIOS and the storage device ports so that you
can import vSCSI environments easily and use any number of fabrics with vSCSI.
3. You might need to increase the pre_live_migration_timeout setting in nova.conf if many
vSCSI-attached volumes are on the VM or a heavy load is on the destination host’s Virtual
I/O Servers. Increasing this setting provides the additional time that is required to process
many vSCSI-attached volumes.
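To confirm the result of step 1, a minimal verification sketch follows; it assumes the root
shell on the VIOS (oem_setup_env) and a placeholder disk name:

# Check the ODM predefined (default) reserve policy for MPIO disks
lsattr -D -c disk -s fcp -t mpioosdisk -a reserve_policy
# Check a disk that is already configured (hdisk4 is a placeholder)
lsattr -El hdisk4 -a reserve_policy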
Figure 3-6 shows how VMs in PowerVC Standard Edition access storage by using vSCSI.
The flow of storage management from physical storage LUNs to VMs in PowerVC Standard
Edition with vSCSI is described:
LUNs are provisioned on a supported storage controller.
LUNs are masked to VIOS FC ports and are discovered as hdisk logical devices in VIOS.
LUNs are mapped (by using mkvdev) from the VIOS to VMs over a vSCSI virtual adapter pair.
These steps are completed automatically by PowerVC. No zoning is involved, because
individual VMs do not access physical LUNs directly over the SAN.
Figure 3-6 PowerVC Standard Edition storage access by using vSCSI
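For reference, the mapping step that PowerVC automates corresponds to the VIOS mkvdev
command. A minimal sketch with placeholder device names:

# Map the discovered LUN hdisk4 to the client partition behind vhost0
mkvdev -vdev hdisk4 -vadapter vhost0 -dev vm1_rootvg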
3.4.2 NPIV storage access
Figure 3-7 shows how VMs access storage through NPIV with PowerVC Standard Edition.
The following list describes the actions that are performed by PowerVC Standard Edition to
manage the flow of storage from physical storage LUNs to VMs:
Access to the SAN from VMs is configured on the Virtual I/O Servers by using a virtual FC
adapter pair and NPIV (the vfcmap command).
LUNs are provisioned on a supported storage controller.
LUNs are masked to VM virtual FC ports.
SAN zoning is adjusted so that VMs have access from their virtual FC ports to storage
controller host ports. Changes in zoning are performed automatically by PowerVC
Standard Edition.
LUNs are viewed as logical devices in VMs.
These actions are completed automatically by PowerVC Standard Edition. A dual-VIOS
configuration is supported.
Figure 3-7 PowerVC Standard Edition storage access by using NPIV
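For reference, the virtual FC mapping that PowerVC drives corresponds to the VIOS vfcmap
command. A minimal sketch with placeholder adapter names:

# Bind the virtual FC server adapter vfchost0 to the NPIV-capable physical port fcs0
vfcmap -vadapter vfchost0 -fcp fcs0
# Verify the NPIV mapping
lsmap -vadapter vfchost0 -npiv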
3.4.3 Shared storage pool: vSCSI
Figure 3-8 shows how VMs access storage in an SSP with PowerVC Standard Edition.
The flow of storage management from physical storage LUNs to VMs in PowerVC Standard
Edition is described:
Access to storage from the Virtual I/O Servers by using physical FC adapters is set up
manually, including the SAN zoning: zones contain the storage device or SVC ports and
the VIOS FC ports.
The SSP is configured manually: creation of a cluster, inclusion of Virtual I/O Servers in
the cluster, and addition of disks to the pool (see the sketch after Figure 3-8).
PowerVC discovers the SSP when it discovers the Virtual I/O Servers.
PowerVC can create logical units (LUs) in the shared storage pool when it creates a new
VM.
PowerVC instructs the VIOS to map the SSP LUs, as LUNs, to the VIOS client partitions,
which access them through vSCSI devices.
Figure 3-8 PowerVC Standard Edition storage access by using an SSP
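As a minimal sketch of the manual SSP setup that must precede PowerVC management, run
the following VIOS commands; the cluster, pool, disk, and host names are placeholders:

# Create the SSP cluster with a repository disk and two pool disks, then add a second VIOS
cluster -create -clustername sspcl1 -repopvs hdisk10 -spname ssp1 -sppvs hdisk11 hdisk12 -hostname vios1
cluster -addnode -clustername sspcl1 -hostname vios2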
3.4.4 Storage access in PowerVC Standard Edition managing PowerKVM
Figure 3-9 shows how VMs access storage with PowerVC Standard Edition managing
PowerKVM.
The following list is a description of the flow of storage management from host internal
storage to VMs in PowerVC Standard Edition managing PowerKVM:
PowerKVM accesses the internal storage on the host.
PowerVC manages the internal storage when a PowerKVM host is added for
management.
LUN requests are created automatically by PowerVC and mapped to the VMs.
The flow of storage management from SAN storage to VMs in PowerVC Standard Edition
managing PowerKVM by using iSCSI is described:
SAN storage is available through the Ethernet network by configuring access over the
iSCSI protocol.
PowerVC manages the SAN storage when the storage provider is added.
LUN requests are created automatically by PowerVC and mapped to VMs.
Figure 3-9 PowerVC Standard Edition managing PowerKVM storage access
3.5 Storage management planning
PowerVC manages storage volumes, which can be attached to VMs. These storage volumes
can be backed by IBM Storwize storage devices, SAN Volume Controller devices, IBM XIV
storage devices, EMC VMAX storage devices, EMC VNX storage devices, or SSP files.
PowerVC requires IP connectivity to the storage providers to manage the storage volumes.
3.5.1 PowerVC terminology
PowerVC uses a few terms and concepts that differ from terms that are used in PowerVM:
Storage provider Any system that provides storage volumes. In version 1.2.3 of
PowerVC, storage providers can be IBM Storwize devices, SAN
Volume Controller devices that hide the real storage unit that holds the
data, IBM XIV devices, EMC VMAX storages, EMC VNX storages, or
SSP. Figure 3-10 shows a PowerVC environment that manages three
storage providers: one IBM Storwize V7000, one IBM XIV storage, and
52 IBM PowerVC Version 1.2.3: Introduction and Configuration
one EMC VMAX storage. PowerVC also refers to storage providers as
storage controllers.
Figure 3-10 PowerVC storage providers
Fabric Another name for a SAN switch. Figure 3-11 shows a PowerVC
Fabrics window that displays information for a switch that is named
fswitch, with IP address 172.16.21.139. Click this address on the
Fabrics window to open the graphical view of the switch.
Figure 3-11 Fabrics window that lists a switch with a switch GUI
Storage pool A storage resource that is defined on the storage provider in which
PowerVC can create volumes. PowerVC cannot create or modify
storage pools; it can only discover them. The storage pools must be
managed directly from the storage providers. Figure 3-12 shows the
detail of an IBM Storwize V7000 storage provider that is configured
with two storage pools for different purposes.
Figure 3-12 Storage pools
Shared storage pool In PowerVC, shared storage resource refers to the PowerVM shared
storage pool (SSP) feature. The SSP cannot be created or modified by
PowerVC. You must create the SSP on the VIOS before PowerVC can
create volumes on the SSP.
Volume Volumes are also referred to as a disk or a logical unit number (LUN).
They are carved from the storage pools and presented as virtual disks
to the partitions that are managed by PowerVC.
Storage template This template defines the properties of a storage volume, such as
location, thin provisioning, and compression. For example, by using
the templates that are shown in Figure 3-13, you can create volumes
that are either a normal thin-provisioned volume or a mirrored volume.
For more information, see 3.5.2, “Storage templates” on page 56.
Figure 3-13 Storage templates
Storage connectivity group
A set of Virtual I/O Servers with access to the same storage
controllers. For more information, see 3.5.3, “Storage connectivity
groups and tags” on page 58.
Tags Tags are a way to partition the FC ports of a host in sets that can be
associated with sets of Virtual I/O Servers. For more information, see
3.5.3, “Storage connectivity groups and tags” on page 58.
3.5.2 Storage templates
Storage templates are used to speed up the creation of new disks. A storage template defines
several properties of the disk unit. Disk size is not part of the template. The information
that is defined in a template differs for different types of storage devices. We introduce
only the IBM Storwize storage template, which is a common type of storage that is used in
PowerVC environments.
IBM Storwize storage template definition
The following information is defined in a template:
Name of the storage template
Storage provider. The template is associated with a single storage provider. It cannot be
used to instantiate disks from multiple storage providers.
Storage pool within storage provider. The template is associated with a single storage
pool. With PowerVC version 1.2.3 or later, you can add another pool to support volume
mirroring in the Advanced settings area.
Thin, thick (full), or compressed provisioning. To choose thick provisioning, select the
Generic type of volume.
Advanced Settings area. The following information is defined in the Advanced Settings area:
– I/O group: The I/O group to add the volume to. For the SAN Volume Controller, a
maximum of four I/O groups is supported.
– % of virtual capacity: Determines how much real storage capacity is allocated to the
volume at creation time, as a percentage of the maximum size that the volume can
reach.
– Automatically expand: A Yes or No check box. This feature prevents the volume from
using all of its real capacity and going offline. As a thin-provisioned volume uses more
of its capacity, this feature maintains a fixed amount of unused real capacity, which is
called the contingency capacity.
– Warning threshold: When real capacity reaches a specific percentage of virtual
capacity, a warning alert is sent.
– Grain size: Thin-provisioned grain size can be selected in the range from 32 KB to
256 KB. A grain is a chunk that is used for allocating space. The grain size affects the
maximum virtual capacity for the volume. Generally, smaller grain sizes save space but
require more metadata access, which can affect performance adversely. The default
grain size is 256 KB, which is the strongly recommended option. The grain size cannot
be changed after the thin-provisioned volume is created.
– Use all available WWPNs for attachment: Specifies whether to enable multipath
zoning. When this setting is enabled, PowerVC uses all available WWPNs from all of
the I/O groups in the storage controller to attach the volume to the VM. Enabling
multipath causes each WWPN that is visible on the fabric to be zoned to the VM.
– Enable mirroring: When checked, you will need to select another pool for volume
mirroring. The volume that is created will have one more copy in the mirroring pool.
IBM Storwize clients can use two pools based on two different back-end storage
devices to provide high availability.
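As a worked example (the numbers are illustrative, not from the product documentation): a
100 GB thin-provisioned volume that is created with % of virtual capacity set to 5 is allocated
about 5 GB of real capacity at creation time. With a warning threshold of 80%, an alert is sent
when the volume’s real capacity reaches 80 GB, that is, 80% of the 100 GB virtual size.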
A storage template can then be selected during volume creation operations.
Chapter 3. PowerVC installation planning 57
Figure 3-14 shows a dialog window that is presented to a PowerVC administrator when the
administrator defines the advanced settings for a thin-provisioned storage template definition.
Figure 3-14 Storage template definition: Advanced settings, thin-provisioned
Storage template planning
When you register a storage provider with PowerVC, a default storage template is created for
that provider. We suggest that you edit this default template to suit your needs immediately
after PowerVC discovers the storage provider.
You can define several storage templates for one storage provider. If the storage provider
contains several storage pools, at least one storage template is needed for each pool before
those pools can be used to create volumes.
Note: After a disk is created and uses a template, you cannot modify the template settings.
When you create a storage volume, you must select a storage template. All of the properties
that are specified in the storage template are applied to the new volume, which is created on
the storage provider that is specified in the storage template. To create a disk, you need to
enter the name of the template to use, volume name, and size only. Decide whether to select
the Enable sharing check box. See Figure 3-15.
Figure 3-15 Volume creation
A storage template must also be specified when you deploy a new VM to control the
properties of the virtual server’s boot volumes and data volumes. PowerVC can manage
pre-existing storage volumes. You can select them when you register the storage device or at
any later time. Preexisting storage volumes do not have an associated storage template.
3.5.3 Storage connectivity groups and tags
PowerVC Standard Edition uses storage connectivity groups and tags.
Storage connectivity groups
When you create a VM, PowerVC needs a way to identify on which host it has to deploy this
machine. One of the requirements is that from this host, the VM will connect to its storage.
Also, when you request PowerVC to migrate a VM, PowerVC must ensure that the target host
also provides the VM with connectivity to its volume.
The purpose of a storage connectivity group is to define sets of hosts with access to the same
storage devices where a VM can be deployed. A storage connectivity group is a set of Virtual
I/O Servers with access to the same storage controllers. It can span several host systems on
IBM Power Systems servers with landscapes that are managed by PowerVC Standard
Edition.
When you deploy a new VM with PowerVC, a storage connectivity group must be specified.
The VM will be associated with that storage connectivity group during the VM’s existence. A
VM can be deployed only on Power Systems hosts that contain at least one VIOS that is part
of the storage connectivity group. Specifying the storage connectivity group that a VM
belongs to defines the set of hosts on which this VM can be deployed.
The VM can be migrated only within its associated storage connectivity group and host group.
PowerVC ensures that the source and destination servers can access the required storage
controllers and LUNs.
Default storage connectivity groups are automatically created when PowerVC discovers the
environment. These default connectivity groups contain all Virtual I/O Servers that access the
same devices. Figure 3-16 shows the result of the discovery by PowerVC of an environment
with the following conditions:
Two POWER8 servers exist.
Each server hosts two Virtual I/O Servers.
Each VIOS has two FC ports.
All Virtual I/O Servers connect to an IBM Storwize V7000.
PowerVC automatically created two storage connectivity groups: One storage connectivity
group for NPIV storage access and one storage connectivity group for vSCSI storage access.
These two storage connectivity groups correspond to the two ways that partitions can access
storage from these hosts.
Figure 3-16 List of storage connectivity groups
The default storage connectivity groups can be disabled but not deleted. For more
information, see 5.9, “Storage connectivity group setup” on page 116.
The system administrator can define additional storage connectivity groups to further
constrain the selection of host systems. You can use storage connectivity groups to group
host systems together in, for example, production and development groups. On large servers that
are hosting several Virtual I/O Servers, you can use storage connectivity groups to direct
partitions to use a specific pair of Virtual I/O Servers on each host.
Figure 3-17 shows a diagram of storage connectivity group technology. It includes two Power
Systems servers, each with three Virtual I/O Servers. Two Virtual I/O Servers from each
server are part of the production storage connectivity group (called Production SCG in the
figure) and one VIOS from each server is part of the development storage connectivity group
(Development SCG). The VMs that are named VM1, VM2, VM4, and VM5 are associated
with the production storage connectivity group, and their I/O traffic passes through the FC
ports of A1, A2, B1, and B2 Virtual I/O Servers. The development partitions VM3 and VM6
are associated with the development storage connectivity group, and their traffic is limited to
using the FC ports that are attached to Virtual I/O Servers A3 and B3.
Figure 3-17 Storage connectivity groups
Tip: A storage connectivity group can be modified after its creation to, for example, add or
remove Virtual I/O Servers. Therefore, when your environment changes, you can add new
hosts and include their Virtual I/O Servers in existing storage connectivity groups.
Figure 3-18 shows how PowerVC presents the detail of a storage connectivity group. It is
similar to the production storage connectivity group of the previous example, with two servers,
two Virtual I/O Servers for each server, and two ports for each VIOS.
Figure 3-18 Content of a storage connectivity group
Storage port tags
PowerVC Standard Edition introduces a concept that does not exist within PowerVM: storage
port tags. PowerVC allows arbitrary tags to be placed on FC ports.
A storage connectivity group can be configured to connect only through FC ports with a
specific tag. Storage connectivity groups that share a VIOS can use different physical FC
ports on the VIOS. The PowerVC administrator handles this function by assigning different
port tags to the physical FC ports of the VIOS. These tags are labels that can be assigned to
specific FC ports across your hosts. A storage connectivity group can be configured to
connect only through FC ports that have the same tags when you deploy with NPIV direct
connectivity. Port tagging is not effective when you use SSP.
Combining a storage connectivity group and tags
By using both the storage connectivity group and tag functions, you can easily manage
different configurations of SAN topology that fit your business needs for partitioning the SAN
and restricting disk I/O traffic to part of the SAN.
Note: An FC port can have no tag or one tag. This tag can change over time, but a port
cannot have two or more tags simultaneously.
Figure 3-19 shows an example of possible tag usage. The example consists of two IBM
Power Systems servers, each with two Virtual I/O Servers. Each VIOS has three FC ports.
The first two FC ports are tagged ProductionSCG and connect to a redundant production
SAN. The third port is tagged DevelopmentSCG and connects to a development SAN. Client
VMs that belong to either storage connectivity groups (ProductionSCG or DevelopmentSCG)
share the same Virtual I/O Servers but do not share FC ports.
Figure 3-19 Storage connectivity groups and tags
The Virtual I/O Servers in a storage connectivity group provide storage connectivity to a set of
VMs with common requirements. An administrator can use several approaches to configure
storage connectivity groups. Figure 3-20 shows these possible scenarios:
Uniform All VMs use all Virtual I/O Servers and all FC ports.
Virtual I/O Server segregation
Different groups of VMs use different sets of Virtual I/O
Servers but all FC ports on each VIOS.
Port segregation Different groups of VMs use all Virtual I/O Servers but
different FC ports, according to tags on those ports.
Combination In a combination of VIOS and port segregation, different groups of
VMs use different sets of Virtual I/O Servers and different FC
ports, according to tags on those ports.
Figure 3-20 Examples of storage connectivity group deployments
3.6 Network management planning
A network represents a set of Layer 2 and Layer 3 network specifications, such as how your
network is subdivided by using VLANs, and information about the subnet mask, gateway, and
other characteristics. When you deploy an image, you choose one or more existing networks
to apply to the new VM.
Setting up networks in advance reduces the amount of information that you need to enter
during each deployment and helps to ensure a successful deployment.
The first selected network is the management network that provides the primary system
default gateway address. You can add additional networks to divide the traffic and provide
more functions.
PowerVC supports IP addresses by using hardcoded (/etc/hosts) or Domain Name Server
(DNS)-based host name resolution. PowerVC also supports Dynamic Host Configuration
Protocol (DHCP) or static IP address assignment. For DHCP, an external DHCP server is
required to provide the address on the VLANs of the objects that are managed by PowerVC.
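For example, if you use /etc/hosts rather than DNS, the file on the management host might
contain entries such as the following sketch (the host names and addresses are illustrative
only):

172.16.20.10   powervc.example.com   powervc
172.16.20.11   hmc1.example.com      hmc1
172.16.20.12   vios1.example.com     vios1
172.16.20.13   v7000.example.com     v7000

Every component that PowerVC manages, including the management host itself, must
resolve consistently.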
3.6.1 Multiple network planning
Each VM that you deploy must be connected to one or more networks. By using multiple
networks, you can split traffic. The PowerVC management host uses three common types of
networks when it deploys VMs:
Data network This network provides the route over which workload traffic is sent. At
least one data network is required for each VM, and more than one
data network is allowed.
Management network
This type of network is optional but highly suggested to provide a
higher level of function and security to the VMs. A management
network provides the Resource Monitoring and Control (RMC)
connection between the management console and the client logical
partition (LPAR). VMs are not required to have a dedicated
management network, but a dedicated management network
simplifies the management of advanced features, such as LPM and
dynamic reconfiguration. PowerVC provides the ability to connect to a
management network. First, you must set up networking on the
switches and the shared Ethernet adapter to support it.
Live Partition Migration (LPM) network
This optional network provides the route over which migration data is
sent from one host to another host. By separating this data onto its
own network, you can shape that network traffic to specify a higher or
lower priority over data or management traffic. If you do not want to
use a separate network for LPM, you can reuse an existing data or
management network connection for LPM.
Since version 1.2.2, PowerVC can dynamically add a network interface controller (NIC) to a
VM or remove a NIC from a VM. PowerVC will not set the IP address for new network
interfaces that are created after the machine deployment. Any removal of a NIC will result in
freeing the IP address that was set on it.
Note: When you use DHCP, PowerVC is not aware of the IP addresses of the VMs that it
manages.
Tip: We suggest that you create all of the networks that are needed for future VM
creation. Contact your network administrator to add all of the needed VLANs on the switch
ports that will be used by the shared Ethernet adapter (PowerVM) or network bridges
(PowerKVM). This action drastically reduces the amount of time that is needed for
network management (no further actions are needed from PowerVC administrators or
network teams).
3.6.2 Shared Ethernet adapter planning
Set up the shared Ethernet adapters for a registered host before you use the host within
PowerVC. The configuration for each shared Ethernet adapter determines how each host
treats networks. PowerVC requires that the shared Ethernet adapters are created before you
start to manage the systems.
If you are using a shared Ethernet adapter in sharing or auto mode with VLAN tagging, we
suggest that you create it without any VLANs assigned to its Virtual Ethernet Adapters.
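As an illustrative sketch only (the adapter names are assumptions, and the required
attributes depend on your VIOS level), such a shared Ethernet adapter can be created from
the VIOS command line in sharing mode with no additional VLANs on its Virtual Ethernet
Adapters:

mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=sharing ctl_chan=ent5

Here, ent0 is the physical adapter, ent4 is the primary Virtual Ethernet Adapter, and ent5 is
the control channel adapter.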
PowerVC will add or remove the VLANs on the shared Ethernet adapters when necessary (at
VM deletion and creation):
If you deploy a VM on a new network, PowerVC will add the VLAN on the shared Ethernet
adapter.
If you delete the last VM of a specific network (for a host), the VLAN will be automatically
deleted.
If the VLAN is the last VLAN that was defined on the Virtual Ethernet Adapter, this VLAN
will be removed from the shared Ethernet adapter.
If you are using a shared Ethernet adapter, the following behavior applies:
– High availability mode set to sharing: PowerVC ensures that at least two Virtual
Ethernet Adapters are kept in the shared Ethernet adapter.
– High availability mode set to auto: PowerVC ensures that at least one Virtual
Ethernet Adapter is kept in the shared Ethernet adapter.
PowerVC then connects VMs to that shared Ethernet adapter, deploys client-level VLANs to
it, and allows dynamic reconfiguration of the network to shared Ethernet adapter mapping.
When you create a network in PowerVC, a shared Ethernet adapter is automatically chosen
from each registered host, based on the VLAN that you specified when you defined the
network. If the VLAN does not exist yet on the shared Ethernet adapter, PowerVC deploys
that VLAN to the shared Ethernet adapter that is specified.
VLANs are deployed only as VMs need them to reduce the broadcast domains.
You can dynamically change the shared Ethernet adapter to which a network is mapped or
you can remove the mapping, but remember that this assignment is a default automatic
assignment when you set up your networks. It might not match your organization’s naming
policies.
The shared Ethernet adapter that is chosen as the default adapter has the same network
VLAN as the new network. If a shared Ethernet adapter with the same VLAN does not exist,
PowerVC chooses as the default the shared Ethernet adapter with the lowest primary VLAN
ID Port Virtual LAN Identifier (PVID) that is in an available state.
Important: When multiple Ethernet adapters exist on either or both the migration source
host or destination host, PowerVC cannot control which adapter is used during the
migration. To ensure the use of a specific adapter for your migrations, configure an
IP address on the adapter that you want to use.
Note: To manage PowerVM, PowerVC requires that at least one shared Ethernet adapter
is defined on the host.
Certain configurations might ensure the assignment of a particular shared Ethernet adapter
to a network. For example, if the VLAN that you choose when you create a network in
PowerVC is the PVID of the shared Ethernet adapter or one of the additional VLANs of the
primary Virtual Ethernet Adapter, that shared Ethernet adapter must back the network. No
other options are available. Plan more than one VIOS if you want a failover VIOS or expanded
VIOS functionality.
In our experience, certain clients want to keep the slot-numbering convention. By default,
PowerVC adds and removes Virtual Ethernet Adapters from the shared Ethernet adapter by
choosing the next available slot ID. If you want to avoid this behavior, you can modify all of the
/etc/nova/nova*.conf files and change the automated_powervm_vlan_cleanup attribute to
False by using the following command:
openstack-config --set /etc/nova/nova.conf DEFAULT automated_powervm_vlan_cleanup
False
If hosts are already defined, set this attribute in each nova-*.conf file (one for each
host), for example:
openstack-config --set /etc/nova/nova-828642A_10D6D5T.conf DEFAULT
automated_powervm_vlan_cleanup False
Then, restart the PowerVC Nova service:
/opt/ibm/powervc/bin/powervc-services nova restart
Tip: Systems that use multiple virtual switches are supported. If a network is modified to
use a different shared Ethernet adapter and that existing VLAN is already deployed by
other networks, those other networks also move to the new adapter. To split a single VLAN
across multiple shared Ethernet adapters, break those shared Ethernet adapters into
separate virtual switches. Use multiple virtual switches when you want to separate a
single VLAN across multiple distinct physical networks.
If you create a network, deploy VMs to use it, and then change the shared Ethernet
adapter to which that network is mapped, your workloads will be affected. The network will
experience a short outage while the reconfiguration takes place.
In environments with dual Virtual I/O Servers, the secondary shared Ethernet adapter is
not shown except as an attribute on the primary shared Ethernet adapter.
Table 3-11 lists suggestions for creating and using shared Ethernet adapters. The use of
SEAs is a preferred practice.
Table 3-11 Preferred practices for shared Ethernet adapter
Type of deployment | High availability mode auto | High availability mode sharing
New host | Create the shared Ethernet adapter with one VEA; do not put any VLANs on the VEA. | Create the shared Ethernet adapter with two VEAs; do not put any VLANs on the VEAs.
Existing host (keep numbering convention) | Set automated_powervm_vlan_cleanup in nova-*.conf to False. | Set automated_powervm_vlan_cleanup in nova-*.conf to False.
Existing host (let PowerVC manage numbering the adapters) | Do nothing. | Do nothing.
3.7 Planning users and groups
To access the PowerVC GUI, you must enter a user ID. This user ID is one of the user IDs that
is defined on the underlying Linux operating system. PowerVC also takes advantage of the
operating system groups.
Changes to users and groups are managed by the operating system and they are reflected
immediately on PowerVC.
3.7.1 User management
When you install PowerVC, it is configured to use the security features of the operating
system on the management host, by default. This configuration sets the root operating
system user account as the only available account with access to the PowerVC server.
We recommend that you create at least one new system administrator user account to
replace the root user account as the PowerVC management administrator. For more
information, see “Adding user accounts” on page 68. After a new administrator ID is defined,
remove the PowerVC administrator rights to the root user ID as explained in “Disable the root
user account from PowerVC” on page 71.
Important: The PowerVC management host stores data in an IBM DB2 database. When
the installation of PowerVC is complete, an operating system user account is created for
the main DB2 process to run under. This user account is pwrvcdb. Do not remove or modify
this user. PowerVC also requires other user IDs that are defined in /etc/passwd and they
must not be modified, such as nova, neutron, keystone, and cinder. All of the users are
used by DB2 and OpenStack and they must not be modified or deleted.
For security, you cannot connect remotely to these user IDs. These users are configured
for no login.
User account planning is important to define standard accounts and the process and
requirements for managing these accounts. A PowerVC management host can take
advantage of user accounts that are managed by the Linux operating system security tools or
can be configured to use the services that are provided by LDAP.
Operating system user account management
Each user is added, modified, or removed by the system administrator, by using Linux
operating system commands. After the user ID is defined on the operating system, the user
ID becomes available in PowerVC if it is a member of a group with a PowerVC role that is
granted, such as admin, deployer, or viewer (see 3.7.2, “Group management planning” on
page 71).
Operating system-based user management requires command-line experience, but it is easy
to maintain. No dependency exists on other servers or services. To see user accounts in the
PowerVC management hosts, click Users in the top navigation bar of the PowerVC GUI. Use
the underlying Linux commands to manage your account (useradd, usermod, or userdel, for
example).
The system administrator of the PowerVC management host must replace the default root
user account configuration. After the system administrator adds the new user account to the
admin group in the operating system, the root user must be removed from this group.
Adding user accounts
To add a user account to the operating systems on the PowerVC management host, run the
following command as root from the Linux command-line interface (CLI):
# useradd [options] login_name
Assume that you want to create a user ID for a system administrator who is new to PowerVC.
You want to allow this administrator to view the PowerVC environment only, not to act on any
of the managed objects. Therefore, you want to give this administrator only a viewer privilege.
By using the command that is shown in Example 3-1, create the user viewer1, with
/home/viewer1 as the home and base directory, the viewer group as the main group, and a
comment with additional information, such as One viewer account.
Example 3-1 Adding a viewer user account with the useradd command
useradd -d /home/viewer1 -g viewer -m -c "One viewer account" viewer1
The new user is created with the viewer role in the PowerVC management host because it is
part of the viewer user group. Double-click the viewer1 user account to see detailed
information, as shown in Figure 3-21. After the administrator is skilled enough with PowerVC
to start managing the environment, you can change the administrator’s group to give the
administrator more management privileges, as described in “Update user accounts” on
page 70.
In addition to the viewer group, the admin and deployer groups can be assigned to a user.
Use these commands to create users with the deployer and admin roles:
Deployer:
useradd -d /home/deployer1 -g deployer -m -c “One deployer account” deployer1
Admin:
useradd -d /home/admin1 -g admin -m -c “One admin account” admin1
In the example in Figure 3-21, three user IDs (admin1, deployer1, and viewer1) were added
alongside the initial root user ID.
Figure 3-21 Users information
Figure 3-21 shows the new accounts.
Note: Do not forget to set a password to the new user if you want to log in with these
accounts on the PowerVC GUI.
Figure 3-22 shows the new user admin1 that was added to the admin group.
Figure 3-22 Detailed user account information
You can verify each user/group in the /etc/group or /etc/passwd file as shown in
Example 3-2.
Example 3-2 Verify users
# grep -wE "viewer|deployer|admin" /etc/group
admin:x:1001:root
deployer:x:1002:
viewer:x:1003:
# grep -wE "viewer1|deployer1|admin1" /etc/passwd
viewer1:x:1001:1003:One viewer account:/home/viewer1:/bin/bash
deployer1:x:1002:1002:One deployer account:/home/deployer1:/bin/bash
admin1:x:1003:1001:One admin account:/home/admin1:/bin/bash
Update user accounts
To update a user account in the operating systems on the PowerVC management host, run
the following command as root:
# usermod [options] login_name
By using the command that is shown in Example 3-3, update the admin1 user account with
the comment IBM PowerVC admin user account, and ensure that it is in the admin user group.
Example 3-3 Updating the admin1 user account with the usermod command
usermod -c "IBM PowerVC admin user account" -g admin admin1
After this modification, the admin1 user account is part of the admin user group and can
manage the PowerVC management host, as shown in Figure 3-22 on page 70.
Disable the root user account from PowerVC
Remove the root user account from the admin user group in the PowerVC management hosts
by running the following command as root:
gpasswd -d root admin
Lightweight Directory Access Protocol (LDAP)
LDAP is an open standard for accessing global or local directory services over a network or
the Internet. A directory can handle as much information as you need, but it is commonly
used to associate names with phone numbers and addresses. LDAP is a client/server
solution. The client requests information and the server answers the request. LDAP can be
used as an authentication server.
If an LDAP server is configured in your enterprise, you can use that LDAP server for PowerVC
user authentication. PowerVC can be configured to query an LDAP server for authentication
rather than using operating system user account authentication. Use the
powervc-ldap-config command to set up LDAP authentication.
See “Configuring LDAP” in the PowerVC section of the IBM Knowledge Center page for
instructions:
http://www-01.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_ldap_hmc.html
Selecting the authentication method
Plan the authentication method and necessary accounts before the PowerVC installation. For
simplicity of management, we recommend the use of the operating system authentication
method to manage user accounts in most of the PowerVC installations. Use the LDAP
authentication method only if an LDAP server is already installed and configured.
3.7.2 Group management planning
By default, PowerVC is configured to use the group security features of the operating system
on the management host. PowerVC includes three user groups with the following privileges:
admin
Users in this group can perform all tasks, and they have access to all resources.
deployer
Users in this group can perform all tasks, except the following tasks:
– Adding, updating, or deleting storage systems
– Adding, updating, or deleting hosts
– Adding, updating, or deleting networks
– Viewing users and groups
viewer
Users in this group can view resources and the properties of resources, but they cannot
perform tasks. They cannot view the user and group properties.
Important: We strongly recommend that you do not use the root user account on
PowerVC. It is a security preferred practice to remove it from the admin group.
Membership in these groups is defined in the operating system. Group management is not
performed from PowerVC. To add or remove users from these groups, you must add or
remove them in the operating system. Any changes to the operating system groups are
reflected on PowerVC.
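For example (reusing the group names that PowerVC defines and a hypothetical user), you
can move a user between PowerVC roles with standard Linux commands:

gpasswd -a viewer1 deployer
gpasswd -d viewer1 viewer

The first command adds viewer1 to the deployer group; the second command removes the
user from the viewer group. The change is reflected in PowerVC immediately.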
The PowerVC management host can display the user accounts that belong to each group.
Log in to the PowerVC management host and click Users on the top navigation bar of the
PowerVC GUI, and then click the Groups tab, as shown in Figure 3-23.
Figure 3-23 Groups tab view under Users on the PowerVC management host
Note: You cannot create your own authorization rules; only viewer, deployer, and admin
are available. You cannot fine-tune the user rights with a mechanism, such as role-based
access control (RBAC).
This view displays the default groups. To access detailed information for each group,
double-click the group name. Figure 3-24 shows an example of a group that includes three
user IDs.
Figure 3-24 Detailed view of viewer user group on the management host
3.8 Security management planning
PowerVC provides security services that support a secure environment and, in particular, the
following security features:
LDAP support for authentication and authorization information (users and groups).
The PowerVC Apache web server is configured to use secured https protocol. Only
Transport Layer Security (TLS) 1.2 is supported.
Host key and certificate verification of hosts, storage, and switches.
For a list of configuration rules for Internet Explorer, see this website:
http://www-01.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_hwandsw_reqs_hmc.html
Audit logs, which are recorded and available.
Note: File upload is not supported in Internet Explorer, version 9.0. Certain functions
will be limited. When you use Internet Explorer version 9.0 or version 10.0, you must
select Use TLS 1.2.
3.8.1 Ports that are used by IBM Power Virtualization Center
The set of ports differs between the PowerVC offering types (managing PowerVM or
managing PowerKVM).
Information about the ports that are used by PowerVC management hosts for inbound and
outbound traffic is on the following IBM Knowledge Center pages:
PowerVC Standard Edition, for managing PowerVM:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_planning_security_firewall_hmc.html
PowerVC Standard Edition, for managing PowerKVM:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.help.doc/powervc_planning_security_firewall_kvm.html
3.8.2 Providing a certificate
A PowerVC management host is installed with a default self-signed certificate and a key.
PowerVC can also use certificate authority (CA)-signed certificates.
Self-signed certificates are certificates that you create for private use. After you create a
self-signed certificate, you can use it immediately. Because anyone can create self-signed
certificates, they are not considered publicly trusted certificates. You can replace default,
expired, or corrupted certificates with a new certificate. You can also replace the default
certificate with certificates that are requested from a CA.
The certificates are installed in the following locations:
/etc/pki/tls/certs/powervc.crt
/etc/pki/tls/private/powervc.key
Clients can replace the rsyslog and libvirt certificates for PowerKVM installations.
The process to replace the certificates is described in the IBM Knowledge Center:
PowerVC Standard Managing PowerVM:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_certificate_hmc.html
PowerVC Standard Managing PowerKVM:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.help.doc/powervc_rsyslog_cert_kvm.html
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.help.doc/powervc_certificate_kvm.html
Important: If a firewall is configured on the management host, ensure that all ports that
are listed on the associated IBM Knowledge Center page are open.
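As an illustrative sketch only (the exact port list is on the IBM Knowledge Center pages that
are referenced in 3.8.1; port 443 is shown here because the PowerVC web interface uses
HTTPS), a port can be opened in the RHEL firewall with firewall-cmd:

firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --reload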
3.9 Product information
See the following resources for more planning information.
Direct customer support
For technical support or assistance, contact your IBM representative or the Support website:
http://www.ibm.com/support
Packaging
The PowerVC Standard Edition contains a DVD that includes product installation
documentation and files. Your Proof of Entitlement (PoE) for this program is a copy of a paid
sales receipt, purchase order, invoice, or other sales record from IBM or its authorized
reseller from whom you acquired the program, provided that it states the license charge unit
(the characteristics of intended use of the program, number of processors, and number of
users) and quantity that was acquired.
Software maintenance
This software license offers Software Maintenance, which was previously referred to as
Software Subscription and Technical Support.
Processor core (or processor)
Processor core (or processor) is a unit of measure by which the program can be licensed.
Processor core (or processor) is a functional unit within a computing device that interprets
and executes instructions. A processor core consists of at least an instruction control unit and
one or more arithmetic or logic units. With multi-core technology, each core is considered a
processor core. Entitlements must be acquired for all activated processor cores that are
available for use on the server.
In addition to the entitlements that are required for the program directly, the licensee must
obtain entitlements for this program that are sufficient to cover the processor cores that are
managed by the program.
A Proof of Entitlement (PoE) must be acquired for all activated processor cores that are
available for use on the server. Authorization for PowerVC is based on the total number of
activated processors on the machines that are running the program and the activated
processors on the machines that are managed by the program.
Licensing
The IBM International Program License Agreement, including the License Information
document and Proof of Entitlement (PoE), governs your use of the program. PoEs are
required for all authorized use.
This software license includes Software Subscription and Support (also referred to as
Software Maintenance).
Chapter 4. PowerVC installation
This chapter explains the IBM Power Virtualization Center Standard Edition (PowerVC)
installation. It covers the following topics:
4.1, “Setting up the PowerVC environment” on page 78
4.2, “Installing PowerVC” on page 82
4.3, “Uninstalling PowerVC” on page 84
4.4, “Upgrading PowerVC” on page 85
4.5, “Updating PowerVC” on page 87
4.6, “PowerVC backup and recovery” on page 87
4.7, “PowerVC command-line interface” on page 92
4.8, “Virtual machines that are managed by PowerVC” on page 94
4.1 Setting up the PowerVC environment
IBM PowerVC version 1.2.3.0 can be installed on Red Hat Enterprise Linux (RHEL) version
7.1, on the ppc64, ppc64le, or x86_64 platform.
Before you install PowerVC, install RHEL on the management virtual machine (VM) or
management host. PowerVC requires several additional packages to be installed. These
packages are automatically installed if you have a valid Linux repository. If you need to
manually install these packages, see “Installing Red Hat Enterprise Linux on the
management server or host” in the PowerVC Standard Edition section of the IBM Knowledge
Center:
https://ibm.biz/BdXKQR
To set up the management hosts, complete the following tasks:
1. Create the VM (only if you plan to install PowerVC in a virtualized server).
2. Install RHEL Server 7.1 on the management hosts.
3. Customize RHEL Server to meet the PowerVC requirements.
4.1.1 Create the virtual machine to host PowerVC
Create the VM that will host PowerVC with the same procedure that is used to create any
other partition.
Create the virtual machine by using the Hardware Management Console
To create the VM by using the HMC, complete the following steps:
1. In the navigation panel, open Systems Management and click Servers.
2. In the work panel, select the managed system, click Tasks, and click Configuration →
Create Partition.
3. Follow the steps in the Create Partition wizard to create a logical partition (LPAR) and
partition profile.
After the VM is created, you need to install the operating system into the management VM.
Create the virtual machine by using PowerKVM
To create the management VM on a PowerKVM host, you can use the tool that you prefer
from these options:
A command-line utility that is called virsh
An HTML-based management tool that is called Kimchi.
Both tools are provided with PowerKVM.
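For example, with virsh, assuming that a libvirt domain XML file for the management VM was
already prepared (the file name and domain name are illustrative), you can define and start
the VM:

virsh define /tmp/powervc-mgmt.xml
virsh start powervc-mgmt
virsh list --all

The last command lists all defined domains so that you can verify that the VM is running.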
Note: Unlike the Hardware Management Console (HMC), PowerVC is not a stand-alone
appliance. It must be installed on an operating system. You must have a valid Linux license
to use the operating system and a valid license to use PowerVC.
Important: The management VM must be dedicated to PowerVC and the operating
system on which it runs. Do not install other software on it.
After the VM is created, you need to install the operating system into the management VM.
Create the management virtual machine on IBM System x
To create the management VM on an IBM System x server, follow the instructions for your
server.
After the VM is created, you need to install the operating system into the management VM.
4.1.2 Download and install Red Hat Enterprise Linux
As part of the PowerVC setup, you need to download and install RHEL, so you need a valid
license and a valid copy of the software. PowerVC is not a stand-alone appliance. It is
installed on top of the operating system, but it does not include the license to use RHEL.
You can get the software and a valid license from the Red Hat website:
http://www.redhat.com
Install RHEL by using your preferred method. See the Red Hat Enterprise Linux 7 Installation
Guide for instructions:
https://ibm.biz/BdXKQ4
4.1.3 Customize Red Hat Enterprise Linux
Before you install PowerVC, customize RHEL to meet the following PowerVC requirements
(described in the following sections):
Network, Domain Name Server (DNS), and host name configuration
Creation of a repository for the RHEL packages or manual installation
Configure the network
The first task before you install PowerVC is to configure the network. PowerVC uses the
default network interface: eth0. To use a different network interface, such as eth1, set the
HOST_INTERFACE environment variable before you run the install script. The following example
shows the setting:
export HOST_INTERFACE=eth1
Important: PowerVC does not support dual management by both PowerVC and Kimchi
after PowerVC is installed.
Note: After the installation finishes, do not add any other package to the server. If any
other packages are needed by PowerVC, the additional packages are obtained by the
PowerVC installer automatically.
Important: IBM Installation Toolkit for Linux must not be installed on the PowerVC
management host.
Set the Domain Name Server and host name
Two options exist for managing name resolution: Either use DNS or use the /etc/hosts file.
You must pay attention to the correct setting of the name resolution of all components that will
be managed by PowerVC.
If you do not plan to use DNS for host name resolution, ensure that all hardware
components (including virtualized components) are correctly defined in the /etc/hosts
file.
If you plan to use DNS for host name resolution, all hardware components must be
defined correctly in your DNS. In addition, you need to enable forward and reverse
resolution. Host names must be consistent within the whole PowerVC domain.
Important: Regardless of the host name resolution method that you use, the PowerVC
management host must be configured with a valid, fully qualified domain name.
Configure the YUM repository for the PowerVC installation
Before you install PowerVC, you need a valid repository for the RHEL software.
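As a minimal sketch (the ISO path, mount point, and repository ID are assumptions), one
way to provide such a repository is to mount the RHEL installation ISO and define a local
repository file:

mount -o loop /tmp/RHEL-7.1-Server.iso /mnt/rheliso
cat > /etc/yum.repos.d/rhel-local.repo << EOF
[rhel-local]
name=RHEL 7.1 local ISO
baseurl=file:///mnt/rheliso
enabled=1
gpgcheck=0
EOF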
This section provides an example that illustrates how to configure the local YUM repository by
using an RHEL International Organization for Standardization (ISO) file so that the PowerVC
installation finds the packages that it requires. Follow these steps:
1. Configure the yum repo by selecting and adding the new channel for Optional Software.
2. Verify that yum is seeing the new optional repo file:
yum repolist
3. As part of the installation process, you need to manually install the gettext package. Run
the following command after the repository is created:
yum install gettext
Then, follow the instructions that it provides. The output is similar to Example 4-1.
Example 4-1 Installing the gettext package
Loaded plug-ins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use
subscription-manager to register.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package gettext.ppc64 0:0.17-16.el6 will be installed
--> Processing Dependency: libgomp.so.1(GOMP_1.0)(64bit) for package:
gettext-0.17-16.el6.ppc64
--> Processing Dependency: cvs for package: gettext-0.17-16.el6.ppc64
--> Processing Dependency: libgomp.so.1()(64bit) for package:
gettext-0.17-16.el6.ppc64
--> Running transaction check
---> Package cvs.ppc64 0:1.11.23-16.el6 will be installed
---> Package libgomp.ppc64 0:4.4.7-4.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=========================================================================
 Package           Arch        Version              Repository       Size
=========================================================================
Installing:
gettext ppc64 0.17-16.el6 rhel-source 1.9 M
Installing for dependencies:
cvs ppc64 1.11.23-16.el6 rhel-source 714 k
libgomp ppc64 4.4.7-4.el6 rhel-source 121 k
Transaction Summary
=========================================================================
Install       3 Package(s)
Total download size: 2.7 M
Installed size: 8.5 M
Is this ok [y/N]: y
Downloading Packages:
-------------------------------------------------------------------------
Total                                           25 MB/s | 2.7 MB    00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libgomp-4.4.7-4.el6.ppc64 1/3
Installing : cvs-1.11.23-16.el6.ppc64 2/3
Installing : gettext-0.17-16.el6.ppc64 3/3
Verifying : cvs-1.11.23-16.el6.ppc64 1/3
Verifying : gettext-0.17-16.el6.ppc64 2/3
Verifying : libgomp-4.4.7-4.el6.ppc64 3/3
Installed:
gettext.ppc64 0:0.17-16.el6
Dependency Installed:
cvs.ppc64 0:1.11.23-16.el6 libgomp.ppc64 0:4.4.7-4.el6
Complete!
4. The RHEL 7.1 OS media does not contain all of the additional packages that are required
by PowerVC. You can download the packages that are required by PowerVC from the
Optional Software channel by using the RHN subscription. Table 4-1 lists the package
prerequisites for the PowerVC installation.
Table 4-1 RHEL packages that relate to PowerVC
Important: A list of packages that must not be installed on the server before you start
the PowerVC installation is available in the IBM Knowledge Center. For information
about the packages’ requirements and restrictions, see “Installing Red Hat Enterprise
Linux on the management server or host” in the IBM Knowledge Center:
https://ibm.biz/BdXKQc
Red Hat Enterprise Linux for IBM Power      Red Hat Enterprise Linux x86_64
[ppc64 and ppc64le]
python-zope-interface                       python-zope-interface
python-jinja2                               python-jinja2
python-pyasn1                               python-pyasn1-modules
python-pyasn1-modules                       python-webob
python-webob                                python-webtest
python-webtest                              python-libguestfs
SOAPpy                                      SOAPpy
pyserial                                    pyserial
python-fpconst                              python-fpconst
python-twisted-core                         python-twisted-core
python-twisted-web                          python-twisted-web
For information about how to add the optional repositories, see this website:
http://red.ht/1FSNvif
5. After you install the operating system, you must set the maximum file size to unlimited by
typing the following command as the root user:
ulimit -f unlimited
4.2 Installing PowerVC
This section describes how to install PowerVC on your management host by using .tar files
that are obtained from the download site.
Before you install PowerVC, ensure that all of the hardware and software prerequisites are
met and that your environment is configured correctly. If you need further information, see
3.1.1, “Hardware and software requirements” on page 30. Also, ensure that you prepared the
management host and installed the supported version of RHEL Server on it.
Follow these steps to install PowerVC:
1. To begin the installation, open a web browser and navigate to the Entitled Software
Support website:
http://www.ibm.com/servers/eserver/ess/OpenServlet.wss
2. Sign in with your IBM ID.
3. Select Software downloads.
4. Select the Power (AIX) brand.
5. Select the customer number that you want to work with, and click Continue.
6. Select the edition of PowerVC that you purchased under 5692-A6P, and click Continue.
7. Download either the PPC64, PPC64LE, or the x86_64 .tar file.
Note: If your web ID is not yet registered with a customer number, select Register
Customer ID number. If you are the first web ID to register your customer number, you
will become the primary ID. However, if you are not the first web ID, you will be
forwarded to the primary contact, who will need to approve your web ID.
8. After you download the .tar file, extract it to the location from which you want to run the
installation script.
9. Change your current directory to the directory where the files were extracted.
10.Start the installation by running the installation script:
./install
11.Select the offering type to install from the following options:
– 1 - Standard managing PowerVM
– 2 - Standard managing PowerKVM
– 9 - Exit
12.After you read and accept the license agreement, PowerVC installs. See Example 4-2. An
installation log file is created in the /opt/ibm/powervc/log/ directory.
Example 4-2 Installing PowerVC
###############################################################################
Starting the IBM PowerVC 1.2.3.0 Installation on:
2015-06-12T16:52:18-05:00
###############################################################################
LOG file is /opt/ibm/powervc/log/powervc_install_2015-06-12-165214.log
13.After the installation is complete, you will see a message similar to Example 4-3. Ensure
that you download and install any fix packs that are available on Fix Central. See 4.5,
“Updating PowerVC” on page 87.
Example 4-3 Installation completed
***************************************************************************
PowerVC installation successfully completed at 2015-06-12T17:07:03-05:00.
Refer to
/opt/ibm/powervc/log/powervc_install_2015-06-12-165214.log
for more details.
***************************************************************************
Use a web browser to access IBM PowerVC at https://powervca.pwrvc.ibm.com
Note: The IBM DB2 database use of the 32-bit file libpam.so is not required by
PowerVC. Ignore the following warning:
Requirement not matched for DB2 database "Server".
Summary of prerequisites that are not met on the current system:
DBT3514W The db2prereqcheck utility failed to find the following 32-bit
library file: "/lib/libpam.so*".
Table 4-2 shows the available options for the install command.
Table 4-2 Options for the PowerVC install command
Option          Description
-c nofirewall   No firewall configuration will be performed during the installation. The
                admin user will need to configure the firewall manually.
-s <offering>   Run a silent installation. This option requires that the offering value is
                set to 'standard' or 'powerkvm'.
-t              Run the prerequisite checks and exit.
-u              Uninstall: attempt to clean up after a failed installation, and then exit.
-f              Force the installation to override or bypass certain checks. This option
                is used with the uninstall option to bypass failures during the uninstall.
-n              The following values are valid:
                preferipv4 (default): This option is the default for the IBM PowerVC
                installation. Select this option to install IBM PowerVC by using the
                IPv4 IP address. If the IPv4 address is unavailable, the installation
                will use the IPv6 IP address.
                preferipv6: Select this option to install IBM PowerVC by using the IPv6
                IP address. If the IPv6 address is unavailable, the installation will
                use the IPv4 IP address.
                requireipv4: Select this option to install IBM PowerVC by using the IPv4
                IP address only. If the IPv4 IP address is unavailable, the installation
                fails.
                requireipv6: Select this option to install IBM PowerVC by using the IPv6
                IP address only. If the IPv6 IP address is unavailable, the installation
                fails.
-h              Display the help messages and exit.
If the installation does not complete successfully, run the following command to remove the
files that were created during the failed installation before you reinstall PowerVC:
[powervc_install_file_folder]/install -u -f
Note: Use this command only to remove files from a failed installation. If you need to
uninstall a working instance of PowerVC, use the correct uninstall command. For more
information, see 4.3, “Uninstalling PowerVC” on page 84.
4.3 Uninstalling PowerVC
This section describes the procedure to remove PowerVC from the management host. It does not
remove or change anything in the environment that is managed by PowerVC. Objects that
were created with PowerVC (VM, volumes, and so on) are unchanged by this process. Any
RHEL prerequisite packages that are installed during the PowerVC installation remain
installed.
Run the following command to uninstall PowerVC:
/opt/ibm/powervc/bin/powervc-uninstall
Example 4-4 shows the last few output lines of the uninstall process.
Example 4-4 Uninstallation successful
The execution completed successfully.
For more information, see the DB2 uninstallation log at
"/tmp/db2_deinstall.log.23987".
DB2 uninstalled successfully.
DB2 uninstall return code: 0
Completing post-uninstall cleanup.
Database removal was successful.
Uninstallation of IBM PowerVC completed.
####################################################################
Ending the IBM PowerVC Uninstallation on:
2014-05-14T23:35:04-04:00
####################################################################
Uninstallation was logged in /var/log/powervc-uninstall.log
The uninstallation process writes its log in this file:
/var/log/powervc-uninstall.log
If you encounter issues when you run the powervc-uninstall command, you can clean up
the environment by using the following command:
[powervc_install_file_folder]/powervc-uninstall -f
This command forces the uninstallation of all components of PowerVC. For the complete list
of available options with the powervc-uninstall command, see Table 4-3.
Table 4-3 Available options for the powervc-uninstall command
Option   Description
-f       Forcefully removes IBM PowerVC.
-l       Disables uninstall logging. Logging is enabled by default.
-y       Uninstalls without prompting.
-s       Saves configuration files to an archive.
-h       Displays the help message and exits.
4.4 Upgrading PowerVC
You can upgrade to PowerVC version 1.2.3 on RHEL 7.1 from PowerVC 1.2.1.2 and later.
Before you upgrade PowerVC, you need to run the powervc-backup command on the system
where the previous version of PowerVC is installed. You can restore the backup file on the
system that is upgraded to PowerVC version 1.2.3.
4.4.1 Before you begin
Perform the following steps before you begin your software upgrade:
Review the hardware and software requirements for PowerVC version 1.2.3.
Ensure that all compute and storage hosts are up and running before you start the
upgrade.
Verify your environment before you start the upgrade to ensure that the upgrade process
does not fail because of environment issues.
Ensure that no tasks, such as resizing, migrating, or deploying, are running on the VM
when you start the upgrade. Any tasks that are running on the VM during the upgrade will
cause the VM to enter an error state after the upgrade is complete.
Ensure that you manually copy any customized powervc.crt and powervc.key files from
the previous version of PowerVC on RHEL 6.0 to PowerVC version 1.2.3 on RHEL 7.1
(a copy sketch follows this list).
Any operating system users from the Admin, Deployer, or Viewer groups on the previous
version must be added again to the groups on the RHEL 7.1 system.
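As a sketch (the source host name old-powervc is an assumption; the certificate paths are
the defaults that are listed in 3.8.2, “Providing a certificate” on page 74), the files can be
copied from the new management host with scp:

scp root@old-powervc:/etc/pki/tls/certs/powervc.crt /etc/pki/tls/certs/powervc.crt
scp root@old-powervc:/etc/pki/tls/private/powervc.key /etc/pki/tls/private/powervc.key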
4.4.2 Upgrading
To upgrade PowerVC and migrate the existing data, complete the following steps at a shell
prompt as the root user:
1. Go to the previous version of PowerVC on the RHEL 6.0 system and run
/opt/ibm/powervc/bin/powervc-backup.
2. Install PowerVC version 1.2.3 on the RHEL 7.1 system.
3. We strongly recommend that you go to the Fix Central website to download and install any
fix packs that are available.
4. Copy the most recent backup archive from the previous version of PowerVC to the server
where you installed PowerVC version 1.2.3.
5. On the server with PowerVC version 1.2.3, run the powervc-restore command with the
--targetdir option that points to the new backup archive. This step completes the
upgrade process.
powervc-restore --targetdir /var/opt/ibm/powervc/backups/powervc_backup.tar.gz
Notes:
If you upgrade PowerVC while the user interface is active, it prompts you that it is set to
maintenance mode and you cannot use it. After you run the powervc-restore command
successfully, you can access the PowerVC user interface again.
If an error occurs while you run the powervc-restore command, check for errors in the
powervc-restore logs in the /opt/ibm/powervc/log directory. After you correct or resolve the
issues, run the powervc-restore command again.
If you want to install PowerVC version 1.2.3 on a system with RHEL 6.0 installed, follow
these steps:
a. Copy the backup archive to another system.
b. Uninstall RHEL 6.0.
c. Install RHEL 7.1 and then install PowerVC version 1.2.3 on the system.
d. Copy the backup archive to this system and restore the archive as described in the
previous steps 4 and 5.
4.5 Updating PowerVC
PowerVC updates are published on the IBM Fix Central repository. Log in with your IBM ID to
get the update package:
http://www.ibm.com/support/fixcentral
1. Before you update PowerVC, check that enough disk space is available.
2. Download the package to a directory, extract the file, and run the update command. To
extract the file, run this command:
tar -zxvf [location_path]/powervc-update-ppc-rhel-version.tgz
This command extracts the package in the current directory and creates a new directory
that is named powervc-version.
3. Run the update script by running the following command:
/[location_path]/powervc-[version]/update
When the update process is finished, it displays the message that is shown in
Example 4-5.
Example 4-5 Update successfully completed
***************************************************************************
PowerVC installation successfully completed at 2015-06-12T17:19:56-05:00.
Refer to
/opt/ibm/powervc/log/powervc_update_2015-06-12-171011.log
for more details.
***************************************************************************
4.6 PowerVC backup and recovery
Consider backing up your PowerVC data regularly as part of a broader system backup and
recovery strategy. You can use the operating system scheduling tool to perform regular
backups or any other automation tool.
Backup and recovery tasks can be performed only by using the command-line interface (CLI).
No window is available to open in the GUI for backup and recovery.
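Because backup is a CLI task, it can be scheduled. For example, the following root crontab entry runs a nightly backup at 02:00. This is a sketch under assumptions: the target directory /powervcbkp is hypothetical, and --noprompt suppresses the confirmation prompt as described in Table 4-4.

0 2 * * * /opt/ibm/powervc/bin/powervc-backup --noprompt --targetdir /powervcbkp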
4.6.1 Backing up PowerVC
Use the powervc-backup command to back up your essential PowerVC data. You can then
restore it to a working state in a data corruption situation or disaster.
The powervc-backup command is in the /opt/ibm/powervc/bin/ directory. Use this command
syntax:
powervc-backup [-h] [--noprompt] [--targetdir LOCATION]
Important: If /opt or /var or /home are separate mount points, 2500 MB of installation
space is required in /opt, 187 MB of free space is required in /var, and 3000 MB of
free space is required in /home.
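To confirm that these mount points have enough free space, you can check them with df before you proceed, for example:

df -h /opt /var /home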
Table 4-4 lists the command options.

Table 4-4 Options for the powervc-backup command

Option                 Description
-h, --help             Displays help information about the command.
--noprompt             If specified, no user intervention is required during execution of the backup process.
--targetdir LOCATION   Target location in which to save the backup archive. The default value is /var/opt/ibm/powervc/backups.

The following data is backed up:
PowerVC databases, such as the Nova database where information about your registered
hosts is stored
PowerVC configuration data, such as /etc/nova
Secure Shell (SSH) private keys that are provided by the administrator
Glance image repositories

Note: Glance is the OpenStack database name for the image repository.

Important: During a backup, most PowerVC services are stopped, and all other users are
logged off from PowerVC until the operation completes.
Back up PowerVC data
Complete the following steps to back up PowerVC data:
1. Ensure that the pwrvcdb user has, at a minimum, read and execute permissions to the file
structure for the target directory.
2. Open a CLI to the operating system on the VM on which PowerVC is installed.
3. Navigate to the /opt/ibm/powervc/bin/ directory.
4. Run the powervc-backup command with any necessary options. If prompts are not
suppressed, respond to them as needed.
The following example shows the command with a non-default mounted file system target
directory:
powervc-backup --targetDir=/powervcbkp
This command displays a prompt to confirm that you want to stop all the services. Type y
to accept and continue. See Example 4-6.
Example 4-6 Example of PowerVC backup
Continuing with this operation will stop all PowerVC services. Do you want to
continue? (y/N):y
Stopping PowerVC services...
Backing up the NOVA database...
Backing up the QTM_IBM database...
Backing up the CINDER database...
Backing up the GLANCE database...
Backing up the NOSQL database...
Backing up the KEYSTONE database...
Backing up the data files...
Database and file backup completed. Backup data is in archive
/powervcbkp/20150615164334862394/powervc_backup.tar.gz
Starting PowerVC services...
PowerVC backup completed successfully.
When the backup operation completes, a new time-stamped subdirectory is created in the
target directory and a backup file is created in that subdirectory, for example:
/powervcbkp/2014515152932179256/powervc_backup.tar.gz
We recommend that you copy this file outside of the management host, according to your
organization’s backup and recovery guidelines.
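For example, you can copy the archive to a remote backup server with scp. The user, host, and target path in this sketch are placeholders:

scp /powervcbkp/20150615164334862394/powervc_backup.tar.gz backup@backupserver:/backups/powervc/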
4.6.2 Recovering PowerVC data
Use the powervc-restore command to recover PowerVC data that was previously backed up
so that you can restore a working state after a data corruption situation or disaster.
You can restore a backup archive only to a system that is running the same level of PowerVC
and operating system (and hardware if the OS is executing on a dedicated host rather than a
VM) as the system from which the backup was taken. Ensure that the target system meets
those requirements before you restore the data. PowerVC checks this compatibility of the
source platform and the target platform, as shown in Example 4-7.
Example 4-7 Mismatch between backup and recovery environments
Continuing with this operation will stop all PowerVC services and overwrite
critical PowerVC data in both the database and the file system. Do you want to
continue? (y/N):y
The backup archive is not compatible with either the restore system's
architecture, operating system or PowerVC Version. Exiting.
The backup process does not back up Secure Sockets Layer (SSL) certificates and
associated configuration information. When you restore a PowerVC environment, the SSL
certificate and configuration that existed within that environment before the restore
operation remain in effect; the restore does not apply the SSL configuration of the
environment from which the backup was taken.
The powervc-restore command is in the /opt/ibm/powervc/bin/ directory and has the
following syntax and options:
powervc-restore [-h] [--noprompt] [--targetdir LOCATION]
Note: If an error occurs while you run the powervc-backup command, check the
powervc-backup log files in the /opt/ibm/powervc/log directory.
Table 4-5 shows the powervc-restore command options.

Table 4-5 Options for the powervc-restore command

Option                 Description
-h, --help             Show the help message and exit.
--noprompt             If specified, no user intervention is required during the execution of the restore process.
--targetdir LOCATION   Target location where the backup archive is located. The default value is /var/opt/ibm/powervc/backups/<most recent>.

Important: During the recovery, most PowerVC services are stopped and all other users
are logged off from PowerVC until the operation completes.
Complete the following steps to recover PowerVC data.
1. Ensure that the pwrvcdb user has, at a minimum, the read and execute permissions to the
file structure for the target directory.
2. Open a CLI to the operating system on the VM on which PowerVC is installed.
3. Navigate to the /opt/ibm/powervc/bin/ directory.
4. Run the powervc-restore command with any necessary options. If prompts are not
suppressed, respond to them as needed.
The following example shows the command with a non-default target directory:
powervc-restore --targetDir=/powervcbkp
This command displays a prompt to confirm that you want to stop all of the services. Type y to
accept and continue (see Example 4-8).
Example 4-8 Example of PowerVC recovery
Continuing with this operation will stop all PowerVC services. Do you want to
continue? (y/N):y
Stopping PowerVC services...
Backing up the NOVA database...
Backing up the QTM_IBM database...
Backing up the CINDER database...
Backing up the GLANCE database...
Backing up the NOSQL database...
Backing up the KEYSTONE database...
Backing up the data files...
Database and file backup completed. Backup data is in archive
/powervcbkp/20150615164334862394/powervc_backup.tar.gz
Starting PowerVC services...
PowerVC backup completed successfully.
[root@jay118 bin]# ./powervc-restore
Continuing with this operation will stop all PowerVC services and overwrite
critical PowerVC data in both the database and the file system. Do you want to
continue? (y/N):y
Using archive /powervcbkp/20150615164334862394/powervc_backup.tar.gz for the
restore.
Stopping PowerVC services...
Restoring the data files...
Restoring the KEYSTONE database...
Restoring the NOSQL database...
Restoring the GLANCE database...
Restoring the CINDER database...
Restoring the QTM_IBM database...
Restoring the NOVA database...
Starting PowerVC services...
PowerVC restore completed successfully.
When the restore operation completes, PowerVC runs with all of the data from the targeted
backup file.
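After the restore, you can confirm that all services are running again with the powervc-services command that is listed in Table 4-6. The status argument in this sketch is an assumption based on the command's description; check the command help on your system:

powervc-services status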
4.6.3 Status messages during backup and recovery
During the backup and recovery tasks, all PowerVC processes and databases are shut down.
Any user that is working with PowerVC receives the maintenance message that is shown in
Figure 4-1 and is logged out.
Figure 4-1 Maintenance message for logged-in users
Accessing PowerVC during the backup and recovery tasks is not allowed. Any user that
attempts to log on to PowerVC receives the maintenance message that is shown in
Figure 4-2.
Figure 4-2 Maintenance message
4.6.4 Considerations about backup and recovery
The PowerVC backup and recovery tasks must be part of a backup plan for your infrastructure.
The PowerVC backup and recovery commands save only information that relates to
PowerVC. We suggest that you also back up the management station operating system with
your preferred tool at the same time that you back up PowerVC.
4.7 PowerVC command-line interface
PowerVC offers a CLI to perform tasks outside of the GUI. The CLI is used mainly for
maintenance and for troubleshooting problems.
Table 4-6 shows the PowerVC commands that are available for the following versions:
PowerVC Standard Edition for managing PowerVM
PowerVC Standard Edition for managing PowerKVM
Table 4-6 PowerVC available commands

Command                        Description and link to IBM Knowledge Center
powervc-audit                  View and edit the current audit configuration, and export previously collected audit data. This command is deprecated. Use the powervc-config and powervc-audit-export commands instead. https://ibm.biz/BdXKQi
powervc-audit-export           Extract audit data. https://ibm.biz/BdXKQi
powervc-backup                 Backs up essential PowerVC data so that you can restore to a working state in a data corruption situation or disaster. https://ibm.biz/BdXKQj
powervc-config                 Facilitates PowerVC management node configuration changes. https://ibm.biz/BdXKQY
powervc-diag                   Collects diagnostic data from your PowerVC installation. https://ibm.biz/BdXKQz
powervc-domainname             Sets a default domain name that PowerVC assigns to all newly deployed VMs. https://ibm.biz/BdXKQf
powervc-encrypt                Prompts the user for a string, then encrypts the string and returns it. Use the command to encrypt passwords, tokens, and strings that are stored by PowerVC. https://ibm.biz/BdXKQP
install                        Installs PowerVC. https://ibm.biz/BdXKQy
powervc-keystone               Avoids Lightweight Directory Access Protocol (LDAP) user group conflicts. You can also use this command to list users, user groups, and roles. https://ibm.biz/BdXKQM
powervc-ldap-config            Configures PowerVC to work with an existing LDAP server. https://ibm.biz/BdXK3S
powervc-restore                Recovers PowerVC data that was previously backed up. https://ibm.biz/BdXK3v
powervc-services               Start, stop, restart, and view the status of PowerVC services. https://ibm.biz/BdXKT2
powervc-uninstall              Uninstalls PowerVC from your management server or host. https://ibm.biz/BdXK3L
powervc-validate               Validates that your environment meets certain hardware and software requirements. https://ibm.biz/BdXK35
powervc-volume-image-import    Creates a deployable image by using one or more volumes.

Table 4-7 shows the PowerVC commands that are available for PowerVC Standard for
managing PowerKVM.

Table 4-7 Commands for PowerVC Standard for managing PowerKVM

Command                        Description and link to IBM Knowledge Center
powervc-iso-import             Imports ISO images into PowerVC. https://ibm.biz/BdXK37
powervc-log-management         View and modify the settings for log management for PowerVC. The default action is to view the current settings.
powervc-register               Register a storage provider that is supported by OpenStack.
4.7.1 Exporting audit data
IBM Power Virtualization Center provides auditing support for the OpenStack services. Use
the powervc-audit-export command to export audit data to a specified file.
An audit record captures the characteristics, including user ID, time stamp, activity, and
location, of each request that is made to PowerVC.
Reviewing audit records is helpful when you are trying to solve problems or resolve errors. For
example, if a host was deleted and you need to determine the user who deleted it, the audit
records show that information.
The powervc-audit-export command is in the /usr/bin directory. The syntax and options are
shown in Example 4-9.
Example 4-9 powervc-audit-export command use
powervc-audit-export [-h] [-u <user name>] [-n <number of records>] [-o <output file>] [-f <filter file>] [-x {json,csv}]
Table 4-8 explains the powervc-audit-export command options.

Table 4-8 Options for the powervc-audit-export command

Option                                               Description
-h, --help                                           Displays help information about the command.
-u <user name>, --user_name <user name>              The user that requests audit data. This flag is optional. The default is the logged-in user.
-n <number of records>, --top_n <number of records>  Upper limit for the number of audit records to return. The request and response audit records are returned in pairs. This flag is optional.
-o <output file>, --output <output file>             The file to contain the exported audit data. This flag is optional. The default file is export_audit.json or export_audit.csv, depending on the specified output format.
-f <filter file>, --filter <filter file>             The file that contains the filter records. The format of the records is JSON. Reference the PowerVC IBM Knowledge Center for examples of filter records. This flag is optional.
-x {text,csv}, --output_format {text,csv}            The format of the exported audit data. This flag is optional. The formats are text (JSON format) and csv. If not specified, the default is json.
Complete the following steps to export PowerVC audit data:
1. Open a CLI to the operating system of the VM on which PowerVC is installed.
2. Navigate to the /usr/bin directory.
3. Run the powervc-audit-export command with any necessary options.
Export audit records in JSON format to the /user's_home_directory/myexport_file file by
running this command:
/usr/bin/powervc-audit-export -o myexport_file
Export audit records in CSV format to the /user's_home_directory/myexport_file.csv
file by running this command:
/usr/bin/powervc-audit-export -o myexport_file.csv -x csv
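The options in Table 4-8 can be combined. For example, to export only the 50 most recent audit record pairs for a specific user (the user name admin in this sketch is a placeholder):

/usr/bin/powervc-audit-export -u admin -n 50 -o admin_audit.json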
For more information, see this website:
http://www-01.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_cli_hmc.html?lang=en
4.8 Virtual machines that are managed by PowerVC
This section provides recommendations for the operating system on the managed VMs.
4.8.1 Linux on Power virtual machines
If you plan to use Logical Partition Mobility (LPM) or Dynamic Logical Partitioning with your
Linux VM, you must install the IBM Installation Toolkit, especially the Reliable Scalable
Cluster Technology (RSCT) utilities and RSCT core tools. Run the following command to start
the IBM Installation Toolkit installation process:
[IBM Installation Toolkit directory]/install.sh
Follow the instructions. Example 4-10 shows the common installation output.
Example 4-10 IBM Installation Toolkit sample output
[root@linux01 mnt1]# ./install
Do you want to copy the repository of IBM packages to your machine? [y/n]
y
Do you want to configure your machine to receive updates of IBM packages? [y/n]
n
IBMIT needs the ports 4234 and 8080 to be accessed remotely. Would you like to
open those ports? [y/n]
y
The licenses BSD, GPL, ILAN and MIT must be accepted. You can read their text
using the options below and then accept or decline them.
1) Read license: BSD
2) Read license: GPL
3) Read license: ILAN
4) Read license: MIT
5) I have read and accept all the licenses
6) I do not accept any of the licenses
#? 5
Configuring an installation repository for your Linux distribution
Where is the installation media to be used?
1) DVD
2) Network (HTTP or FTP)
3) Directory
4) I already have a repository configured. Skip.
5) I don't know
#? 1
Insert the DVD in the drive
Press Enter to continue
Verifying if there is a repository on DVD
Available DVD devices: /dev/sr1 /dev/sr0
Checking /dev/sr1
Adding repository configuration to repository manager
Repository successfully configured
Package ibmit4linux was successfully installed
After you install the Installation Toolkit, install the ibm-power-managed-rhel6.ppc64 package
by running the following command:
yum install -y ibm-power-managed-rhel6.ppc64
After the installation completes, check the Resource Monitoring and Control (RMC) status by
running the following command:
lssrc -a
The output appears as shown in Example 4-11.
Example 4-11 RMC status
Subsystem Group PID Status
ctrmc rsct 3916 active
IBM.DRM rsct_rm 3966 active
IBM.ServiceRM rsct_rm 4059 active
IBM.HostRM rsct_rm 4096 active
ctcas rsct inoperative
IBM.ERRM rsct_rm inoperative
IBM.AuditRM rsct_rm inoperative
IBM.SensorRM rsct_rm inoperative
IBM.MgmtDomainRM rsct_rm inoperative
For more information about the toolkit, including installation information, see the IBM
Installation Toolkit for Linux on Power web page:
https://www-304.ibm.com/webapp/set2/sas/f/lopdiags/installtools/home.html
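If ctrmc or IBM.DRM shows as inoperative in the lssrc -a output, you can recycle the RMC subsystem with the rmcctrl utility. This sketch assumes the default RSCT installation path; consult the RSCT documentation before you use it on a production system:

# Stop the RMC subsystem and its resource managers
/usr/sbin/rsct/bin/rmcctrl -z
# Add and start the RMC subsystem again
/usr/sbin/rsct/bin/rmcctrl -A
# Enable remote client connections, which the HMC uses
/usr/sbin/rsct/bin/rmcctrl -p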
4.8.2 IBM AIX virtual machines
No additional setup is necessary for VMs that run the IBM AIX operating system. After the
IP address is configured, an RMC connection is automatically created.
4.8.3 IBM i virtual machines
PowerVC can also manage IBM i VMs. After you add the Power hosts, import the IBM i
VMs. No unique requirements exist among IBM i, AIX, or Linux on Power VMs.
Note: PowerVC, PowerVM, and the HMC rely on the RMC services. When these services
are down, most of the concurrent and dynamic tasks cannot be executed. Check the RMC
status every time that you need to change the VM dynamically. For more information about
RMC, see these IBM Redbooks publications:
IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
IBM Power Systems HMC Implementation and Usage Guide, SG24-7491
Tip: By default, AIX does not contain SSH or SSL tools. We recommend that you install
them if you want to access a managed machine with commands other than telnet.
Note: The storage connection must be based on N_Port ID Virtualization (NPIV) or a
shared storage pool (SSP).
Chapter 5. PowerVC Standard Edition for
managing PowerVM
This chapter describes the general setup of IBM Power Virtualization Center Standard Edition
(PowerVC) for managing PowerVM. In the following sections, we explain the discovery or
configuration of the managed objects. We describe the verification of the environment and the
operations that can be performed on virtual machines (VMs) and images:
5.1, “PowerVC graphical user interface” on page 98
5.2, “Introduction to PowerVC setup” on page 99
5.3, “Connecting to PowerVC” on page 100
5.4, “Host setup” on page 101
5.5, “Host Groups setup” on page 106
5.6, “Hardware Management Console management” on page 107
5.7, “Storage and SAN fabric setup” on page 111
5.8, “Storage port tags setup” on page 115
5.9, “Storage connectivity group setup” on page 116
5.10, “Storage template setup” on page 120
5.11, “Storage volume setup” on page 123
5.12, “Network setup” on page 124
5.13, “Compute template setup” on page 126
5.14, “Environment verification” on page 128
5.15, “Management of virtual machines and images” on page 133
5.1 PowerVC graphical user interface
First, we briefly present the PowerVC graphical user interface (GUI) and explain how to
access functions from the PowerVC Home page, as illustrated in Figure 5-1. The
management functions of PowerVC are grouped by classes, which can be accessed from
different locations. In all PowerVC windows, you can find hot links to several areas and
components:
User administration, environment configuration, and message logs at the top of the
PowerVC window
Management functions that relate to VM images, VMs, hosts, networks, and storage in the
column of icons at the left of the window (which also includes a link to the home page)
The hot links are highlighted in red in the illustration.
Figure 5-1 Home page access to a group of functions
In all PowerVC windows, most of the icons and text are hot links to groups of functions.
Several ways exist to access a group of functions. The blue arrows on Figure 5-1 show, for
example, the two hot links that can be used from the home window to access the VM
management functions.
Tips: In the examples in this chapter, “click Virtual Machines” means to click either the icon
or the link within the page. In several PowerVC windows, you might see a pencil icon.
Click it to edit values.
5.2 Introduction to PowerVC setup
Before you can start to perform tasks in PowerVC, you must discover and register the
resources that you want to manage. You can register storage systems and hosts, and you can
create networks to use when you deploy images. When you register resources with PowerVC,
you make them available to the management functions of PowerVC (such as deploying a VM
on a discovered host or storing images of captured VMs).
This discovery or registration mechanism is the key to the smooth deployment of PowerVC in
an existing environment. For example, a host might already contain several partitions when
you deploy PowerVC. You first register the host without registering any of the hosted
partitions. All PowerVC functions that relate to host management are then available to you,
but no partition objects exist yet on which to apply the partition management functions. You
can then decide whether you want to manage all of the existing partitions with PowerVC. If
you prefer a progressive adoption plan instead, start by managing only a subset of these
partitions.
Ensure that the following preliminary steps are complete before you proceed to 5.3,
“Connecting to PowerVC” on page 100:
1. Configuration of the IBM Power Systems environment to be managed through the
Hardware Management Console (HMC).
2. Setup of users’ accounts with an administrator role on PowerVC. See 3.7, “Planning users
and groups” on page 67 for details.
3. Setup of host name, IP address, and an operator user ID for the HMC.
5.3 Connecting to PowerVC
After PowerVC is installed and started on a Linux partition, you can connect to the PowerVC
management GUI by following these steps:
1. Open a web browser on your workstation and point it to the PowerVC address:
https://<ipaddress or hostname>/
2. Log in to PowerVC as an administrative user (Figure 5-2). The first time that you use
PowerVC, this administrative user is root. We recommend that after the initial setup of
PowerVC, you define other user IDs and passwords rather than using the root user. For
information about how to add, modify, or remove users, see 3.7.1, “User management” on
page 67.
Figure 5-2 PowerVC Login window
3. Now, you see the IBM PowerVC Home page.
Important: It is important that your environment meets all of the hardware and software
requirements and that it is configured correctly before you start to work with PowerVC
and register your resources.
4. We recommend that your first action is to check the PowerVC installation by clicking Verify
Environment as shown in Figure 5-3.
Figure 5-3 Initial system check
Then, you can click View Results to verify that PowerVC is installed correctly.
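If the login page does not load, you can verify from your workstation that the management host answers on HTTPS. In this sketch, -k tells curl to accept an untrusted (for example, self-signed) certificate, and the host name is a placeholder:

curl -k -I https://powervc.example.com/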
5.4 Host setup
The first step to perform is to enable PowerVC to communicate with the HMCs in the
environment to manage the storage and networking devices. After hosts, storage, and
networks are configured correctly in the PowerVC domain, you can add a VM.
For more information about supported hosts, see 3.1.2, “PowerVC Standard Edition
requirements” on page 30.
To discover the HMCs and the hosts that they manage, perform the following steps:
1. On the Home page (Figure 5-3 on page 101), click Add Hosts.
2. In the Add Hosts dialog window (Figure 5-4), provide the name and credentials for the
HMC. In the Display name field, enter the string that will be used by PowerVC to refer to
this HMC in all of its windows. Click Add Connection. PowerVC will connect to the HMC
and read the host information.
Figure 5-4 HMC connection information
You can use the default HMC hscroot administrator user ID and password, or another
user ID with the hscsuperadmin role that you created to manage the HMC.
3. PowerVC might present a message that indicates that the HMC’s certificate is untrusted or
invalid. Review the certificate details to determine whether you are willing to override this
warning. If you are willing to trust the certificate, click Connect to continue.
Note: We recommend that you do not specify hscroot for the user ID. Instead, create a
user ID on the HMC with the hscsuperadmin role and use it for managing the HMC from
PowerVC. Use this approach to identify actions on the HMC that were initiated by a user
who was logged in to the HMC or from the PowerVC management station. If a security
policy requires that the hscroot password is changed regularly, the use of a different
user ID for PowerVC credentials avoids breaking the PowerVC ability to connect to the
HMC after a system administrator changes the hscroot password.
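For example, you can create such a user on the HMC command line with mkhmcusr. This is a sketch: the user name powervc is a placeholder, and the exact options can vary by HMC level, so check the mkhmcusr man page on your HMC:

mkhmcusr -u powervc -a hscsuperadmin -d "PowerVC management user"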
4. Next, you see information about all hosts that are managed by that HMC. Figure 5-5
shows the dialog for an HMC that manages three IBM POWER S824 servers that are
based on POWER8 technology. To choose the hosts to manage with PowerVC, click their
names. By holding down the Shift key while you click the host names, you can select
several host names simultaneously.
When the HMC manages several hosts, you can use the filter to select the name that
contains the character string that is used as a filter.
Figure 5-5 PowerVC Add Hosts dialog window
5. After a few seconds, the Home page is updated and it shows the number of added objects.
Figure 5-6 shows that two hosts were added.
Figure 5-6 Managed hosts
6. Click the Hosts tab to open a Hosts window that is similar to Figure 5-7, which shows the
status of the discovered hosts.
Figure 5-7 PowerVC shows the managed hosts
Add hosts by clicking Add Host. The dialog windows to add a host are the same as the
windows in step 2 on page 102 and step 4 on page 103.
7. Click one host name to see the detailed host information as shown in Figure 5-8. The
Manage Existing option is used for discovering pre-existing VMs in the environment.
After hosts, storage, and networks are configured correctly in the PowerVC domain, you
can add a VM by expanding the Virtual Machines section.
Figure 5-8 Host information
5.5 Host Groups setup
After you add hosts, you can group the hosts into host groups for different business needs.
For example, we added a host group for our test. As shown in Figure 5-9, open the Host
Groups tab, and click Create.
Figure 5-9 Host Groups page
A pop-up page opens as shown in Figure 5-10. Enter the host group name and the placement
policy of the host group. Click Add to add hosts, and then click Create Host Group. For the
placement policies that are supported by PowerVC, see 3.3.2, “Placement policies” on
page 39.
Figure 5-10 Create Host Group
5.6 Hardware Management Console management
Beginning with PowerVC version 1.2.3, users can add redundant HMCs for Power Systems
servers. If one HMC fails, the user can change the HMC to one of the redundant HMCs.
Note: Beginning with PowerVC version 1.2.3, placement policies are associated with host
groups; they are no longer a global setting.
5.6.1 Add an HMC
With PowerVC version 1.2.3 or later, you can add redundant HMCs for Power Systems
servers. To add an HMC, on the HMC Connections page, click Add HMC, as shown in
Figure 5-11. Enter the HMC host name or IP address, display name, user ID, and password.
Click Add HMC Connection. The new HMC is added.
You also can click Remove HMC to remove an HMC.
Figure 5-11 Add HMC Connection
5.6.2 Changing HMC credentials
If you want to change the credentials that are used by PowerVC to access the HMC, open the
Hosts page and select the HMC Connections tab. Select the row for the HMC that you want
to work with, and then click Edit. A pop-up window opens (Figure 5-12) where you can
specify another user ID, which must already be defined on the HMC with the hscsuperadmin
role.
Figure 5-12 Changing HMC credentials
5.6.3 Change the HMC
With PowerVC version 1.2.3 or later, you can add redundant HMCs for Power Systems
servers, but PowerVC uses only one HMC for each server. If that HMC fails, you need to
change the management console to another HMC. As shown in Figure 5-13, on the Hosts
page, select all of the servers that you want to change, click Change HMC, select the HMC
that you want, and click OK.
Figure 5-13 Change HMC
The management console of the Power Systems servers changes to the new HMC, as shown
in Figure 5-14.
Figure 5-14 Select the new HMC for hosts
5.7 Storage and SAN fabric setup
When you use external storage area network (SAN) storage, you need to prepare the storage
controllers and Fibre Channel (FC) switches before they can be managed by PowerVC.
PowerVC needs management access to the storage controller. When you use user
authentication, the administrative user name and password for the storage controller must be
set up. For IBM Storwize storage, another option is the use of cryptographic key pairs. For
instructions to generate and use key pairs, see the documentation for your device.
To configure the storage controller and SAN switch, follow these preliminary steps:
1. Configure the FC SAN fabric for the PowerVC environment.
2. Connect the required FC ports that are owned by the Virtual I/O Server (VIOS) and the
storage controllers to the SAN switches.
3. Set up the host names, IP addresses, and administrator user ID and password
combination for the SAN switches.
4. Set up the host names, IP addresses, and the administrator user ID and password
combination for the storage controllers.
5. Create volumes for the initial VMs that are to be imported (installed) to PowerVC later.
For more information about supported storage in PowerVC Standard Edition, see 3.1.1,
“Hardware and software requirements” on page 30.
Note: For EMC storage, more setup actions are needed before EMC storage can be
registered in PowerVC. See the IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_planning_storage_hmc.html
Important: Pay attention to the correct setting of the name resolution of the host
names of FC switches, storage controllers, the HMC, and Virtual I/O Servers that will
be managed by PowerVC. The host names of those components must match the
names that were defined in the Domain Name Server (DNS). Both forward and reverse
DNS resolutions must work correctly before the initial setup of PowerVC.
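You can verify both directions of name resolution with standard tools before the initial setup. The host name and the address in this sketch are placeholders:

# Forward resolution: host name to IP address
host vios1.example.com
# Reverse resolution: IP address to host name
host 192.0.2.10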
Note: PowerVC creates VMs from an image. No image is provided with PowerVC.
Therefore, you must manually configure at least one initial partition, from which you will
create this image. The storage volumes for this initial partition must be created
manually, also. When PowerVC creates more partitions, it will also create the storage
volumes for them.
Note: For PowerVC version 1.2.2 and higher, you can import an image (that you
created earlier) from storage into PowerVC.
5.7.1 Add a storage controller to PowerVC
The following steps guide you through setting up storage providers and the SAN fabric:
1. To add a storage controller, click the Add Storage link on the PowerVC home page that is
shown in Figure 5-3 on page 101. If a storage provider is already defined, the icon differs
slightly. Click the plus sign (+) to the right of Storage Providers, as shown in Figure 5-15.
Figure 5-15 Adding extra storage providers
2. The dialog window that is shown in Figure 5-16 requires this information:
– Type. Four types are supported: Storwize, IBM XIV Storage System, EMC VMAX, and
EMC VNX. We selected Storwize for our IBM V7000 storage.
– Storage controller name or IP address and display name.
– User ID and password or Secure Shell (SSH) encryption key. (The encryption key option
is only for IBM Storwize storage.)
3. Click Add Storage. PowerVC presents a message that indicates that the authenticity of
the storage cannot be verified. Confirm that you want to continue. PowerVC connects to
the storage controller and retrieves information.
Figure 5-16 Add Storage
4. PowerVC presents information about storage pools that are configured on the storage
controller. You must select the default pool where PowerVC creates logical unit numbers
(LUNs) for this storage provider, as shown in Figure 5-17.
Click Add Storage, and PowerVC finishes adding the storage controller.
Figure 5-17 PowerVC Standard Edition window to select a storage pool
5.7.2 Add SAN fabric to PowerVC
Add the SAN fabric to PowerVC. After you add the storage, PowerVC automatically prompts
to add fabrics. Open the window that is shown in Figure 5-18, and click Add Fabric.
Figure 5-18 Add Fabric window
Tip: For more information about the storage template, see 5.10, “Storage template setup”
on page 120.
You must complete the following information about the first SAN switch to add under the
PowerVC control:
Fabric type. For PowerVC 1.2.2 or later, Brocade and Cisco SAN switches are supported.
Principal switch name or IP address and display name
User ID and password
In the Add Fabric window, click Add Fabric, and then confirm the connection in the pop-up
window. PowerVC connects to the switch and retrieves the setup information. The dialog is
shown in Figure 5-19.
Figure 5-19 PowerVC Standard Edition Add Fabric
Figure 5-20 shows the PowerVC Storage window after you successfully add the SAN storage
controllers and SAN switches. The Storage Providers tab is selected. To show managed
SAN switches, click the Fabrics tab.
Figure 5-20 PowerVC Storage providers tab
Additional storage controllers can be added by clicking Storage → the Storage Providers
tab → Add Storage. The dialog window to add a storage controller is the same window that
was used for the first storage controller in steps 1 and 2 in 5.7.1, “Add a storage controller to
PowerVC” on page 112.
You can add SAN switches by clicking Storage → the Fabrics tab → Add Fabric. The dialog
window to add a switch is the same window that was used for the first switch (fabric) in 5.7.2,
“Add SAN fabric to PowerVC” on page 113.
5.8 Storage port tags setup
The next step to customize PowerVC is the FC port tag setup. This setting is optional.
Individual FC ports in Virtual I/O Servers that are managed by PowerVC can be tagged with
named labels. For more information about PowerVC tags and storage connectivity groups,
see 3.5.3, “Storage connectivity groups and tags” on page 58.
To set up tagging, start from the PowerVC Home page and select Configuration → Fibre
Channel Port Configuration to open the dialog window that is shown in Figure 5-21 on
page 116.
Note: PowerVC version 1.2.3 supports a maximum of two fabrics.
Note: Tagging is optional. It is needed only when you want to partition the I/O traffic and
restrict certain traffic to use a subset of the available FC ports.
For each FC adapter in all Virtual I/O Servers that are managed by PowerVC, you can enter
or select a port tag (arbitrary name) and a switch to which this port is connected (fabric). You
can either double-click a Port Tag field and enter a new tag or use the drop-down menu to
select a tag from a list of predefined tags. You can also set the tag to None or define your own
tag. You can also select N_Port ID Virtualization (NPIV) or virtual SCSI (vSCSI) for the
Connectivity field to restrict the port to a specific type of SAN access. In this example, two sets of FC
ports were defined, with Product and Test tags. Certain ports allow NPIV access only, and
other ports allow vSCSI, or Any. Do not forget to click Save to validate your port settings, as
shown in Figure 5-21.
Figure 5-21 PowerVC Fibre Channel port configuration
5.9 Storage connectivity group setup
Next, define the storage connectivity groups. A storage connectivity group is a set of Virtual
I/O Servers with access to the same storage controllers. The storage connectivity group also
controls whether boot volumes and data volumes use NPIV or vSCSI storage access. For a
detailed description, see 3.5.3, “Storage connectivity groups and tags” on page 58. Storage
connectivity group setup is a mandatory step for the deployment of VMs on PowerVC.
Note: You might add adapters to a host after PowerVC is installed and configured. Assign
them to a VIOS and run the cfgdev command on the VIOS so that it discovers them.
PowerVC then discovers the adapters automatically and shows them in the Fibre Channel
Port Configuration window.
Follow these steps to set up a storage connectivity group:
1. Start from the PowerVC Home page. Select Configuration → Storage Connectivity
Groups to open the dialog window that is shown in Figure 5-22.
Figure 5-22 PowerVC Storage Connectivity Groups dialog window
Default storage connectivity groups are defined for the following components:
– A group with all ports of all Virtual I/O Servers that can access the storage providers by
using NPIV
– A vSCSI boot volume storage connectivity group, which is added if the environment
meets the requirements of vSCSI SAN access
– A group for all Virtual I/O Servers that belong to the shared storage pools (SSPs) that
PowerVC discovered, if an SSP was configured
2. You can then create your own storage connectivity group. Click Create. In the next
window, enter information or select predefined options for the new storage connectivity
group:
– Name of the storage connectivity group.
– Boot and Data volume connectivity types: NPIV or vSCSI.
– “Automatically add applicable Virtual I/O Servers from newly registered hosts to this
storage connectivity group”. If checked, from now on, newly added Virtual I/O Servers
are added to this group if they can access the same storage (fabrics and tags) as the
other members of the group.
– “Allow deployments using this storage connectivity group (enable)”. If checked, the
storage connectivity group is enabled for deployment on VMs; otherwise, it is disabled.
You can change this selection later, if necessary.
– Restrict image deployments to hosts with FC-tagged ports. This setting is optional. If
you use tags, you can select a specific tag. VMs that are deployed to this storage
connectivity group (with a selected tag) can access storage only through FC ports with
the specified tag.
– NPIV Fabric Access Requirement. This setting controls how the FC paths will be
created when a VM is created. You can choose Any, Dual, Dual per VIOS, Fabric A, or
Fabric B.
3. When the information is complete, click Add Member to open the window in Figure 5-23.
You must select which Virtual I/O Servers become members of the group. If a tag was
previously selected, only eligible Virtual I/O Servers are available to select.
After you select the Virtual I/O Servers, click Add Member. Selected Virtual I/O Servers
are added to the storage connectivity group.
Then, click Add Group, and the group is created. Now, the group is available for VM
deployment.
Figure 5-23 PowerVC Add Member to storage connectivity group window
A storage connectivity group can be disabled to prevent deployment of VMs in this group. To
disable a group, you must clear the check box for Allow deployments using storage
connectivity group (enable) on the detailed properties page of the storage connectivity
group, as shown in Figure 5-24.
Figure 5-24 Disabling a storage connectivity group
5.10 Storage template setup
After you configure your storage connectivity group, you can also create storage templates.
Storage templates provide predefined storage configuration to use when you create a disk.
You must define different information on the storage templates for different types of storage.
For example, as shown in Figure 5-25, this storage template is for the IBM XIV storage
device. You do not need any configuration information except the template name and pool
name. For a full description, see 3.5.2, “Storage templates” on page 56.
Figure 5-25 IBM XIV storage template
A default storage template is automatically created by PowerVC for each storage provider.
However, if the storage contains several storage pools, create a storage template for each
storage pool that you want to use. For IBM Storwize storage, you also need to create a
storage template for each I/O group that you want to use, and each volume mirroring pool pair
that you want to use.
Figure 5-26 on page 121 shows the dialog window to create a storage template for IBM
Storwize storage. To access it, from the PowerVC Home page, click Configuration →
Storage Templates → Create. Then, complete these steps:
1. Select a storage provider.
2. Select a storage pool within the selected storage provider.
3. Provide the storage template name.
4. Select the type of provisioning:
– Generic means full space allocation (also known as thick provisioning).
– Thin-provisioned is self-explanatory.
If you select thin-provisioned, the Advanced Settings option is available. If you click
Advanced Settings, an additional dialog window (Figure 5-27 on page 122) offers these
options:
• I/O group
• Real capacity % of virtual storage
• Automatically expand
• Warning threshold
• Thin-provisioned grain size
• Use all available worldwide port names (WWPNs) for attachment
• Enable mirroring. You need to select another pool to enable mirroring.
For more information about how these settings affect PowerVC disk allocation, see
3.5.2, “Storage templates” on page 56.
– Compressed for storage arrays that support compression.
Figure 5-26 PowerVC Create Storage Template window
Figure 5-27 shows the advanced settings that are available for thin-provisioned templates.
The advanced settings can be configured only for storage that is backed by
SAN-accessed devices. When the storage is backed by an SSP in thin-provisioning mode,
PowerVC does not offer the option to specify these advanced settings.
Figure 5-27 PowerVC Create Storage Template Advanced Settings
5. After you click Create, the storage template is created and it is available for use when you
create storage volumes. The page that summarizes the available storage templates is
shown in Figure 5-28.
Figure 5-28 PowerVC Storage Templates page
5.11 Storage volume setup
After you add storage providers and define storage templates, you can create storage
volumes.
When you create a volume, you must select a template that determines where the volume is
created (which storage controller and pool) and with which parameters (thin or thick
provisioning, grain size, and so on).
When you create a volume, you must select these elements:
A storage template
The new volume name
A short description of the volume (optional)
The volume size (GB)
Enable sharing or not. If this option is selected, the volume can be attached to multiple
VMs. This option is for PowerHA or similar solutions.
To create a volume, follow these steps:
1. From the PowerVC home page, click Storage Volumes → the Data Volumes tab →
Create to open the window that is shown in Figure 5-29.
Figure 5-29 PowerVC Create Volume window
Note: Only data volumes need to be created manually. Boot volumes are handled by
PowerVC automatically. When you deploy a partition as described in 5.15.6, “Deploy a new
virtual machine” on page 159, PowerVC automatically creates the boot volumes and data
volumes that are included in the images.
2. After you click Create Volume, the volume is created. A list of existing volumes is
displayed, as shown in Figure 5-30. This figure shows that the provisioned disks are in the
available state.
3. From the Storage page, you can manage volumes. Valid operations are the creation or
deletion of already managed volumes or the discovery of volumes that are defined on a
storage provider and not yet managed by PowerVC. You also can edit the volumes to
enable or disable sharing.
Figure 5-30 List of PowerVC storage volumes
5.12 Network setup
When you create a VM, you must select a network. If the network uses static IP assignment,
you must also select a new IP address for the VM or let PowerVC select a new IP address
from the IP pools. For a full description of network configuration in PowerVC, see 3.6,
“Network management planning” on page 63.
Initially, PowerVC contains no network definition, so you need to create at least one network
definition. To create a network definition in PowerVC, from the Home page, click Networks →
Add Network to open the dialog window that is shown in Figure 5-31 on page 125.
You must provide the following data when you create a network:
Network name
Virtual LAN (VLAN) ID
Maximum transmission unit (MTU) size in bytes
For IP address type, select Dynamic or Static (Select Dynamic if the IP address will be
assigned automatically by a Dynamic Host Configuration Protocol (DHCP) server.)
Subnet mask
Gateway
Primary/Secondary DNS (This field is optional if you do not use DNS.)
Starting IP address and ending IP address in the IP pool
Shared Ethernet adapter mapping (Select adapters within Virtual I/O Servers with access
to the specific network and that are configured with the correct VLAN ID.)
After you click Add Network, the network is created. From the Networks page, you can also
edit the network (change network parameters) and delete networks.
Consider these factors:
PowerVC detects the shared Ethernet adapter to use for each host. Verify that PowerVC
made the correct choice.
If PowerVC chooses the wrong shared Ethernet adapter to use for a specific host, you can
change the shared Ethernet adapter later.
Figure 5-31 PowerVC network definition
Note: You cannot modify the IP pool after you create the network, so ensure that you enter
the correct IP addresses. To update the IP addresses in an IP pool, you must remove the
network and add it again.
You can also check the IP address status in the IP Pool on the IP Pool page, as shown in
Figure 5-32.
Figure 5-32 IP Pool tab
5.13 Compute template setup
A compute template provides a predefined compute configuration to use when you create a
VM. You can customize processor, memory, and other features. You select a compute
template when you add a VM. You can change the values that are set in the compute
template that is associated with a VM to resize. You can also create new compute templates
on the Configuration page.
For the full description about compute templates, see 3.3.4, “Information that is required for
compute template planning” on page 42.
Figure 5-33 on page 127 shows the window that opens when you create a compute template.
To access the compute template configuration from the PowerVC Home page, click
Configuration → Compute Templates → Create Compute Template. You need to specify
the following settings for images that are deployed with the compute template:
For Template settings, select Advanced.
Provide the compute template name.
Provide the number of virtual processors.
Provide the number of processing units.
Provide the amount of memory.
Select the compatibility mode.
Important: In the shared Ethernet adapter mapping list, the Primary VLAN column refers
to the Port Virtual LAN Identifier (PVID) that is attached to the adapter. The VLAN number
that you specify does not need to match the primary VLAN.
If you selected Advanced settings, additional information is required:
Provide the minimum, desired, and maximum number of virtual processors.
Provide the minimum, desired, and maximum number of processing units.
Provide the minimum, desired, and maximum amounts of memory (MB).
Enter the processor sharing type and weight (0 - 255).
Enter the availability priority (0 - 255).
Figure 5-33 PowerVC Create Compute Template
After you click Create Compute Template, the new compute template is available for use
when you create a VM. The page that summarizes the available compute templates is shown
in Figure 5-34.
Figure 5-34 PowerVC Compute Templates
5.14 Environment verification
After you add the hosts, storage providers, networks, and templates, we recommend that you
verify your PowerVC environment before you try to capture, deploy, or onboard VMs.
Virtualization management function failures might occur when dependencies and prerequisite
configurations are not met.
PowerVC reduces the complexity of virtualization and cloud management. It can check for
almost all required dependencies and prerequisite configurations and clearly communicate
the failures. It can also accurately pinpoint validation failures and remediation actions when
possible. Figure 5-35 shows the PowerVC Home interface where you start the verification
process by clicking Verify Environment. Access the verification report by clicking View
Results.
Figure 5-35 PowerVC interface while environment verification in process
The validation of the PowerVC environment takes from a few seconds to a few minutes to
complete.
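You can also run the same validation from the command line with the powervc-validate command that is listed in Table 4-6. The path in this sketch assumes the default installation directory:

/opt/ibm/powervc/bin/powervc-validate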
The environment validation function architecture allows validators to be added and evolved to
check solution-specific environment dependencies and prerequisite configurations. This
architecture is intended to let the tool improve the performance, reliability, and scalability of
validation execution as the number of endpoints, their configurations, and their
interconnectivity grow.
5.14.1 Verification report validation categories
After the validation process finishes, you can access a report of the results, as shown in
Figure 5-36. This report consists of a table with four columns where you see the following
values:
Status
System
Validation Category
Description
Figure 5-36 Verification Results view
The following list shows the validation categories in this report and a description for the types
of messages to expect from each of the categories:
Access and Credentials
Validation of reachability and credentials from the management server
to the PowerVC domain, including user IDs, passwords, and SSH keys
for all resources.
File System, CPU and Memory on Management Server
Minimum processing and storage requirements for the PowerVC
management server.
OS, services, database
This category groups all messages that relate to the availability of the
service daemons that are needed for the correct operation and
message passing on the PowerVC domain. This category includes
operating system services, OpenStack services, platform Enterprise
Grid Orchestrator (EGO) services, and IBM DB2 database
configuration.
HMC version
Hardware Management Console software level and K2 services are
up and running.
HMC managed Power Systems server resources
Power Systems hosts when they are managed by an HMC. Validation
messages include the operating state, PowerVM Enterprise Edition
enablement, PowerVM Live Partition Mobility (LPM) capabilities, ability
to run a VIOS, maximum number of supported Power Systems
servers, firmware level, and processor compatibility. This category is
visible from PowerVC Standard Edition.
Virtual I/O Server count, level and RMC state
Minimum number of configured Virtual I/O Servers on each managed
host, software level, Resource Monitoring and Control (RMC)
connection and state to the HMC, license agreement state, and
maximum number that is required for virtual adapter slots. This
category is viewable from PowerVC Standard Edition.
Virtual Network: Shared Ethernet adapter
The shared Ethernet adapter is configured on the PowerVC
management server network and in the Active state. The maximum
number of required virtual slots.
Virtual I/O Server shared Ethernet adapter count, state
This category relates to the validation of at least one shared Ethernet
adapter on one VIOS. You can view this category from PowerVC
Standard Edition.
Host storage LUN Visibility
LUN visibility test. LUNs are created on storage providers and are
visible to Virtual I/O Servers.
Host storage FC Connectivity
Messages that relate to the enabled access to the SAN fabric by the
Virtual I/O Servers and the correct WWPN to validate that VIOS -
Fabric - Storage connectivity is established. This category is viewable
from PowerVC Standard Edition.
Storage Model Type and Firmware Level
Messages that relate to the minimum SAN Volume Controller and
storage providers’ firmware levels and the allowed machine types and
models (MTMs).
Brocade Fabric Validations
Validation for the switch presence, zoning enablement, and firmware
level.
Figure 5-37 shows the depth of information that is provided by PowerVC. This example shows
error messages and then confirmation of an acceptable configuration. By clicking or hovering
the mouse pointer over each row of the verification report, you can see pop-up windows with
extra information. In addition to the entry description, PowerVC suggests a solution to fix the
cause of an error or an informational message.
Figure 5-37 Example of a validation message for an error status
Figure 5-38 shows another validation report that contains informational messages.
Figure 5-38 Example of a validation message for an informational message status
5.15 Management of virtual machines and images
The following sections describe the operations that can be performed on VMs and images by
using the PowerVC management host:
5.15.1, “Virtual machine onboarding” on page 134
5.15.2, “Refresh the virtual machine view” on page 143
5.15.3, “Start the virtual machine” on page 144
5.15.4, “Stop the virtual machine” on page 144
5.15.5, “Capture a virtual machine image” on page 145
5.15.6, “Deploy a new virtual machine” on page 159
5.15.7, “Add virtual Ethernet adapters for virtual machines” on page 165
5.15.8, “Add collocation rules” on page 165
5.15.9, “Resize the virtual machine” on page 167
5.15.10, “Migration of virtual machines” on page 169
5.15.11, “Host maintenance mode” on page 172
5.15.12, “Restart virtual machines remotely from a failed host” on page 175
5.15.13, “Attach a volume to the virtual machine” on page 180
5.15.14, “Detach a volume from the virtual machine” on page 181
5.15.15, “Reset the state of a virtual machine” on page 183
5.15.16, “Delete images” on page 184
5.15.17, “Unmanage a virtual machine” on page 185
5.15.18, “Delete a virtual machine” on page 185
Most of these operations can be performed from the Virtual Machines window as shown on
Figure 5-39. However, removing a VM, adding an existing VM, and attaching or detaching a
volume from a VM are performed from other panels.
Figure 5-39 Operations icons on the Virtual Machines view
5.15.1 Virtual machine onboarding
PowerVC can manage VMs that were not created by PowerVC, such as VMs that were
created before the PowerVC deployment. Follow these steps to add an existing VM:
1. From the PowerVC Home window, click the hosts icon within the main panel (host icon on
the left) or click the Hosts link, as shown in Figure 5-40.
Figure 5-40 Selecting a host window
2. Click the line of the host on which the VMs that you want to manage are deployed. The
background color of the line changes to light blue. Click the host name in the Name
column, as shown in Figure 5-41.
Figure 5-41 Selected hosts window
3. The detailed host window opens. In Figure 5-42, the Information and Capacity sections
are collapsed for improved viewing. To collapse or expand a section, click the section
name; the collapse and expand buttons appear next to it. The Virtual Machines section is
expanded, but it contains no data, because PowerVC does not yet manage any VM on this
host.
Figure 5-42 Collapse and expand sections
4. Under the Virtual Machines section (or in the home Hosts section), click Manage Existing
to open a pop-up window with two options:
– Manage all fully supported VMs that are not currently being managed by PowerVC.
VMs that require preparation need to be selected individually.
– Select specific VMs.
5. Check Select specific virtual machines.
6. After you load data from the HMC, PowerVC displays a new page with two tabs. The
Supported tab shows all of the VMs that can be added to be managed by
PowerVC. Select one or more VMs that you want to add. The background color changes
to light blue for the selected VMs as shown in Figure 5-43.
Figure 5-43 Adding existing VMs
After you click Manage, PowerVC starts to manage the processing of the selected VMs.
Note: Checking Manage any supported virtual machines that are not currently
being managed by PowerVC and then clicking Manage results in adding all candidate
VMs without asking for confirmation.
Note: If a VM does not meet all of the requirements, the VM appears on the Not
supported tab. The tab also shows the reason why PowerVC cannot manage the VM.
Note: The detailed eligibility requirements to add a VM into a PowerVC managed
PowerVM host are available in the IBM Knowledge Center:
https://ibm.biz/BdXK6a
7. PowerVC displays a pop-up message in the lower-right corner during this process, as
shown in Figure 5-44. These messages remain visible for a few seconds.
Figure 5-44 Example of an informational pop-up message
8. After you discover a VM, click the Virtual Machines icon to return to the Virtual
Machines view. Select the recently added VM. The background color changes to light
blue.
Double-click the recently added VM to display its detailed information. You can also
access the VM’s details window by clicking Home → Hosts → host name → virtual
machine name, where host name is the name of the server that contains the VM that you
want to view and virtual machine name is the name of the VM.
Tip: You can display the messages again by clicking Messages on the black bar with
the IBM logo at the top of the window.
9. For improved viewing, you can collapse sections on the window. Figure 5-45 presents the
detailed view of a VM with all sections collapsed. You can collapse and expand each
section by clicking the section names: Information, Specifications, Network Interfaces,
Collocation Rules, and Details.
Figure 5-45 Virtual machine detailed view with collapsed sections
10. The Information section displays information about the VM status, health, and creation
dates. Table 5-1 explains the fields in the Information section.
Table 5-1 Information section fields
Field Description
Name The name of the VM.
State The actual state for the VM.
Health The actual health status for the VM. The following health statuses are valid:
OK: The target resource, all related resources, and the PowerVC
management services for the resources report zero problems.
Warning: The target resource or a related resource requires user
attention. For example, nova or cinder host services that manage the resources
report problems and require user attention.
Critical: The target resource or a related resource is in an error state.
Unknown: PowerVC is unable to determine the health status of the
resource.
ID This internal ID is used by PowerVC management hosts to uniquely identify
the VM.
Host Host server name where the VM is allocated.
Created Creation date and time.
Last updated Last update date and time.
Note: Each host, network, VM, and any other resource that is created in the PowerVC
management host has its own ID number. This ID uniquely identifies each resource to
the PowerVC management host.
11. In Figure 5-46, the Information section is expanded to display details about the recently
added VM.
Figure 5-46 Virtual machine detailed view of expanded Information section
12. Collapse the Information view and expand the Specifications section. This section
contains information that relates to the VM capacity and resources. Table 5-2 describes
the fields in the Specifications section.
Table 5-2 Specifications section’s fields
Field Description
Remote restart enabled Remote restart is enabled or not.
Remote restart state Status of the remote restart.
Memory Amount of memory (expressed in MB).
Processors Amount of entitled processing capacity.
Minimum memory (MB) Amount of minimum desired memory.
Maximum memory (MB) Amount of maximum memory.
Minimum processors Amount of minimum virtual processor capacity.
Maximum processors Amount of maximum virtual processor capacity.
Availability priority Priority number for availability when a processor fails.
Processor mode Shared or dedicated processor mode selected.
Minimum processing units Amount of minimum entitled processing capacity.
Maximum processing units Amount of maximum entitled processing capacity.
Sharing mode Uncapped or capped mode selected.
Shared weight Weight to request shared resources.
Processor compatibility mode The processor compatibility mode is determined when the
instance is powered on.
Desired compatibility mode The processor compatibility mode that is wanted for the VM.
Operating system The name and level of the operating system that is installed on
the partition.
13. Figure 5-47 provides an example of the Specifications section for the recently added VM.
Figure 5-47 Virtual machine detailed view of expanded Specifications section
14. Collapse the Specifications section and expand the Network Interfaces section. This
section contains information that relates to the virtual network connectivity, as shown in
Figure 5-48.
Figure 5-48 Virtual machine detailed view of expanded Network Interfaces section
15. Double-click Network Interfaces. Two tabs are shown. The Overview tab displays the
Network detailed information, including the VLAN ID, the Virtual I/O Servers that are
involved, the shared Ethernet adapters, and other useful information. The IP Pool tab
displays the range of IP addresses that make up the IP pool (if you previously defined it).
Figure 5-49 displays the Network Overview tab.
Figure 5-49 Detailed Network Overview tab
16. The Collocation Rules section displays the collocation rules that are used to allocate the
VM (if you configured collocation rules).
17. The last section of the Virtual Machine window is the Details section, which presents the
status and the hypervisor names for the VM as listed in Table 5-3.
Table 5-3 Details section’s fields
Field Description
Power state Power status for the VM
Task status Whether a task is running on the VM and the
status of the task
Disk config How the disk was configured into the VM
Hypervisor host name The name of the host in the hypervisor and the
HMC
Hypervisor partition name The name of the VM in the hypervisor and the
HMC
5.15.2 Refresh the virtual machine view
Click Refresh to reload the information for the currently selected VM. Figure 5-50 shows the
detailed Information section of the Overview tab for the selected VM.
Figure 5-50 Virtual machine Refresh icon
Out-of-band operations
In the context of PowerVC, the term out-of-band operation refers to any operation on an
object that is managed by PowerVC that is not performed from the PowerVC tool. For
example, an LPM operation that is initiated directly from an HMC is considered an
out-of-band operation.
With the default polling interval settings, it might take several minutes for PowerVC to be
aware of the change to the environment as a result of an out-of-band operation.
Note: On many PowerVC windows, you can see a Refresh icon, as shown by the red
highlighting in Figure 5-50. Most windows update asynchronously through long polling in
the background. Use Refresh if you think that the window does not show the latest
data from those updates (for example, you suspect a problem with a network connection,
or you want to ensure that up-to-date data displays). When you click the Refresh icon, a
Representational State Transfer (REST) call is made to the PowerVC server to get the
latest data that is available from PowerVC.
5.15.3 Start the virtual machine
From the Virtual Machines window, you can use the Start option to power on the currently
selected VM. After the VM finishes the startup process, the VM is available for operations that
are performed through the PowerVC management host. This process takes more time than
the operating system boot alone, because PowerVC waits until the RMC service is
available to communicate with the VM. Even though the status field is Active (because
the VM is powered on), the health field displays a warning message that is similar to
“Reason: RMC state of virtual machine vmaix01 is Inactive”. Wait a few minutes for
the health field to display a status of OK before you manage the VM from PowerVC.
Figure 5-51 displays the
VM after it starts.
Figure 5-51 Virtual machine fully started
5.15.4 Stop the virtual machine
From the VM’s detailed window, click Stop to shut down the VM.
Important: PowerVC presents a pop-up window that asks for confirmation that you want to
shut down the machine before PowerVC acts.
When the VM completes the shutdown process, the state changes to Shutoff as shown in
Figure 5-52. This process takes a few minutes to complete.
Figure 5-52 Virtual machine powered off
5.15.5 Capture a virtual machine image
You can capture an operating system image of a VM that you created or deployed. This image
will then be used to install the operating system of the future VMs that are created from
PowerVC. Before you capture the VM, you must first prepare and enable it.
To enable a VM, you can use either the activation engine or cloud-init. The steps to
install each technology are described next.
Requirements for capture
To be eligible for image capture, a VM must meet several requirements:
The VM must run one of the operating system versions that are supported by PowerVC.
Your PowerVC environment is configured.
The host on which the VM runs is managed by PowerVC.
The VM uses virtual I/Os and virtual storage; the network and storage devices are
provided by the VIOS.
Note: If an active RMC connection exists between PowerVC and the target VM, a
shutdown of the operating system is triggered. If no active RMC connection exists, the VM
is shut down without shutting down the operating system.
Note: See the “Capture requirements” page in the IBM Knowledge Center to prepare the
VM and to verify that all of the capture requirements are met:
https://ibm.biz/BdXK6a
The /var directory on the PowerVC management hosts must have enough space
(PowerKVM only).
When you capture VMs that use local storage, the /var directory on the management
server is used as the repository for storing the images. The file system that contains the
/var directory needs to have enough space to store the captured images. This amount
can be several GBs, depending on the VM to capture.
If you plan for a Linux VM with multiple paths to storage, you must configure Linux for
multipath I/O (MPIO) on the root device.
If you want to capture an IBM i VM, multiple boot volumes are supported.
The VM is powered off. When you power off a VM, the status will appear as Active until
the VM completely shuts down. You can select the VM for capture even if the status is
displayed as Active.
Operating systems that use a Linux Loader (LILO) or Yaboot boot loader, such as SUSE
Linux Enterprise Server (SLES) 10, SLES 11, RHEL 5, and RHEL 6, require special steps
when you use VMs with multiple disks. These operating systems must be configured to
use a Universally Unique Identifier (UUID) to reference their boot disk. SLES 11 virtual
servers mount devices by using by-id notation by default, which means that devices are
represented by symbolic links. To address this issue, you need to perform one of the
following configurations before you capture a SLES VM for the first time:
– Configure Linux for MPIO on the root device on VMs that will be deployed to multiple
Virtual I/O Servers or multipath environments.
– Update /etc/fstab and /etc/lilo.conf to use UUIDs instead of symbolic links.
Follow these steps to change the devices so that they are mounted by UUID:
a. Search the file system table /etc/fstab for the presence of symbolic links. Symbolic
links look like this example: /dev/disk/by-*
b. Store the mapping of /dev/disk/by-* symlinks to their target devices in a scratch file
so that you can look up the device names later, for example:
ls -l /dev/disk/by-* > /tmp/scratchpad.txt
c. The contents of the scratchpad.txt file are similar to Example 5-1.
Example 5-1 scratchpad.txt file
/dev/disk/by-id:
total 0
lrwxrwxrwx 1 root root 9 Apr 10 12:07 scsi-360050768028180ee380000000000603c
-> ../../sda
lrwxrwxrwx 1 root root 10 Apr 10 12:07
scsi-360050768028180ee380000000000603c-part1 -> ../../sda1
Tip: Because the default Red Hat Enterprise Linux (RHEL) configuration creates a
restricted list for all WWPN entries, you must remove them to enable the deployment of
a captured image. The following RHEL link describes how to remove them:
https://ibm.biz/BdXapw
Important: When you enable the activation engine, the VM is powered off
automatically. When you use cloud-init, you must shut down the VM manually before
the capture.
lrwxrwxrwx 1 root root 10 Apr 10 12:07
scsi-360050768028180ee380000000000603c-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07
scsi-360050768028180ee380000000000603c-part3 -> ../../sda3
lrwxrwxrwx 1 root root 9 Apr 10 12:07 wwn-0x60050768028180ee380000000000603c
-> ../../sda
lrwxrwxrwx 1 root root 10 Apr 10 12:07
wwn-0x60050768028180ee380000000000603c-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 10 12:07
wwn-0x60050768028180ee380000000000603c-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07
wwn-0x60050768028180ee380000000000603c-part3 -> ../../sda3
total 0
lrwxrwxrwx 1 root root 9 Apr 10 12:07 scsi-0:0:1:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part3 -> ../../sda3
/dev/disk/by-uuid:
total 0
lrwxrwxrwx 1 root root 10 Apr 10 12:07 3cb4e486-10a4-44a9-8273-9051f607435e
-> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07 c6a9f4e8-4e87-49c9-b211-89086c2d1064
-> ../../sda3
d. Edit the /etc/fstab file. Replace the /dev/disk/by-* entries with the device names
to which the symlinks point, as laid out in your scratchpad.txt file. Example 5-2 shows
how the lines look before you edit them.
Example 5-2 /etc/fstab entries before editing
/dev/disk/by-id/scsi-360050768028180ee380000000000603c-part2 swap swap
defaults 0 0
/dev/disk/by-id/scsi-360050768028180ee380000000000603c-part3 / ext3
acl,user_xattr 1 1
In this example, those lines are changed to refer to the specific device names. See
Example 5-3.
Example 5-3 Specific device names for the /etc/fstab file
/dev/sda2 swap swap defaults 0 0
/dev/sda3 / ext3 acl,user_xattr 1 1
e. Edit the /etc/lilo.conf file so that the boot and root lines refer to the device names
instead of the by-id symbolic links. Example 5-4 shows how the lines look
before you edit them.
Example 5-4 /etc/lilo.conf file
boot = /dev/disk/by-id/scsi-360050768028180ee380000000000603c-part1
root = /dev/disk/by-id/scsi-360050768028180ee380000000000603c-part3
In Example 5-5, those lines were changed to refer to the specific device names.
Example 5-5 Specific device names for the /etc/lilo.conf file
boot = /dev/sda1
root = /dev/sda3
f. Run the lilo command.
g. Run the mkinitrd command.
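For reference, steps f and g amount to the following two commands, run as root after the
/etc/fstab and /etc/lilo.conf edits:
lilo      # rewrite the boot loader configuration from the updated /etc/lilo.conf
mkinitrd  # rebuild the initial RAM disk so that the new device references take effect at boot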
Preparing a virtual machine with cloud-init
The cloud-init script enables VM activation and initialization, and it is widely used with
OpenStack. Before you capture a VM, install the cloud-init initialization package. This
package is available at the /opt/ibm/powervc/images/cloud-init path on the PowerVC host.
Follow these steps:
1. Before you install cloud-init, you must install the dependencies for cloud-init. These
dependencies are not included with the operating systems:
– For SLES, install the dependencies that are provided in the SLES repo:
ftp://ftp.unicamp.br/pub/linuxpatch/cloud-init-ppc64/sles11 (or sles12)
– For RHEL, add the EPEL yum repository for the latest level of the dependent RPMs:
Use these commands for RHEL6, for example:
wget http://dl.fedoraproject.org/pub/epel/6Server/ppc64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6*.rpm
Use these commands for RHEL7, for example:
wget http://dl.fedoraproject.org/pub/epel/7/ppc64/e/epel-release-7-5.noarch.rpm
rpm -Uvh epel-release-7*.rpm
– For AIX, follow the instructions to download the cloud-init dependencies:
ftp://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/cloudinit
2. Install the appropriate cloud-init RPM for your operating system, which is available at
/opt/ibm/powervc/images/cloud-init.
However, if the VM already has an installed cloud-init RPM, you must uninstall the existing
RPM first (see the example after this list).
– For RHEL, install the appropriate RPM from
/opt/ibm/powervc/images/cloud-init/rhel:
• RHEL6: cloud-init-0.7.4-5.el6.noarch.rpm
• RHEL7: cloud-init-0.7.4-5.el7.noarch.rpm
Important: If you are installing the cloud-init package to capture a VM on which the
activation engine is already installed, you must first uninstall the activation engine. To
check whether the activation engine Red Hat Package Managers (RPMs) are installed, run
this command on the VM:
# rpm -qa | grep activation
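If the query lists activation engine packages, uninstall the activation engine before you
install cloud-init. A minimal sketch, assuming the vmc.vsae.tar package was extracted to
the current directory (the -u uninstall option is described later in this chapter):
cd /path/to/extracted/vsae    # directory where vmc.vsae.tar was extracted (adjust the path)
./linux-install.sh -u         # uninstall the activation engine on Linux (aix-install.sh -u on AIX)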
– For SLES, install the appropriate RPM from
/opt/ibm/powervc/images/cloud-init/sles:
• SLES 11: cloud-init-0.7.4-2.4.ppc64.rpm
• SLES 12: cloud-init-0.7.5-8.10.ppc64le.rpm
– For Ubuntu Linux, install the appropriate package from
/opt/ibm/powervc/images/cloud-init/ubuntu:
Ubuntu 15: cloud-init_0.7.7~bzr1091-0ubuntu1_all.deb
– For AIX, download the AIX cloud-init RPM from this address:
ftp://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/cloudinit
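For example, on a RHEL 7 VM, the check, removal, and installation from the previous list
might look like the following commands (a minimal sketch; the RPM path and file name are
the ones listed above):
rpm -qa | grep cloud-init   # check whether a cloud-init RPM is already installed
rpm -e cloud-init           # uninstall the existing copy, if one was found
rpm -ivh /opt/ibm/powervc/images/cloud-init/rhel/cloud-init-0.7.4-5.el7.noarch.rpm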
3. After you install cloud-init, modify the cloud.cfg file, which is available at
/etc/cloud/cloud.cfg, by using the following values:
– For RHEL, set the following values:
disable_root: 0
ssh_pwauth: 1
ssh_deletekeys: 1
– For SLES, perform these tasks:
• Remove the following field:
users: -root
• Add the following fields:
ssh_pwauth: true
ssh_deletekeys: true
– For both RHEL and SLES, add the following new values to the cloud.cfg file:
disable_ec2_metadata: True
datasource_list: ['ConfigDrive']
– For SLES only, after you update and save the cloud.cfg file, run the following
commands:
• chkconfig -s cloud-init-local on
• chkconfig -s cloud-init on
• chkconfig -s cloud-config on
• chkconfig -s cloud-final on
– For RHEL 7.0 and 7.1, ensure that the following conditions are met on the VM that you
are capturing:
• Set SELinux to permissive or disabled on the VM that you are capturing or
deploying.
• NetworkManager must be installed and enabled.
• Ensure that the net-tools package is installed.
• Edit all of the /etc/sysconfig/network-scripts/ifcfg-eth* files to set
NM_CONTROLLED=no.
Note: This package is not installed by default when you select the Minimal Install
software option during the installation of RHEL 7.0 and 7.1 from an International
Organization for Standardization (ISO) image.
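Taken together, the RHEL edits in this step produce a /etc/cloud/cloud.cfg that includes
lines such as the following sketch (only the values listed above; the rest of the file is left
unchanged):
disable_root: 0
ssh_pwauth: 1
ssh_deletekeys: 1
disable_ec2_metadata: True
datasource_list: ['ConfigDrive']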
4. Remove the Media Access Control (MAC) address information. For more information
about how to remove the MAC address information, see the OpenStack page:
http://docs.openstack.org/image-guide/content/ch_openstack_images.html
Important: The /etc/sysconfig/network-scripts file path that is mentioned in the
previous OpenStack page about the HWADDR applies only to RHEL. For
SLES, the HWADDR path is /etc/sysconfig/network.
For example, for the ifcfg-eth0 adapter, on RHEL, remove the HWADDR line from
/etc/sysconfig/network-scripts/ifcfg-eth0, and on SLES, remove the HWADDR
line from /etc/sysconfig/network/ifcfg-eth0.
The 70-persistent-net.rules and 75-persistent-net-generator.rules files are
required to add or remove network interfaces on the VMs after deployment. Ensure that
you save these files so that you can restore them after the deployment is complete.
These rules files are not supported by RHEL 7.0 and 7.1. Therefore, after you remove
the adapters, you must update the adapter configuration files manually on the VM to
match the current set of adapters.
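For example, on RHEL, you can back up the interface configuration and strip the HWADDR
line with standard commands (a sketch, using the ifcfg-eth0 path from the note above;
adjust the path for SLES or for other interfaces):
cp /etc/sysconfig/network-scripts/ifcfg-eth0 /tmp/ifcfg-eth0.bak   # keep a backup copy
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0      # remove the HWADDR line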
5. Enable and configure the modules (Table 5-4) and host name behavior by modifying the
cloud.cfg file:
– Linux: /etc/cloud/cloud.cfg
– AIX: /opt/freeware/etc/cloud/cloud.cfg
– We recommend that you enable reset-rmc and update-bootlist on Linux.
– Host name: If you want to change the host name after the deployment, remove
"- update_hostname" from the list of cloud_init_modules. If you do not remove it,
cloud-init resets the host name to the value that was set at deployment when the
system is restarted.
Table 5-4 Modules and descriptions
Module Description
restore_volume_group This module restores non-rootVG volume groups when you deploy
a new VM.
Note: For AIX, run the
/opt/freeware/lib/cloud-init/create_pvid_to_vg_mappings.sh
command to save the information that is used to restore custom
volume groups on all VMs that are deployed from the image that
will be captured. Saving this information is useful if you have a
multidisk VM that has a dataVG volume group defined. The module
will restore the dataVG after the deployment.
set_multipath_hcheck_interval Use this module to set the hcheck interval for multipath. If you
deploy a multidisk VM and this module is enabled, you can deploy
by specifying a cloud-config data entry that is named
"multipath_hcheck_interval" and give it an integer value that
corresponds to seconds. After deployment, each of the VM’s disks
has its hcheck_interval property set to the value that was
passed through the cloud-config data. Use the lsattr -El hdisk#
-a hcheck_interval command for verification. If you do not specify
the value within the cloud-config data, the module sets each
disk’s value to 60 seconds.
set_hostname_from_dns Use this module to set your VM’s host name by using the host
name values from your Domain Name Server (DNS). To enable this
module, add this line to the cloud_init_modules section:
- set_hostname_from_dns
Then, remove these lines:
- set_hostname
- update_hostname
set_hostname_from_interface Use this module to choose the network interface, and therefore the
IP address, to be used for the reverse lookup. The valid values are
interface names, such as eth0 and en1. On Linux, the default value
is eth0. On AIX, the default value is en0.
set_dns_shortname This module specifies whether to use the short name to set the
host name. Valid values are True to use the short name or False to
use the fully qualified domain name. The default value is False.
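If you enable the optional modules, the cloud_init_modules section of cloud.cfg might look
like the following sketch (module names are from Table 5-4 and the steps above; the exact
default module list varies by distribution):
cloud_init_modules:
# ... existing modules, with set_hostname and update_hostname removed if DNS naming is used ...
 - reset-rmc                # resets RMC automatically (enabled by default on AIX only)
 - update-bootlist          # removes the temporary virtual optical device from the bootlist
 - set_hostname_from_dns    # sets the host name from DNS instead of set_hostname/update_hostname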
The reset-rmc and update-bootlist modules that are recommended in step 5 work as
follows:
– reset-rmc: This module automatically resets RMC. It is enabled by default on
AIX. It can be enabled on Linux by adding - reset-rmc to the cloud_init_modules:
section.
– update-bootlist: This module removes the temporary virtual optical device, which is
used to send configuration information to the VM, from the VM’s bootlist. It is
enabled by default on AIX. It can be enabled on Linux by adding - update-bootlist to
the cloud_init_modules: section.
6. You can also deploy with both static and Dynamic Host Configuration Protocol (DHCP)
interfaces on SLES 11 and SLES 12:
– If you want cloud-init to set the host name, set the DHCLIENT_SET_HOSTNAME option in
the /etc/sysconfig/network/dhcp file to no.
– If you want cloud-init to set the default route by using the first static interface, which is
standard, set the DHCLIENT_SET_DEFAULT_ROUTE option in the
/etc/sysconfig/network/dhcp file to no.
If you do not set these settings to no and then deploy with both static and DHCP interfaces,
the DHCP client software might overwrite the values that cloud-init sets for the host name
and default route, depending on how long it takes to get DHCP leases for each DHCP
interface.
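After the change, the two settings appear in /etc/sysconfig/network/dhcp as follows (a
minimal sketch of only the two lines that are discussed above):
DHCLIENT_SET_HOSTNAME="no"
DHCLIENT_SET_DEFAULT_ROUTE="no"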
7. For AIX, run the /opt/freeware/lib/cloud-init/create_pvid_to_vg_mappings.sh
command to save the information that is used to restore custom volume groups on all VMs
that are deployed from the image that will be captured.
8. Manually shut down the VM.
Preparing a virtual machine with activation-engine
Follow these steps to install and enable the activation engine:
1. Look for the vmc.vsae.tar activation engine package on the PowerVC management host
in the /opt/ibm/powervc/activation-engine directory.
2. Copy the vmc.vsae.tar file to the VM that you will capture. This file can be stored in any
directory that matches your environment’s guidelines.
3. On the VM that you will capture, extract the contents of the vmc.vsae.tar file.
4. For AIX, perform these tasks:
– Ensure that the JAVA_HOME environment variable is set and points at a Java runtime
environment (JRE), for example:
# export JAVA_HOME=/usr/java5/jre
– Run the activation engine installation command:
./aix-install.sh
5. For Linux, run the following command, which was included in the vmc.vsae.tar file:
./linux-install.sh
When you run this command on Linux, you are asked whether the operating system is
running on a kernel-based VM (KVM) hypervisor. Answer no to this question.
6. You can remove the .tar file and extracted files now, unless you want to remove the
activation engine later.
Before you capture a VM, you must enable the activation engine that is installed on it. To
enable the activation engine, follow these steps:
1. If you previously captured the VM and want to capture it again, run the commands that are
shown in Example 5-6.
Example 5-6 Commands to enable the activation engine
rm /opt/ibm/ae/AP/*
cp /opt/ibm/ae/AS/vmc-network-restore/resetenv /opt/ibm/ae/AP/ovf-env.xml
2. Prepare the VM to be captured by running the following command:
/opt/ibm/ae/AE.sh -R
3. Wait until the VM is powered off. See Example 5-7 for an example of the output of the
command.
Example 5-7 Output from the /opt/ibm/ae/AE.sh -R command
# /opt/ibm/ae/AE.sh -R
JAVA_HOME=/usr/java5/jre
[2013-11-01 16:44:55,831] INFO: Looking for platform initialization commands
[2013-11-01 16:44:55,841] INFO: OS: AIX Version: 7.1
[2013-11-01 16:44:56,315] INFO: No initialization commands found....continuing
[2013-11-01 16:44:56,319] INFO: Base PA: /opt/ibm/ae/ovf-env-base.xml
[2013-11-01 16:44:56,322] INFO: VSAE Encryption Level: Disabled
[2013-11-01 16:44:56,323] INFO: CLI parameters are '['AE/ae.py', '-R']'
[2013-11-01 16:44:56,325] INFO: AE base directory is /opt/ibm/ae/
[2013-11-01 16:44:56,345] INFO: Resetting system. AP file: None. Interactive:
False
[2013-11-01 16:44:56,513] INFO: In reset
[2013-11-01 16:44:56,513] INFO: Resetting products
[2013-11-01 16:44:56,515] INFO: Start to reset com.ibm.ovf.vmcontrol.system
Important: The following step will shut down the VM. Ensure that no users or programs
are active and that the machine can be stopped before you execute this step.
Note: When this command finishes, the VM is powered off and ready to be captured.
0821-515 ifconfig: error loading /usr/lib/drivers/if_eth: A file or directory
in the path name does not exist.
[2013-11-01 16:44:56,846] INFO: Start to reset
com.ibm.ovf.vmcontrol.restore.network
0821-515 ifconfig: error loading /usr/lib/drivers/if_eth: A file or directory
in the path name does not exist.
[2013-11-01 16:44:59,917] INFO: Resetting the operating system
[2013-11-01 16:44:59,947] INFO: Cleaning AR and AP directories
[2013-11-01 16:44:59,957] INFO: Shutting down the system
SHUTDOWN PROGRAM
Fri Nov 1 16:45:01 CDT 2013
Broadcast message from root@vmaix01 (tty) at 16:45:01 ...
shutdown: PLEASE LOG OFF NOW !!!
System maintenance is in progress.
All processes will be killed now.
Broadcast message from root@vmaix01 (tty) at 16:45:01 ...
shutdown: THE SYSTEM IS BEING SHUT DOWN NOW
JAVA_HOME=/usr/java5/jre
[2013-11-01 16:45:10,040] INFO: Looking for platform initialization commands
[2013-11-01 16:45:10,049] INFO: OS: AIX Version: 7.1
[2013-11-01 16:45:10,424] INFO: No initialization commands found....continuing
[2013-11-01 16:45:10,428] INFO: Base PA: /opt/ibm/ae/ovf-env-base.xml
[2013-11-01 16:45:10,430] INFO: VSAE Encryption Level: Disabled
[2013-11-01 16:45:10,433] INFO: CLI parameters are '['AE/ae.py', '-d', 'stop']'
[2013-11-01 16:45:10,434] INFO: AE base directory is /opt/ibm/ae/
[2013-11-01 16:45:10,453] INFO: Stopping AE daemon.
[2013-11-01 16:45:10,460] INFO: AE daemon was not running.
0513-044 The sshd Subsystem was requested to stop.
Wait for '....Halt completed....' before stopping.
Error reporting has stopped.
If you need to uninstall the activation engine from a VM, log on to the VM’s command-line
interface (CLI). Change your working directory to the directory where you extracted (tar -x)
the vmc.vsae.tar activation engine package. Run the following commands:
For AIX, run this command:
aix-install.sh -u
For Linux, run this command:
linux-install.sh -u
Capture the virtual machine image
Follow these steps to capture a VM image:
1. After you complete the previous steps to install and prepare the VM for capture, log on to
the PowerVC GUI. Go to the Virtual Machines view. Select the VM that you want to
capture, as shown in Figure 5-53. Click Continue.
Figure 5-53 Capture window
2. Use PowerVC to choose the name for your future image and select the volumes (either
boot volumes or data volumes) that you want to capture.
3. When you capture a VM, all volumes that belong to its boot set are included in the image
that is generated by the capture. If the VM is brought into PowerVC management, the boot
set consists of all volumes that are marked as the boot set when PowerVC manages the
VM.
If the VM is deployed from an image that is created within PowerVC, the boot set consists
of all volumes that the user chooses as the boot set when the user creates the image.
Unlike the volumes that belong to the VM’s boot set, the user can choose which data
volumes to include in the image that is generated by the capture. Figure 5-54 shows an
example of choosing to capture both boot volumes and data volumes. Click Capture.
Figure 5-54 Capture boot and data volumes
4. PowerVC shows a confirmation page that lists all of the VM volumes that were chosen for
capture. See Figure 5-55. Click Capture again to start the capture process.
Figure 5-55 Capture window confirmation
5. In Figure 5-56, the Task column displays a “Pre-capture processing started” message.
In addition, a pop-up message, which states that PowerVC is taking a snapshot of the VM
image, appears for a few seconds in the lower-right corner of the window, as shown in
Figure 5-56.
Figure 5-56 Image snapshot in progress
6. If you open the Images window while an image capture is ongoing, you will see the image
state displayed as Queued as shown in Figure 5-57.
Figure 5-57 Image creation in progress
7. When the image capture is complete, the state in the Images view changes to Active,
as visible in Figure 5-57.
8. Look at the Storage volumes window. You can see the storage volumes that were created
to hold the VM images. For example, Figure 5-58 shows two volumes that contain the
images that were captured on the same VM.
Figure 5-58 Storage volumes view
9. The PowerVC management host captures the image in the same way that it adds a
volume to the system, but it adds metadata that marks this volume as an image.
This information enables the image to appear in the Images view to deploy new VMs.
10. Click the Images icon on the left bar to return to the Images view. Select the image to
display its information in detail. Double-click the image to open a window that is similar to
the window that is shown in Figure 5-59.
Figure 5-59 Expanded information for a captured image
11. Table 5-5 explains each field in the Information section.
Table 5-5 Description of the fields in the Information section
Field Description
Name Name of the image capture
State Current state of the image capture
ID Unique identifier number for the resource
Description Quick description of the image
Checksum Verification sum for the resource
Captured VM Name of the VM that was used to create the image
Created Created date and time
Last updated Last updated date and time
12. Table 5-6 explains each field of the Specifications section.
Table 5-6 Description of the fields in the Specifications section
Field Description
Image type Description of the image type
Container format Type of container for the data
Disk format The specific format for the disk
Operating system The operating system on the image
Hypervisor type The name of the hypervisor that is managing the image
Architecture Architecture of the image
Endianness Big endian or little endian
13. The Volumes section displays all of the storage information about the image.
14. The Virtual Machines section displays the list of VMs that were deployed by using this
image. The Volumes and Virtual Machines sections are shown in Figure 5-60.
Figure 5-60 Volumes section and Virtual Machines section
5.15.6 Deploy a new virtual machine
You can deploy a new VM by reusing one of the images that was captured as described in
5.15.5, “Capture a virtual machine image” on page 145. You can deploy to a specific host, or
the placement policy can choose the best location for the new VM. For more information
about the placement policy functionality, see 3.3, “Placement policies and templates” on
page 38.
PowerVC version 1.2.3 has the following limits on deployments:
PowerVC supports a maximum of 50 concurrent deployments. We recommend that you do
not exceed eight concurrent deployments for each host.
Running more than 10 concurrent deployment operations might require additional memory
and processor capacity on the PowerVC management host.
If you use only SAN storage and you plan to batch-deploy over 100 VMs that are based on
one image, you must make multiple copies of that image and deploy the VMs in batches of
10.
The following settings might increase the throughput and decrease the duration of
deployments:
Use the striping policy instead of the packing policy.
Limit the number of concurrent deployments to match the number of hosts.
The host group and storage connectivity group that you select determine the hosts that are
available as target hosts in the deployment operation. For more information, see 3.5.3,
“Storage connectivity groups and tags” on page 58.
Important: Before you deploy an image, you can set a default domain name that PowerVC
uses when it creates new VMs by using the powervc-domainname command. This domain
name is used to create the fully qualified name of the new VM. If you set the domain name
to ibm.com and you create a partition with the name new_VM, its fully qualified host
name will be new_VM.ibm.com.
If you do not set a default domain name in the nova.conf file, PowerVC uses the domain
that is set for the VIOS on the host to which you are deploying. If PowerVC cannot retrieve
that value, it will use the domain name of the PowerVC management host. If it cannot
retrieve that value, no domain name is set and you must set the domain name manually
after you deploy the image.
See 4.7, “PowerVC command-line interface” on page 92 for details about the PowerVC CLI
and the powervc-domainname command.
You can initiate a new deployment from the Images window, which lists the available images. Follow
these steps:
1. Select the image that you want to install on the VM that you create. The selected image
background changes to light blue. Then, click Deploy, as shown in Figure 5-61.
Figure 5-61 Image capture that is selected for deployment
2. PowerVC opens a new window where you need to define information about the new VM.
Figure 5-62 on page 163 presents an example of this window. During the
planning phase of the partition creation, you defined the following information:
– VM name
– Instances
If you have a DHCP server or an IP pool that is configured, you can deploy several VMs
simultaneously.
– Host or host group
Manually select the target host where the new VM will be deployed, or select the host
group so that PowerVC selects the host based on the configured policy. See 3.3,
“Placement policies and templates” on page 38 for details about the automatic
placement of partitions.
– Storage connectivity group
Select one storage connectivity group for the new VM to access its storage. PowerVC
can use a storage connectivity group to determine the use of vSCSI or NPIV to access
SAN storage. See 3.5.3, “Storage connectivity groups and tags” on page 58 for details
about the selection of the storage path and FC ports to use.
– Compute template
Select the compute template that you want to use to deploy the new VM with standard
resource definitions. See 3.3.4, “Information that is required for compute template
planning” on page 42 for detailed information about planning for CPU and memory
resources by using templates.
In Figure 5-62 on page 163, you can see that PowerVC displays the values that are
preset in the template in fields that can be overwritten. You can change the amount of
resources that you need for this new VM.
– Image volumes
Since PowerVC version 1.2.3, you can capture a multiple-volume image. In this case,
two volumes are included in the image. You need to select the storage template that
you want for each volume to deploy the new VM with predefined storage capacity. You
can select different storage templates for those volumes to meet your business needs.
PowerVC presents a drop-down menu that lists the storage templates that are available
in the storage provider in which the image volumes are stored.
– New and existing volumes
You can add new or existing volumes in addition to the volumes that are included in the
image. To add volumes, click Add volume. The Add Volume page, where you attach a
volume to the VM opens.
– Network:
• Primary network
Select the network. If the selected network does not have a configured DHCP
server, you must also manually provide an IP address or PowerVC selects an IP
address from the IP pool.
• Additional networks
If two or more networks were defined in PowerVC, you can click the plus sign (+)
icon to add more networks. Select the network. Get the IP address from the DHCP
server, provide the IP address manually, or select one from the IP pool
automatically.
– Activation input:
You can upload configuration scripts or add configuration data when you deploy
a VM by using the activation input option (a short example script follows the note
below). This script or data automatically configures your VM according to your
requirements after it is deployed. For more information about the accepted data
formats in cloud-init and examples of commonly used cloud configuration data
formats, see the cloud-init documentation.
For more information about activation input, see the IBM Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_deploy_considerations.html
Note: PowerVC verifies that the IP address that you provide is not already used for
another VM, even if the IP address is used in a VM that is powered off.
Note: The file or scripts that you upload and add here are used by the cloud-init
initialization package and the activation engine (AE) for AIX VMs only. The activation
engine for AIX VMs supports shell scripts that start with #! only, and it does not
support the other cloud-init data formats. For any other operating system, the
activation engine does not use the data that you upload for activation.
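For example, a simple activation-input script might look like the following sketch (a
hypothetical script; as the note above states, for AIX with the activation engine it must
start with #!):
#!/bin/sh
# Hypothetical post-deployment customization: record the deployment in the login banner
echo "Deployed by PowerVC on $(date)" >> /etc/motd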
Note: On the right part of the window, PowerVC displays the amount of available
resources on the target host and the amount of additional resources that are
requested for the new partition. So, you can see the amount of resources that are
used and free on this host after the installation of the new partitions.
Figure 5-62 shows the window where you define information about the new VM.
Figure 5-62 Information to deploy an image
3. Click Deploy on the lower part of the window to start the deployment of the new VM. This
process might take a few minutes to finish.
Important: Other vendors’ storage devices do not offer a technique like the IBM
FlashCopy® service in IBM Storwize storage; they use LUN migration instead. A
deployment might take an hour to complete. The amount of time depends on the
volumes’ sizes and the storage device performance. Contact your storage administrator
for more information before you design your PowerVC infrastructure.
4. When the deployment finishes, you can see a new VM in the Virtual Machines window.
This new VM is a clone of the captured image. The new VM is already configured and
powered on as shown in Figure 5-63.
Figure 5-63 Newly deployed virtual machine
Tip: The new VM is a clone of the image, so you can log on to this VM with the same user
ID and password combination that is defined in the VM from which the image was
captured.
5.15.7 Add virtual Ethernet adapters for virtual machines
After the VM is deployed successfully, you can add more virtual Ethernet adapters to the
VM if you defined more networks in PowerVC. PowerVC allows only one virtual
Ethernet adapter for each network in a VM. Follow these steps:
1. To add a virtual Ethernet adapter for a VM, select the VM name on the Virtual Machines
page.
2. Then, go to the VM’s details page. As shown in Figure 5-64, in the Network Interfaces
section, click Add.
3. Select the network that you want to connect. Assign an IP address or PowerVC will select
an IP address from the IP pool.
4. Click Add Interface. A new virtual Ethernet adapter is added for the VM.
Figure 5-64 Add an Ethernet adapter for a virtual machine
5.15.8 Add collocation rules
Use collocation rules to specify that selected VMs must always be kept on the same host
(affinity) or that they can never be placed on the same host (anti-affinity). These rules are
enforced when a VM is relocated. For example, in PowerHA scenarios, we need to force the
pair of high availability (HA) VMs to exist on different physical machines. Otherwise, a single
point of failure (SPOF) risk exists. Use the anti-affinity collocation rule to create this scenario.
Note: After you add the virtual Ethernet adapter, you must refresh the hardware list in the
partition. For example, run the cfgmgr command in AIX to discover the newly added
Ethernet adapter, and then assign its IP address manually.
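On AIX, the rediscovery might look like the following commands (a sketch; the ent device
number depends on the VM):
cfgmgr                          # rediscover devices so that the new virtual Ethernet adapter appears
lsdev -Cc adapter | grep ent    # confirm that the new ent adapter is listed as Available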
To create a new collocation rule, select Configuration → Collocation Rules → Create
Collocation Rule, as shown in Figure 5-65. Enter the collocation rule name, select the policy
type (either Affinity or Anti-Affinity), select the VMs, and click Create. The collocation
rule creation is complete.
Figure 5-65 Create Collocation Rule
Important: When VMs are migrated or restarted remotely, one VM is moved at a time,
which has the following implications for VMs in collocation rules that specify affinity:
The VMs cannot be migrated or restarted remotely on another host.
When you put a host into maintenance mode, if that host has multiple VMs in the same
collocation rule, you cannot migrate active VMs to another host.
To migrate a VM or restart a VM remotely in these situations, the VM must first be removed
from the collocation rule. After the VM is migrated or restarted remotely, the VM can be
added to the correct collocation rule.
5.15.9 Resize the virtual machine
The PowerVC management host can resize the managed VMs dynamically. Follow these
steps:
1. From the Virtual Machines window, click Resize on the upper bar on the window as shown
in Figure 5-66.
Figure 5-66 Virtual Machine resize
2. In the next window (Figure 5-67), enter the new values for the resources or choose an
existing compute template. Select the option that best fits your business needs.
Figure 5-67 VM Resize dialog window to select a compute template
When you enter the new value, it is verified and checked against the minimum and
maximum values that are defined in the partition profile. If the requested new values
exceed these limits for the VM, PowerVC rejects the request, highlights the field with a red
outline, and issues an error notice. See Figure 5-68.
Figure 5-68 Exceeded value for resizing
3. After you complete the information that is required in this window, click Resize to start the
resizing process. You will see a pop-up message window in the lower-right part of the
window and a “complete” message in the message view.
4. The resize process can take a few minutes. When it finishes, you can see the new sizes in
the Specifications section of the VM.
Tip: The PowerVC management server compares the entered values with the values in
the profile of the selected VM. If you modify the VM profile, you must shut down and
restart the VM for the changes to take effect.
Important: To refresh the profile, shut down and restart the VM rather than reboot it.
Rebooting the VM keeps the current values rather than reading the new values that you
set in the profile.
Note: With the PowerVC resize function, you can change the current settings of the
machine only. You cannot use the resize function to change the minimum and maximum
values that are set in the partition profile or to change a partition from shared to
dedicated.
5.15.10 Migration of virtual machines
PowerVC can manage the Live Partition Mobility (LPM) feature. Use the LPM feature to
migrate VMs from one host to another host.
Migration requirements
To migrate VMs by using the IBM PowerVC management server, ensure that the source and
destination hosts and the VMs are configured correctly.
To migrate a VM, the following requirements must be met:
The VM is in Active status in the PowerVC management host.
The PowerVM Enterprise Edition or PowerVM for IBM Linux on Power hardware feature is
activated on your hosts. This feature enables the use of the LPM feature.
The networks for both source and target hosts must be mapped to shared Ethernet
adapters by using the same virtual Ethernet switch.
We recommend that the maximum number of virtual resources (virtual adapters) be set to
at least 200 on all of the hosts in your environment. This value ensures that you can create
enough VMs on your hosts.
The logical memory block size on the source host and the destination host must be the
same.
Both the source and destination hosts must have an equivalent configuration of Virtual I/O
Servers that belong to the same storage connectivity group.
The processor compatibility mode on the VM that you want to migrate must be supported
by the destination host.
The VM must have an enabled Resource Monitoring and Control (RMC) connection.
To migrate a VM with a vSCSI attachment, the destination VIOS must be zoned to the
backing storage.
At least one pair of VIOS VMs must be storage-ready and members of the storage
connectivity group. Each of these VIOS VMs must have at least two physical FC ports
ready.
Each of the two physical FC ports must be connected to a distinct fabric, and the fabric
must be set correctly on the FC ports’ Configuration pages.
The following restrictions apply when you migrate a VM:
You cannot migrate a VM to a host that is a member of a different host group.
If the VM is running a little endian guest, the target host must support little endian guests.
If the VM was created as remote restart-capable, the target host must support remote
restart.
Certain IBM Power System servers can run only Linux workloads. When you migrate an
AIX or IBM i VM, these hosts are not considered for placement.
Note: If the source host has two Virtual I/O Servers and the target host has only one
VIOS, it is not possible to migrate a partition by accessing its storage through both
Virtual I/O Servers on the source. However, if a partition on the source host is using
only one VIOS to access its storage, it can be migrated (assuming that other
requirements, such as port tagging, are met).
You cannot exceed the maximum number of simultaneous migrations that are designated
for the source and destination hosts. The maximum number of simultaneous migrations
depends on the number of migrations that are supported by the Virtual I/O Servers that
are associated with each host.
A source host in a migration operation cannot serve concurrently as a destination host in a
separate migration operation.
If you deployed a VM with a processor compatibility mode of POWER7 and later changed
the mode to POWER6, you cannot migrate the VM to a POWER6 host. The MAC address
for a POWER7 VM is generated by PowerVC during the deployment.
To migrate to a POWER6 host, the MAC address of the VM must be generated by the
HMC. To migrate from a POWER7 to a POWER6 host, you must initially deploy to a
POWER7 system with the processor compatibility mode set to a POWER6 derivative, or
you must initially deploy to a POWER6 host.
PowerVM does not support the migration of a VM whose attachment type will change its
multipathing solution between the source and destination Virtual I/O Servers. For
example, a VM on a path control module (PCM)-attached VIOS can be successfully
migrated only to a PCM-attached VIOS. However, PowerVM does not enforce this
requirement. To avoid unsupported migrations, create separate storage connectivity
groups for PCM and PowerPath multipathing solutions.
Collocation rules are enforced during migration:
– If the VM is a member of a collocation rule that specifies affinity and multiple VMs are
in that collocation rule, you cannot migrate it, because the migration would break the
affinity rule. To migrate a VM in this case, remove it from the collocation rule and then
add it to the correct group after the migration.
– If the VM is a member of a collocation rule that specifies anti-affinity, you cannot
migrate it to a host that has a VM that is a member of the same collocation rule. For
example, assume the following scenario:
• Virtual Machine A is on Host A.
• Virtual Machine B is on Host B.
• Virtual Machine A and Virtual Machine B are in a collocation rule that specifies
anti-affinity.
Then, Virtual Machine A cannot be migrated to Host B.
– Only one migration or remote restart at a time is allowed for VMs in the same
collocation rule. Therefore, if you try to migrate a VM or restart a VM remotely and any
other VMs in the same collocation rule are being migrated or restarted remotely, that
request fails.
Migrate the virtual machine
Follow these steps to migrate a VM:
1. Open the Virtual Machines window, and then select the VM that you want to migrate. The
background changes to light blue.
2. Click Migrate, as shown in Figure 5-69.
Figure 5-69 Migrate a selected virtual machine
3. You can select the target host, or the placement policy can determine the best target, as
shown in Figure 5-70.
Figure 5-70 Select target server before the migration
4. Figure 5-71 shows that during the migration, the Virtual Machines window displays the
partition with the state and task both set to Migrating.
Figure 5-71 Virtual machine migration in progress
5. After the migration completes, you can check the Virtual Machines window to verify that
the partition is now hosted on the target host, as shown in Figure 5-72.
Figure 5-72 Virtual machine migration finished
Note: A warning message in the Health column is normal. It takes a few minutes to
change to OK.
5.15.11 Host maintenance mode
You move a host to maintenance mode to perform maintenance activities on a host, such
as updating firmware or replacing hardware.
Maintenance mode requirements
Before you move the host into maintenance mode, check whether the following requirements
are met:
If the request was made to migrate active VMs when the host entered maintenance mode,
the following conditions must also be true:
– The hypervisor must be licensed for LPM.
– The VMs on the host cannot be in the error, paused, or building states.
– On all active VMs, the health must be OK and the RMC connections must be active.
– All requirements for live migration must be met. See “Migration requirements” on
page 169 for details.
The host’s hypervisor state must be operating. If it is not, VM migrations might fail.
If the request was made to migrate active VMs when the host entered maintenance mode,
the following conditions cannot also be true, or the request will fail:
– A VM on the host is a member of a collocation rule that specifies affinity and has
multiple members.
– The collocation rule has a member that is already undergoing a migration or is being
restarted remotely.
Put the host in maintenance mode
If all of the requirements are met, you can put a host in maintenance mode by following these
steps:
1. On the Hosts window, select the host that you want to put into maintenance mode, and
click Enter Maintenance Mode as shown in Figure 5-73.
Figure 5-73 Enter Maintenance Mode
2. If you want to migrate the VMs to other hosts, select Migrate active virtual machines to
another host as shown in Figure 5-74. This option is unavailable if no hosts are available
for the migration.
Figure 5-74 Migrate virtual machines to other hosts
3. Click OK.
After maintenance mode is requested, the host’s maintenance state is Entering Maintenance
while the VMs are migrated to another host, if requested. This status changes to Maintenance
On after the migration is complete and the host is fully in the maintenance state.
To remove a host from maintenance mode, select the host and select Exit Maintenance
Mode. Click OK on the confirmation window as shown in Figure 5-75.
Figure 5-75 Exit Maintenance Mode
You can add VMs again to the host after it is brought out of maintenance mode.
5.15.12 Restart virtual machines remotely from a failed host
PowerVC can restart VMs remotely from a failed host to another host. To successfully restart
VMs remotely by using PowerVC, you must ensure that the source host and destination host
are configured correctly.
Remote restart requirements
To restart a VM remotely, the following requirements must be met:
The source and destination hosts must have access to the storage that is used by the
VMs.
The source and destination hosts must have all of the appropriate virtual switches that are
required by networks on the VM.
The hosts must be running firmware 820 or later.
The HMC must be running with HMC 820 Service Pack (SP)1 or later, with the latest
program temporary fix (PTF).
The hosts must support the simplified remote restart capability.
Both hosts must be managed by the same HMC.
The service processors must be running and connected to the HMC.
The source host must be in the Error, Power Off, or Error - dump in progress state on
the HMC.
The VM must be created with the simplified remote restart capability enabled.
The remote restart state of the VM must be Remote restartable.
Shared storage pools are not officially supported through PowerVM simplified remote
restart.
Tip: You can edit the period after which the migration operation times out and the
maintenance mode enters an error state by running the following commands:
/usr/bin/openstack-config --set /etc/nova/nova.conf DEFAULT
prs_ha_timeout_seconds <duration_in_seconds>
For example, to set the timeout for two hours, run this command:
/usr/bin/openstack-config --set /etc/nova/nova.conf DEFAULT
prs_ha_timeout_seconds 7200
Then, restart the openstack-nova-ibm-ego-ha-service:
service openstack-nova-ibm-ego-ha-service restart
Restart a virtual machine remotely
Before you can restart a VM on PowerVM remotely, you must deploy or configure the VM with
remote restart capability. You can deploy or configure the VM with remote restart capability in
two ways:
Create a compute template with the enabled remote restart capability and deploy a VM
with that compute template as shown in Figure 5-76.
Figure 5-76 Create a compute template with enabled remote restart capability
Modify the remote restart property after the VM is deployed. In Figure 5-77, you can see a
VM with the correct remote restart state, which is Remote restartable.
Figure 5-77 Correct remote restart state under the Specifications section
The Remote Restart task is available under the Hosts view as shown in Figure 5-78.
Figure 5-78 Remotely Restart Virtual Machines option
Note: You can change the remote restart capability of a VM only if the VM is shut off.
Important: A VM can be restarted remotely in PowerVM only if its Remote Restart state is
Remote restartable. When a VM is deployed initially, the HMC needs time to collect
partition and resource configuration information, so the remote restart state transitions
from Invalid through intermediate states. When it reaches Remote restartable, PowerVC
can initiate the remote restart operation for that VM.
To restart a VM remotely, select the failed host and then select Remotely Restart Virtual
Machines. Then, you can choose to restart either a specific VM or all of the VMs on the
failed host remotely, as shown in Figure 5-79.
Figure 5-79 Remotely Restart Virtual Machines
The scheduler can choose a destination host automatically by placement policy, or you can
choose a destination host (Figure 5-80).
Figure 5-80 Destination host
A notification on the user interface indicates that a VM was successfully restarted remotely.
When you select to restart all VMs on a failed host remotely, the host experiences several
transitions. Table 5-7 shows the host states during the transition.
Table 5-7 Host states during the transition
State | Description
Remote Restart Started | PowerVC is preparing to rebuild the VMs. This process can take up to one minute.
Remote Restart Rebuilding | PowerVC is rebuilding the VMs. After the VMs are restarted remotely on the destination host, the source host goes back to displaying its state.
Remote Restart Error | An error occurred while one or more VMs were moved to the destination host. You can check the reasons for the failure in the corresponding compute log file in the /var/log/nova directory.
5.15.13 Attach a volume to the virtual machine
The PowerVC management server can handle storage volumes. By using the management
server, you can attach a new or existing volume to a VM. Follow these steps:
1. Click the Virtual Machines icon on the left, and then select the VM to which you want to
add a volume. The background color changes to light blue.
2. Click Attach Volume. In the pop-up window that opens, you can attach an existing
volume, or you can create a volume and attach it in one step. In the example in
Figure 5-81, PowerVC will create a disk.
Figure 5-81 Attaching a new volume to a virtual machine
3. Select the storage template that determines the backing device, enter the volume name,
and choose the volume size in GB. You can add a short description for the new volume. The
Storage bar on the right side of the window changes dynamically when you change the
size. Click Attach. PowerVC creates a volume, attaches it to the VM, and then displays a
message at the bottom of the window to confirm the creation of the disk.
Note: You can select the Enable sharing check box so that other VMs can use the
volume also, if needed.
4. To see the new volume, open the VM’s detailed information window and select the
Attached Volumes tab. This tab displays the current volumes that are attached to the VM,
as shown in Figure 5-82.
Figure 5-82 Attached Volumes tab view
5. To complete the process, you must execute the correct command on the VM command
line:
– For IBM AIX operating systems, execute this command as root:
cfgmgr
– For Linux operating systems, execute this command as root, where host_N is the
controller that manages the disks on the VM:
echo “- - -” > /sys/class/scsi_host/host_N/scan
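For example, assuming the controller is host0 (list the available controllers with
ls /sys/class/scsi_host to confirm), the rescan and a quick verification might look like
this sketch:
echo "- - -" > /sys/class/scsi_host/host0/scan
lsblk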
5.15.14 Detach a volume from the virtual machine
To detach a volume from the VM, you must first remove it from the operating system.
Remove the volume from the operating system
For the IBM AIX operating system, execute this command as root, where hdisk_N is the disk
that you want to remove:
rmdev -dl hdisk_N
For the Linux operating system, reboot after you detach the volume.
Note: The Attached Volumes tab displays only volumes that were attached to the
machine after its creation or import. This tab does not display the boot volume of the
partition.
Note: We recommend that you cleanly unmount all file systems from the disk, remove the
logical volume, and remove the disk from AIX before you detach the disk from PowerVC.
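For illustration, a minimal AIX cleanup sketch before the detach, assuming a file system
/data in volume group datavg on hdisk2 (all names are hypothetical):
umount /data              # unmount the file system
varyoffvg datavg          # deactivate the volume group
exportvg datavg           # remove the volume group definition
rmdev -dl hdisk2          # delete the disk device
On Linux, as an alternative to a reboot, many distributions also let you remove the SCSI
device through sysfs, for example echo 1 > /sys/block/sdb/device/delete (assuming the
volume is visible as sdb).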
Detach the volume from a virtual machine
The PowerVC management server can handle storage volumes. By using the PowerVC
management server, you can detach an existing volume from a VM:
1. Click the Virtual Machines icon, and then double-click the VM from which you want to
detach a volume.
2. Click the Attached Volumes tab to display the list of volumes that are attached to this VM.
Select the volume that you want to detach. The background color changes to light blue.
3. Click Detach, as shown in Figure 5-83.
Figure 5-83 Detach a volume from a virtual machine
4. PowerVC displays a confirmation window. See Figure 5-84.
Figure 5-84 Confirmation window
5. You will see a Detaching status in the State column. When the process finishes, the
volume is detached from the VM.
The detached volume is still managed by the PowerVC management host. You can see the
volume from the Storage window.
5.15.15 Reset the state of a virtual machine
In certain situations, a VM becomes unavailable or it is in an unrecognized state for the
PowerVC management server. When these situations occur, you can execute a Reset State
procedure, which sets the machine back to an active state. Figure 5-85 shows a VM’s
detailed information window with a Reset State link on the State line of the Information
section. Click Reset State to start the reset process.
Figure 5-85 Resetting the virtual machine’s state
Note: No changes are made to the connection or database.
The PowerVC management server displays a confirmation window. Click OK to continue.
See Figure 5-86.
Figure 5-86 State reset confirmation window
5.15.16 Delete images
To delete an image that is not in use, open the Images window, and then select the image that
you want to delete. The background color changes to light blue. Then, click Delete as shown
in Figure 5-87.
Figure 5-87 Image selected
Note: This process can take a few minutes to complete. If the state does not change, try to
restore the VM or deploy the VM again from an image.
The PowerVC management server displays a confirmation window, as shown in Figure 5-88.
If you want to delete the image from the storage permanently, select the check box and click
OK. Otherwise, the volume that contains the image will remain in the storage pool, but it will
no longer be usable to deploy an image. This function is specific to PowerVM.
Figure 5-88 Delete an image confirmation window
PowerVC opens a pop-up window with a message that indicates that the image is being
deleted.
5.15.17 Unmanage a virtual machine
The Unmanage function removes a VM from PowerVC management. After a VM becomes
unmanaged, the VM is no longer listed in the Virtual Machines window, but
the VM still exists. The VM and its resources remain configured on the host. The VM can still
be managed from the HMC. The VM remains up and running.
To unmanage a VM, open the Virtual Machines window, and select the VM that you want to
remove from PowerVC. The Unmanage option is enabled. Click Unmanage to remove this
VM from the PowerVC environment.
Figure 5-89 shows the Unmanage option to unmanage a VM.
Figure 5-89 Unmanage an existing virtual machine
5.15.18 Delete a virtual machine
PowerVC can delete VMs completely from your systems.
Important: By deleting a VM, you completely remove the VM from the host system and
from the HMC, and PowerVC no longer manages it.
To remove a VM, open the Virtual Machines window and select the VM that you want to
remove. The background color changes to light blue. Click Delete, as shown in Figure 5-90.
Figure 5-90 Delete a virtual machine
The PowerVC management server displays a confirmation window (Figure 5-91). To
permanently delete the VM, click OK. PowerVC then confirms the deletion.
Figure 5-91 Confirmation window to delete a virtual machine
When PowerVC deletes storage, it behaves differently, depending on how volumes were
created:
Volumes that were created by PowerVC (the boot volumes) are deleted and removed from
the VIOS and storage back-ends.
Volumes that were attached to the partition are detached only during the partition deletion.
The zoning to storage is removed by the deletion operation.
Important: You can delete a VM while it is running. The process stops the running VM and
then deletes it.
Chapter 6. PowerVC Standard Edition for
managing PowerKVM
Using IBM Power Virtualization Center Standard Edition (PowerVC) to manage PowerKVM
requires special considerations for the setup, storage management, and the way that
PowerVC handles the capture of International Organization for Standardization (ISO) images.
In this chapter, we cover the installation and setup specifics and the basic steps to import,
capture, and deploy ISO images:
6.1, “Install PowerVC Standard to manage PowerKVM” on page 188
6.2, “Set up PowerVC Standard managing PowerKVM” on page 188
6.3, “Host group setup” on page 201
6.4, “Import ISO images” on page 201
6.5, “Capture a virtual machine” on page 212
6.6, “Deploy images” on page 220
6.7, “Resize virtual machines” on page 223
6.8, “Suspend and resume virtual machines” on page 224
6.9, “Restart a virtual machine” on page 224
6.10, “Migrate virtual machines” on page 225
6.11, “Restarting virtual machines remotely” on page 226
6.12, “Delete virtual machines” on page 228
6.13, “Create and attach volumes” on page 229
6.14, “Attach volumes” on page 229
For configuration and use, see Chapter 5, “PowerVC Standard Edition for managing
PowerVM” on page 97.
6.1 Install PowerVC Standard to manage PowerKVM
This section outlines the slight differences between the installation of PowerVC Standard
Edition for managing PowerKVM and the installation of PowerVC Standard Edition for
managing PowerVM.
Before you install PowerVC, a Linux installation must be ready, as described in Chapter 4,
“PowerVC installation” on page 77. We do not cover the Linux installation in this section
because it does not differ from the Linux installation for managing PowerVM. For the
installation details, see 4.2, “Installing PowerVC” on page 82. After the PowerVC installation
for Linux is ready, follow these steps:
1. From the Linux command-line interface (CLI), change the working directory to the location
of the installation script.
2. Install PowerVC Standard for managing PowerKVM by using this command:
./install
3. Select the offering type to install from the following options:
– 1 - Standard managing PowerVM
– 2 - Standard managing PowerKVM
– 9 - Exit
Enter 2 to install PowerVC Standard managing PowerKVM.
The rest of the installation process is the same for all versions. For more information, see 4.2,
“Installing PowerVC” on page 82.
6.2 Set up PowerVC Standard managing PowerKVM
In this section, we cover the steps to add a PowerKVM host, a storage provider, and a
network.
6.2.1 Add the PowerKVM host
Follow these steps:
1. In the PowerVC GUI, type your user and password, and click Log In (Figure 6-1).
Figure 6-1 PowerVC Login window
Figure 6-2 PowerVC Home page
Note: The Home page (Figure 6-2) does not offer the option to add a fabric.
2. Click Add host to add the PowerKVM host, as shown in Figure 6-3.
Figure 6-3 PowerVC Add Host window
During the Add Host task, a package is transferred and installed in the PowerKVM host.
As Figure 6-4 shows, messages appear in the lower-right side of the browser.
Figure 6-4 Informational messages
After the host is added, you see the message in Figure 6-5.
Figure 6-5 Host added successfully
3. To review the messages, click the black menu bar at the top of the browser. Figure 6-6
shows the Home page with the available PowerKVM hosts.
Figure 6-6 PowerVC managing PowerKVM hosts
4. For a detailed view of the added PowerKVM, click the Hosts icon in the left navigation
panel (highlighted in Figure 6-6).
5. Figure 6-7 displays the new PowerKVM hosts.
Figure 6-7 Detailed Hosts view
6. Click a PowerKVM host to display more information, as shown in Figure 6-8.
Figure 6-8 PowerKVM host information and capacity section
You can expand and collapse any section. The information that is displayed about virtual
switches and virtual machines (VMs) is shown in Figure 6-9.
Figure 6-9 PowerKVM Virtual Switches and Virtual Machines sections
6.2.2 Add storage
Follow these steps to add storage:
1. Add the storage by clicking the Add Storage plus sign (+) in the center of the PowerVC
Home page. Figure 6-10 shows a pop-up window to specify the storage array IP address
and credentials. In our lab environment, we use an IBM SAN Volume Controller (SVC).
Enter the name, user ID, and password. Click Connect.
Figure 6-10 Add a storage device to PowerVC
2. After you provide the IP connection settings and credentials, specify the SAN Volume
Controller storage pool that is assigned to your environment. In Figure 6-11, the SVC
shows three pools. We selected DS4800_site2_p02. Click Add Storage.
Figure 6-11 SVC storage pool choice
After you add the SVC and storage pool successfully, a new storage provider appears on the
PowerVC Home page, as shown in Figure 6-12 (Storage Providers: 1). The storage provider
does not have a managed volume yet.
Figure 6-12 The new SVC storage provider
6.2.3 Add a network
Follow these steps to add a network:
1. Add a network by clicking Add Network to open the window that is shown in Figure 6-13.
2. Add the network name, virtual LAN (VLAN) ID, subnet mask, default gateway, Domain
Name Server (DNS), and the address deployment choice (Dynamic Host Configuration
Protocol (DHCP) or Static). The configured virtual switch is automatically retrieved from
the PowerKVM configuration.
Figure 6-13 Add a network to the PowerVC configuration
3. After you add the network to the configuration, the Home page is updated, as shown in
Figure 6-14.
Figure 6-14 Network is configured now
Managing virtual switches
PowerVC Standard for managing PowerKVM can manage multiple virtual switches to
accommodate your business requirements. Follow these steps:
1. To edit the virtual switch configuration, from the PowerVC Home page, click the Hosts
icon, and then double-click the host that you want to use. Expand the Virtual Switches
section, if it is not expanded. The virtual switches are defined on the host as shown in
Figure 6-15.
Figure 6-15 List of virtual switches
2. Select the switch that you need to edit and click Edit Switch. From the list of available
components, select the physical component that you want to link to the virtual switch, and
click Save, as shown in Figure 6-16.
Figure 6-16 Edit virtual switch window
3. The message that is shown in Figure 6-17 appears. Verify that no other activity is running
on the host, and click OK.
Figure 6-17 Message about conflicts with the updated virtual switch selections
4. After the process finishes, the component is shown in the Components column. Click View
Components to see the details that are shown in Figure 6-18.
Figure 6-18 Details of the virtual switch components
Environment verification
Check the overall PowerVC configuration by clicking Verify Environment.
Note: This verification is the same procedure for all PowerVC versions. For more
information, see 5.14.1, “Verification report validation categories” on page 130.
6.3 Host group setup
With PowerVC version 1.2.3 or later, you can group hosts into host groups. You can set
different placement policies for each host group. To create a new host group, select Hosts →
Host Groups and click Create Host Group, as shown in Figure 6-19. Enter the host group
name, select the placement policy, and the hosts. Click Create Host Group at the bottom of
the window.
Figure 6-19 Create a host group
6.4 Import ISO images
PowerVC Standard managing PowerKVM offers you the option to use ISO images to create
Linux VMs. The setup differs slightly from PowerVC Standard managing PowerVM. After the
environment is verified, you can import ISO images to the PowerVC domain.
6.4.1 Importing ISO images by using the command-line interface
The first step to import an ISO image to PowerVC is to transfer the file to the PowerVC hosts.
Then, you can run the powervc-iso-import command to add the ISO to PowerVC.
Example 6-1 shows an example of importing a Red Hat Enterprise Linux (RHEL) ISO image
by using the command-line interface (CLI).
Example 6-1 Importing a Red Hat ISO image
[admin@powerkvm bin]# powervc-iso-import --name rhel65dvd2 --os rhel --location
/softimg/rhel-server-6.5-ppc64-dvd.iso
Password
+----------------------------+--------------------------------------+
| Property | Value |
+----------------------------+--------------------------------------+
| Property 'architecture' | ppc64 |
| Property 'hw_vif_model' | virtio |
| Property 'hypervisor_type' | qemu |
| Property 'os_distro' | rhel |
| checksum | 66bb956177d7b55946a5602935e67013 |
| container_format | bare |
| created_at | 2014-05-27T21:14:57.012159 |
| deleted | False |
| deleted_at | None |
| disk_format | iso |
| id | a898e706-c835-42c6-87c2-e53d8efb98ae |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | rhel65dvd2 |
| owner | 9c03022ea2a146b78c495cc9a00b0487 |
| protected | False |
| size | 3347902464 |
| status | active |
| updated_at | 2014-05-27T21:15:47.330608 |
| virtual_size | None |
+----------------------------+--------------------------------------+
6.4.2 Importing ISO images by using the GUI
Follow these steps to import ISO images by using the graphical user interface (GUI):
1. To import ISO images or qcow2 images into PowerVC by using the GUI, click Images on
the left navigation panel in PowerVC. Then, click Upload. Enter the image name,
operating system, and image type, as shown in Figure 6-20. Click Browse to navigate to
the ISO image. Select the ISO image. Finally, click Upload.
Figure 6-20 Upload Image window
Note: This process takes a few seconds or minutes, depending on the network
bandwidth and the size of the image.
2. After the ISO image is successfully imported, the ISO image appears on the left navigation
panel of the PowerVC Home page, as shown in Figure 6-21.
Figure 6-21 ISO images that were imported to PowerVC
3. The status of the ISO images can be verified by clicking the Images icon on the left
navigation panel to open the Images view that is shown in Figure 6-22.
Figure 6-22 Status of the imported ISO image
4. Click the rhel65dvd2 image to get details, such as the ID, as shown in Figure 6-23.
Figure 6-23 RHEL ISO image details
The images are in the /var/lib/glance/images/ directory. Example 6-2 displays the ISO
image file based on the ID in the Images interface that is shown in Figure 6-23.
Example 6-2 ISO image location and naming in PowerVC
[admin@dpowervckvm ~]$ ls /var/lib/glance/images
a898e706-c835-42c6-87c2-e53d8efb98ae
6.4.3 Deploying an RHEL ISO image
After an ISO image is imported, you can deploy it to a VM. This VM will be a base that is
ready for future image captures and the automatic deployments of other VMs. Follow these
steps:
1. From the Images window, on the left navigation panel (Figure 6-24), select the image and
click Deploy.
Figure 6-24 Select the image for deployment
2. After the image is selected for deployment, you must specify the following parameters for
the target VM before any deployment can start (Figure 6-25):
– VM name
– Target host or host group
– Compute template
The following default values can be overridden when they are available:
• Processors
• Processor units
• Memory size
• Disk size
– Network template
– VM’s IP address, or PowerVC can select an IP address automatically from the IP pool
Figure 6-25 Virtual machine deployment parameters
3. Complete the required information, and click Deploy to start the VM’s deployment. During
the deployment process, PowerVC displays several messages. Figure 6-26 shows the
deployment in-progress message.
Figure 6-26 Deployment in-progress message
4. Figure 6-27 shows the successful deployment message.
Figure 6-27 Successful deployment verification message
5. The VM’s deployment can be monitored from the left navigation area also, as shown in
Figure 6-28.
Figure 6-28 Virtual Machines view with highlighted State and Health columns
6. Click the name to see the detailed Information and Specifications sections about the
deployed image, as shown in Figure 6-29.
Figure 6-29 Detailed information
7. The sections can be collapsed and expanded as needed. Figure 6-30 shows the
expanded Network Interfaces and Details sections and the collapsed Information and
Specifications sections.
Figure 6-30 Detailed information with expanded or collapsed sections
8. The Active status and OK health mean that the VM is deployed. Although this status
seems definitive, you still must perform the initial Linux installation manually.
9. The machine is prepared and ready for the operating system (OS) installation. A shutdown
is required. Select the deployed VM and click Stop, as shown in Figure 6-31.
Figure 6-31 Stopping the virtual machine
Linux installation for the virtual machine
The following steps describe the manual installation of a Linux VM by using an ISO image:
1. Start the VM by clicking Start on PowerVC. When the VM is started, the state is Active,
as shown in Figure 6-32.
Figure 6-32 Virtual machine started and active
2. After the VM status is Active and Health is OK, proceed with the manual installation steps.
3. Open a remote console connection from the PowerKVM command line to the VM by using
the virsh console command. First, list all of the VMs by running the virsh list --all
command. Example 6-3 shows the output for the virsh command.
Example 6-3 virsh list --all output
[admin@powerkvm ~]# virsh list --all
Id Name State
----------------------------------------------------
- linux20-36d9ca31-00000017 shut off
[admin@powerkvm ~]#
4. Copy the name of the VM and run the command:
virsh console [virtual_machine_name]
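For example, using the VM name from the Example 6-3 output:
virsh console linux20-36d9ca31-00000017
To leave the console later, press Ctrl+], which is the virsh console escape sequence.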
Note: This extra manual installation step is necessary only for ISO image deployment,
not for captured VMs.
Tip: When you select the VM, the action buttons become active. If no VM is selected,
all of the buttons remain inactive (gray).
Note: The Health status might remain in the Warning state for several minutes.
5. This command opens a remote virtual console with the selected VM. Press any key to get
the initial input. You see the “Disc Found” message after RHEL boots, as shown in
Example 6-4.
Example 6-4 Virtual console that shows Disc Found message
Welcome to Red Hat Enterprise Linux for ppc64
+-----------| Disc Found |-----------+
|                                    |
| To begin testing the media before  |
| installation press OK.             |
|                                    |
| Choose Skip to skip the media test |
| and start the installation.        |
|                                    |
|      +----+        +------+        |
|      | OK |        | Skip |        |
|      +----+        +------+        |
|                                    |
|                                    |
+------------------------------------+
<Tab>/<Alt-Tab> between elements | <Space> selects | <F12> next screen
6. Follow the instructions to complete the Linux installation. When the installation finishes,
the VM is ready to be captured and deployed several times.
6.5 Capture a virtual machine
A VM can be captured when it is in the Active state or a powered-off state.
This section describes how to capture a VM that is running and managed by PowerVC, and
covers the necessary steps:
1. Install cloud-init on the VM that you want to capture. You need to perform this step only
the first time that you capture a VM.
2. Perform any pre-capture preparations, such as deleting or cleaning up log files, on the VM.
For SLES VMs, change the devices so that they are mounted by device name or
Universally Unique Identifier (UUID).
Before you can capture a VM, you must ensure that the following requirements are met:
Your PowerVC environment is configured as described in 6.2, “Set up PowerVC Standard
managing PowerKVM” on page 188.
The host on which the VM is configured is registered in PowerVC.
When you capture VMs that use local storage, the /var/lib/glance/images/ directory on
the PowerVC management server is used as the repository for storing the qcow2 and ISO
images. The file system that contains the /var/lib/glance/images/ directory must have
enough space to store the captured images.
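For example, you can check the available space in that file system on the PowerVC
management server before a capture:
df -h /var/lib/glance/images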
6.5.1 Install cloud-init on the virtual machine
The cloud-init script enables VM activation and initialization. It is widely used with
OpenStack.
Before you capture a VM, install the cloud-init initialization package. This package is available
at /opt/ibm/powervc/images/cloud-init in PowerVC.
Install the required dependencies
Before you install cloud-init, you must install its dependencies, such as python-boto, from a
package repository. You can use Yellowdog Updater, Modified (YUM) with the Extra
Packages for Enterprise Linux (EPEL) repository, or any other package manager. Not all
dependencies are available in the regular RHEL repository.
For SLES, install the dependencies that are provided:
ftp://ftp.unicamp.br/pub/linuxpatch/cloud-init-ppc64/sles11
For RHEL 6 and 7, follow these steps:
1. Install the dependencies from the FTP location:
ftp://ftp.unicamp.br/pub/linuxpatch/cloud-init-ppc64
2. Add the EPEL YUM repository to get the dependent Red Hat Package Managers (RPMs):
– Run the following commands to set up the repository for RHEL 6:
wget http://dl.fedoraproject.org/pub/epel/6Server/ppc64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6*.rpm
– Run the following commands to set up the repository for RHEL 7:
wget http://dl.fedoraproject.org/pub/epel/7/ppc64/e/epel-release-7-5.noarch.rpm
rpm -Uvh epel-release-7*.rpm
Install cloud-init
Install the appropriate cloud-init RPM for your OS from
/opt/ibm/powervc/images/cloud-init:
For RHEL 6, install cloud-init-0.7.4-*.el6.noarch.rpm.
For RHEL 7, install cloud-init-0.7.4-*.el7.noarch.rpm from the
/opt/ibm/powervc/images/cloud-init/rhel location.
Important: If you are installing the cloud-init package to capture a VM, and the activation
engine is installed, you must uninstall the activation engine. To uninstall the activation
engine, see “Preparing a virtual machine with activation-engine” on page 151.
Note: The EPEL RPM packages might be renamed with the updated version. You can
obtain the new versions from the following page with the correct version selected:
http://dl.fedoraproject.org/pub/epel/
Modify the cloud.cfg file
After you install cloud-init, modify the cloud.cfg file that is available at /etc/cloud/cloud.cfg
with the following values, according to your OS.
For RHEL, update the cloud.cfg file with the following values:
disable_root: 0
ssh_pwauth: 1
ssh_deletekeys: 1
For SLES, edit the following fields in the cloud.cfg file:
1. Remove the following field:
users: -root
2. Add the following fields:
– ssh_pwauth: true
– ssh_deletekeys: true
For both RHEL and SLES, add the following new values to the cloud.cfg file:
disable_ec2_metadata: True
datasource_list: ['ConfigDrive']
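Taken together, a minimal sketch of the resulting entries in /etc/cloud/cloud.cfg on RHEL:
disable_root: 0
ssh_pwauth: 1
ssh_deletekeys: 1
disable_ec2_metadata: True
datasource_list: ['ConfigDrive']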
For SLES only, after you update and save the cloud.cfg file, run the following commands:
chkconfig -s cloud-init-local on
chkconfig -s cloud-init on
chkconfig -s cloud-config on
chkconfig -s cloud-final on
For RHEL 7, ensure that the following conditions are met on the VM that you are capturing or
deploying:
SELinux is set to permissive or disabled.
NetworkManager must be installed and enabled.
Ensure that the net-tools package is installed.
Edit all of the /etc/sysconfig/network-scripts/ifcfg-eth* files and update
NM_CONTROLLED=no in them.
Remove the MAC address information
After you install the cloud-init initialization package, remove the Media Access Control (MAC)
address information:
1. Replace /etc/udev/rules.d/70-persistent-net.rules with an empty file. (The .rules file
contains network persistence rules, including the MAC address.)
2. Replace /lib/udev/rules.d/75-persistent-net-generator.rules with an empty file,
which generates the .rules file.
Note: This package is not installed by default when you select the Minimal Install
software option during the installation of RHEL 7 from an ISO image.
Note: The recommended action is to replace the previous files with empty files rather
than deleting the files. If you delete the files, you might receive an udev kernel warning
at boot time.
3. On Fedora-based images, remove the HWADDR line from
/etc/sysconfig/network-scripts/ifcfg-eth0.
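Taken together, a minimal shell sketch of these steps, run as root on the VM before capture
(RHEL paths shown, as listed above):
# Truncate the persistence rules files rather than deleting them
> /etc/udev/rules.d/70-persistent-net.rules
> /lib/udev/rules.d/75-persistent-net-generator.rules
# Remove the recorded MAC address from the interface configuration
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0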
6.5.2 Change devices to be mounted by name or UUID
For SLES virtual servers, use literal device names rather than symbolic links. By default,
devices are mounted by ID (the /dev/disk/by-id paths), which means that they are
represented by symbolic links.
You must change the devices so that they are mounted by device name or UUID rather than
by ID. You must perform this task before you capture a SLES VM for the first time. After you
capture a SLES VM for the first time, you can capture and deploy an image of the resulting
VM without performing this task.
To change the devices so that they are mounted by device name or UUID, complete the
following steps:
1. Search the file system table /etc/fstab for the presence of symbolic links. Symbolic links
will look like /dev/disk/by-*.
2. Store the mapping of the /dev/disk/by-* symbolic links to their target devices in a
scratch file by running this command:
ls -l /dev/disk/by-* > /tmp/scratchpad.txt
The contents of the scratchpad.txt file might look like Example 6-5.
Example 6-5 Symbolic links mapping
/dev/disk/by-id:
total 0
lrwxrwxrwx 1 root root 9 Apr 10 12:07
scsi-360050768028180ee380000000000603c -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 10 12:07
scsi-360050768028180ee380000000000603c-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 10 12:07
scsi-360050768028180ee380000000000603c-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07
scsi-360050768028180ee380000000000603c-part3 -> ../../sda3
lrwxrwxrwx 1 root root 9 Apr 10 12:07
wwn-0x60050768028180ee380000000000603c -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 10 12:07
wwn-0x60050768028180ee380000000000603c-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 10 12:07
wwn-0x60050768028180ee380000000000603c-part2 -> ../../sda2
Tip: The /etc/sysconfig/network-scripts file path for the HWADDR applies to RHEL
only. For example, for the ifcfg-eth0 adapter on RHEL, remove the HWADDR line from
/etc/sysconfig/network-scripts/ifcfg-eth0. For SLES, the HWADDR path is
/etc/sysconfig/network. On SLES, remove the HWADDR line from
/etc/sysconfig/network/ifcfg-eth0.
Important: You must remove the network persistence rules in the image because they
cause the network interface in the instance to come up as an interface other than eth0.
Your image has a record of the MAC address of the network interface card when it was
first installed, and this MAC address is different each time that the instance boots.
lrwxrwxrwx 1 root root 10 Apr 10 12:07
wwn-0x60050768028180ee380000000000603c-part3 -> ../../sda3
/dev/disk/by-path:
total 0
lrwxrwxrwx 1 root root 9 Apr 10 12:07 scsi-0:0:1:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part3 -> ../../sda3
/dev/disk/by-uuid:
total 0
lrwxrwxrwx 1 root root 10 Apr 10 12:07 3cb4e486-10a4-44a9-8273-9051f607435e
-> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07 c6a9f4e8-4e87-49c9-b211-89086c2d1064
-> ../../sda3
/
3. Edit /etc/fstab, replacing the /dev/disk/by-* entries with the device names that the
symbolic links point to, as laid out in your scratchpad.txt file.
Example 6-6 shows what these lines might look like before you edit them.
Example 6-6 Sample device names before the change
/dev/disk/by-id/scsi-360050768028180ee380000000000603c-part2 swap swap
defaults 0 0
/dev/disk/by-id/scsi-360050768028180ee380000000000603c-part3 / ext3
acl,user_xattr 1 1
Example 6-7 shows what these lines might look like after you edit them.
Example 6-7 Sample device names after the change
/dev/sda2 swap swap defaults 0 0
/dev/sda3 / ext3 acl,user_xattr 1 1
4. Edit the /etc/lilo.conf file so that the boot and root lines correspond to the device
names.
Example 6-8 shows what these lines might look like before you edit them.
Example 6-8 lilo.conf file before change
boot = /dev/disk/by-id/scsi-360050768028180ee380000000000603c-part1
root = /dev/disk/by-id/scsi-360050768028180ee380000000000603c-part3
Example 6-9 shows what these lines might look like after you edit them.
Example 6-9 lilo.conf file after change
boot = /dev/sda1
root = /dev/sda3
Important: For the following steps, ensure that you use the device names in your own
scratchpad.txt file. The following values are merely examples.
5. Run the lilo command.
6. Run the mkinitrd command.
6.5.3 Capture the virtual machine
Before you can capture a VM, the VM must meet specific requirements. If you do not prepare
the VM before you capture it, you might get errors when you deploy the resulting image.
The following steps describe how to capture a VM by using the cloud-init initialization
package:
1. Install cloud-init on the VM that you want to capture. You only perform this step the first
time that you capture a VM. For more information about how to install cloud-init, see 6.5.1,
“Install cloud-init on the virtual machine” on page 213.
2. If the VM that you want to capture is running a SUSE Linux (SLES) operating system,
change the device mounting. For more information, see 6.5.2, “Change devices to be
mounted by name or UUID” on page 215.
3. Perform any pre-capture preparation, such as deleting or cleaning up log files, on the VM.
4. From the PowerVC home window, click Virtual Machines, select the VM to capture, and
click Capture.
5. When the message that is shown in Figure 6-33 appears, click Continue to proceed.
Figure 6-33 Warning message before you capture the VM
Note: The installation steps for cloud-init might change with the update of cloud-init or
PowerVC. Check the latest information about the cloud-init installation at the IBM
Knowledge Center:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.help.doc/powervc_install_cloudinit_kvm.html
6. Name the new image. Figure 6-34 shows a text box to enter the name, and it displays the
required default resources for this image.
Figure 6-34 Capture window
7. Click Capture to continue. PowerVC starts to capture the VM. PowerVC presents the
message that is shown in Figure 6-35.
Figure 6-35 Snapshot in-progress message
Note: You can override the amount of required resources when you deploy a new VM
with this image.
The process can take from a few seconds to a few minutes. To see the status of the capture
operation, click Virtual Machines. Then, check the Task column to see the status of the
snapshot, as shown in Figure 6-36.
Figure 6-36 Status from the Virtual Machines view
You can see the capture status by clicking Images, as shown in Figure 6-37.
Figure 6-37 Snapshot status from the Images view
Important: It is not necessary to shut down the VM that you want to capture. You can
capture images dynamically from VMs that are running, but you might need to review and
check any inconsistency in the data or applications outside of the operating system.
6.6 Deploy images
The process to create a VM by using an existing image is simple. The process is completely
automated by PowerVC.
Follow these few steps to deploy a new VM:
1. Click Images, select the image that you want to deploy, and then click Deploy. Complete
the requested information. Figure 6-38 displays the first two sections of the Deploy
window.
Figure 6-38 General and network sections of the window to deploy a VM
2. Figure 6-39 shows the expanded Activation Input section. In this section, you can upload
scripts or add configuration data. After the VM is deployed, the script or data automatically
configures the VM according to your requirements.
Figure 6-39 Activation Input section of the window to deploy a virtual machine
After you click Deploy, PowerVC displays a message similar to the message that is shown in
Figure 6-40.
Figure 6-40 Deployment is started message
3. When the deployment is complete, you can click Virtual Machines to see the new
deployed image, as shown in Figure 6-41.
Figure 6-41 Virtual Machines view
Note: The network is configured automatically by PowerVC during the task to build the VM.
When the deployment task finishes, the VM is up, running, and connected to the network.
6.7 Resize virtual machines
PowerVC Standard managing PowerKVM can resize VMs with a simple procedure.
Follow these steps to resize your VMs:
1. From the page that lists all VMs, select the VM to resize.
2. Click Resize to open the window that is shown in Figure 6-42.
Figure 6-42 Resize virtual machine window
Note: You can select a compute template to populate the required resource values or edit
each field manually.
Important: If you change the size of the disk, ensure that you go into the OS of the VM and
complete the required steps so that the OS can use the new space that was configured on
the disk. For more information, see your OS documentation.
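As a hedged illustration only, on a Linux guest with an ext4 file system on a virtio disk,
growing into the new space might look like the following sketch (growpart is part of the
cloud-utils package; the device names are assumptions):
growpart /dev/vda 1       # extend partition 1 into the new disk space
resize2fs /dev/vda1       # grow the ext4 file system online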
6.8 Suspend and resume virtual machines
PowerVC can suspend and resume a running VM. To suspend a VM, select it and then click
Suspend. Two methods exist to suspend a VM as shown in Figure 6-43.
Figure 6-43 Suspend or pause a virtual machine
After you select the option, click OK. The VM state changes to paused or suspended. To
resume the VM, select it and click Resume.
6.9 Restart a virtual machine
PowerVC can restart a VM. Follow these steps:
1. To restart a VM, select the VM and click Restart.
2. As the Restart window shows (Figure 6-44), you can select either a soft restart or a hard
restart.
Figure 6-44 Restart a virtual machine
Important: For VMs that are in a suspended or paused state, the only available restart
option is a hard restart.
6.10 Migrate virtual machines
PowerVC also supports the migration of VMs between PowerKVM hosts if the VM meets the
migration requirements; for example, Network File System (NFS) shared storage is
configured for the PowerKVM hosts. For the detailed requirements of VM migration, see the
IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.help.doc/powervc_relocation_reqs_kvm.html
Follow these steps to migrate a VM:
1. Go to the Virtual Machines page, select the VM to migrate, and click Migrate. Select the
destination host, as shown in Figure 6-45, and then click Migrate.
Figure 6-45 Migrate a virtual machine
2. The VM is migrated to the destination host live, as shown in Figure 6-46.
Figure 6-46 Migrating a virtual machine
6.11 Restarting virtual machines remotely
With PowerVC version 1.2.3 or later, you can restart VMs remotely if a PowerKVM host fails.
Follow these steps:
1. After a PowerKVM host fails, go to the Hosts page, select the failed host, and click
Remotely Restart Virtual Machines, as shown in Figure 6-47.
Figure 6-47 Remotely Restart Virtual Machines option
2. Then, select the VM (or all VMs) and the destination host, as shown in Figure 6-48, and
click Remote Restart.
Figure 6-48 Select virtual hosts to restart remotely
3. The selected VMs are restarted remotely on the destination PowerKVM host, as shown in
Figure 6-49.
The remote restart function provides a new way to enhance the availability of applications.
Figure 6-49 Virtual machines that were restarted remotely
6.12 Delete virtual machines
PowerVC can delete a VM. The process deletes the VM and the associated storage.
Follow these steps to delete a VM:
1. To delete a VM, select it and click Delete.
2. When you see a confirmation message that is similar to the message that is shown in
Figure 6-50, click OK if the message shows the correct machine.
Figure 6-50 Delete a virtual machine
Note: Before you use the remote restart function, you need to set up PowerVC to meet the
requirements. For the detailed remote restart requirements, see the IBM Knowledge
Center:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.hel
p.doc/powervc_recovery_reqs_kvm.html
6.13 Create and attach volumes
PowerVC can create volumes in the available storage providers. These volumes can be
assigned to a VM later, or it is possible to create and attach volumes in a single step.
To create a volume, click Storage Volumes, and then click Create. A window that is similar to
the window that is shown in Figure 6-51 opens.
Figure 6-51 Create Volume window
It is possible to attach the volume later to an existing VM.
6.14 Attach volumes
PowerVC can attach a volume to existing VMs. It is also possible to create the volume and
attach it in the same operation.
To attach volumes, click Virtual Machines, select the VM, and click Attach Volume.
In the Attach Volume window (Figure 6-52), click Attach a new volume to this virtual
machine to add a new volume. Enter the storage template, volume name, description, and
size (GB). Click Attach.
Figure 6-52 Attaching new volume to a virtual machine
To attach an existing volume, click Attach an existing volume to this virtual machine. A list
of volumes will be displayed, as shown in Figure 6-53.
Figure 6-53 Attach an existing volume to this virtual machine
It is possible to attach volumes to paused and suspended VMs.
Note: When you attach volumes to Linux VMs, additional work is required for the OS to
discover the volumes. For more information, check the documentation for your Linux
distribution.
Chapter 7. PowerVC lab environment
This chapter describes the test environment that we used to write this book, to demonstrate
the IBM Power Virtualization Center Standard Edition (PowerVC) features, and to capture the
screen examples.
We installed, configured, and used several environments to share our experience with this
IBM software.
This chapter includes the following topics:
7.1, “PowerVC Standard Edition lab environment for managing PowerVM” on page 234
7.2, “PowerVC Standard managing PowerKVM lab” on page 243
7.1 PowerVC Standard Edition lab environment for managing
PowerVM
This section describes the hardware components that were used in the Standard Edition lab
environment for managing PowerVM.
Figure 7-1 shows the lab environment that was used for PowerVC. It includes the real host
names that were used on the PowerVC domain.
Figure 7-1 PowerVC Standard Edition hardware lab for managing PowerVM
The PowerVC management station (labeled RHEL7.1LE in Figure 7-1) that was used for lab
tests is deployed on one of the IBM POWER8 S824 servers that is managed by PowerVC.
However, this virtual machine (VM) is not managed by PowerVC.
7.1.1 Hardware Management Console
Table 7-1 shows the hardware specifications of the Hardware Management Console (HMC)
that is used to manage the Power Systems infrastructure for the lab environment.
Table 7-1 HMC that was used
Hardware | Type | Model | Version | Release
HMC | 7042 | CR7 | Version 8, Build Level 20150602.1 | Release 8.3.0, Service Pack 0
7.1.2 Power Systems hardware
Table 7-2 shows the IBM Power Systems servers that were used in the PowerVC Standard
Edition lab environment for managing PowerVM.
Table 7-2 Hardware test environment
Host name | Hardware | Model | Type | Firmware level
P8_9 | IBM POWER8 | S824 | 8286-42A | FW830.00 (TV830.38)
P8_10 | IBM POWER8 | S824 | 8286-42A | FW830.00 (TV830.38)
7.1.3 Storage infrastructure
This section describes the storage components that were used for testing.
Storage SAN switch
Table 7-3 shows the specifications of the two storage area network (SAN) switches that were
used in this test lab.
Table 7-3 Storage switch specifications
Manufacturer | Type | Fabric operating system version
IBM | 2498-B40 | v7.0.2a
IBM SAN Volume Controller
Table 7-4 lists the specifications of the storage IBM SAN Volume Controller that was used in
the original book test lab.
Table 7-4 IBM SAN Volume Controller specifications
Manufacturer | Type | SAN Volume Controller operating system version
IBM | 2145-8G4 | -[GFE145AUS-1.15]-
7.1.4 Storage configuration
This book covers multiple versions of PowerVC, as explained in 2.1, “Previous versions and
milestones” on page 10. The environment that is described next was used in the original book
about PowerVC versions 1.2.0 and 1.2.1. This section also describes the lab environment that
was used in the previous publication.
SAN configuration for PowerVC versions 1.2.0 and 1.2.1 tests
This section describes the storage device configuration that was used in the PowerVC
Standard Edition lab environment for managing PowerVM.
Figure 7-2 on page 237 shows the layers of physical and logical devices. Physical storage
devices are managed by the SAN Volume Controller. The test environment includes one IBM
DS8300 storage device and one IBM DS4800 storage device that are attached to the SAN
Volume Controller.
The SAN Volume Controller manages the external storage and creates physical disk pools. It
also provides protection and thin-provisioning features. The DS8300 is configured with two
storage pools, which are named SSP_powervc and DS8300_site2_p01. The DS4800 is
configured with one storage pool, which is named DS4800_site2_p02.
Storage pools are a group of physical storage devices. They can be partitioned in units of
storage that are called logical unit numbers (LUNs). These LUNs can be mapped to a host.
The storage provider layer converts LUNs to storage pools and then converts the storage
pools to physical storage devices.
For more information about the IBM SAN Volume Controller, see Implementing the IBM
System Storage SAN Volume Controller V7.4, SG24-7933.
The Virtual I/O Servers add a logical layer between the storage and the VMs. The Virtual I/O
Servers map a virtual disk of a virtual I/O (VIO) client to any of these objects:
An entire LUN
A part of a LUN or group of LUNs by using volume groups and logical volumes that are
defined on the VIOS
A file by using a file-backed device
The Virtual I/O Servers can map devices to the VMs by using virtual Small Computer System
Interface (vSCSI) or N_Port ID Virtualization (NPIV).
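For illustration, mapping an entire LUN to a client as a vSCSI device from the VIOS
command line might look like the following sketch (hdisk5, vhost0, and vtscsi_vm1 are
hypothetical names; PowerVC drives such mappings automatically):
mkvdev -vdev hdisk5 -vadapter vhost0 -dev vtscsi_vm1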
For more information about PowerVM storage virtualization, see IBM PowerVM Virtualization
Introduction and Configuration, SG24-7940, and IBM PowerVM Enhancements What is New
in 2013, SG24-8198.
As shown in Figure 7-2, the Virtual I/O Server (VIOS) accesses the SAN Volume Controller
pools’ LUNs by using NPIV, and the SSP logical units use vSCSI.
Figure 7-2 Physical to logical management layers
Virtual I/O Servers support shared storage pools. Shared storage pools are groups of hdisks
(LUNs) that are accessed simultaneously by several Virtual I/O Servers to create a common
storage space. Any VIOS member of the SSP can create a logical unit in this space. This
logical unit is visible from all Virtual I/O Servers in the SSP and can be mapped to a VM as a
vSCSI device.
As Figure 7-3 shows, the lab contains an SSP that is named powervc_cluster, which is
stored in the DS8300 that is managed by the SAN Volume Controller. The DS8300 LUNs are
accessed by all Virtual I/O Servers. They are used to create an SSP. Logical units are created
in this SSP and mapped to the VMs by using vSCSI.
Figure 7-3 Shared storage pools
For the VM operating system, the access to the storage does not require any special device
or driver configuration other than the standard configuration for vSCSI disk devices.
PowerVC is the tool that integrates all of these layers and creates a centralized environment
to manage the storage and the options. As shown in Figure 7-2 on page 237, PowerVC can
manage the SAN Volume Controller configuration and the SSP configuration, and it can
create NPIV or vSCSI connections between the storage and the VMs.
SAN configuration for PowerVC versions 1.2.2 and 1.2.3 tests
These PowerVC versions introduce significant changes to storage support. EMC is now
supported. The newest versions of the IBM XIV Storage System and IBM Storwize are also
supported.
Figure 7-4 shows the storage configuration that was used for this book. For this lab, it was not
necessary to test the SSPs because no new features or functions were announced for these
versions.
Figure 7-4 Storage configuration that was set for this publication
7.1.5 Storage connectivity groups and port tagging
Storage connectivity groups and port tagging are features that were introduced in PowerVC version 1.2.1. No new functions or updates to them were announced for PowerVC versions 1.2.2 and 1.2.3, so the lab that is described here corresponds to the tests that were performed on PowerVC version 1.2.1.
Figure 7-5 shows the Fibre Channel (FC) adapters and the tags that were used in the lab. Port tagging applies only to storage that is accessed through NPIV; it is not used for storage that is backed by an SSP.
In Figure 7-5, the yellow adapters do not support NPIV. Therefore, they are not tagged, and
they are used for SSP access only. The three green adapters support NPIV. We defined two
tags for partitioning the ports into a development environment and a production environment.
In the figure, the red striped ports are tagged as Prod and the blue striped ports are tagged as
Dev.
By mixing storage connectivity groups and tags, you can dedicate Virtual I/O Servers and FC
ports to classes of traffic.
Figure 7-5 Storage groups and tagged ports configuration lab
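Before ports are tagged, it is worth confirming which FC ports are NPIV capable on each VIOS. The lsnports command in the padmin shell reports this; the adapter names and values that follow are examples, not lab output:

   $ lsnports
   name   physloc                      fabric tports aports swwpns awwpns
   fcs0   U78C9.001.ABC1234-P1-C3-T1   1      64     62     2048   2042
   fcs1   U78C9.001.ABC1234-P1-C3-T2   0      64     64     2048   2048

Only NPIV-capable ports appear in the output, so adapters like the yellow ones in Figure 7-5 are not listed at all. A fabric value of 1 means that the attached switch also supports NPIV; a value of 0 means that the port is NPIV capable but its fabric is not.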
The lab contains four storage connectivity groups, as shown in Figure 7-6. Two of the storage connectivity groups are defined by PowerVC by default:
- One storage connectivity group contains all of the Virtual I/O Servers of all of the managed hosts that access the storage controllers.
- One storage connectivity group contains all of the Virtual I/O Servers of all of the hosts that belong to the SSP.
We also defined two storage connectivity groups for storage-backed devices: Dev and Prod. These storage connectivity groups contain the same three Virtual I/O Servers (those Virtual I/O Servers that use NPIV-compatible adapters). One storage connectivity group uses the ports that are tagged as Dev, and the other storage connectivity group uses the ports that are tagged as Prod.
Figure 7-6 Storage connectivity groups in the lab
Figure 7-7 shows the ports that have no tags because they do not support NPIV, together with the ports that are tagged as Prod or Dev.
Figure 7-7 Fibre Channel port tags that are used in the lab
7.1.6 Software stack for PowerVC lab environment
Table 7-5 shows the levels of software that were used to test PowerVC.
Table 7-5 Software versions and releases
Software                          Operating system or firmware version
Virtual I/O Server                2.2.3.52
Red Hat Enterprise Linux (RHEL)   7.1
PowerVC                           1.2.3
IBM AIX operating system          7.1 TL 3
IBM i                             7.2
Storage SAN switch                7.0.2a
SAN Volume Controller             6.4.1.4 (build 75.3.1303080000)
Note: No specific requirements existed for the network switch, so we did not update its
configuration during the lab tests.
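To audit a similar environment against Table 7-5, most levels can be confirmed from each component's own command line. A short sketch follows; run each command on the system in question:

   $ ioslevel                  # on a VIOS: reports the level, for example 2.2.3.52
   $ oslevel -s                # on an AIX VM: reports the technology level and service pack
   $ cat /etc/redhat-release   # on the PowerVC management host: reports the RHEL release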
7.2 PowerVC Standard managing PowerKVM lab
This section describes all of the components that were used in the PowerVC Standard Edition version 1.2.3 managing PowerKVM lab, including the installation and setup.
Figure 7-8 shows the lab environment that was created for PowerVC, including the real host names that were used in the PowerVC domain.
Figure 7-8 PowerVC Standard managing PowerKVM lab setup
Important: PowerVC Standard Edition supports only internal or Internet SCSI (iSCSI)
disks when it manages PowerKVM. See 3.1.2, “PowerVC Standard Edition requirements”
on page 30.
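As background, a PowerKVM host attaches iSCSI disks with the standard open-iscsi tools. The following minimal sketch shows the idea; the portal address and target IQN are placeholders:

   # discover the targets that the portal offers
   iscsiadm -m discovery -t sendtargets -p 192.168.1.50
   # log in to a target so that its LUNs appear as local disks
   iscsiadm -m node -T iqn.2015-10.com.example:tgt1 -p 192.168.1.50 --login

PowerVC orchestrates this attachment when it deploys virtual machines or attaches volumes; the commands only show what happens on the host.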
Abbreviations and acronyms
ABI application binary interface
AC alternating current
ACL access control list
AFPA Adaptive Fast Path Architecture
AIO Asynchronous I/O
APAR authorized program analysis report
API application programming interface
ARP Address Resolution Protocol
ASMI Advanced System Management Interface
BFF Backup File Format
BIND Berkeley Internet Name Domain
BIST Built-In Self-Test
BLV Boot Logical Volume
BOOTP Bootstrap Protocol
BOS Base Operating System
BSD Berkeley Software Distribution
CA certificate authority
CATE Certified Advanced Technical Expert
CD compact disc
CD-R compact disc recordable
CD-ROM compact-disc read-only memory
CDE Common Desktop Environment
CEC central electrical complex
CHRP Common Hardware Reference Platform
CLI command-line interface
CLVM Concurrent LVM
CPU central processing unit
CRC cyclic redundancy check
CSM Cluster Systems Management
CUoD Capacity Upgrade on Demand
CVUT Czech Technical University
DCM Dual Chip Module
DES Data Encryption Standard
DGD Dead Gateway Detection
DHCP Dynamic Host Configuration Protocol
DLPAR dynamic LPAR
DMA direct memory access
DNS Domain Name Server
DR dynamic reconfiguration
DRM dynamic reconfiguration manager
DVD digital versatile disc
EC EtherChannel
ECC error correction code
EGO Enterprise Grid Orchestrator
EOF end-of-file
EPOW emergency power-off warning
ERRM Event Response resource manager
IBM ESS IBM Enterprise Storage Server®
FC Fibre Channel
FC-AL Fibre Channel Arbitrated Loop
FDX full duplex
FLOP floating point operation
FRU field-replaceable unit
FTP File Transfer Protocol
IBM GDPS® IBM Geographically Dispersed Parallel Sysplex™
GID group ID
IBM GPFS IBM General Parallel File System
GUI graphical user interface
IBM HACMP™ IBM High Availability Cluster Multiprocessing
HBA host bus adapter
HMC Hardware Management Console
HTML Hypertext Markup Language
HTTP Hypertext Transfer Protocol
Hz hertz
I/O input/output
IBM International Business Machines
ID identifier
IDE Integrated Device Electronics
IEEE Institute of Electrical and Electronics Engineers
IP Internet Protocol
IPAT IP address takeover
IPL initial program load
IPMP IP network multipathing
iSCSI Internet SCSI
ISV independent software vendor
ITSO International Technical Support Organization
IVM Integrated Virtualization Manager
IaaS Infrastructure as a Service
JFS journaled file system
JRE Java runtime environment
KVM kernel-based virtual machine
L1 Level 1
L2 Level 2
L3 Level 3
LA Link Aggregation
LACP Link Aggregation Control Protocol
LAN local area network
LDAP Lightweight Directory Access Protocol
LED light-emitting diode
LMB Logical Memory Block
LPAR logical partition
LPM Live Partition Migration
LPP licensed program product
LU logical unit
LUN logical unit number
LV logical volume
LVCB Logical Volume Control Block
LVM Logical Volume Manager
MAC Media Access Control
MBps megabytes per second
MCM multiple chip module
ML Maintenance Level
MP Multiprocessor
MPIO Multipath I/O
MTU maximum transmission unit
Mbps megabits per second
NFS Network File System
NIB Network Interface Backup
NIC network interface controller
NIM Network Installation Management
NIMOL NIM on Linux
NPIV N_Port Identifier Virtualization
NVRAM nonvolatile random access memory
N_PORT Node Port
ODM Object Data Manager
OS operating system
OSPF Open Shortest Path First
PCI Peripheral Component Interconnect
PCI Express Peripheral Component Interconnect Express
PCM path control module
PIC Pool Idle Count
PID process ID
PKI public key infrastructure
PLM Partition Load Manager
POST power-on self-test
POWER Performance Optimization with Enhanced RISC (Architecture)
PPC Physical Processor Consumption
PPFC Physical Processor Fraction Consumed
PTF program temporary fix
PTX Performance Toolbox
PURR Processor Utilization Resource Register
PV physical volume
PVID Port Virtual LAN Identifier
PoE Proof of Entitlement
QoS quality of service
RAID Redundant Array of Independent Disks
RAM random access memory
RAS reliability, availability, and serviceability
RBAC role-based access control
RCP Remote Copy
RDAC Redundant Disk Array Controller
RDO Red Hat OpenStack
REST Representational State Transfer
RHEL Red Hat Enterprise Linux
RIO remote input/output
RIP Routing Information Protocol
RISC reduced instruction-set computer
RMC Resource Monitoring and Control
RPC Remote Procedure Call
RPL Remote Program Loader
RPM Red Hat Package Manager
RSA Rivest-Shamir-Adleman algorithm
RSCT Reliable Scalable Cluster Technology
RSH Remote Shell
SAN storage area network
SCG storage connectivity group
SCSI Small Computer System Interface
SDD Subsystem Device Driver
SDDPCM Subsystem Device Driver Path Control Module
SEA shared Ethernet adapter
SLES SUSE Linux Enterprise Server
SMIT System Management Interface Tool
SMP symmetric multiprocessor
SMS system management services
SMT simultaneous multithreading
SP Service Processor
SPOT Shared Product Object Tree
SRC System Resource Controller
SRN service request number
SSA Serial Storage Architecture
SSH Secure Shell
SSL Secure Sockets Layer
SSP shared storage pool
SUID Set User ID
SVC SAN Volume Controller
TCP/IP Transmission Control Protocol/Internet Protocol
TL Technology Level
TLS Transport Layer Security
UDF Universal Disk Format
UDID Universal Disk Identification
VSAE Virtual Solutions Activation Engine
VG volume group
VGDA Volume Group Descriptor Area
VGSA Volume Group Status Area
VIOS Virtual I/O Server
VIPA virtual IP address
VLAN virtual local area network
VM virtual machine
VP virtual processor
VPD vital product data
VPN virtual private network
vSCSI virtual SCSI
VRRP Virtual Router Redundancy Protocol
VSD Virtual Shared Disk
WLM Workload Manager
WWN worldwide name
WWPN worldwide port name
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topics in this document. Note that some publications that are referenced in this list might be available in softcopy only.
- IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
- IBM PowerVM Virtualization Managing and Monitoring, SG24-7590
- IBM Power Systems HMC Implementation and Usage Guide, SG24-7491
- Implementing the IBM System Storage SAN Volume Controller V7.4, SG24-7933
- IBM PowerVM Enhancements What is New in 2013, SG24-8198
You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, drafts, and additional materials at the following website:
ibm.com/redbooks
Online resources
These websites are also relevant as further information sources:
- Information about IBM Platform Resource Scheduler:
  http://www.ibm.com/systems/platformcomputing/products/rs/
- Latest PowerVC Standard Edition requirements:
  http://ibm.co/1jC4Xx0
- IBM Knowledge Center:
  http://www.ibm.com/support/knowledgecenter/
- OpenStack:
  http://www.openstack.org/foundation/
  https://wiki.openstack.org/wiki/Main_Page
Help from IBM
IBM Support and downloads
ibm.com/support
IBM Global Services
ibm.com/services
ibm.com/redbooks
SG24-8199-02
ISBN 0738441031
Printed in U.S.A.
Back cover
Tables

2-1 PowerVC releases cross-referenced to OpenStack versions
2-2 Updated support matrix for SSP, NPIV, and vSCSI storage paths in PowerVC version 1.2.2
2-3 New functions that are introduced in PowerVC 1.2.3
2-4 Scaling capabilities for PowerKVM and PowerVM in PowerVC
2-5 List of supported and unsupported multiple disk combinations
3-1 Hardware and OS requirements for PowerVC Standard Edition
3-2 Minimum resource requirements for the PowerVC VM
3-3 Supported activation methods for managed hosts
3-4 HMC requirements
3-5 Supported virtualization platforms
3-6 Supported network hardware and software
3-7 Supported storage hardware for PowerVM
3-8 Supported storage hardware for PowerKVM
3-9 Supported security software
3-10 Processor compatibility modes
3-11 Preferred practices for shared Ethernet adapter
4-1 RHEL packages that relate to PowerVC
4-2 Options for the PowerVC install command
4-3 Available options for the powervc-uninstall command
4-4 Options for the powervc-backup command
4-5 Options for the powervc-restore command
4-6 PowerVC available commands
4-7 Commands for PowerVC Standard for managing PowerKVM
4-8 Options for the powervc-audit-export command
5-1 Information section fields
5-2 Specifications section’s fields
5-3 Details section’s fields
5-4 Modules and descriptions
5-5 Description of the fields in the Information section
5-6 Description of the fields in the Specifications section
5-7 Host states during the transition
7-1 HMC that was used
7-2 Hardware test environment
7-3 Storage switch specifications
7-4 IBM SAN Volume Controller specifications
7-5 Software versions and releases
Examples

2-1 The chdef commands to set the reserve policy and algorithm on new disks
2-2 How to check whether a host can use remote restart from PowerVC
2-3 Example of clouddev and ghostdev output
2-4 Obtain the values that are set on the ghostdev and clouddev attributes
3-1 Adding an admin user account with the useradd command
3-2 Verify users
3-3 Updating the admin user account with the usermod command
4-1 Installing the gettext package
4-2 Installing PowerVC
4-3 Installation completed
4-4 Uninstallation successful
4-5 Update successfully completed
4-6 Example of PowerVC backup
4-7 Mismatch between backup and recovery environments
4-8 Example of PowerVC recovery
4-9 powervc-audit command use
4-10 IBM Installation Toolkit sample output
4-11 RMC status
5-1 scratchpad.txt file
5-2 scratchpad.txt file
5-3 Specific device names for the /etc/fstab file
5-4 /etc/lilo.conf file
5-5 Specific devices names for the /etc/lilo.conf file
5-6 Commands to enable the activation engine
5-7 Output from the /opt/ibm/ae/AE.sh -R command
6-1 Importing a Red Hat ISO image
6-2 ISO image location and naming in PowerVC
6-3 virsh list --all output
6-4 Virtual console that shows Disc Found message
6-5 Symbolic links mapping
6-6 Sample device names before the change
6-7 Sample device names after the change
6-8 lilo.conf file before change
6-9 lilo.conf file after change
Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX®, DB2®, Enterprise Storage Server®, FlashCopy®, GDPS®, Geographically Dispersed Parallel Sysplex™, GPFS™, HACMP™, IBM®, IBM SmartCloud®, IBM Spectrum™, Parallel Sysplex®, POWER®, Power Systems™, POWER6®, POWER6+™, POWER7®, POWER7 Systems™, POWER7+™, POWER8®, PowerHA®, PowerVM®, Redbooks®, Redbooks (logo)®, Storwize®, SystemMirror®, and XIV®.

The following terms are trademarks of other companies:

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Other company, product, or service names may be trademarks or service marks of others.
Preface

IBM® Power Virtualization Center (PowerVC™) is an advanced enterprise virtualization management offering for IBM Power Systems™, which is based on the OpenStack framework. This IBM Redbooks® publication introduces PowerVC and helps you understand its functions, planning, installation, and setup. Starting with PowerVC version 1.2.2, the Express Edition offering is no longer available and the Standard Edition is the only offering.

PowerVC supports both large and small deployments, either by managing IBM PowerVM® that is controlled with the Hardware Management Console (HMC) or by managing PowerKVM directly. PowerVC can manage IBM AIX®, IBM i, and Linux workloads that run on POWER® hardware, including IBM PurePower systems.

PowerVC editions include the following features and benefits:
- Virtual image capture, deployment, and management
- Policy-based virtual machine (VM) placement to improve resource use
- Management of real-time optimization and VM resilience to increase productivity
- VM mobility with placement policies to reduce the burden on IT staff, in a simple-to-install and easy-to-use graphical user interface (GUI)
- An open and extensible PowerVM management system that you can adapt as you need and that runs in parallel with your existing infrastructure, preserving your investment
- A management system for existing PowerVM deployments

You will also find all the details about how we set up the lab environment that is used in this book.

This book is for experienced users of IBM PowerVM and other virtualization solutions who want to understand and implement the next generation of enterprise virtualization management for Power Systems. Unless stated otherwise, the content of this book refers to versions 1.2.2 and 1.2.3 of IBM PowerVC.

Authors

This book was produced by a team of specialists from around the world working at the International Technical Support Organization, Poughkeepsie Center.

Marco Barboni is an IT Specialist at the IBM Rome Software Lab in Italy. He has 4 years of experience in cloud virtualization and management in the IBM Power infrastructures field. He holds a degree in Information Technology from “Roma Tre” University. His areas of expertise include AIX administration, virtualization on Power, HMC, IBM Power Systems, IBM Linux on Power, and also IBM Systems Director and IBM PowerVC infrastructure management.
Guillermo Corti is an IT Architect at IBM Argentina. He has been with IBM since 2004 and has 20 years of experience with Power Systems and AIX. He has a degree in Systems from Moron University and 11 years of experience working in service delivery for North American accounts. His areas of expertise include Power Systems, AIX, IBM Linux on Power, and IBM PowerVM solutions.

Benoit Creau is an AIX Systems Engineer who works in large French banks (currently BNP Paribas). He has six years of experience managing client production environments with IBM Power Systems. His areas of expertise include AIX, Virtual I/O Servers, Power Systems, and PowerVC. He currently focuses on integrating new technology (IBM POWER8® and PowerVC) in client environments. He has participated in the community by writing a blog about Power Systems and related subjects for more than 5 years (chmod666.org).

Liang Hou Xu, PMP, is an IT Architect at IBM China. He has 16 years of experience in Power Systems and four years of experience in the cloud field. He holds a degree in Engineering from Tsinghua University. His areas of expertise include Power Systems, AIX, Linux, cloud, IBM DB2®, C programming, and Project Management.

The project that created this book was managed by: Scott Vetter, PMP

Thanks to the following people for their contributions to this project: Dave Archer, Senthil Bakthavachalam, David Bennin, Eric Brown, Ella Buslovich, Chun Shi Chang, Rich Conway, Joe Cropper, Rebecca Dimock, William Edmonds, Edward Fink, Nigel Griffiths, Nicolas Guérin, Kyle Henderson, Philippe Hermes, Amy Hieter, Greg Hintermeister, Bhrugubanda Jayasankar, Liang Jiang, Rishika Kedia, Sailaja Keshireddy, Yan Koyfman, Jay Kruemcke, Samuel D. Matzek, John R. Niemi, Geraint North, Sujeet Pai, Atul Patel, Carl Pecinovski, Taylor Peoples, Antoni Pioli, Jeremy Salsman, Douglas Sanchez, Edward Shvartsman, Anna Sortland, Jeff Tenner, Drew Thorstensen, Ramesh Veeramala, Christine Wang, and Michael Williams

Thanks to the authors of the previous editions of this book. The authors of the first edition, IBM PowerVC Version 1.2.0 and 1.2.1 Introduction and Configuration, which was published in October 2014, were Bruno Blanchard, Guillermo Corti, Sylvain Delabarre, Ho Jin Kim, Ondrej Plachy, Marcos Quezada, and Gustavo Santos.

Now you can become a published author, too!

Here is an opportunity to spotlight your skills, grow your career, and become a published author, all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base. Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html
Comments welcome

Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review Redbooks form found at: ibm.com/redbooks
- Send your comments in an email to: redbooks@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks

- Find us on Facebook: http://www.facebook.com/IBMRedbooks
- Follow us on Twitter: http://twitter.com/ibmredbooks
- Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806
- Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
- Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.html
Chapter 1. PowerVC introduction

IBM® Power Virtualization Center Standard Edition (PowerVC) is the next generation of enterprise virtualization management tools for IBM Power Systems. PowerVC incorporates a powerful yet simple and intuitive GUI and deep integration with IBM PowerVM virtualization technologies. PowerVC simplifies the management of virtualization for Power Systems servers that run the IBM AIX and Linux operating systems. It now also supports the IBM i operating system, which brings the same virtualization management functions to IBM i clients.

This publication provides introductory and configuration information for PowerVC. After we present an overview of PowerVC in this first chapter, we cover the following topics in subsequent chapters:
- Release reviews in Chapter 2, “PowerVC versions and releases” on page 9
- Planning information in Chapter 3, “PowerVC installation planning” on page 29
- Installation guidelines in Chapter 4, “PowerVC installation” on page 77
- General configuration and setup that are common to all variants of PowerVC in Chapter 5, “PowerVC Standard Edition for managing PowerVM” on page 97
- Information that is specific to using PowerVC Standard for managing PowerKVM in Chapter 6, “PowerVC Standard Edition for managing PowerKVM” on page 187
- A description of the test environment that was used for the examples in Chapter 7, “PowerVC lab environment” on page 233
1.1 PowerVC overview

This publication is for system administrators who are familiar with the concepts included in these IBM Redbooks publications:
- IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
- IBM PowerVM Virtualization Managing and Monitoring, SG24-7590

PowerVC simplifies the management of virtual resources in your Power Systems environment. After the product code is installed, the PowerVC interface guides the system administrator through three simple configuration steps to register physical hosts, storage providers, and network resources, and to start capturing and intelligently deploying AIX, IBM i, and Linux virtual machines (VMs). PowerVC also helps the system administrator perform the following activities:
- Create VMs and resize their CPU and memory.
- Attach disk volumes to those VMs.
- Import existing VMs and volumes so that they can be managed by PowerVC.
- Monitor the use of resources in your environment.
- Migrate VMs while they are running (live migration between physical servers).
- Deploy images quickly to create new VMs that meet the demands of ever-changing business needs.

At the time of writing this publication, PowerVC can deploy VMs that use the AIX, IBM i, or Linux operating systems.

PowerVC is built on OpenStack, which is open source software that controls large pools of server, storage, and networking resources throughout a data center. PowerVC uses IBM Platform Resource Scheduler (PRS) to extend the OpenStack set of technologies to Power Systems environments with enhanced security, intelligent placement of VMs, and other advanced policy-based features that are required on enterprise clouds.

PRS is a proven technology that is used in grid and scaled-out computing environments by more than 2,000 clients. Its open and extensible architecture supports reservations, over-subscription policies, and user-defined policies. PRS is also energy-aware. For more information about PRS, see this website:

http://www.ibm.com/systems/platformcomputing/products/rs/

1.1.1 PowerVC functions and advantages

Why PowerVC? Why do we need another virtualization management offering? When more than 70% of IT budgets is spent on operations and maintenance, IT clients legitimately expect vendors to focus their new development efforts on reducing this cost and fostering innovation within IT departments.

PowerVC gives IBM Power Systems clients the following advantages:
- It is deeply integrated with Power Systems.
- It provides virtualization management tools.
- It eases the integration of servers that are managed by PowerVM or PowerKVM into automated IT environments, such as clouds.
- It is a building block of IBM Infrastructure as a Service (IaaS), based on Power Systems.
PowerVC is an addition to the existing PowerVM set of enterprise virtualization technologies that provide virtualization management. It is based on open standards and integrates server management with storage and network management.

Because PowerVC is based on the OpenStack initiative, Power Systems can be managed by tools that are compatible with OpenStack standards. When a system is controlled by PowerVC, it can be managed in either of two ways:
- By a system administrator by using the PowerVC GUI
- By higher-level tools that call PowerVC by using standard OpenStack application programming interfaces (APIs)

PowerVC is an option that sits between the Hardware Management Console (HMC) and IBM SmartCloud® IaaS offerings. It provides the systems management product that enterprise clients require to effectively manage the advanced features that are offered by IBM premium hardware. It reduces resource use and manages workloads for performance and availability.

In the following sections, we introduce the concepts of OpenStack to help you understand the terminology that is used in this book.

1.2 OpenStack overview

PowerVC is based on the OpenStack initiative. The following sections provide an overview of OpenStack.

1.2.1 The OpenStack Foundation

OpenStack is an IaaS solution that is applied to the cloud computing domain and is led by the OpenStack Foundation. The foundation is a non-commercial organization that promotes the OpenStack project and helps the developers within the OpenStack community. Many major IT companies contribute to the OpenStack Foundation. Check their website for more information:

http://www.openstack.org/foundation/

IBM is an active member of the OpenStack community, and multiple IBM divisions have key roles as members. IBM contributes through code contributions, governance, and support within its products. OpenStack is no-charge, open source software that is released under the terms of the Apache license.

1.2.2 OpenStack framework and projects

The goal of OpenStack is to provide an open source cloud computing platform for public and private clouds. OpenStack has a modular architecture. Several projects are underway in parallel to develop these components:
- Nova: Manages the lifecycle and operations of hosts and compute resources.
- Swift: Covers object-oriented storage. It is meant for distributed high availability in virtual containers.
- Cinder: Covers the management of block storage, such as IBM Storwize® or IBM SAN Volume Controller systems.
- Glance: The image service that provides discovery, registration, and delivery services for virtual disk images.
- Horizon: The dashboard project; it is the web service management and user interface that integrates the various OpenStack services.
- Neutron: The network management service for OpenStack. Formerly named Quantum, Neutron includes various aspects, such as IP address management.
- Keystone: Focuses on security, identity, and authentication services.
- Ceilometer: The metering project. Ceilometer provides measurement and billing data across all OpenStack components.

You can find complete descriptions of the main OpenStack projects on the wiki page of their website:

https://wiki.openstack.org/wiki/Main_Page

Figure 1-1 shows a high-level view of the OpenStack framework and main components and how they can be accessed by applications that use the OpenStack computing platform APIs.

Figure 1-1 OpenStack framework
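Applications drive these services through their REST APIs, and every API conversation starts with an authentication request to Keystone, which returns a token that is presented on subsequent calls. The following minimal sketch assumes a Keystone v3 endpoint on its default port and uses a hypothetical host name and credentials; it is illustrative only, not a documented PowerVC procedure:

  # Request a token from the Identity (Keystone) v3 API
  curl -k -X POST https://mgmthost.example.com:5000/v3/auth/tokens \
    -H "Content-Type: application/json" \
    -d '{"auth": {"identity": {"methods": ["password"],
          "password": {"user": {"name": "admin",
            "domain": {"name": "Default"},
            "password": "passw0rd"}}}}}'
  # The token is returned in the X-Subject-Token response header and is
  # passed as the X-Auth-Token header on later Nova, Cinder, and Neutron calls.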
Figure 1-2 provides details about the main components of the OpenStack framework. It also contains a few explanations of the roles of these components. The illustration shows that one of the main benefits of OpenStack is that it provides a standard interface for hardware. Hardware vendors provide OpenStack-compatible drivers for their devices. These drivers can then be used by the other OpenStack components to act on the hardware devices.

Figure 1-2 OpenStack main components
1.2.3 PowerVC high-level architecture

Figure 1-3 shows how PowerVC is implemented on top of the OpenStack framework and how additional components are inserted within the OpenStack framework to add functions to the standard set of OpenStack features. It also illustrates that IBM provides drivers to support IBM devices by using the OpenStack APIs.

Figure 1-3 PowerVC implementation on top of OpenStack

PowerVC is available in Standard Edition, which is described in the following section.

1.3 PowerVC Standard Edition

PowerVC Standard Edition manages PowerVM systems that run IBM POWER6®, IBM POWER7®, or POWER8 processors and that are controlled by an HMC. In addition, PowerVC can manage PowerKVM Linux scale-out servers. During installation, PowerVC Standard Edition can be configured to manage VMs that are virtualized on top of either PowerVM or PowerKVM.

On PowerVM, dual Virtual I/O Servers for each host are supported to access storage and the network. VMs can use N_Port ID Virtualization (NPIV)-attached storage, shared storage pool (SSP) back-end storage, or virtual SCSI (vSCSI) attachment, which was introduced in PowerVC 1.2.2. The following hardware products are supported for NPIV:
- EMC (VNX and VMAX)
- IBM XIV® Storage System
- IBM Storwize V3700 system
- IBM Storwize V7000 system
- IBM SAN Volume Controller

For storage on an SSP, any SSP-supported storage device is supported by PowerVC. On PowerKVM, storage is backed by iSCSI devices.

For more information, see 3.1, “IBM PowerVC requirements” on page 30. For the latest list of requirements, see this website:

http://ibm.co/1jC4Xx0

1.4 PowerVC adoption

Two features are useful for a smooth adoption of PowerVC in an existing environment:
- When PowerVC manages a physical server, it can manage the full set or only a subset of the partitions that are hosted on that server.
- When PowerVC is adopted in an environment where partitions are already in production, PowerVC can discover the existing partitions and selectively start to manage them.

Therefore, the adoption of PowerVC in an existing environment does not require a major change. It can be a smooth transition that is planned over several days or more.
Chapter 2. PowerVC versions and releases

This chapter describes the evolution of IBM® Power Virtualization Center Standard Edition (PowerVC) through its versions, with special focus on version 1.2.2 and version 1.2.3. The following topics are covered in this chapter:
- Previous versions and milestones
- IBM PowerVC version 1.2.2 enhancements and new features
- New in IBM PowerVC version 1.2.3
2.1 Previous versions and milestones

IBM Systems and Technology Group Cloud System Software developed a virtualization management solution for PowerVM and PowerKVM, which is called the Power Virtualization Center (PowerVC). The objective is to manage virtualization on the Power platform by providing a robust, easy-to-use tool that enables its users to take advantage of the Power platform differentiation. This list shows the previous versions:
- IBM PowerVC first release (R1)
- IBM PowerVC version 1.2.0
- IBM PowerVC version 1.2.1

2.1.1 PowerVC release to OpenStack edition cross-reference

Table 2-1 cross-references the PowerVC releases to editions of OpenStack.

Table 2-1 PowerVC releases cross-referenced to OpenStack versions

  PowerVC release   Availability    OpenStack edition
  V1.2              October 2013    Havana
  V1.2.1            April 2014      Icehouse
  V1.2.2            October 2014    Juno
  V1.2.3            April 2015      Kilo

2.1.2 IBM PowerVC first release (R1)

The PowerVC first release was available in certain markets in 2013. The primary objective of this release was to simplify the task of deploying a single logical partition (LPAR) with operating system software for new IBM Power Systems hardware clients. This release presented several restrictions, requiring virtualization management of the hosts and supporting only limited resource configurations.

2.1.3 IBM PowerVC version 1.2.0

The second release, PowerVC version 1.2.0, was also available worldwide in 2013. The primary objective was to simplify the virtualization management experience of IBM Power Systems servers through the Hardware Management Console (HMC) and to build a foundation for enterprise-level virtualization management.

2.1.4 IBM PowerVC version 1.2.1

The third release of PowerVC, version 1.2.1, was available worldwide in 2014 with the addition of PowerKVM support that was built on IBM POWER8 servers and shared storage pool (SSP) support for the PowerVM edition.
2.2 IBM PowerVC version 1.2.2 enhancements and new features

The fourth release of PowerVC, version 1.2.2, was also available worldwide in 2014. This version focused on adding new features and support to the following components:
- Image management
- Monitoring
- Host maintenance mode
- Storage
- Network
- Security

Important: IBM PowerVC Express Edition is no longer supported in this release.

2.2.1 Image management

This version supports new levels of the Linux distributions (previously supported distributions, new releases):
- Red Hat Enterprise Linux (RHEL) 6.6
- RHEL 7 (which was supported only on IBM PowerKVM in version 1.2.1)
- SUSE Linux Enterprise Server (SLES) 12

New Linux distribution support exists for Ubuntu 14. Operating system currency support for Linux is handled through cloud-init. Also, for any new Linux OS distribution, only cloud-init is supported, not the Virtual Solutions Activation Engine (VSAE). Any changes that are needed in cloud-init to support a new distribution are coordinated with the IBM Linux Technology Center (LTC) to distribute the changes to the cloud-init open source community.

Note: Because Ubuntu is a new distribution, you must update the distribution list that is used by the image import command-line interface (CLI) and graphical user interface (GUI) to include Ubuntu.

2.2.2 Monitoring

The following enhancements and new capabilities are included in PowerVC 1.2.2:
- Use the Ceilometer framework to monitor the memory and I/O metrics for instances
- Provide the hosts with metrics for CPU utilization and I/O
- Provide out-of-band lifecycle operation-related checks

With the new set of health checks and metrics, PowerVC version 1.2.2 monitoring enhancements include improved scale and stability of the monitoring functions. The following major capabilities are available in this version:
- Reduce the steady-state CPU utilization of the monitor function
- Reduce the redundant health and metric event publication to help improve performance
- Use the asynchronous update events and reduce the resource polling
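Because these metrics flow through the standard Ceilometer service, they can also be read with the Ceilometer CLI of that OpenStack generation. A minimal sketch, assuming the Juno-era python-ceilometerclient and a hypothetical VM UUID:

  # List the meters that are collected for one virtual machine
  ceilometer meter-list -q resource_id=3f9c2a6e-0000-0000-0000-000000000000
  # Show the ten most recent CPU utilization samples for that virtual machine
  ceilometer sample-list -m cpu_util \
    -q resource_id=3f9c2a6e-0000-0000-0000-000000000000 --limit 10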
2.2.3 Host maintenance mode

Virtualization administrators often need to prepare a host system for maintenance, for example, to replace a faulty hardware component or update critical software components. This act is widely known in the industry as putting a host into maintenance mode. Consider the following points from a virtualization management perspective:

- The host is prevented from entering maintenance mode if the user requested automated mobility upon entering maintenance and one or more of the following conditions are true:
  - The host’s hypervisor state is anything other than operating. The administrator must address any issues in advance; otherwise, live migrations are unlikely to succeed.
  - The host has at least one virtual machine (VM) in the error state; migration cannot be performed until the administrator resolves the issue.
  - The host has at least one VM in the paused state. A paused VM still resides in memory, so the administrator must resolve it before the host can be powered down.
  - The host is based on PowerVM and is not licensed for active partition mobility.
- No additional virtual machines can be placed on the host while its maintenance state is entering, error, or on.
- If mobility was requested when the host was entering maintenance mode and active VMs existed, those VMs are relocated automatically to other hosts within the relocation domain. While virtual machines are migrated to other hosts, the host’s Platform Resource Scheduler (PRS) hypervisor state is entering maintenance. The PRS hypervisor state automatically transitions to in maintenance when the migrations complete, and Nova notifications are generated as the state transitions.
- After the administrator completes the maintenance, the administrator removes the host from maintenance mode. At that point, the PRS hypervisor state transitions back to ok, and virtual machines can be scheduled to the host again. VMs that were previously on the host that was put in maintenance mode need to be migrated back to the host manually.

Note: The administrator can take the host out of maintenance mode at any point. PRS finishes any in-progress migrations and halts afterward.
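Because PowerVC exposes the standard compute APIs, the effect of draining a host can also be illustrated with the OpenStack nova CLI of that generation. This is a sketch only, with hypothetical host and VM names; it is not the documented PowerVC maintenance procedure, which automates these steps through PRS:

  # List the virtual machines that are currently placed on one host (admin view)
  nova list --all-tenants --host host821
  # Live-migrate a single virtual machine to another capable host
  nova live-migration 3f9c2a6e-0000-0000-0000-000000000000 host822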
2.2.4 Storage

Two additional volume drivers and one fabric driver were added in PowerVC version 1.2.2. The volume drivers are IBM XIV Storage System and EMC, and the fabric driver is Cisco. Volume attachment now includes virtual SCSI (vSCSI) connectors. The following use cases apply to these new devices:
- Registration of storage arrays and Fibre Channel (FC) switches with the storage template and storage connectivity groups (SCGs)
- Deployment of VMs
- Attachment and detachment of volumes in existing VMs
- Image management
- Onboarding of VMs and volumes

The new storage and fabric drivers require new registration application programming interfaces (APIs) to register the new devices. New storage templates are required for XIV and EMC; both drivers support additional storage templates. API and user interface (UI) changes are associated with the storage templates.

Table 2-2 shows which combinations of boot-volume and data-volume connection types clients can use within PowerVM. For example, when an N_Port ID Virtualization (NPIV) connection exists to boot a VM, it is not possible to attach a vSCSI-connected volume. When clients set their connection types for boot and data volumes within an SCG, they are limited to two connector types within a single SCG. On deployment or attachment, the SCG determines the connection type, NPIV or vSCSI, for a storage area network (SAN) device.

Table 2-2 Updated support matrix for SSP, NPIV, and vSCSI storage paths in PowerVC version 1.2.2

  Boot volume \ Data volume   SSP             NPIV         vSCSI
  SSP                         Supported       Supported    Not supported
  NPIV                        Not supported   Supported    Not supported
  vSCSI                       Not supported   Supported    Supported

The SCG changes allow the creation of a vSCSI SCG. PowerVC version 1.2.2 provides an option in the SCG configuration so that the client can specify whether dual Virtual I/O Servers must be guaranteed during deployment and migration. API and UI changes are associated with these SCG changes.

2.2.5 Cisco Fibre Channel support

This newly added support is for Cisco MDS Fibre Channel (FC) switches. This support was developed in collaboration with IBM to ensure compatibility with PowerVC. Next, we describe how Cisco support is enabled within the PowerVC FC zoning architecture, which differs significantly from the community architecture.

The relevant components that support Cisco FC are contained within the Cinder-volume service. One of these services runs for every registered storage provider. The volume manager invokes the zone manager whenever connections are added or removed. The zone manager has a pluggable driver model that separates generic code from hardware-specific code. The following steps describe the flow during volume attachment or detachment:
1. After the volume driver is invoked, the zone manager flow is invoked.
2. The volume driver returns the wanted initiator.
3. The target is mapped from the initialize_connection or terminate_connection method.
4. The returned structure feeds into the zone manager operation.

PowerVC version 1.2.2 supports a maximum of two fabrics. The fabrics can be mixed.
Function

The Cisco driver has configuration file options that specify, for each fabric, the user name, password, IP address, and virtual SAN (VSAN) to use for zoning operations. The VSAN is interesting: Cisco and Brocade switches allow the physical ports on the switch to be divided into separate fabrics. Cisco calls them VSANs, and Brocade calls them Virtual Fabrics. Therefore, every zoning operation on a switch is performed in the context of a VSAN or Virtual Fabric. However, the two drivers work differently:
- For Cisco, a user does not have a default VSAN, so the VSAN to use is specified in the configuration file. This method is not ideal; the user needs to be able to determine the VSAN automatically by looking at where the initiator and target ports are logged in.
- For Brocade, every user has a default Virtual Fabric, and the driver creates zones on that default fabric.

Integration

To extend PowerVC integration, the zone manager class supports an fc_fabric_type option, which allows the user to select Brocade and Cisco switches. The zone manager also tolerates slight variations in the behavior of the two drivers. It delivers an extended Cisco CLI module that is called powervc_cisco_fc_zone_client_cli.py. This module adds a get_active_zone_map function that is needed by the PowerVC zoning driver.

The Cisco driver is enabled by editing the /etc/cinder/fabrics.conf file (see the sketch that follows this section). The fabric registration UI allows the user to register Brocade and Cisco FC switches. Mixed fabrics are supported for PowerVC, Brocade, and Cisco Tier1 drivers. Third-party fabric drivers can be provided and mixed by vendors. However, third-party fabric drivers cannot be mixed with PowerVC fabric drivers because Cinder supports a single zone manager only and Tier1 drivers are managed from the PowerVC zone manager.

Note: IBM PowerVC version 1.2.2 continues to support a maximum of two fabrics that can be registered.

For Cisco fabrics, the following properties are required for registration:
- Display name
- IP address
- Port
- User name
- Password
- VSAN

The registration API performs a test connection to ensure that the credentials are correct and that the specified VSAN exists.
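The exact stanza layout that PowerVC writes into /etc/cinder/fabrics.conf during registration is not shown in this book. As an illustration only, the per-fabric settings resemble the upstream OpenStack Cinder options for the Cisco zoning driver, shown here with hypothetical values:

  # Hypothetical per-fabric settings, modeled on the upstream Cinder
  # Cisco zone-driver options; PowerVC's generated file may differ
  [fabric_a]
  cisco_fc_fabric_address = 10.10.10.10
  cisco_fc_fabric_port = 22
  cisco_fc_fabric_user = admin
  cisco_fc_fabric_password = passw0rd
  cisco_zoning_vsan = 100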
2.2.6 XIV storage support

Support for IBM XIV Storage System storage arrays is added to PowerVC. The functionality that is offered by this interface is similar to the functions that are offered through the IBM SAN Volume Controller (SVC).

This interface requires the XIV driver, which is downloaded, included in the build, and installed in the PowerVC environment. The downloaded XIV driver also contains helper methods to derive a list of volumes in a certain XIV array and its unique identifier. These methods are used by the corresponding PowerVC registration and extended driver code.

Function

All functions that relate to storage arrays are supported:
- Registration by using the default storage template
- Storage connectivity group setup
- Configuration of the FC port
- Onboarding of VMs with XIV volumes that are attached to them
- Onboarding of volumes that are already in XIV storage
- Creation and deletion of volumes on XIV storage
- Deployment of VMs by using volumes from XIV storage

Integration

A new XIV registration code is integrated into PowerVC. As part of the storage registration UI, this new registration code collects the IP address, user-friendly name, user name, and password to register the XIV Storage System with PowerVC. The registration API performs a test connection and retrieves a list of available storage pools from the XIV system. The list is displayed to the user so that the user can choose the pool to use for default provisioning operations. This approach is similar to how the IBM Storwize registration UI looks today, except that Secure Shell (SSH) keys are not supported.

Currently, no UI is available for the user to select the type of storage controller that is being registered; Storwize is the only option. A user can use the UI to select between Storwize and Network File System (NFS), and that selection can be reused to provide the PowerVC user with a Storwize/XIV option. Section 2.2.7, “EMC storage support” on page 16, shows a choice of SAN Volume Controller, EMC, or XIV storage during storage registration.

The storage template UI for XIV is similar to the Storwize support. The UI needs to recognize the type of storage provider and display the correct UI. The storage metadata API is used by the storage template UI to get a list of storage pools and related information, but first, the XIV driver needs to be enhanced. PowerVC has an extended XIV driver with the get_storage_metadata function implemented in it. This extended driver is used by the XIV registration code.

Note: The /etc/cinder/cinder.conf file needs to be updated to include xiv as a supported storage type; a sketch follows at the end of this section.

Like the SAN Volume Controller, the XIV has a limit on the number of hosts that can be defined. During initialize_connection, the host creation fails with a return code of REMOTE_MAX_VIRTUAL_HOSTS_REACHED. This limit is not determined yet. The attach operation fails with an appropriate message. However, the TTV validation tool might expose the total number or percentage of used slots. The same or similar naming scheme that is used with the SAN Volume Controller applies to images and volumes: images start with Image and volumes start with volume.
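In the upstream OpenStack Cinder release of that generation, enabling the IBM XIV driver amounts to a few lines in /etc/cinder/cinder.conf. PowerVC generates its own configuration during storage registration, so the following lines, with hypothetical addresses and credentials, sketch the underlying mechanism only:

  # Juno-era Cinder settings for the IBM XIV/DS8000 driver (illustrative values;
  # additional driver-specific options, such as the vendor proxy module, are
  # also required)
  volume_driver = cinder.volume.drivers.ibm.xiv_ds8k.XIVDS8KDriver
  san_ip = 10.10.10.20
  san_login = admin
  san_password = passw0rd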
2.2.7 EMC storage support

The EMC storage array family is now supported in PowerVC version 1.2.2. The support includes EMC VNX and VMAX storage devices, which are handled by two different EMC drivers. This support essentially describes how PowerVC enables the Storage Management Initiative Specification (SMI-S) EMC driver.

The SMI-S provider proxy applies to the EMC VMAX driver only, not the VNX driver. The EMC VNX driver uses a remote command tool set that is located with the Cinder driver to communicate with the VNX device rather than going through an SMI-S proxy.

Important: This command tool set runs on x86 only, which limits the PowerVC management server to x86 installations.

The EMC VMAX driver requires that you download the EMC SMI-S provider proxy software from the EMC website, that the proxy runs on an x86 Linux system, and that it is at version V4.5.1 or higher. The OpenStack EMC driver communicates with this proxy by using Web-Based Enterprise Management (WBEM) and therefore depends on the Python pywbem package.

The EMC driver supports both iSCSI and FC connectivity. Although the EMC driver has iSCSI support, only NPIV connectivity is supported in this release.

The configuration of the EMC driver is in two locations. The cinder.conf file contains general settings that reference the driver and also a link to an external XML file that contains the detailed settings. The following configuration file settings are valid:

volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml

Integration

New EMC registration code is available and enabled in PowerVC version 1.2.2. For similarities, see “Integration” on page 14.

Like the SAN Volume Controller, the EMC limits the number of hosts that can be defined. During initialize_connection, the host creation returns a failure; this limit for VNX is a maximum of 1,024 hosts. The attach operation fails with an appropriate message. TTV might expose the total number or percentage of used slots. The same or similar naming scheme is used with the SAN Volume Controller for images and volumes: images start with Image and volumes start with volume. The EMC low-level design determines any new attributes to be exposed in the default storage template.
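The detailed settings live in the XML file that cinder_emc_config_file points to. The element names below follow the upstream OpenStack EMC SMI-S driver of that era; the values are hypothetical, and the exact element set varies by driver release and array type:

  <?xml version="1.0" encoding="UTF-8"?>
  <EMC>
    <EcomServerIp>10.10.10.30</EcomServerIp>
    <EcomServerPort>5988</EcomServerPort>
    <EcomUserName>admin</EcomUserName>
    <EcomPassword>passw0rd</EcomPassword>
    <StoragePool>Pool0</StoragePool>
  </EMC>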
2.2.8 Virtual SCSI support

The current cinder code supports NPIV connectivity from SAN storage to a VM in PowerVC Standard Edition. In this model, the storage volume is mapped directly to the virtual FC adapter in the VM. PowerVC 1.2.2 adds support in Standard Edition for mapping the storage volume to the Virtual I/O Server (VIOS) and for establishing a vSCSI connection from the VIOS to the VM. The classic vSCSI model is needed for PureApp, where the VM boots from a vSCSI-attached volume and data volumes are also vSCSI-attached.

Use the updated support matrix (Table 2-2 on page 13) as input to the necessary design changes to the SCGs. The SCG determines the connection type to the VM during the attachment and detachment of a volume. During deployment, the SCG includes only hosts that are compatible with the SCG.

The SCG has two connectivity types:
- One connectivity type for the OS disk
- One connectivity type for data volumes

The selection of an NPIV or vSCSI SCG determines the connectivity type for the OS disk. When a volume is attached to a VM, the connectivity type for volumes determines whether the volume is connected through NPIV or vSCSI.

vSCSI is supported for all PowerVC tier-1 cinder drivers, which include the PowerVC, SAN Volume Controller, EMC, and XIV drivers. No support is available initially for non-tier-1 volume drivers.

Two methods exist to establish SAN zoning and storage controller hosts. The first method is outside the scope of this section: the administrator establishes all of the zoning and storage controller hosts before anyone uses vSCSI connectivity. Most clients already use this method for vSCSI connections from the VIOS. Clients create a zone on the switch that contains all of the VIOS and storage controller ports, so Live Partition Mobility (LPM) operations are supported without additional zoning requirements. Typically, clients also run the rootvg of the VIOS from SAN, so an existing host entry is available on the storage controller. The second method includes the management of the SCG and the creation of zones and hosts on the storage controller. This duality is evaluated as part of the design changes that are needed for the SCG to support vSCSI.

To enable multiple paths and LPM operations with vSCSI connections, disk reservations must be turned off for all of the hdisks that are discovered on the VIOS. Use the AIX chdef command to overwrite configuration attributes when a device is discovered. For the SAN Volume Controller, the chdef commands that are shown in Example 2-1 must be executed on the target VIOS before you assign the disks to the vSCSI adapters.

Example 2-1 The chdef commands to set the reserve policy and algorithm on new disks
chdef -a reserve_policy=no_reserve -c disk -s fcp -t mpioosdisk
chdef -a algorithm=round_robin -c PCM -s friend -t fcpother

These chdef commands need to be executed only one time on the Virtual I/O Servers before you attempt to use vSCSI connections.

Note: You are required to overwrite the reserve_policy and set the algorithm for the disks that are discovered. The default algorithm is a failover algorithm.

Note: Consider changing the reserve policy if it was not set to no_reserve. If this setting is not applied before you allocate the disks to the vSCSI adapter, you are required to change these settings for each disk individually.

Storage controller registration, volume creation, volume deletion, and volume onboarding are unaffected by the addition of the vSCSI connectivity type.
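To confirm that the new defaults took effect before you map disks to vSCSI adapters, the predefined attribute values can be queried on the VIOS. The following lines are a hedged sketch that reuses the device class, subclass, and type values from Example 2-1; run them from the root shell (oem_setup_env) on each VIOS:

# Query the predefined (default) values that chdef changed
lsattr -D -c disk -s fcp -t mpioosdisk -a reserve_policy
lsattr -D -c PCM -s friend -t fcpother -a algorithm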
The major changes outside of the SCG for vSCSI connections are in the areas of volume attachment and detachment. For volume attachment, the new vSCSI connection type causes the discovery of the new hdisk on the targeted VIOS and the establishment of a vSCSI mapping from that hdisk to the vhost adapter that serves the targeted VM. For volume detachment, the new vSCSI connection type causes the removal of the vSCSI mapping between the hdisk and the vhost adapter on the targeted Virtual I/O Servers, and then the removal of the hdisk from the VIOS.

During VM migration, the VM's volumes must be mapped to the targeted VIOS and the hdisk must be discovered before PowerVM is called to migrate the VM. This process is covered in 5.15.10, "Migration of virtual machines" on page 169.

2.2.9 Network

The following key networking characteristics were introduced in PowerVC version 1.2.2:
- IPv6 management node support.
- IPv6 deployment to targets (API level only). Significant restrictions apply.
- Addition and removal of virtual network interface controller (vNIC) adapters.
- IP pool support.
- User updates to network ports/IP addresses.

A brief introduction to each new characteristic follows.

IPv6 management node support

IPv6 support (a homogeneous, IPv6 static-based network environment) is added to support the PureApp solution. In future releases, this function is expanded to support mixed-mode environments; mixed-mode environments were not tested in this release.

You can install and operate PowerVC on an IPv6 network. The network that was tested is a stateless address autoconfiguration (SLAAC)-based IPv6 network where each node has an IPv6 endpoint. If addresses are specified, PowerVC communicates by using the IPv6-based address rather than an IPv4-based address. If host names are specified, the host operating system resolves the appropriate address type to use on its own.

The installer has a silent installation option to support the detection of IPv4 or IPv6 options, and the user can choose the IPv4 or IPv6 installation option. If IPv6 is selected but no (non-link-local) IPv6 address is detected, an error is displayed. If IPv4 is selected but no IPv4 address is detected, an error is displayed.

The user can register compute nodes, SAN switches, storage controllers, and more with IPv4 addresses, IPv6 addresses, or host names. The management system must be able to resolve the address, however. The compute nodes must also be able to communicate back to the management system through the IPv6 address (if the management node is configured correctly).

For this release, the following components are tested with IPv6:
- HMC
- VIOS
- PowerVC management server
- Brocade SAN switch (FVT only)
- V7000 storage device
- PowerKVM host
No other devices are tested, and testing is with SLAAC addresses only. API changes are not needed; replacing IPv4 addresses with IPv6 addresses for Neutron is generally sufficient for static addressing.

IPv6 deployment to targets

This function is supported at the API level only. No UI work is performed to support this function, so users of IPv6 must not expect any UI support in this release of PowerVC.

The scope of the support is listed:
- A single network can be either IPv6 or IPv4 (it cannot be both).
- Only a single static IPv6 address and an IPv6 link-local address are supported for each adapter.
- Multiple adapters can be applied to the VM (part IPv6 and part IPv4).
- Cloud-init support exists for RHEL, SLES, Ubuntu, and AIX. Cloud-init is the primary activation strategy for the future.
- Existing Virtual Solutions Activation Engine (VSAE) images are supported. To send configuration data to the activation engine, you are required to know the activation strategy (VSAE or cloud-init) that is used for each image.
- You can determine the IP address for a specific VM and network.

The following items are not in scope:
- You cannot have two networks on the same VLAN where one network is IPv4 and the other network is IPv6. This restriction is a PowerKVM restriction.
- No IBM i support exists.
- VSAE is not enhanced to support new RHEL or SLES versions, for example, RHEL 7. In particular, VSAE is not supported on the following operating systems:
  – Ubuntu
  – RHEL 6.6 and higher
  – RHEL 7 and higher
  – SLES 12 and higher
- You cannot configure manual network routes. If the user requires this function, the user needs to write an activation engine extension.
- Setting Media Access Control (MAC) addresses on an adapter is not supported; users must accept the MAC addresses that are defined on the adapter by the system. For SLAAC and SLAAC-like addresses, PureApp uses a scheme to set the MAC addresses/IP addresses to look like SLAAC, but the addresses are not true SLAAC addresses and do not act like SLAAC addresses.
- GUI support is not available.

Addition and removal of vNIC adapters

PowerVC supports the addition of vNICs to a specific system. However, because PowerVC does not use a local Dynamic Host Configuration Protocol (DHCP) server, IP addresses cannot be assigned to a VM dynamically.
PowerVC can dynamically add a NIC to a VM and prompt the user for the IP address to use for that NIC. To update an IP address within PowerVC when the IP address is already assigned to the VM (or to remove the IP address), you must use the "User updates to network ports/IP addresses" function.

Note: Even though PowerVC attaches or detaches the adapter, you need to configure the adapter within the VM.

A single network interface supports a single IP address only. However, multiple network interfaces can be added to support additional IP addresses.

PowerVC also offers an option to remove a vNIC. When you remove a vNIC, PowerVC immediately removes the NIC from the VM and releases the IP address.

IP address pool support

In PowerVC version 1.2.2, the user can choose between two types of IP address assignment:
- Static
- DHCP

If the user decides to use static assignment, the user is required to specify the IP address on every deployment. This choice is not a preferred practice because the user does not know which IP addresses are available for use, so this version introduces the option to let PowerVC show a predefined pool of IP addresses. To enable this function, PowerVC provides a pool capability that is built on top of the existing Neutron "port" API.

PowerVC recognizes that to maintain a pool, the user must be able to "lock" certain IP addresses. These locked IP addresses can be used by systems outside of the PowerVC management domain, such as a Domain Name Server (DNS) or a gateway. PowerVC provides a function to lock an IP address. This function works by creating a lock at the Neutron port and by specifying a device owner. The device owner is named PowerVC:<user input>, where the user can specify the reason why the IP address is locked.

The IP addresses must be presented to the user. If an IP address is "In Use", which means that it is attached to a VM, that VM must be identifiable to the user.

Due to API restrictions, IP addresses must be locked one element at a time. The API does not support batch processing.

Note: Neutron APIs allow the modification of only a single port at a time.
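As a hedged sketch of what such a lock looks like at the Neutron layer, the standard Kilo-era Neutron CLI can create a port with a fixed IP address and a device owner. The network name, IP address, and the text after PowerVC: are placeholders, and the exact owner string that PowerVC writes is an assumption based on the convention that is described above:

# Reserve an IP address on network "net-prod" (placeholder) so that it is
# not handed out to new VMs; the text after "PowerVC:" is free-form input
neutron port-create net-prod \
    --fixed-ip ip_address=192.168.10.5 \
    --device-owner "PowerVC:reserved-for-DNS"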
User updates to network ports/IP addresses

In enterprise virtualization, the lifecycle of a VM might be longer than in a standard cloud. Because PowerVC does not manage a DHCP server, its IP address logic is mainly for bookkeeping. Therefore, clients often want to modify the IP address that is assigned to a VM, for example:
- They imported a VM and want to put the real IP address on the VM.
- They changed the networking on the console of the system.

To support modification, a new function was added to the VM panel so that the user can modify the port that is assigned to the VM. This function is supported by the existing Neutron APIs, but the design is shaped by user experience considerations, edge cases, and similar factors.

No hard limit exists, beyond the hypervisor limits, on the number of NICs that can be added. The UI limits this operation to eight NICs, as with deployment. Eight NICs is a reasonable upper limit, and it provides the opportunity to display a clear message to the user before the user hits esoteric hypervisor messages.

2.2.10 Security

Many configuration properties throughout PowerVC affect security. For example, Glance (the OpenStack image repository service) has properties to configure the maximum size of an image and the storage quota for each user. An administrator might want to configure these properties specifically for their environment as part of a defense against denial of service through disk space exhaustion. PowerVC provides a supported mechanism for the customer to configure these settings through the CLI.

Also, the default values for settings that relate to National Institute of Standards and Technology (NIST) 800-131a were changed to comply with that standard. This change offers better security for customers and prepares the way for future compliance.

2.3 New in IBM PowerVC version 1.2.3

The first part of this section is an overview of the new PowerVC 1.2.3 features. Then, detailed descriptions are provided of the most relevant changes that are introduced in this release, as shown in Table 2-3.

Table 2-3 New functions that are introduced in PowerVC 1.2.3

New functionality | Description
Collocation rules (affinity/anti-affinity) | Rules can be created to keep VMs on the same host or on different hosts. These rules are called affinity and anti-affinity rules.
Host group | Hosts can be logically separated and controlled separately with placement policies.
Multi-volume capture | Additional volumes can now be captured in addition to the boot volume.
Placement policy | VMs can now be placed by choosing among different placement policies. In addition to striping and packing, CPU usage and memory balance are added.
Remote VM restart | If a host fails, the user can now remotely restart its VMs on another host by using the simplified remote restart feature (POWER8 only).
Redundant HMC | PowerVC now supports redundant HMCs. Switching from one HMC to another HMC is a user-initiated action.
The following list describes the most important features that are introduced in IBM PowerVC Standard Edition version 1.2.3:
- Major software changes
- Significant scaling improvement
- Redundant HMC
- Error scenarios
- Host groups
- Advanced placement policies (affinity/anti-affinity)
- Multi-disk capture/deploy
- PowerVM and PowerKVM remote restart
- Cloud-init for the latest service pack (SP) of AIX

2.3.1 Major software changes

PowerVC introduces several major software changes in this version:

- PowerVC follows the lifecycle of OpenStack. IBM PowerVC version 1.2.3 is based on the OpenStack Kilo version.
- PowerVC host management must be installed on RHEL 7.1 for IBM Power or x86_64.
- Storage mirror (Storwize): The storage templates are enhanced to allow the creation of mirrored volumes on a Storwize family storage provider, for example, a SAN Volume Controller stretched cluster.
- Volume sharing: Volumes can now be added to multiple VMs. This capability is essential for high availability VMs, such as IBM Spectrum™ Scale (formerly General Parallel File System (IBM GPFS™)), IBM PowerHA® SystemMirror® for AIX Enterprise Edition, or PowerHA SystemMirror for AIX Standard Edition.
- Activation support: Cloud-init is now supported on AIX, starting with AIX 7.1 TL3 SP5 and AIX 6.1 TL9 SP5. Cloud-init is preferred over the old activation method (VSAE).
- SDDPCM in Virtual I/O Servers: The use of the Subsystem Device Driver Path Control Module (SDDPCM) for vSCSI logical unit number (LUN) management in the VIOS is now supported.
- Scaling improvement: To support PowerVC in large environments, scaling is improved; PowerVC can now manage 30 hosts and 3,000 VMs.
- Maximum transmission unit (MTU) support: Network definitions are enhanced so that the administrator can set the MTU size that is used by a VM, for example, jumbo frames (MTU 9000).
- Import images that consist of multiple volumes: You can import an image that is made of multiple volumes and create a single deployable image from them.
- Set the host name from DNS: Cloud-init can now be used to set the VM host name by resolving a DNS record.
- New client operating systems are supported:
  – RHEL 7.1 (little endian)
  – SLES 12 (little endian)
  – Ubuntu 15.04 (little endian)

Note: If PowerVC 1.2.3 is installed on a Power Systems server, you can choose either a big endian or a little endian version for installation because both versions are supported.

2.3.2 Significant scaling improvement

To fit well in a cloud environment, PowerVC can manage 30 hosts and 3,000 VMs. The improved PowerVC scaling capabilities for PowerKVM and PowerVM are shown in Table 2-4.

Table 2-4 Scaling capabilities for PowerKVM and PowerVM in PowerVC

PowerKVM | PowerVM
Scales up to 160 vCPUs. | Five hundred VMs for each HMC are supported. If you plan to reach 3,000 VMs, you need six HMCs.
 | PowerVC supports a maximum of 50 concurrent deployments. We recommend that you do not exceed eight concurrent deployments for each host. The maximum number of deployments depends on the number of migrations that are supported by the VIOS and firmware versions that are associated with each host.
Each host supports a maximum of 225 VMs. | Each host supports a maximum of 500 VMs.
Ten concurrent migrations or remote restart operations are supported. | Ten concurrent remote restart operations for each source host are supported, and four concurrent remote restart operations for each destination host are supported.

2.3.3 Redundant HMC

To avoid a single point of failure, PowerVC 1.2.3 now supports redundant HMCs. Switching from one HMC to another HMC is a user-initiated action. The PowerVC administrator can switch between HMCs on a per-host basis. A Change HMC button is now available on the host pane so that the administrator can select one or more hosts and change their HMC connection.
2.3.4 Error scenarios

Consider several important error scenarios for host maintenance mode and the related orchestration. While a host is put in maintenance mode, the operation generates notifications for all possible error scenarios. In addition to normal notifications, it generates errors for the following state transitions:

- When you perform host evacuation, the scheduler starts to receive invalid exceptions. This situation can happen in single-host environments or in multiple-host environments where the alternative hosts cannot satisfy the VMs' resource demand. PRS puts the host into maintenance; however, the error state appears as soon as this situation occurs.
- When you perform host evacuation, one or more of the VMs enters the error state. PRS puts the host into maintenance; however, the error state appears as soon as this situation occurs.
- When you perform host evacuation, one or more of the VMs never transitions out of the migrating state, even after the configured time period. PRS puts the host into maintenance; however, the error state appears as soon as this situation occurs.
- When you perform host evacuation, one or more of the VMs never starts to migrate, for example, due to a thrown exception. PRS puts the host into maintenance; however, the error state appears as soon as this situation occurs.
- When the host is in maintenance mode and all VMs are migrated, the administrator starts an inactive VM out-of-band from the HMC or virsh interface. PRS does not detect this situation, and the host remains in maintenance mode. It is assumed that host administrators do not perform out-of-band operations on any of the VMs during this sensitive period.

2.3.5 Host groups

PowerVC 1.2.3 can group hosts so that you can manage them as a unit with policies. Host groups can be used, for instance, to separate the production environment from the test environment. Hosts can be moved dynamically between different host groups.

To control the placement of VMs within a host group, a placement policy is selected. The following placement policies are currently available for host groups:
- Packing
- Striping
- CPU balance
- Memory balance
- CPU usage

VMs can be migrated between hosts within the same group. At any time, host groups can be modified by the user so that the user can move a host from one host group to another. Hosts that are not a member of any user-defined host group are placed in the default host group, which cannot be deleted.
2.3.6 Advanced placement policies

In addition to the previous packing and striping placements, new policies are defined to automate the placement of VMs. These new policies are more sophisticated than before, and the user can place a VM on a host by choosing free-capacity criteria.

Memory and CPU balance

With CPU balance, new VMs are placed on the host with the largest amount of free CPU capacity. With memory balance, new VMs are placed on the host with the largest amount of free memory capacity.

CPU usage

New VMs are placed on the host with the lowest historical CPU usage. CPU usage is calculated by taking the current usage every minute and then averaging the last 15 minutes' worth of data.

Affinity and anti-affinity

To complete these new placement policies, affinity and anti-affinity collocation rules are added in PowerVC 1.2.3. The goal of collocation rules is to create a VM-to-VM relationship that restricts where the VMs can reside. These rules can be used to force a list of VMs to be kept together or on separate hosts.

For instance, use an anti-affinity collocation rule to ensure that two nodes of a PowerHA SystemMirror cluster are always on different hosts, even if an LPM operation occurs (for high availability). Or, use an affinity collocation rule to always keep a database VM and an application VM on the same host or host group (to increase performance and reduce network latency).

VMs that are part of affinity or anti-affinity collocation rules cannot be remotely restarted or migrated, to ensure that the rules are not violated. To migrate or remotely restart a VM that is a member of a collocation rule, the machine must first be removed from the rule. All collocation rule operations are dynamic, which means that the rules can be modified at any time. If a collocation rule is violated, the user is warned that the rule is broken and must correct the issue.

2.3.7 Multiple disk capture and deployment

VMs often have one or many data volumes in addition to the boot volume. When you capture a VM, you can capture both boot and data volumes. Data volumes can be captured separately and deployed in combination with any image. Boot and data volumes can reside on different storage providers. For example, you can capture a boot volume on an SSP and capture data volumes that are created on a VMAX array and accessed through NPIV. Table 2-5 on page 26 indicates the combinations that are allowed to support multiple disks.
Table 2-5 List of supported and unsupported multiple disk combinations

Boot volumes | Data volumes | Support
SSP | SSP | Supported
SSP | NPIV | Supported
SSP | vSCSI | Not supported
NPIV | NPIV | Supported
NPIV | SSP | Not supported
NPIV | vSCSI | Not supported
vSCSI | vSCSI | Supported
vSCSI | NPIV | Supported
vSCSI | SSP | Not supported

2.3.8 PowerVC remote restart

PowerVC can now use the simplified remote restart capability that is available on IBM POWER8 systems to accelerate the recovery time for a server. The minimum firmware version that is required for the PowerVM simplified remote restart capability is FW820 for high-end servers and FW830 for Linux scale-out systems; PowerKVM systems that support remote restart are also eligible. The version of remote restart that is available on IBM POWER7 Systems™ cannot be managed by PowerVC; only the simplified remote restart is supported.

Example 2-2 shows how to check whether hosts support simplified remote restart if you plan to use PowerVC to restart your VMs remotely.

Example 2-2 How to check whether a host can use remote restart from PowerVC
# lssyscfg -r sys -F name,simplified_remote_restart_capable
p814-1,1
p814-2,1

If one of the hosts that is controlled by PowerVC is failing, for example, its status is different from Operating, Power Off, or Power Off in progress, the PowerVC administrator can manually initiate a remote restart operation to restart the VMs on a healthy host.

At VM creation, the user can toggle an attribute to enable the simplified remote restart capability. A specific compute template can be created to enable this capability at VM creation. Remote restart supports PowerVM and PowerKVM, and AIX, IBM i, and Linux VMs.
2.3.9 Cloud-init for the latest service pack of AIX

Cloud-init is the most common activation tool that is used by cloud providers. It is the industry standard for bootstrapping cloud servers and is now the strategic image activation technology of IBM. In addition to the Activation Engine (VSAE), cloud-init is fully supported on AIX. Only the latest service packs of the latest AIX releases are supported for using cloud-init as an activation method. The following versions of AIX are currently supported for cloud-init:
- AIX 7.1 TL3 SP5 (7100-03-05)
- AIX 6.1 TL9 SP5 (6100-09-05)

For more information about the cloud-init configuration, see the official documentation:
https://cloudinit.readthedocs.org/en/latest/

For additional information, see this website:
ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/cloudinit/

These latest service packs of AIX introduce a new device attribute on the sys0 device that is called clouddev. The role of the clouddev attribute is to replace the ghostdev attribute, which is used to reset Object Data Manager (ODM) customization when a VM is booted on another host or with a different LPAR ID, for example, after a remote restart operation or an inactive LPM operation. Example 2-3 shows the clouddev and ghostdev attributes on AIX.

Example 2-3 Example of clouddev and ghostdev output
# lsattr -D -l sys0 -a clouddev
clouddev 0 N/A True
# lsattr -D -l sys0 -a ghostdev
ghostdev 0 Recreate ODM on system change / modify PVID True

On a supported version of AIX, when cloud-init is installed, clouddev is set to 1 and ghostdev is set to 0. These values can be gathered by executing the commands that are shown in Example 2-4.

Example 2-4 Obtain the values that are set on the ghostdev and clouddev attributes
# lsattr -El sys0 -a ghostdev
ghostdev 0 recreate OD devices on system change / modify PVID True
# lsattr -El sys0 -a clouddev
clouddev 1 N/A True

Note: If you use cloud-init on an unsupported version of AIX, ghostdev is set to 1 after activation. Change this value to 0 if you plan to use remote restart or inactive LPM.
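On such an unsupported level, the attribute can be changed with the standard AIX chdev command; a minimal sketch follows:

# Set ghostdev back to 0 so that ODM customization is preserved across a
# remote restart or an inactive LPM operation
chdev -l sys0 -a ghostdev=0
# Verify the new value
lsattr -El sys0 -a ghostdev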
Chapter 3. PowerVC installation planning

This chapter describes the key aspects of IBM® Power Virtualization Center Standard Edition (PowerVC) installation planning:

- Section 3.1, "IBM PowerVC requirements" on page 30 presents the hardware and software requirements for the various components of a PowerVC environment: management station, managed hosts, network, storage area network (SAN), and storage devices.
- Sections 3.2, "Host and partition management planning" on page 35 through 3.9, "Product information" on page 75 provide detailed planning information for various aspects of the environment's setup:
  – Hosts
  – Partitions
  – Placement policies
  – Templates
  – Storage and SAN
  – Storage connectivity groups and tags
  – Networks
  – User and group management
  – Security
3.1 IBM PowerVC requirements

In this section, we describe the hardware and software that are necessary to implement IBM PowerVC to manage AIX, Linux, and IBM i platforms.

Beginning with PowerVC version 1.2.2, only PowerVC Standard Edition is included on the PowerVC installation media. If you want to use PowerVC Express Edition, you need to install PowerVC version 1.2.1. PowerVC Standard Edition supports the management of virtual machines (VMs) that are hosted on PowerVM and managed by a Hardware Management Console (HMC), or VMs that are hosted on PowerKVM. For information about available releases, see this website:

http://www.ibm.com/software/support/lifecycle/

IBM PowerVC Standard Edition can manage Linux, AIX, and IBM i VMs that run on Power hardware. PowerVC does not support the management of VMs that are hosted on PowerVM and PowerKVM from the same management server.

3.1.1 Hardware and software requirements

The following sections describe the minimum hardware, software, and resource requirements, at the time of publication of this book, for versions 1.2.2 and 1.2.3 of PowerVC Standard Edition. See the IBM Knowledge Center for the complete requirements:

- PowerVC managing PowerVM: Select PowerVC Standard Edition 1.2.3 → Managing PowerVM → Planning for PowerVC Standard Managing PowerVM.
  http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_planning_hmc.html
- PowerVC managing PowerKVM: Select PowerVC Standard Edition 1.2.3 → Managing PowerKVM → Planning for IBM Virtualization Center.
  http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.help.doc/powervc_planning_kvm.html

3.1.2 PowerVC Standard Edition requirements

The following information provides a consolidated view of the hardware and software requirements for PowerVC Standard Edition.

PowerVC management and managed hosts

The PowerVC architecture supports a single management host for each managed domain. It is not possible to configure redundant PowerVC management hosts that control the same objects.
The VM that hosts the PowerVC management host must be dedicated to this function. No other software or application can be installed on this VM. However, you can install software for the management of this VM, such as monitoring agents and data collection tools for audit or security.

Table 3-1 lists the PowerVC Standard Edition hardware and software requirements.

Table 3-1 Hardware and OS requirements for PowerVC Standard Edition

PowerVC management host
  Supported hardware: IBM POWER7, POWER7+™, or POWER8 processor-based server models, or any x86 server.
  Supported operating systems: Red Hat Enterprise Linux (RHEL) version 7.1 for IBM Power (ppc64 and ppc64le) and RHEL Server version 7.1 for x86_64.

Managed hosts
  Supported hardware:
  - PowerVM: IBM POWER6, POWER7, POWER7+, and POWER8 processor-based servers.
  - PowerKVM: POWER8 servers with IBM PowerKVM 2.1.1.2 or later.
  Guest operating systems that are supported for deployment:
  - PowerVM and PowerKVM: RHEL 5.9, 5.10, 6.4, 6.5, 6.6, 7.0, and 7.1 (little endian); SUSE Linux Enterprise Server (SLES) version 11 SP3 and SP4 and SLES version 12 (little endian); Ubuntu 15.04 (little endian).
  - PowerVM only: IBM AIX 6.1 and 7.1; IBM i 7.1 and 7.2.

Table 3-2 describes the minimum and recommended resources that are required for the PowerVC VM. In the table, the meaning of the processor capacity row depends on the type of host that is used as the PowerVC management host:
- If the PowerVC management host is PowerVM, processor capacity refers to either the number of processor units of entitled capacity or the number of dedicated processors.
- If the PowerVC management host is PowerKVM or x86, processor capacity refers to the number of physical cores.

Table 3-2 Minimum resource requirements for the PowerVC VM

Resource | Minimum (up to 400 VMs) | Recommended (up to 400 VMs) | Recommended (401 - 1000 VMs) | Recommended (1001 - 2000 VMs) | Recommended (2001 - 3000 VMs)
Processor capacity | 1 | 2 | 4 | 8 | 8
Virtual CPUs | 2 | 2 | 4 | 8 | 8
Memory (GB) | 10 | 10 | 12 | 20 | 28
Swap space (GB) | 10 | 10 | 12 | 20 | 28
Disk space (GB) | 40 | 40 | 60 | 80 | 100
The installer has the following space requirements:
- /tmp: 250 MB
- /usr: 250 MB
- /opt: 2.5 GB
- /home: 3 GB minimum. We recommend that 20% of the space is assigned to /home. For example, for 400 VMs, 8 GB is recommended; for 1,000 VMs, 20 GB is recommended; and for 2,000 VMs, 30 GB is recommended.
- The remaining space is used for /var and swap space.

Supported activation methods

Table 3-3 lists the supported activation methods for VMs on managed hosts. The Virtual Solutions Activation Engine (VSAE) is deprecated, and it might be withdrawn from support in subsequent releases. We strongly recommend that new images are constructed with cloud-init. Cloud-init is the strategic image activation technology of IBM. It offers a rich set of system initialization features and a high degree of interoperability.

Table 3-3 Supported activation methods for managed hosts

Operating system | Little endian (LE) or big endian (BE) | Version | Initialization
AIX | BE | 6.1 TL0 SP0 or later; 7.1 TL0 SP0 or later | Virtual Solutions Activation Engine (VSAE)
AIX | BE | 6.1 TL9 SP5 or later; 7.1 TL3 SP5 or later | cloud-init
IBM i | BE | 7.1 TR10 or later; 7.2 TR2 or later | IBM i AE
RHEL | BE | 5.9 or later | VSAE
RHEL | BE | 6.4 or later | VSAE, cloud-init
RHEL | BE | 7.0 or later | cloud-init
RHEL | LE | 7.1 or later | cloud-init
SLES | BE | 11 SP3 or later | VSAE and cloud-init
SLES | LE | 12 SP0 or later | cloud-init
Ubuntu | LE | 15.04.0 or later | cloud-init
Hardware Management Console

Table 3-4 shows the HMC version and release requirements to support PowerVC Standard Edition managing PowerVM. This section does not apply to managing systems that are controlled by PowerKVM.

Table 3-4 HMC requirements

Item | Requirement
Software level | 8.2.0 or 8.3.0
Hardware-level requirements | Up to 300 VMs: CR5 with 4 GB of memory. More than 300 VMs: CR6, CR7, or CR8 with 8 GB of memory.
Recommendations | Up to 300 VMs: CR6, CR7, or CR8 with 8 GB of memory. More than 300 VMs: CR6, CR7, or CR8 with 16 GB of memory.

We recommend that you update to the latest HMC fix pack for the specific HMC release. You can check the recommended fixes for the HMC with the IBM Fix Level Recommendation Tool:
http://ibm.co/1MbXlIA

You can get the latest fix packages from IBM Fix Central:
http://www.ibm.com/support/fixcentral/

Virtualization platform

Table 3-5 includes the VIOS version requirements for PowerVC Standard Edition managing PowerVM.

Table 3-5 Supported virtualization platforms

Platform | Requirement
VIOS for POWER7 hosts and earlier | Version 2.2.3.52 or later
VIOS for POWER8 hosts | Version 2.2.3.52 or later

Tip: Set the Maximum Virtual Adapters value to at least 200 on the Virtual I/O Servers. Virtual I/O Servers that are managed by PowerVC can serve more than 100 VMs, and each VM can require four or more virtual I/O devices from the VIOS. When you plan the VIOS configuration, base the Maximum Virtual Adapters value on real workload requirements.
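As a quick cross-check against Table 3-4 and Table 3-5, the installed levels can be read directly from the consoles. This is a hedged sketch that uses standard HMC and VIOS commands:

# On the HMC: display the management console version and release
lshmc -V

# On each VIOS (padmin shell): display the VIOS level, which must be
# 2.2.3.52 or later
ioslevel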
Network resources

Table 3-6 lists the network infrastructure that is supported by PowerVC Standard Edition.

Table 3-6 Supported network hardware and software

Item | Requirement
Network switches | PowerVC does not manage network switches, but it supports network configurations that use virtual LAN (VLAN)-capable switches.
Virtual networks | PowerVM: Shared Ethernet adapters for VM networking. PowerKVM: Open vSwitch 2.0 is supported; the backing adapters for the virtual switch can be physical Ethernet adapters, bonded adapters (Open vSwitch also supports bonding), or Linux bridges (not recommended).

Storage providers

Table 3-7 lists the storage hardware that is supported by PowerVC Standard Edition managing PowerVM.

Table 3-7 Supported storage hardware for PowerVM

Item | Requirement
Storage systems | IBM Storwize family of controllers; IBM XIV Storage System; EMC VNX (supported on RHEL Server for x86_64 management hosts only, due to EMC limitations); EMC VMAX.
SAN switches | Brocade Fibre Channel (FC) switches are supported by the Brocade OpenStack Cinder zone manager driver. Cisco SAN FC switches are supported by the Cisco Cinder zone manager driver.
Storage connectivity | FC attachment through at least one N_Port ID Virtualization (NPIV)-capable host bus adapter (HBA) on each host.

Note: IBM i hosts on IBM XIV Storage Systems must be attached by virtual SCSI (vSCSI) due to IBM i and IBM XIV storage limitations. Likewise, IBM i hosts on EMC VNX and VMAX storage systems must be attached by vSCSI due to IBM i and EMC storage limitations.

Table 3-8 lists the storage hardware that is supported by PowerVC Standard Edition managing PowerKVM.

Table 3-8 Supported storage hardware for PowerKVM

Item | Requirement
Storage systems | File-level storage: Network File System (NFS) V3 or V4 is required for migration. It must be manually configured on the kernel-based VM (KVM) host before the host is registered with PowerVC.
Storage connectivity | Internet Small Computer System Interface (iSCSI): data volumes on the IBM Storwize family of controllers only.
Security

Table 3-9 lists the supported security features.

Table 3-9 Supported security software

Item | Requirement
Lightweight Directory Access Protocol (LDAP) server (optional) | OpenLDAP version 2.0 or later; Microsoft Active Directory 2003 or later.

3.1.3 Other hardware compatibility

PowerVC is based on OpenStack, so rather than being compatible with specific hardware devices, PowerVC is compatible with drivers that conform to OpenStack standards. These drivers are called pluggable devices in PowerVC. Therefore, PowerVC can take advantage of hardware devices from vendors that provide OpenStack-compatible drivers for their products.

IBM cannot state the level of support that other hardware vendors provide for their specific devices and drivers, so check with the vendors to learn about their drivers. For more information about pluggable devices, see the IBM Knowledge Center:
http://ibm.co/1Q2QtRe

3.2 Host and partition management planning

When you plan for the hosts in your PowerVC Standard Edition managing PowerVM environment, you need to consider the limits on the number of hosts and VMs that can be managed by PowerVC, and the benefits of using multiple Virtual I/O Servers.

3.2.1 Physical server configuration

If you plan to use partition mobility, you must ensure that all servers are configured with the same logical memory block size. This logical memory block size can be changed from the Advanced System Management Interface (ASMI).

3.2.2 HMC or PowerKVM planning

Data centers can contain hundreds of hosts and thousands of VMs. For PowerVC version 1.2.3, the following maximums are suggested:

- PowerVC Standard Edition 1.2.3 managing PowerVM:
  – A maximum of 30 managed hosts is supported.
  – Each host can have a maximum of 500 VMs on it.
  – A maximum of 3,000 VMs can be on all of the combined hosts.
  – Each HMC can have a maximum of 500 VMs on it.
- PowerVC Standard Edition 1.2.3 managing PowerKVM:
  – A maximum of 30 managed hosts is supported.
  – Each host can have a maximum of 225 VMs on it.
  – A maximum of 3,000 VMs can be on all of the combined hosts.

Note: No hard limitations exist in PowerVC. These maximums are suggested from a performance perspective only.

Therefore, you need to consider how to partition your HMCs and kernel-based VM (KVM) hosts into subsets, where each subset is managed by one PowerVC management host.

Advanced installations typically use redundant HMCs to manage the hosts. With version 1.2.3 or later, PowerVC can support hosts that are managed by redundant HMCs. If the HMC that you selected for PowerVC becomes unavailable, change to the working HMC through the PowerVC GUI.

Note: PowerVC uses only one HMC to manage hosts, even when redundant HMCs are defined. You need to change to another HMC manually if the original HMC fails.

3.2.3 Virtual I/O Server planning

Plan to use more than one VIOS if you want a failover VIOS or expanded VIOS functions. PowerVC provides the option to use more than one VIOS.

Consider a second VIOS to provide redundancy and I/O connectivity resilience to the hosts. Use two Virtual I/O Servers to avoid outages to the hosts when you need to perform maintenance, updates, or changes in the VIOS configuration.

If you plan to make partitions mobile, define the VIOS that provides the mover service on all hosts, and ensure that the Mover service partition option is enabled in the profile of these Virtual I/O Servers; a hedged command-line check follows.
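The mover service partition setting can be checked from the HMC command line. The following sketch assumes a managed system named p8-host1 and VIOS partitions named vios1 and vios2 (all placeholder names):

# List the VIOS partitions and show whether each one is flagged as a mover
# service partition (msp=1)
lssyscfg -r lpar -m p8-host1 -F name,lpar_env,msp --filter "lpar_names=vios1,vios2"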
The VIOS must be configured with the "Sync current configuration Capability" option turned ON. On the HMC, verify the settings of the Virtual I/O Servers, as shown in Figure 3-1.

Figure 3-1 VIOS settings that need to be managed by PowerVC

Important: Configure the maximum number of virtual resources (virtual adapters) for the VIOS to at least 200. This setting provides sufficient resources on your hosts while you create and migrate VMs throughout your environment. Otherwise, PowerVC indicates a warning during the verification process.

Changing maximum virtual adapters in a VIOS

From the HMC, on the left panel, click Server Management → Servers → managed_server, select the VIOS, and then click Configuration → Manage Profiles from the drop-down menu. Select the profile that you want to use, and click Actions → Edit. Then, select the Virtual Adapters tab and replace the value in the Maximum virtual adapters field with a new value, as shown in Figure 3-2.

Figure 3-2 Modifying maximum virtual adapters
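The same change can be made from the HMC command line instead of the GUI. The following hedged sketch uses placeholder system, partition, and profile names; the VIOS must be reactivated with the modified profile for the new value to take effect:

# Raise the maximum number of virtual adapters in the VIOS profile to 200
chsyscfg -r prof -m p8-host1 \
    -i "name=default_profile,lpar_name=vios1,max_virtual_slots=200"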
3.3 Placement policies and templates

One goal of PowerVC is to simplify the management of VMs and storage by providing the automated creation of partitions and virtual storage disks and the automated placement of partitions on physical hosts. This automation replaces the manual steps that are needed when you use PowerVM directly: creating disks, selecting all of the parameters that define each partition to deploy, and configuring the mapping between the storage units and the partitions in the Virtual I/O Servers. This automation is performed by using deployment templates and placement policies.

3.3.1 Host groups

Use host groups to group hosts logically, regardless of any features that they might share. For example, the hosts do not need the same architecture, network configuration, or storage. Host groups have these important features:

- Every host must be in a host group. Any host that does not belong to a user-defined host group is a member of the default host group. The default host group cannot be deleted.
- VMs are kept within the host group. A VM can be deployed to a specific host or to a host group. After deployment, if that VM is migrated, it must always be migrated within the host group.
- Placement policies are associated with host groups. Every host within a host group is subject to the host group's placement policy. The default placement policy is striping.

An enterprise client can group its hosts to meet different business needs, for example, for test, development, and production, as shown in Figure 3-3. With different placement policies, even with different hardware, the client can achieve different service levels.

Figure 3-3 Host group sample

3.3.2 Placement policies

When you want to deploy a new partition, you can indicate to PowerVC the host on which you want to create this partition. You can also ask PowerVC to identify the host in a host group on which the partition will best fit, based on a policy that matches your business needs. In that case, PowerVC compares the requirements of the partition with the availability of resources on the possible set of target hosts, and it considers the selected placement policy to make a choice.

PowerVC version 1.2.3 offers five policies to deploy VMs:

Striping placement policy

The striping placement policy distributes your VMs evenly across all of your hosts. For each deployment, PowerVC determines the hosts with enough processing units and memory to meet the requirements of the VM. Other factors for determining eligible hosts include the storage and network connectivity that are required by the VM. From the group of eligible hosts, PowerVC chooses the host that contains the fewest VMs and places the VM on that host.

Packing placement policy

The packing placement policy places VMs on a single host until its resources are fully used, and then it moves on to the next host. For each deployment, PowerVC determines the hosts with enough processing units and memory to meet the requirements of the VM. Other factors for determining eligible hosts include the storage and network connectivity that are required by the VM. From the group of eligible hosts, PowerVC chooses the host that contains the most VMs and places the VM on that host. After the resources on this host are fully used, PowerVC moves on to the next eligible host that contains the most VMs.
This policy can be useful when you deploy large partitions on small servers. For example, suppose that you need to deploy four partitions that require eight, eight, nine, and seven cores on two servers, each with 16 cores. If you use the striping policy, the first two 8-core partitions are deployed across the two servers, which leaves only eight free cores on each; PowerVC cannot deploy the 9-core partition unless a Live Partition Mobility (LPM) operation is performed first. By using the packing policy, the first two 8-core partitions are deployed on the first host, and PowerVC can then deploy the 9-core and 7-core partitions on the second host. This example is simplistic, but it illustrates the difference between the two policies: the striping policy optimizes performance, and the packing policy optimizes human operations.

CPU utilization balance placement policy

This placement policy places VMs on the host with the lowest CPU utilization in the host group. The CPU utilization is computed as a running average over the last 15 minutes.

CPU allocation balance placement policy

This placement policy places VMs on the host that ends up with the lowest percentage of its CPU allocated after deployment or relocation. For example, consider an environment with two hosts:
- Host 1 has 16 total processors, four of which are assigned to VMs.
- Host 2 has four total processors, two of which are assigned to VMs.

Assume that the user deploys a VM that requires one processor. Host 1 would have (4+1)/16, or 5/16, of its processors allocated. Host 2 would have (2+1)/4, or 3/4, of its processors allocated. Therefore, the VM is scheduled to Host 1.

Memory allocation balance placement policy

This placement policy places VMs on the host that ends up with the lowest percentage of its memory allocated after deployment or relocation. For example, consider an environment with two hosts:
- Host 1 has 16 GB of total memory, 4 GB of which is assigned to VMs.
- Host 2 has 4 GB of total memory, 2 GB of which is assigned to VMs.

Assume that the user deploys a VM that requires 1 GB of memory. Host 1 would have (4+1)/16, or 5/16, of its memory allocated. Host 2 would have (2+1)/4, or 3/4, of its memory allocated. Therefore, the VM is scheduled to Host 1.

When a new host is added to a host group that is managed by PowerVC and the placement policy is set to striping, new VMs are deployed on the new host until it catches up with the existing hosts. That is, PowerVC allocates partitions only on this new host until its resource use is about the same as on the previously installed hosts.

Note: A change to the default placement policy does not affect existing VMs. It affects only new VMs that are deployed after the policy setting is changed. Therefore, changing the placement policy for an existing environment does not result in moving existing partitions.

Tip: The following settings might increase the throughput and decrease the duration of deployments:
- Use the striping policy rather than the packing policy.
- Limit the number of concurrent deployments to match the number of hosts.
When a new partition is deployed, the placement algorithm uses several criteria to select the target server, such as the availability of resources and access to the storage that is needed by the new partition. By design, the PowerVC placement policy is deterministic. Therefore, the considered resources are the amounts of processing power and memory that are needed by the partition, as defined in the partition profile (virtual processors, entitlement, and memory). Dynamic resources, such as I/O bandwidth, are not considered, because they would result in a non-deterministic placement algorithm.

Note: The placement policies are predefined. You cannot create your own policies.

The placement policy can also be used when you migrate a VM. Figure 3-4 shows the PowerVC user interface for migrating a partition. Use this interface to select between specifying a specific target or letting PowerVC select a target according to the current placement policy.

Figure 3-4 Migration of a partition by using a placement policy

3.3.3 Template types

Rather than define all of the characteristics for each partition or each storage unit that must be created, the usual way to create them in PowerVC is to instantiate these objects from a template that was previously defined. The amount of effort that is needed to define a template is similar to the effort that is needed to define a partition or storage unit. Therefore, reusing templates saves significant effort for a system administrator who needs to deploy many objects.

PowerVC provides a GUI to help you create or customize templates. Templates can be easily defined to accommodate your business needs and your IT environment. Two types of templates are available:

- Compute templates: These templates define the processing units, memory, and disk space that are needed by a partition. They are described in 3.3.4, "Information that is required for compute template planning" on page 42.
- Storage templates: These templates define storage settings, such as a specific volume type, storage pool, and storage provider. They are described in 3.5.2, "Storage templates" on page 56.

Use the templates to deploy new VMs. This approach propagates the values for all of the resources into the VMs. Templates accelerate the deployment process and create a baseline for standardization.
Templates can be defined by using the Standard view or, for more detailed and specific configuration, the Advanced view, as described in the next section.

3.3.4 Information that is required for compute template planning

The PowerVC 1.2.3 management host provides 11 predefined compute templates. These predefined templates can be edited and removed, and you can also create your own templates. Before you start to create templates, plan the amount of resources that you need for each class of partition. For example, different templates can be used for partitions that are used for development, test, and production, or for database servers, application servers, and web servers.

PowerVC offers two template options:
- Basic: Create micropartitions (shared-processor partitions) by specifying the minimum amount of information.
- Advanced: Create dedicated-processor partitions or micropartitions, with the level of detail that is available on the HMC.

Basic templates

You need the following information to plan a basic template:

- Template name: The name to use for the template.
- Virtual processors: The number of virtual processors. A VM usually performs best if the number of virtual processors is close to the number of processing units that is available to the VM.
- Memory (MB): The amount of memory, in MB. The value for memory must be a multiple of the memory region size that is configured on your host. To see the region size for your host, open the Properties panel for the selected host in the HMC, and then open the Memory tab and record the "memory region size" value. Figure 3-5 on page 44 shows an example.
- Processing units: The number of entitled processing units. A processing unit is the minimum amount of processing resource that the VM can use. For example, a value of 1 (one) processing unit corresponds to 100% use of a single physical processor. Processing units are split between virtual processors, so a VM with two virtual processors and one processing unit appears to the VM user as a system with two processors, each running at 50% speed.
- Disk (GB): The disk space that is needed, in GB.
- Compatibility mode: The processor compatibility mode that you need for your VM. Table 3-10 on page 43 describes each compatibility mode and the servers on which VMs that use each mode can operate.
Table 3-10 Processor compatibility modes

POWER6
  Description: Use the POWER6 processor compatibility mode to run operating system versions that use all of the standard features of the POWER6 processor.
  Supported servers: VMs that use the POWER6 processor compatibility mode can run on servers that are based on POWER6, IBM POWER6+™, POWER7, or POWER8 processors.

POWER6+
  Description: Use the POWER6+ processor compatibility mode to run operating system versions that use all of the standard features of the POWER6+ processor.
  Supported servers: VMs that use the POWER6+ processor compatibility mode can run on servers that are based on POWER6+, POWER7, or POWER8 processors.

POWER7, including POWER7+
  Description: Use the POWER7 processor compatibility mode to run operating system versions that use all of the standard features of the POWER7 processor.
  Supported servers: VMs that use the POWER7 processor compatibility mode can run on servers that are based on POWER7 or POWER8 processors.

POWER8
  Description: Use the POWER8 processor compatibility mode to run operating system versions that use all of the standard features of the POWER8 processor.
  Supported servers: VMs that use the POWER8 processor compatibility mode can run on servers that are based on POWER8 processors.

Default
  Description: The default processor compatibility mode is a preferred mode that enables the hypervisor to determine the current mode for the VM. When the preferred mode is set to Default, the hypervisor sets the current mode to the most fully featured mode that is supported by the operating environment. In most cases, this mode is the processor type of the server on which the VM is activated. For example, assume that the preferred mode is set to Default and the VM is running on a POWER8 processor-based server. The operating environment supports the POWER8 processor capabilities, so the hypervisor sets the current processor compatibility mode to POWER8.
  Supported servers: The servers on which VMs with the preferred processor compatibility mode of Default can run depend on the current processor compatibility mode of the VM. For example, if the hypervisor determines that the current mode is POWER8, the VM can run on servers that are based on POWER8 processors.

Note: For a detailed explanation of processor compatibility modes, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
Advanced templates

You need the following information to plan advanced templates:

- Template name: The name for the template.
- Virtual processors: The number of virtual processors. A VM usually performs best if the number of virtual processors is close to the number of processing units that is available to the VM. You can specify the following values:
  – Minimum: The smallest number of virtual processors that you will accept for deploying a VM.
  – Desired: The number of virtual processors that you want for deploying a VM.
  – Maximum: The largest number of virtual processors that you will allow when you resize a VM. This value is the upper limit for resizing a VM dynamically. When it is reached, you need to power off the VM, edit the profile, change the maximum to a new value, and restart the VM.
- Memory (MB): The amount of memory, expressed in MB. The value for memory must be a multiple of the memory region size that is configured on your host; the minimum value is 16 MB. To see the region size for your host, open the Properties panel for the selected host on the HMC, and then open the Memory tab to view the memory region size. Figure 3-5 shows an example. You can specify the following values:
  – Minimum: The smallest amount of memory that you will accept for deploying a VM. If this value is not available, the deployment does not occur.
  – Desired: The total memory that you want in the VM. The deployment occurs with an amount of memory that is less than or equal to the desired amount and greater than or equal to the minimum amount that is specified.
  – Maximum: The largest amount of memory that you will allow when you resize a VM. This value is the upper limit for resizing a VM dynamically. When it is reached, you need to power off the VM, edit the profile, change the maximum to a new value, and restart the VM.

Figure 3-5 Memory region size view on the HMC
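The memory region size can also be read from the HMC command line rather than from the panel in Figure 3-5; a hedged sketch with a placeholder managed system name follows:

# Display the memory region (logical memory block) size, in MB, of the
# managed system
lshwres -r mem -m p8-host1 --level sys -F mem_region_size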
Processing units
  Number of entitled processing units. A processing unit is the minimum amount of processing resource that the VM can use. For example, a value of 1 (one) processing unit corresponds to 100% use of a single physical processor. The setting of processing units is available only for shared partitions, not for dedicated partitions. You can specify the following values:
  - Minimum: The smallest number of processing units that you will accept for deploying a VM. If this value is not available, the deployment will not occur.
  - Desired: The number of processing units that you want for deploying a VM. The deployment will occur with a number of processing units that is less than or equal to the desired value and greater than or equal to the minimum value.
  - Maximum: The largest number of processing units that you will allow when you resize a VM. This value is the upper limit to which you can resize dynamically. When it is reached, you need to power off the VM, edit the profile, change the maximum value to a new value, and restart the VM.

Important: Processing units and virtual processors are values that work closely together and must be calculated carefully. For more information about virtual processors and processing units, see IBM PowerVM Virtualization Managing and Monitoring, SG24-7590.

Disk (GB)
  Disk space that is needed, in GB.

Note: Use the advanced template to define only the amount of storage that you need. You cannot use the advanced template to specify a number of volumes to create.

Compatibility mode
  Select the compatibility mode that is needed for your VM. Table 3-10 on page 43 lists each processor compatibility mode and the servers on which the VMs that use each processor compatibility mode can successfully operate.

Enable virtual machine remote restart
  With PowerVC version 1.2.3 or later, users can easily remote restart a VM on another host if the current host fails. This feature enhances the availability of applications and complements the solutions that are based on PowerHA and Live Partition Mobility (LPM).

Note: This function is based on the PowerVM simplified remote restart function and, at the time that this book was written, is supported only on POWER8 servers. For the requirements of remote restart, see the IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_recovery_reqs_hmc.html

Shared processors or dedicated processors
  Decide whether the VM will use processing resources from a shared processor pool or dedicated processor resources.
Option A: Shared processors settings

The following values are available for option A:

Uncapped
  Uncapped VMs can use processing units that are not being used by other VMs, up to the number of virtual processors that is assigned to the uncapped VM.

Capped
  Capped VMs can use only the number of processing units that are assigned to them.

Weight (0 - 255)
  If multiple uncapped VMs require unused processing units, the uncapped weights of the uncapped VMs determine the ratio of unused processing units that are assigned to each VM. For example, an uncapped VM with an uncapped weight of 200 receives two processing units for every processing unit that is received by an uncapped VM with an uncapped weight of 100.

Option B: Dedicated processor settings

The following values are available for option B:

Idle sharing
  This setting enables this VM to share its dedicated processors with other VMs when this VM is powered on and idle (also known as a dedicated donating partition).

Availability priority
  To avoid shutting down mission-critical workloads when your server firmware unconfigures a failing processor, set availability priorities for the VMs (0 - 255). A VM with a failing processor can acquire a replacement processor from a VM with a lower availability priority. The acquisition of a replacement processor allows the VM with the higher availability priority to continue running after a processor failure.

3.4 PowerVC storage access SAN planning

In the PowerVC Standard Edition, VMs can access their storage by using any of three protocols:
- Classical vSCSI, as described in “vSCSI storage access” on page 47
- NPIV, as described in “NPIV storage access” on page 49
- vSCSI to a shared storage pool (SSP), as described in “Shared storage pool: vSCSI” on page 50

A minimum configuration of the SAN and storage is necessary before PowerVC can use them. For example, PowerVC will create virtual disks on storage devices, but these devices must be set up first. You must perform the following actions before you use PowerVC:
- Configuration of the FC fabric for the PowerVC environment must be planned first: cable attachments, SAN fabrics, and redundancy. It is common to create at least two independent fabrics to provide SAN redundancy.

Note: PowerVC assumes that all hosts can access all registered storage controllers. The cabling must be performed in a way so that all hosts can access the same set of storage devices.
- PowerVC provides storage for VMs through the VIOS. With PowerVC Standard Edition, the storage is accessed by using NPIV, vSCSI, or an SSP that uses vSCSI. The VIOS and SSP must be configured before PowerVC can manage them.
- The SAN switch administrator user ID and password must be set up. They will be used by PowerVC.
- The storage controller administrator user IDs and passwords must be set up so that SAN logical unit numbers (LUNs) can be created.
- For vSCSI, turn off SCSI reserves for volumes that are being discovered on all the Virtual I/O Servers that are used for vSCSI connections. This action is required for LPM operations and for dual Virtual I/O Servers.
- For vSCSI and SSP, initial zoning must be established to provide access from the Virtual I/O Servers to the storage controllers.
- In PowerVC Standard Edition, you need to create a VM manually to capture your first image. Prepare by performing these tasks:
  – The VIOS must be set up for NPIV or vSCSI to provide access from the VM to the SAN.
  – For NPIV, SAN zoning must be configured to provide access from the virtual FC ports in the VM to the storage controllers.
  – The OS must be installed in the first VM, and the activation engine or cloud-init must be installed and used.

After PowerVC Standard Edition can access the storage controllers and switches, it can perform these tasks:
- Collect inventory on the FC fabric
- Collect inventory on storage devices (pools and volumes)
- Monitor health
- Detect misconfigurations
- Manage zoning
- Manage LUNs on storage devices

3.4.1 vSCSI storage access

With PowerVC version 1.2.2 or later, you can use vSCSI to access SAN storage in the PowerVC environment. Before you use vSCSI-attached storage in PowerVC, you need to perform the following steps.

1. Turn off SCSI reserves for volumes that are being discovered on all the Virtual I/O Servers that are used for vSCSI connections. This step is required for LPM operations and for dual Virtual I/O Servers. For the IBM Storwize family, XIV, and EMC devices that use the AIX path control module (PCM) model, you must run the following command on every VIOS where vSCSI operations will be run:

   chdef -a reserve_policy=no_reserve -c disk -s fcp -t mpioosdisk

Important: If you connect a VM through several FC adapters (and therefore several worldwide port names (WWPNs)) to storage devices with several WWPNs, you need to create one zone for each pair of source and target WWPNs. You must not create a single zone with all source and target WWPNs.
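For the initial zoning between VIOS and storage controller ports, the one-zone-per-pair rule from the preceding Important box applies. As an illustration only, on a Brocade FOS switch the commands might look like the following sketch; the zone name, configuration name, and WWPNs are placeholders, and other switch vendors use different syntax:

zonecreate "z_vios1_fcs0_v7k_p1", "c0:50:76:00:aa:bb:00:01;50:05:07:68:02:10:00:01"
cfgadd "cfg_fabricA", "z_vios1_fcs0_v7k_p1"
cfgenable "cfg_fabricA"

Repeat the zonecreate step once per initiator-target WWPN pair rather than putting all WWPNs into one zone.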
2. You must configure all zoning between the VIOS and the storage device ports so that you can import vSCSI environments easily and use any number of fabrics with vSCSI.

3. You might need to increase the pre_live_migration_timeout setting in nova.conf if many vSCSI-attached volumes are on the VM or a heavy load is on the destination host’s Virtual I/O Servers. Increasing this setting provides the additional time that is required to process many vSCSI-attached volumes.

Note: You must use the chdef command, not the chdev command.

Important: Step 1 is mandatory. Different commands exist for other multipath I/O drivers. See the documentation of the drivers to learn how to turn off SCSI reserves.

Figure 3-6 shows how VMs in PowerVC Standard Edition access storage by using vSCSI. The flow of storage management from physical storage LUNs to VMs in PowerVC Standard Edition with vSCSI is as follows:
- LUNs are provisioned on a supported storage controller.
- LUNs are masked to VIOS FC ports and are discovered as hdisk logical devices in the VIOS.
- LUNs are mapped (by using mkvdev) from the VIOS to VMs over a vSCSI virtual adapter pair.

These steps are completed automatically by PowerVC. No zoning is involved, because individual VMs do not access physical LUNs directly over the SAN.

Figure 3-6 PowerVC Standard Edition storage access by using vSCSI
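Two quick command-line follow-ups to steps 1 and 3. First, you can confirm on each VIOS that a discovered disk picked up the new reserve policy. This is a sketch that assumes hdisk2 is one of the candidate vSCSI volumes (lsattr is run from the root shell that oem_setup_env opens):

$ oem_setup_env
# lsattr -El hdisk2 -a reserve_policy
reserve_policy no_reserve Reserve Policy True

Second, if you need to raise pre_live_migration_timeout, you can use the same openstack-config utility on the PowerVC management host that this book uses for other nova.conf changes. The section name (DEFAULT) and the value of 600 seconds are assumptions to adapt to your environment:

openstack-config --set /etc/nova/nova.conf DEFAULT pre_live_migration_timeout 600
/opt/ibm/powervc/bin/powervc-services nova restart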
3.4.2 NPIV storage access

Figure 3-7 shows how VMs access storage through NPIV with PowerVC Standard Edition. The following list describes the actions that are performed by PowerVC Standard Edition to manage the flow of storage from physical storage LUNs to VMs:
- Access to the SAN from VMs is configured on the Virtual I/O Servers by using a virtual FC adapter pair and NPIV (the vfcmap command).
- LUNs are provisioned on a supported storage controller.
- LUNs are masked to VM virtual FC ports.
- SAN zoning is adjusted so that VMs have access from their virtual FC ports to the storage controller host ports. Changes in zoning are performed automatically by PowerVC Standard Edition.
- LUNs are viewed as logical devices in the VMs.

These actions are completed automatically by PowerVC Standard Edition. PowerVC manages the storage, SAN, and Virtual I/O Servers (through the HMC): it instructs the VIOS to map virtual FC adapters to VMs (NPIV), manages the SAN zoning (zones are storage host ports to VM virtual FC ports), and manages LUNs and LUN masking on the storage, with LUNs masked directly to the VMs. Dual VIOS configurations are supported.

Figure 3-7 PowerVC Standard Edition storage access by using NPIV
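The NPIV mapping that PowerVC drives through the HMC and VIOS corresponds to the following VIOS commands. This is a sketch for orientation only; the vfchost0 and fcs0 device names are placeholders, and in a PowerVC-managed environment you normally let PowerVC issue the mapping itself:

$ vfcmap -vadapter vfchost0 -fcp fcs0
$ lsmap -all -npiv

The lsmap -all -npiv output lets you verify which virtual FC server adapters are mapped to which physical FC ports.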
3.4.3 Shared storage pool: vSCSI

Figure 3-8 shows how VMs access storage in an SSP with PowerVC Standard Edition. The flow of storage management from physical storage LUNs to VMs in PowerVC Standard Edition is as follows:
- Access to storage from the Virtual I/O Servers by using physical FC adapters is set up manually. Zoning must also be done manually; the zones contain storage device or SAN Volume Controller (SVC) ports and VIOS FC ports.
- The SSP is configured manually: creation of a cluster, inclusion of Virtual I/O Servers in the cluster, and addition of disks to the pool.
- PowerVC discovers the SSP when it discovers the Virtual I/O Servers.
- PowerVC can create logical units (LUs) in the shared storage pool when it creates a new VM.
- PowerVC instructs the VIOS to map the SSP LUs to the client partitions, which access them through vSCSI devices. Dual VIOS configurations are supported.

Figure 3-8 PowerVC Standard Edition storage access by using an SSP

3.4.4 Storage access in PowerVC Standard Edition managing PowerKVM

Figure 3-9 shows how VMs access storage with PowerVC Standard Edition managing PowerKVM. The following list is a description of the flow of storage management from host internal storage to VMs in PowerVC Standard Edition managing PowerKVM:
- PowerKVM accesses the internal storage on the host.
- PowerVC manages the internal storage when a PowerKVM host is added for management.
- LUN requests are created automatically by PowerVC and mapped to the VMs.

The flow of storage management from SAN storage to VMs in PowerVC Standard Edition managing PowerKVM by using iSCSI is as follows:
- SAN storage is made available through the Ethernet network by configuring access over the iSCSI protocol.
- PowerVC manages the SAN storage when the storage provider is added.
- LUN requests are created automatically by PowerVC and mapped to VMs.

Figure 3-9 PowerVC Standard Edition managing PowerKVM storage access

3.5 Storage management planning

PowerVC manages storage volumes, which can be attached to VMs. These storage volumes can be backed by IBM Storwize storage devices, SAN Volume Controller devices, IBM XIV storage devices, EMC VMAX storage devices, EMC VNX storage devices, or SSP files. PowerVC requires IP connectivity to the storage providers to manage the storage volumes.

3.5.1 PowerVC terminology

PowerVC uses a few terms and concepts that differ from terms that are used in PowerVM:

Storage provider
  Any system that provides storage volumes. In version 1.2.3 of PowerVC, storage providers can be IBM Storwize devices, SAN Volume Controller devices that hide the real storage unit that holds the data, IBM XIV devices, EMC VMAX storages, EMC VNX storages, or an SSP. Figure 3-10 shows a PowerVC environment that manages three storage providers: one IBM Storwize V7000, one IBM XIV storage, and
one EMC VMAX storage. PowerVC also refers to storage providers as storage controllers.

Figure 3-10 PowerVC storage providers
Fabric
  Another name for a SAN switch. Figure 3-11 shows a PowerVC Fabrics window that displays information for a switch that is named fswitch, with IP address 172.16.21.139. Click this address on the Fabrics window to open the graphical view of the switch.

Figure 3-11 Fabrics window that lists a switch with a switch GUI
Storage pool
  A storage resource that is defined on the storage provider in which PowerVC can create volumes. PowerVC cannot create or modify storage pools; it can only discover them. The storage pools must be managed directly from the storage providers. Figure 3-12 shows the detail of an IBM Storwize V7000 storage provider that is configured with two storage pools for different purposes.

Figure 3-12 Storage pools

Shared storage pool
  In PowerVC, this shared storage resource refers to the PowerVM shared storage pool (SSP) feature. The SSP cannot be created or modified by PowerVC. You must create the SSP on the VIOS before PowerVC can create volumes on the SSP.

Volume
  A volume is also referred to as a disk or a logical unit number (LUN). Volumes are carved from the storage pools and presented as virtual disks to the partitions that are managed by PowerVC.
Storage template
  This template defines the properties of a storage volume, such as location, thin provisioning, and compression. For example, by using the templates that are shown in Figure 3-13, you can create volumes that are either a normal thin-provisioned volume or a mirrored volume. For more information, see 3.5.2, “Storage templates” on page 56.

Figure 3-13 Storage templates

Storage connectivity group
  A set of Virtual I/O Servers with access to the same storage controllers. For more information, see 3.5.3, “Storage connectivity groups and tags” on page 58.

Tags
  Tags are a way to partition the FC ports of a host into sets that can be associated with sets of Virtual I/O Servers. For more information, see 3.5.3, “Storage connectivity groups and tags” on page 58.
3.5.2 Storage templates

Storage templates are used to speed up the creation of new disks. A storage template defines several properties of the disk unit. Disk size is not part of the template. The information that is defined in a template differs for different types of storage devices. We introduce only the IBM Storwize storage template, which is a common type of storage that is used in the PowerVC environment.

IBM Storwize storage template definition

The following information is defined in a template:
- Name of the storage template.
- Storage provider. The template is associated with a single storage provider. It cannot be used to instantiate disks from multiple storage providers.
- Storage pool within the storage provider. The template is associated with a single storage pool. With PowerVC version 1.2.3 or later, you can add another pool to support volume mirroring in the Advanced settings area.
- Thin, thick (full), or compressed provisioning. To choose thick provisioning, select the Generic type of volume.
- Advanced settings area. The following information is defined in the Advanced settings area:
  – I/O group: The I/O group to add the volume to. For the SAN Volume Controller, a maximum of four I/O groups is supported.
  – % of virtual capacity: Determines how much real storage capacity is allocated to the volume at creation time, as a percentage of the maximum size that the volume can reach.
  – Automatically expand: Select Yes or No. This feature prevents the volume from using all of its capacity and going offline. As a thin-provisioned volume uses more of its capacity, this feature maintains a fixed amount of unused real capacity, which is called the contingency capacity.
  – Warning threshold: When the real capacity reaches a specific percentage of the virtual capacity, a warning alert is sent.
  – Grain size: The thin-provisioned grain size can be selected in the range from 32 KB to 256 KB. A grain is a chunk that is used for allocating space. The grain size affects the maximum virtual capacity for the volume. Generally, smaller grain sizes save space but require more metadata access, which can affect performance adversely. The default grain size is 256 KB, which is the strongly recommended option. The grain size cannot be changed after the thin-provisioned volume is created.
  – Use all available WWPNs for attachment: Specifies whether to enable multipath zoning. When this setting is enabled, PowerVC uses all available WWPNs from all of the I/O groups in the storage controller to attach the volume to the VM. Enabling multipath causes each WWPN that is visible on the fabric to be zoned to the VM.
  – Enable mirroring: When this option is checked, you need to select another pool for volume mirroring. The volume that is created will have one more copy in the mirroring pool. IBM Storwize clients can use two pools that are based on two different back-end storage devices to provide high availability.

A storage template can then be selected during volume creation operations.
Figure 3-14 shows the dialog window that is presented to a PowerVC administrator when the administrator defines the advanced settings for a thin-provisioned storage template definition.

Figure 3-14 Storage template definition: Advanced settings, thin-provisioned

Storage template planning

When you register a storage provider with PowerVC, a default storage template is created for that provider. We suggest that you edit this default template to suit your needs immediately after PowerVC discovers the storage provider. You can define several storage templates for one storage provider. If the storage provider contains several storage pools, at least one storage template is needed for each pool before those pools can be used to create volumes.

Note: After a disk is created and uses a template, you cannot modify the template settings.
When you create a storage volume, you must select a storage template. All of the properties that are specified in the storage template are applied to the new volume, which is created on the storage provider that is specified in the storage template. To create a disk, you need to enter only the name of the template to use, the volume name, and the size. Decide whether to select the Enable sharing check box. See Figure 3-15.

Figure 3-15 Volume creation

A storage template must also be specified when you deploy a new VM to control the properties of the virtual server’s boot volumes and data volumes.

PowerVC can manage pre-existing storage volumes. You can select them when you register the storage device or at any later time. Pre-existing storage volumes do not have an associated storage template.

3.5.3 Storage connectivity groups and tags

PowerVC Standard Edition uses storage connectivity groups and tags.

Storage connectivity groups

When you create a VM, PowerVC needs a way to identify on which host it has to deploy this machine. One of the requirements is that, from this host, the VM can connect to its storage. Also, when you request PowerVC to migrate a VM, PowerVC must ensure that the target host also provides the VM with connectivity to its volumes.

The purpose of a storage connectivity group is to define sets of hosts with access to the same storage devices where a VM can be deployed. A storage connectivity group is a set of Virtual I/O Servers with access to the same storage controllers. It can span several host systems on IBM Power Systems servers with landscapes that are managed by PowerVC Standard Edition.
When you deploy a new VM with PowerVC, a storage connectivity group must be specified. The VM will be associated with that storage connectivity group during the VM’s existence. A VM can be deployed only on Power Systems hosts that contain at least one VIOS that is part of the storage connectivity group. Specifying the storage connectivity group that a VM belongs to defines the set of hosts on which this VM can be deployed. The VM can be migrated only within its associated storage connectivity group and host group. PowerVC ensures that the source and destination servers can access the required storage controllers and LUNs.

Default storage connectivity groups are automatically created when PowerVC discovers the environment. These default connectivity groups contain all Virtual I/O Servers that access the same devices. Figure 3-16 shows the result of the discovery by PowerVC of an environment with the following conditions:
- Two POWER8 servers exist.
- Each server hosts two Virtual I/O Servers.
- Each VIOS has two FC ports.
- All Virtual I/O Servers connect to an IBM Storwize V7000.

PowerVC automatically created two storage connectivity groups: one storage connectivity group for NPIV storage access and one storage connectivity group for vSCSI storage access. These two storage connectivity groups correspond to the two ways that partitions can access storage from these hosts.

Figure 3-16 List of storage connectivity groups

The default storage connectivity groups can be disabled but not deleted. For more information, see 5.9, “Storage connectivity group setup” on page 116.

The system administrator can define additional storage connectivity groups to further constrain the selection of host systems. You can use storage connectivity groups to group host systems together into, for example, production and development groups. On large servers that are hosting several Virtual I/O Servers, you can use storage connectivity groups to direct partitions to use a specific pair of Virtual I/O Servers on each host.
Figure 3-17 shows a diagram of storage connectivity group technology. It includes two Power Systems servers, each with three Virtual I/O Servers. Two Virtual I/O Servers from each server are part of the production storage connectivity group (called Production SCG in the figure), and one VIOS from each server is part of the development storage connectivity group (Development SCG). The VMs that are named VM1, VM2, VM4, and VM5 are associated with the production storage connectivity group, and their I/O traffic passes through the FC ports of the A1, A2, B1, and B2 Virtual I/O Servers. The development partitions VM3 and VM6 are associated with the development storage connectivity group, and their traffic is limited to using the FC ports that are attached to Virtual I/O Servers A3 and B3.

Figure 3-17 Storage connectivity groups

Tip: A storage connectivity group can be modified after its creation to, for example, add or remove Virtual I/O Servers. Therefore, when your environment changes, you can add new hosts and include their Virtual I/O Servers in existing storage connectivity groups.
Figure 3-18 shows how PowerVC presents the detail of a storage connectivity group. It is similar to the production storage connectivity group of the previous example, with two servers, two Virtual I/O Servers for each server, and two ports for each VIOS.

Figure 3-18 Content of a storage connectivity group

Storage port tags

PowerVC Standard Edition introduces a concept that does not exist within PowerVM: storage port tags. PowerVC allows arbitrary tags to be placed on FC ports. A storage connectivity group can be configured to connect only through FC ports with a specific tag.

Storage connectivity groups that share a VIOS can use different physical FC ports on the VIOS. The PowerVC administrator handles this function by assigning different port tags to the physical FC ports of the VIOS. These tags are labels that can be assigned to specific FC ports across your hosts. A storage connectivity group can be configured to connect only through FC ports that have the same tags when you deploy with NPIV direct connectivity. Port tagging is not effective when you use an SSP.

Note: An FC port can have no tag or one tag. This tag can change over time, but a port cannot have two or more tags simultaneously.

Combining a storage connectivity group and tags

By using both the storage connectivity group and tag functions, you can easily manage different configurations of SAN topology that fit your business needs for partitioning the SAN and restricting disk I/O traffic to part of the SAN.
Figure 3-19 shows an example of possible tag usage. The example consists of two IBM Power Systems servers, each with two Virtual I/O Servers. Each VIOS has three FC ports. The first two FC ports are tagged ProductionSCG and connect to a redundant production SAN. The third port is tagged DevelopmentSCG and connects to a development SAN. Client VMs that belong to either storage connectivity group (ProductionSCG or DevelopmentSCG) share the same Virtual I/O Servers but do not share FC ports.

Figure 3-19 Storage connectivity groups and tags
The Virtual I/O Servers in a storage connectivity group provide storage connectivity to a set of VMs with common requirements. An administrator can use several approaches to configure storage connectivity groups. Figure 3-20 shows these possible scenarios:

Uniform
  All VMs use all Virtual I/O Servers and all FC ports.

Virtual I/O Server segregation
  Different groups of VMs use different sets of Virtual I/O Servers, but all FC ports on each VIOS.

Port segregation
  Different groups of VMs use all Virtual I/O Servers, but different FC ports according to the tags on those ports.

Combination
  In a combination of VIOS and port segregation, different groups of VMs use different sets of Virtual I/O Servers and different FC ports according to the tags on those ports.

Figure 3-20 Examples of storage connectivity group deployments

3.6 Network management planning

A network represents a set of Layer 2 and Layer 3 network specifications, such as how your network is subdivided by using VLANs, and information about the subnet mask, gateway, and other characteristics. When you deploy an image, you choose one or more existing networks to apply to the new VM. Setting up networks in advance reduces the amount of information that you need to enter during each deployment and helps to ensure a successful deployment.
The first selected network is the management network that provides the primary system default gateway address. You can add additional networks to divide the traffic and provide more functions.

PowerVC supports IP addresses by using hardcoded (/etc/hosts) or Domain Name Server (DNS)-based host name resolution. PowerVC also supports Dynamic Host Configuration Protocol (DHCP) or static IP address assignment. For DHCP, an external DHCP server is required to provide the addresses on the VLANs of the objects that are managed by PowerVC.

Note: When you use DHCP, PowerVC is not aware of the IP addresses of the VMs that it manages.

3.6.1 Multiple network planning

Each VM that you deploy must be connected to one or more networks. By using multiple networks, you can split traffic. The PowerVC management host uses three common types of networks when it deploys VMs:

Data network
  This network provides the route over which workload traffic is sent. At least one data network is required for each VM, and more than one data network is allowed.

Management network
  This type of network is optional but highly suggested to provide a higher level of function and security to the VMs. A management network provides the Resource Monitoring and Control (RMC) connection between the management console and the client logical partition (LPAR). VMs are not required to have a dedicated management network, but a dedicated management network simplifies the management of advanced features, such as LPM and dynamic reconfiguration. PowerVC provides the ability to connect to a management network. First, you must set up networking on the switches and the shared Ethernet adapter to support it.

Live Partition Mobility (LPM) network
  This optional network provides the route over which migration data is sent from one host to another host. By separating this data onto its own network, you can shape that network traffic to specify a higher or lower priority than data or management traffic. If you do not want to use a separate network for LPM, you can reuse an existing data or management network connection for LPM.

Since version 1.2.2, PowerVC can dynamically add a network interface controller (NIC) to a VM or remove a NIC from a VM. PowerVC will not set the IP address for new network interfaces that are created after the machine deployment. Any removal of a NIC frees the IP address that was set on it.

Tip: We suggest that you create all of the networks that are needed for future VM creation. Contact your network administrator to add all of the needed VLANs on the switch ports that will be used by the shared Ethernet adapters (PowerVM) or network bridges (PowerKVM). This action drastically reduces the amount of time that is needed for network management (no more actions for PowerVC administrators and network teams).
3.6.2 Shared Ethernet adapter planning

Set up the shared Ethernet adapters for a registered host before you use the host within PowerVC. The configuration of each shared Ethernet adapter determines how each host treats networks. PowerVC requires that the shared Ethernet adapters are created before you start to manage the systems. If you are using a shared Ethernet adapter in sharing or auto mode with VLAN tagging, we suggest that you create it without any VLANs assigned on the Virtual Ethernet Adapters (see the sketch at the end of this section). PowerVC adds or removes the VLANs on the shared Ethernet adapters when necessary (at VM creation and deletion):
- If you deploy a VM on a new network, PowerVC adds the VLAN on the shared Ethernet adapter.
- If you delete the last VM of a specific network (for a host), the VLAN is automatically deleted. If the VLAN is the last VLAN that was defined on the Virtual Ethernet Adapter, this VLAN is removed from the shared Ethernet adapter.
- If you are using a shared Ethernet adapter and one of the following settings is true:
  – High availability mode set to sharing: PowerVC ensures that at least two Virtual Ethernet Adapters are kept in the shared Ethernet adapter.
  – High availability mode set to auto: PowerVC ensures that at least one Virtual Ethernet Adapter is kept in the shared Ethernet adapter.

PowerVC then connects VMs to that shared Ethernet adapter, deploys client-level VLANs to it, and allows dynamic reconfiguration of the network to shared Ethernet adapter mapping.

When you create a network in PowerVC, a shared Ethernet adapter is automatically chosen from each registered host, based on the VLAN that you specified when you defined the network. If the VLAN does not exist yet on the shared Ethernet adapter, PowerVC deploys that VLAN to the shared Ethernet adapter that is specified. VLANs are deployed only as VMs need them to reduce the broadcast domains. You can dynamically change the shared Ethernet adapter to which a network is mapped, or you can remove the mapping, but remember that this assignment is a default automatic assignment when you set up your networks. It might not match your organization’s naming policies.

The shared Ethernet adapter that is chosen as the default adapter has the same network VLAN as the new network. If a shared Ethernet adapter with the same VLAN does not exist, PowerVC chooses as the default the shared Ethernet adapter with the lowest primary VLAN ID Port Virtual LAN Identifier (PVID) that is in an available state.

Important: When multiple Ethernet adapters exist on either or both the migration source host or destination host, PowerVC cannot control which adapter is used during the migration. To ensure the use of a specific adapter for your migrations, configure an IP address on the adapter that you want to use.

Note: To manage PowerVM, PowerVC requires that at least one shared Ethernet adapter is defined on the host.
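As an illustration of creating a shared Ethernet adapter without trunk VLANs before you register the host, the following VIOS sketch creates an SEA in sharing mode with two Virtual Ethernet Adapters and a control channel. All device names (ent0, ent4, ent5, ent6) and the PVID are placeholders for your own environment:

$ mkvdev -sea ent0 -vadapter ent4,ent5 -default ent4 -defaultid 1 -attr ha_mode=sharing ctl_chan=ent6

For high availability mode auto, one Virtual Ethernet Adapter (for example, -vadapter ent4) is sufficient, as Table 3-11 on page 67 summarizes.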
Certain configurations might ensure the assignment of a particular shared Ethernet adapter to a network. For example, if the VLAN that you choose when you create a network in PowerVC is the PVID of the shared Ethernet adapter or one of the additional VLANs of the primary Virtual Ethernet Adapter, that shared Ethernet adapter must back the network. No other options are available. Plan more than one VIOS if you want a failover VIOS or expanded VIOS functionality.

In our experience, certain clients want to keep their slot-numbering convention. By default, PowerVC adds and removes Virtual Ethernet Adapters from the shared Ethernet adapter by choosing the next available slot ID. If you want to avoid this behavior, modify all of the /etc/nova/nova*.conf files and change the automated_powervm_vlan_cleanup attribute to False by using the following command:

openstack-config --set /etc/nova/nova.conf DEFAULT automated_powervm_vlan_cleanup False

If hosts are already defined, set this attribute in each nova-*.conf file (one for each host), for example:

openstack-config --set /etc/nova/nova-828642A_10D6D5T.conf DEFAULT automated_powervm_vlan_cleanup False

Then, restart the PowerVC Nova service:

/opt/ibm/powervc/bin/powervc-services nova restart

Tip: Systems that use multiple virtual switches are supported. If a network is modified to use a different shared Ethernet adapter and that existing VLAN is already deployed by other networks, those other networks also move to the new adapter. To split a single VLAN across multiple shared Ethernet adapters, break those shared Ethernet adapters into separate virtual switches. PowerVC supports the use of virtual switches in the system. Use multiple virtual switches when you want to separate a single VLAN across multiple distinct physical networks.

If you create a network, deploy VMs to use it, and then change the shared Ethernet adapter to which that network is mapped, your workloads will be affected. The network will experience a short outage while the reconfiguration takes place.

In environments with dual Virtual I/O Servers, the secondary shared Ethernet adapter is not shown except as an attribute on the primary shared Ethernet adapter.
Table 3-11 lists preferred practices for creating and using shared Ethernet adapters (SEAs). The use of SEAs is a preferred practice.

Table 3-11 Preferred practices for shared Ethernet adapter

New host:
  - High availability mode auto: Create the shared Ethernet adapter with one VEA. Do not put any VLANs on the VEA.
  - High availability mode sharing: Create the shared Ethernet adapter with two VEAs. Do not put any VLANs on the VEAs.

Existing host (keep the numbering convention):
  - High availability mode auto or sharing: Set automated_powervm_vlan_cleanup in nova-*.conf to False.

Existing host (let PowerVC manage numbering the adapters):
  - High availability mode auto or sharing: Do nothing.

3.7 Planning users and groups

To access the PowerVC GUI, you must enter a user ID. This user ID is one of the user IDs that is defined on the underlying Linux operating system. PowerVC also takes advantage of the operating system groups. Changes to users and groups are managed by the operating system, and they are reflected immediately on PowerVC.

3.7.1 User management

When you install PowerVC, it is configured to use the security features of the operating system on the management host, by default. This configuration sets the root operating system user account as the only available account with access to the PowerVC server. We recommend that you create at least one new system administrator user account to replace the root user account as the PowerVC management administrator. For more information, see “Adding user accounts” on page 68. After a new administrator ID is defined, remove the PowerVC administrator rights from the root user ID, as explained in “Disable the root user account from PowerVC” on page 71.

Important: The PowerVC management host stores data in an IBM DB2 database. When the installation of PowerVC is complete, an operating system user account is created for the main DB2 process to run under. This user account is pwrvcdb. Do not remove or modify this user. PowerVC also requires other user IDs that are defined in /etc/passwd, and they must not be modified, such as nova, neutron, keystone, and cinder. All of these users are used by DB2 and OpenStack, and they must not be modified or deleted. For security, you cannot connect remotely with these user IDs. These users are configured for no login.
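You can quickly confirm that these service accounts exist and are configured for no login. The following check is a simple sketch (the exact shell shown in the output, such as /sbin/nologin, can vary by release):

# grep -E "^(pwrvcdb|nova|neutron|keystone|cinder):" /etc/passwd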
User account planning is important for defining standard accounts and the process and requirements for managing these accounts. A PowerVC management host can take advantage of user accounts that are managed by the Linux operating system security tools, or it can be configured to use the services that are provided by LDAP.

Operating system user account management

Each user is added, modified, or removed by the system administrator by using Linux operating system commands. After the user ID is defined on the operating system, the user ID becomes available in PowerVC if it is a member of a group with a PowerVC role that is granted, such as admin, deployer, or viewer (see 3.7.2, “Group management planning” on page 71). Operating system-based user management requires command-line experience, but it is easy to maintain. No dependency exists on other servers or services.

To see user accounts in the PowerVC management hosts, click Users in the top navigation bar of the PowerVC GUI. Use the underlying Linux commands to manage your accounts (useradd, usermod, or userdel, for example).

The system administrator of the PowerVC management host must replace the default root user account configuration. After the system administrator adds the new user account to the admin group in the operating system, the root user must be removed from this group.

Adding user accounts

To add a user account to the operating system on the PowerVC management host, run the following command as root from the Linux command-line interface (CLI):

# useradd [options] login_name

Assume that you want to create a user ID for a system administrator who is new to PowerVC. You want to allow this administrator only to view the PowerVC environment, not to act on any of the managed objects. Therefore, you want to give this administrator only the viewer privilege. By using the command that is shown in Example 3-1, create the user viewer1, with /home/viewer1 as the home and base directory, the viewer group as the main group, and a comment with additional information, such as "One viewer account".

Example 3-1 Adding a viewer user account with the useradd command
useradd -d /home/viewer1 -g viewer -m -c "One viewer account" viewer1
The new user is created with the viewer role in the PowerVC management host because it is part of the viewer user group. Double-click the viewer1 user account to see detailed information, as shown in Figure 3-21. After the administrator is skilled enough with PowerVC to start managing the environment, you can change the administrator’s group to give the administrator more management privileges, as described in “Update user accounts” on page 70.

In addition to the viewer group, the admin or deployer group can be assigned to a user. Use these commands to create users with the deployer and admin roles:
- Deployer: useradd -d /home/deployer1 -g deployer -m -c "One deployer account" deployer1
- Admin: useradd -d /home/admin1 -g admin -m -c "One admin account" admin1

In the example in Figure 3-21, three user IDs (admin1, deployer1, and viewer1) were added in addition to the initial root user ID. Figure 3-21 shows the new accounts.

Figure 3-21 Users information

Note: Do not forget to set a password for the new users if you want to log in with these accounts on the PowerVC GUI.
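For example, to set the password for the viewer account that was created earlier (you are prompted to enter the new password twice):

# passwd viewer1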
Figure 3-22 shows the new user admin1 that was added to the admin group.

Figure 3-22 Detailed user account information

You can verify each user and group in the /etc/group or /etc/passwd file, as shown in Example 3-2.

Example 3-2 Verify users
# grep -wE "viewer|deployer|admin" /etc/group
admin:x:1001:root
deployer:x:1002:
viewer:x:1003:
# grep -wE "viewer1|deployer1|admin1" /etc/passwd
viewer1:x:1001:1003:One viewer account:/home/viewer1:/bin/bash
deployer1:x:1002:1002:One deployer account:/home/deployer1:/bin/bash
admin1:x:1003:1001:One admin account:/home/admin1:/bin/bash

Update user accounts

To update a user account in the operating system on the PowerVC management host, run the following command as root:

# usermod [options] login_name

Use the command that is shown in Example 3-3 to update the admin user account with the comment "IBM PowerVC admin user account" and move it to the admin user group.

Example 3-3 Updating the admin user account with the usermod command
usermod -c "IBM PowerVC admin user account" -g admin admin
After this modification, the admin user account is part of the admin user group and can manage the PowerVC management host, as shown in Figure 3-22 on page 70.

Disable the root user account from PowerVC

Remove the root user account from the admin user group in the PowerVC management hosts by running the following command as root:

gpasswd -d root admin

Important: We strongly recommend that you do not use the root user account on PowerVC. It is a security preferred practice to remove it from the admin group.

Lightweight Directory Access Protocol (LDAP)

LDAP is an open standard for accessing global or local directory services over a network or the Internet. A directory can handle as much information as you need, but it is commonly used to associate names with phone numbers and addresses. LDAP is a client/server solution: the client requests information, and the server answers the request. LDAP can be used as an authentication server. If an LDAP server is configured in your enterprise, you can use that LDAP server for PowerVC user authentication. PowerVC can be configured to query an LDAP server for authentication rather than using operating system user account authentication. Use the powervc-ldap-config command to set up LDAP authentication. See “Configuring LDAP” in the PowerVC section of the IBM Knowledge Center for instructions:
http://www-01.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_ldap_hmc.html

Selecting the authentication method

Plan the authentication method and the necessary accounts before the PowerVC installation. For simplicity of management, we recommend the use of the operating system authentication method to manage user accounts in most PowerVC installations. Use the LDAP authentication method only if an LDAP server is already installed and configured.

3.7.2 Group management planning

By default, PowerVC is configured to use the group security features of the operating system on the management host. PowerVC includes three user groups with the following privileges:

admin
  Users in this group can perform all tasks, and they have access to all resources.

deployer
  Users in this group can perform all tasks, except the following tasks:
  - Adding, updating, or deleting storage systems
  - Adding, updating, or deleting hosts
  - Adding, updating, or deleting networks
  - Viewing users and groups

viewer
  Users in this group can view resources and the properties of resources, but they cannot perform tasks. They cannot view the user and group properties.
Membership in these groups is defined in the operating system. Group management is not performed from PowerVC. To add or remove users from these groups, you must add or remove them in the operating system. Any changes to the operating system groups are reflected on PowerVC.

The PowerVC management host can display the user accounts that belong to each group. Log in to the PowerVC management host, click Users on the top navigation bar of the PowerVC GUI, and then click the Groups tab, as shown in Figure 3-23.

Figure 3-23 Groups tab view under Users on the PowerVC management host

Note: You cannot create your own authorization rules; only viewer, deployer, and admin are available. You cannot fine-tune the user rights with a mechanism, such as role-based access control (RBAC).
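For example, to promote the viewer1 account that was created earlier to the deployer role, change its primary group in the operating system; the change is reflected immediately in PowerVC:

# usermod -g deployer viewer1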
This view displays the default groups. To access detailed information for each group, double-click the group name. Figure 3-24 shows an example of a group that includes three user IDs.

Figure 3-24 Detailed view of the viewer user group on the management host

3.8 Security management planning

PowerVC provides security services that support a secure environment and, in particular, the following security features:
- LDAP support for authentication and authorization information (users and groups).
- The PowerVC Apache web server is configured to use the secured HTTPS protocol. Only Transport Layer Security (TLS) 1.2 is supported. For a list of configuration rules for Internet Explorer, see this website:
  http://www-01.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_hwandsw_reqs_hmc.html
- Host key and certificate verification of hosts, storage, and switches.
- Audit logs, which are recorded and available.

Note: File upload is not supported in Internet Explorer version 9.0. Certain functions will be limited. When you use Internet Explorer version 9.0 or version 10.0, you must select Use TLS 1.2.
3.8.1 Ports that are used by IBM Power Virtualization Center

The set of ports differs between the PowerVC editions (PowerVM and PowerKVM). Information about the ports that are used by PowerVC management hosts for inbound and outbound traffic is on the following IBM Knowledge Center pages:
- PowerVC Standard Edition, for managing PowerVM:
  http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_planning_security_firewall_hmc.html
- PowerVC Standard Edition, for managing PowerKVM:
  http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.help.doc/powervc_planning_security_firewall_kvm.html

Important: If a firewall is configured on the management host, ensure that all ports that are listed on the associated IBM Knowledge Center page are open.

3.8.2 Providing a certificate

A PowerVC management host is installed with a default self-signed certificate and a key. PowerVC can also use certificate authority (CA)-signed certificates. Self-signed certificates are certificates that you create for private use. After you create a self-signed certificate, you can use it immediately. Because anyone can create self-signed certificates, they are not considered publicly trusted certificates.

You can replace default, expired, or corrupted certificates with a new certificate. You can also replace the default certificate with certificates that are requested from a CA. The certificates are installed in the following locations:
- /etc/pki/tls/certs/powervc.crt
- /etc/pki/tls/private/powervc.key

Clients can replace the rsyslog and libvirt certificates for PowerKVM installations. The process to replace the certificates is described in the IBM Knowledge Center:
- PowerVC Standard managing PowerVM:
  http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_certificate_hmc.html
- PowerVC Standard managing PowerKVM:
  http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.help.doc/powervc_rsyslog_cert_kvm.html
  http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.help.doc/powervc_certificate_kvm.html
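As an illustration of the CA-signed path, the following sketch generates a new private key and a certificate signing request (CSR) directly into the documented key location. The host name in -subj is a placeholder, the CSR path is arbitrary, and the final copy and web server restart steps are assumptions; follow the Knowledge Center procedure above for the supported steps:

# openssl req -new -newkey rsa:2048 -nodes -keyout /etc/pki/tls/private/powervc.key -out /tmp/powervc.csr -subj "/CN=powervc.example.com"
# (submit /tmp/powervc.csr to your CA, then install the signed certificate)
# cp powervc-signed.crt /etc/pki/tls/certs/powervc.crt
# systemctl restart httpd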
3.9 Product information

See the following resources for more planning information.

Direct customer support

For technical support or assistance, contact your IBM representative or the Support website:
http://www.ibm.com/support

Packaging

The PowerVC Standard Edition contains a DVD that includes product installation documentation and files. Your Proof of Entitlement (PoE) for this program is a copy of a paid sales receipt, purchase order, invoice, or other sales record from IBM or its authorized reseller from whom you acquired the program, provided that it states the license charge unit (the characteristics of intended use of the program, number of processors, and number of users) and the quantity that was acquired.

Software maintenance

This software license offers Software Maintenance, which was previously referred to as Software Subscription and Technical Support.

Processor core (or processor)

Processor core (or processor) is a unit of measure by which the program can be licensed. A processor core is a functional unit within a computing device that interprets and executes instructions. A processor core consists of at least an instruction control unit and one or more arithmetic or logic units. With multi-core technology, each core is considered a processor core. Entitlements must be acquired for all activated processor cores that are available for use on the server. In addition to the entitlements that are required for the program directly, the licensee must obtain entitlements for this program that are sufficient to cover the processor cores that are managed by the program. A Proof of Entitlement (PoE) must be acquired for all activated processor cores that are available for use on the server. Authorization for PowerVC is based on the total number of activated processors on the machines that are running the program and the activated processors on the machines that are managed by the program.

Licensing

The IBM International Program License Agreement, including the License Information document and Proof of Entitlement (PoE), governs your use of the program. PoEs are required for all authorized use. This software license includes Software Subscription and Support (also referred to as Software Maintenance).
Chapter 4. PowerVC installation

This chapter explains the IBM Power Virtualization Center Standard Edition (PowerVC) installation. It covers the following topics:
- 4.1, “Setting up the PowerVC environment” on page 78
- 4.2, “Installing PowerVC” on page 82
- 4.3, “Uninstalling PowerVC” on page 84
- 4.5, “Updating PowerVC” on page 87
- 4.6, “PowerVC backup and recovery” on page 87
- 4.7, “PowerVC command-line interface” on page 92
- 4.8, “Virtual machines that are managed by PowerVC” on page 94
4.1 Setting up the PowerVC environment

IBM PowerVC version 1.2.3.0 can be installed on Red Hat Enterprise Linux (RHEL) version 7.1, on the ppc64, ppc64le, or x86_64 platform. Before you install PowerVC, install RHEL on the management virtual machine (VM) or management host. PowerVC requires several additional packages to be installed. These packages are installed automatically if you have a valid Linux repository. If you need to install these packages manually, see “Installing Red Hat Enterprise Linux on the management server or host” in the PowerVC Standard Edition section of the IBM Knowledge Center:
https://ibm.biz/BdXKQR

Note: Unlike the Hardware Management Console (HMC), PowerVC is not a stand-alone appliance. It must be installed on an operating system. You must have a valid Linux license to use the operating system and a valid license to use PowerVC.

To set up the management hosts, complete the following tasks:
1. Create the VM (only if you plan to install PowerVC in a virtualized server).
2. Install RHEL Server 7.1 on the management hosts.
3. Customize RHEL Server to meet the PowerVC requirements.

4.1.1 Create the virtual machine to host PowerVC

Create the VM that will host PowerVC with the same procedure that is used to create any other partition.

Important: The management VM must be dedicated to PowerVC and the operating system on which it runs. Do not install other software on it.

Create the virtual machine by using the Hardware Management Console

To create the VM by using the HMC, complete the following steps:
1. In the navigation panel, open Systems Management and click Servers.
2. In the work panel, select the managed system, click Tasks, and click Configuration → Create Partition.
3. Follow the steps in the Create Partition wizard to create a logical partition (LPAR) and partition profile.

After the VM is created, you need to install the operating system into the management VM.

Create the virtual machine by using PowerKVM

To create the management VM on a PowerKVM host, you can use the tool that you prefer from these options:
- A command-line utility that is called virsh
- An HTML-based management tool that is called Kimchi

Both tools are provided with PowerKVM.
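As an illustration of the virsh-based route, a management VM can be defined with the virt-install utility that accompanies libvirt. This is a sketch only; the VM name, memory and processor sizes, disk size, and ISO path are placeholders, and Kimchi can achieve the same result from its web interface:

# virt-install --name powervc-mgmt --ram 10240 --vcpus 2 --disk size=100 --cdrom /var/lib/libvirt/images/RHEL-7.1-Server-ppc64le.iso

After the installer boots, proceed with a standard RHEL installation.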
After the VM is created, you need to install the operating system into the management VM.

Important: PowerVC does not support dual management by both PowerVC and Kimchi after PowerVC is installed.

Create the management virtual machine on IBM System x

To create the management VM on an IBM System x server, follow the instructions for your server. After the VM is created, you need to install the operating system into the management VM.

4.1.2 Download and install Red Hat Enterprise Linux

As part of the PowerVC setup, you need to download and install RHEL, so you need a valid license and a valid copy of the software. PowerVC is not a stand-alone appliance. It is installed on top of the operating system, but it does not include the license to use RHEL. You can get the software and a valid license from the Red Hat website:
http://www.redhat.com

Install RHEL by using your preferred method. See the Red Hat Enterprise Linux 7 Installation Guide for instructions:
https://ibm.biz/BdXKQ4

Note: After the installation finishes, do not add any other packages to the server. If any other packages are needed by PowerVC, the additional packages are obtained by the PowerVC installer automatically.

4.1.3 Customize Red Hat Enterprise Linux

Before you install PowerVC, customize RHEL to meet the following PowerVC requirements (described in the following sections):
- Network, Domain Name Server (DNS), and host name configuration
- Creation of a repository for the RHEL packages, or manual installation

Important: IBM Installation Toolkit for Linux must not be installed on the PowerVC management host.

Configure the network

The first task before you install PowerVC is to configure the network. PowerVC uses the default network interface: eth0. To use a different network interface, such as eth1, set the HOST_INTERFACE environment variable before you run the install script. The following example shows the setting:

export HOST_INTERFACE=eth1
Set the Domain Name Server and host name
Two options exist for managing name resolution: use DNS or use the /etc/hosts file. Pay attention to the correct setting of the name resolution of all components that will be managed by PowerVC:
- If you do not plan to use DNS for host name resolution, ensure that all hardware components (including virtualized components) are correctly defined in the /etc/hosts file.
- If you plan to use DNS for host name resolution, all hardware components must be defined correctly in your DNS. In addition, you need to enable forward and reverse resolution.

Host names must be consistent within the whole PowerVC domain.

Important: Regardless of the host name resolution method that you use, the PowerVC management host must be configured with a valid, fully qualified domain name.

Configure the YUM repository for the PowerVC installation
Before you install PowerVC, you need a valid repository for the RHEL software. This section illustrates how to configure a local YUM repository by using an RHEL International Organization for Standardization (ISO) file so that the PowerVC installation finds the packages that it requires. Follow these steps:
1. Configure the yum repository by selecting and adding the new channel for Optional Software.
2. Verify that yum sees the new optional repository:
yum repolist
3. As part of the installation process, you need to manually install the gettext package. Run the following command after the repository is created, and follow the instructions that it provides:
yum install gettext
The output is similar to Example 4-1.

Example 4-1 Installing the gettext package
Loaded plug-ins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package gettext.ppc64 0:0.17-16.el6 will be installed
--> Processing Dependency: libgomp.so.1(GOMP_1.0)(64bit) for package: gettext-0.17-16.el6.ppc64
--> Processing Dependency: cvs for package: gettext-0.17-16.el6.ppc64
--> Processing Dependency: libgomp.so.1()(64bit) for package: gettext-0.17-16.el6.ppc64
--> Running transaction check
---> Package cvs.ppc64 0:1.11.23-16.el6 will be installed
---> Package libgomp.ppc64 0:4.4.7-4.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved
========================================================================
 Package    Arch     Version            Repository     Size
========================================================================
Installing:
 gettext    ppc64    0.17-16.el6        rhel-source    1.9 M
Installing for dependencies:
 cvs        ppc64    1.11.23-16.el6     rhel-source    714 k
 libgomp    ppc64    4.4.7-4.el6        rhel-source    121 k

Transaction Summary
========================================================================
Install       3 Package(s)

Total download size: 2.7 M
Installed size: 8.5 M
Is this ok [y/N]: y
Downloading Packages:
------------------------------------------------------------------------
Total                              25 MB/s | 2.7 MB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : libgomp-4.4.7-4.el6.ppc64     1/3
  Installing : cvs-1.11.23-16.el6.ppc64      2/3
  Installing : gettext-0.17-16.el6.ppc64     3/3
  Verifying  : cvs-1.11.23-16.el6.ppc64      1/3
  Verifying  : gettext-0.17-16.el6.ppc64     2/3
  Verifying  : libgomp-4.4.7-4.el6.ppc64     3/3

Installed:
  gettext.ppc64 0:0.17-16.el6

Dependency Installed:
  cvs.ppc64 0:1.11.23-16.el6    libgomp.ppc64 0:4.4.7-4.el6

Complete!

4. The RHEL 7.1 OS media does not contain all of the additional packages that are required by PowerVC. You can download the packages that PowerVC requires from the Optional Software channel by using the RHN subscription. Table 4-1 lists the package prerequisites for the PowerVC installation.

Important: A list of packages that must not be installed on the server before you start the PowerVC installation is available in the IBM Knowledge Center. For information about the package requirements and restrictions, see "Installing Red Hat Enterprise Linux on the management server or host" in the IBM Knowledge Center:
https://ibm.biz/BdXKQc

Table 4-1 RHEL packages that relate to PowerVC

Red Hat Enterprise Linux for IBM Power [ppc64 and ppc64le]:
python-zope-interface, python-jinja2, python-pyasn1, python-pyasn1-modules, python-webob, python-webtest, SOAPpy, pyserial, python-fpconst, python-twisted-core, python-twisted-web

Red Hat Enterprise Linux x86_64:
python-zope-interface, python-jinja2, python-pyasn1-modules, python-webob, python-webtest, python-libguestfs, SOAPpy, pyserial, python-fpconst, python-twisted-core, python-twisted-web
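If you are working from installation media rather than a Red Hat Network subscription, a minimal sketch of the local ISO repository setup that is described in "Configure the YUM repository for the PowerVC installation" follows. The ISO path, mount point, and repository ID are examples only:

# Mount the RHEL 7.1 ISO and point yum at it
mkdir -p /mnt/rhel71
mount -o loop /tmp/RHEL-7.1-Server-ppc64.iso /mnt/rhel71

cat > /etc/yum.repos.d/rhel71-local.repo <<'EOF'
[rhel71-local]
name=RHEL 7.1 local ISO
baseurl=file:///mnt/rhel71
enabled=1
gpgcheck=0
EOF

yum repolist          # the new repository should be listed
yum install gettext   # step 3 above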
For information about how to add the optional repositories, see this website:

http://red.ht/1FSNvif

5. After you install the operating system, you must set the maximum file size to unlimited by typing the following command as the root user:
ulimit -f unlimited

4.2 Installing PowerVC

This section describes how to install PowerVC on your management host by using .tar files that are obtained from the download site. Before you install PowerVC, ensure that all of the hardware and software prerequisites are met and that your environment is configured correctly. If you need further information, see 3.1.1, "Hardware and software requirements" on page 30. Also, ensure that you prepared the management host and installed the supported version of RHEL Server on it.

Important: The management VM is dedicated to PowerVC and the operating system on which it runs. Do not install other software onto it.

Follow these steps to install PowerVC:
1. To begin the installation, open a web browser and navigate to the Entitled Software Support website:
http://www.ibm.com/servers/eserver/ess/OpenServlet.wss
2. Sign in with your IBM ID.
3. Select Software downloads.
4. Select the Power (AIX) brand.
5. Select the customer number that you want to work with, and click Continue.
6. Select the edition of PowerVC that you purchased under 5692-A6P, and click Continue.
7. Download either the ppc64, ppc64le, or the x86_64 .tar file.

Note: If your web ID is not yet registered with a customer number, select Register Customer ID number. If you are the first web ID to register your customer number, you become the primary ID. If you are not the first web ID, you are forwarded to the primary contact, who needs to approve your web ID.
8. After you download the .tar file, extract it to the location from which you want to run the installation script.
9. Change your current directory to the directory where the files were extracted.
10. Start the installation by running the installation script:
./install
11. Select the offering type to install from the following options:
- 1 - Standard managing PowerVM
- 2 - Standard managing PowerKVM
- 9 - Exit
12. After you read and accept the license agreement, PowerVC installs. See Example 4-2. An installation log file is created in /opt/ibm/powervc/log/.

Example 4-2 Installing PowerVC
###############################################################################
Starting the IBM PowerVC 1.2.3.0 Installation on: 2015-06-12T16:52:18-05:00
###############################################################################
LOG file is /opt/ibm/powervc/log/powervc_install_2015-06-12-165214.log

13. After the installation is complete, you see a message similar to Example 4-3. Ensure that you download and install any fix packs that are available on Fix Central. See 4.5, "Updating PowerVC" on page 87.

Example 4-3 Installation completed
***************************************************************************
PowerVC installation successfully completed at 2015-06-12T17:07:03-05:00.
Refer to /opt/ibm/powervc/log/powervc_install_2015-06-12-165214.log
for more details.
***************************************************************************
Use a web browser to access IBM PowerVC at https://powervca.pwrvc.ibm.com

Note: The IBM DB2 database use of the 32-bit file libpam.so is not required by PowerVC. Ignore the following warning:
Requirement not matched for DB2 database "Server". Summary of prerequisites that are not met on the current system: DBT3514W The db2prereqcheck utility failed to find the following 32-bit library file: "/lib/libpam.so*".
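Steps 8 through 10 as a shell sketch; the archive name is an example and varies by platform and fix level:

# Extract the downloaded package and run the installer
tar -zxvf powervc-1.2.3.0-ppc-rhel.tgz
cd powervc-1.2.3.0
./install

# Alternatively, a non-interactive installation (see Table 4-2):
# ./install -s standard -c nofirewall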
Table 4-2 shows the available options for the install command.

Table 4-2 Options for the PowerVC install command

-c nofirewall: No firewall configuration is performed during the installation. The Admin user needs to configure the firewall manually.
-s <offering>: Run a silent installation. This option requires that the offering value is set to 'standard' or 'powerkvm'.
-t: Run the prerequisite checks and exit.
-u: Uninstall to attempt to clean up after a failed installation, and then exit.
-f: Force the installation to override or bypass certain checks. This option is used with the uninstall option to bypass failures during the uninstall.
-n <value>: The following values are valid:
  - preferipv4 (default): Install IBM PowerVC by using the IPv4 IP address. If the IPv4 address is unavailable, the installation uses the IPv6 IP address.
  - preferipv6: Install IBM PowerVC by using the IPv6 IP address. If the IPv6 address is unavailable, the installation uses the IPv4 IP address.
  - requireipv4: Install IBM PowerVC by using the IPv4 IP address only. If the IPv4 IP address is unavailable, the installation fails.
  - requireipv6: Install IBM PowerVC by using the IPv6 IP address only. If the IPv6 IP address is unavailable, the installation fails.
-h: Display the help messages and exit.

If the installation does not complete successfully, run the following command to remove the files that were created during the failed installation before you reinstall PowerVC:

[powervc_install_file_folder]/install -u -f

Note: Use this command only to remove files from a failed installation. If you need to uninstall a working instance of PowerVC, use the correct uninstall command. For more information, see 4.3, "Uninstalling PowerVC" on page 84.

4.3 Uninstalling PowerVC

This section describes the procedure to remove PowerVC from the management host. The procedure does not remove or change anything in the environment that is managed by PowerVC. Objects that were created with PowerVC (VMs, volumes, and so on) are unchanged by this process. Any RHEL prerequisite packages that were installed during the PowerVC installation remain installed.

Run the following command to uninstall PowerVC:

/opt/ibm/powervc/bin/powervc-uninstall
Example 4-4 shows the last few output lines of the uninstall process.

Example 4-4 Uninstallation successful
The execution completed successfully.
For more information, see the DB2 uninstallation log at "/tmp/db2_deinstall.log.23987".
DB2 uninstalled successfully.
DB2 uninstall return code: 0
Completing post-uninstall cleanup.
Database removal was successful.
Uninstallation of IBM PowerVC completed.
####################################################################
Ending the IBM PowerVC Uninstallation on: 2014-05-14T23:35:04-04:00
####################################################################
Uninstallation was logged in /var/log/powervc-uninstall.log

The uninstallation process writes its log to /var/log/powervc-uninstall.log.

If you encounter issues when you run the powervc-uninstall command, you can clean up the environment by using the following command, which forces the uninstallation of all components of PowerVC:

[powervc_install_file_folder]/powervc-uninstall -f

For the complete list of options that are available with the powervc-uninstall command, see Table 4-3.

Table 4-3 Available options for the powervc-uninstall command

-f: Forcefully removes IBM PowerVC.
-l: Disables uninstall logging. Logging is enabled by default.
-y: Uninstalls without prompting.
-s: Saves configuration files to an archive.
-h: Displays the help message and exits.

4.4 Upgrading PowerVC

You can upgrade to PowerVC version 1.2.3 on RHEL 7.1 from PowerVC 1.2.1.2 and later. Before you upgrade PowerVC, run the powervc-backup command on the system where the previous version of PowerVC is installed. You can then restore the backup file on the system that is upgraded to PowerVC version 1.2.3.
4.4.1 Before you begin

Perform the following steps before you begin your software upgrade:
- Review the hardware and software requirements for PowerVC version 1.2.3.
- Ensure that all compute and storage hosts are up and running before you start the upgrade.
- Verify your environment before you start the upgrade to ensure that the upgrade process does not fail because of environment issues.
- Ensure that no tasks, such as resizing, migrating, or deploying, are running on a VM when you start the upgrade. Any tasks that are running on a VM during the upgrade cause the VM to enter an error state after the upgrade is complete.
- Ensure that you manually copy any customized powervc.crt and powervc.key files from the previous version of PowerVC on RHEL 6.0 to PowerVC version 1.2.3 on RHEL 7.1.
- Any operating system users from the Admin, Deployer, or Viewer groups on the previous version must be added again to the groups on the RHEL 7.1 system.

4.4.2 Upgrading

To upgrade PowerVC and migrate the existing data, complete the following steps at a shell prompt as the root user (a consolidated sketch follows this procedure):
1. On the previous version of PowerVC on the RHEL 6.0 system, run /opt/ibm/powervc/bin/powervc-backup.
2. Install PowerVC version 1.2.3 on the RHEL 7.1 system.
3. We strongly recommend that you go to the Fix Central website to download and install any fix packs that are available.
4. Copy the most recent backup archive from the previous version of PowerVC to the server where you installed PowerVC version 1.2.3.
5. On the server with PowerVC version 1.2.3, run the powervc-restore command with the --targetdir option that points to the new backup archive. This step completes the upgrade process:
powervc-restore --targetdir /var/opt/ibm/powervc/backups/powervc_backup.tar.gz

Notes:
- If you upgrade PowerVC while the user interface is active, it prompts you that it is set to maintenance mode and you cannot use it. After you run the powervc-restore command successfully, you can access the PowerVC user interface again.
- If an error occurs while you run the powervc-restore command, check for errors in the powervc-restore logs in the /opt/ibm/powervc/log directory. After you correct or resolve the issues, run the powervc-restore command again.
- If you want to install PowerVC version 1.2.3 on a system with RHEL 6.0 installed, follow these steps:
  a. Copy the backup archive to another system.
  b. Uninstall RHEL 6.0.
  c. Install RHEL 7.1 and then install PowerVC version 1.2.3 on the system.
  d. Copy the backup archive to this system and restore the archive as described in the previous steps 4 and 5.
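Steps 1, 4, and 5 as a shell sketch; the host name and the time-stamped archive path are examples only:

# On the old (RHEL 6.0) PowerVC system: take a backup
/opt/ibm/powervc/bin/powervc-backup

# Copy the newest backup archive to the new (RHEL 7.1) system
scp /var/opt/ibm/powervc/backups/<timestamp>/powervc_backup.tar.gz \
    root@newpowervc:/var/opt/ibm/powervc/backups/

# On the new system: restore the archive to complete the upgrade
/opt/ibm/powervc/bin/powervc-restore \
    --targetdir /var/opt/ibm/powervc/backups/powervc_backup.tar.gz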
4.5 Updating PowerVC

PowerVC updates are published on the IBM Fix Central repository. Log in with your IBM ID to get the update package:

http://www.ibm.com/support/fixcentral

1. Before you update PowerVC, check that enough disk space is available.

Important: If /opt, /var, or /home are separate mount points, 2500 MB of installation space is required in /opt, 187 MB of free space is required in /var, and 3000 MB of free space is required in /home.

2. Download the package to a directory, extract the file, and run the update command. To extract the file, run this command:
tar -zxvf [location_path]/powervc-update-ppc-rhel-version.tgz
This command extracts the package in the current directory and creates a new directory that is named powervc-version.
3. Run the update script by running the following command:
/[location_path]/powervc-[version]/update

When the update process is finished, it displays the message that is shown in Example 4-5.

Example 4-5 Update successfully completed
***************************************************************************
PowerVC installation successfully completed at 2015-06-12T17:19:56-05:00.
Refer to /opt/ibm/powervc/log/powervc_update_2015-06-12-171011.log
for more details.
***************************************************************************

4.6 PowerVC backup and recovery

Consider backing up your PowerVC data regularly as part of a broader system backup and recovery strategy. You can use the operating system scheduling tool, or any other automation tool, to perform regular backups (see the cron sketch later in this section). Backup and recovery tasks can be performed only by using the command-line interface (CLI); no backup and recovery window is available in the GUI.

4.6.1 Backing up PowerVC

Use the powervc-backup command to back up your essential PowerVC data. You can then restore it to a working state in a data corruption situation or disaster. The powervc-backup command is in the /opt/ibm/powervc/bin/ directory. Use this command syntax:

powervc-backup [-h] [--noprompt] [--targetdir LOCATION]
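Because backups can run unattended with the --noprompt option, one way to schedule them is a root crontab entry. A sketch, with an example schedule and target directory; remember that most PowerVC services stop for the duration of each backup:

# Run a weekly PowerVC backup, Sundays at 02:00, without prompting
# (add with: crontab -e, as the root user)
0 2 * * 0  /opt/ibm/powervc/bin/powervc-backup --noprompt --targetdir /powervcbkp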
Table 4-4 lists the command options.

Table 4-4 Options for the powervc-backup command

-h, --help: Displays help information about the command.
--noprompt: If specified, no user intervention is required during execution of the backup process.
--targetdir LOCATION: Target location in which to save the backup archive. The default value is /var/opt/ibm/powervc/backups.

The following data is backed up:
- PowerVC databases, such as the Nova database where information about your registered hosts is stored
- PowerVC configuration data, such as /etc/nova
- Secure Shell (SSH) private keys that are provided by the administrator
- Glance image repositories

Note: Glance is the OpenStack database name for the image repository.

Back up PowerVC data
Complete the following steps to back up PowerVC data:
1. Ensure that the pwrvcdb user has, at a minimum, read and execute permissions to the file structure for the target directory.
2. Open a CLI to the operating system on the VM on which PowerVC is installed.
3. Navigate to the /opt/ibm/powervc/bin/ directory.
4. Run the powervc-backup command with any necessary options. If prompts are not suppressed, respond to them as needed. The following example shows the command with a non-default mounted file system target directory:
powervc-backup --targetdir=/powervcbkp
This command displays a prompt to confirm that you want to stop all of the services. Type y to accept and continue. See Example 4-6.

Important: During a backup, most PowerVC services are stopped, and all other users are logged off from PowerVC until the operation completes.

Example 4-6 Example of PowerVC backup
Continuing with this operation will stop all PowerVC services.
Do you want to continue? (y/N):y
Stopping PowerVC services...
Backing up the NOVA database...
Backing up the QTM_IBM database...
Backing up the CINDER database...
Backing up the GLANCE database...
Backing up the NOSQL database...
Backing up the KEYSTONE database...
Backing up the data files...
Database and file backup completed. Backup data is in archive
/powervcbkp/20150615164334862394/powervc_backup.tar.gz
Starting PowerVC services...
PowerVC backup completed successfully.

When the backup operation completes, a new time-stamped subdirectory is created in the target directory and a backup file is created in that subdirectory, for example:

/powervcbkp/2014515152932179256/powervc_backup.tar.gz

We recommend that you copy this file outside of the management host, according to your organization's backup and recovery guidelines.

Note: If an error occurs while you run the powervc-backup command, check the powervc-backup logs in /opt/ibm/powervc/log.

4.6.2 Recovering PowerVC data

Use the powervc-restore command to recover PowerVC data that was previously backed up so that you can restore a working state after a data corruption situation or disaster.

You can restore a backup archive only to a system that is running the same level of PowerVC and operating system (and hardware, if the OS runs on a dedicated host rather than a VM) as the system from which the backup was taken. Ensure that the target system meets those requirements before you restore the data. PowerVC checks this compatibility of the source platform and the target platform, as shown in Example 4-7.

Example 4-7 Mismatch between backup and recovery environments
Continuing with this operation will stop all PowerVC services and overwrite critical PowerVC data in both the database and the file system.
Do you want to continue? (y/N):y
The backup archive is not compatible with either the restore system's architecture, operating system or PowerVC Version. Exiting.

The backup process does not back up Secure Sockets Layer (SSL) certificates and associated configuration information. When you restore a PowerVC environment, the SSL certificate and configuration remain the same SSL certificate and configuration that existed within the PowerVC environment before the restore operation, not the SSL configuration of the environment from which the backup was taken.

The powervc-restore command is in the /opt/ibm/powervc/bin/ directory and has the following syntax and options:

powervc-restore [-h] [--noprompt] [--targetdir LOCATION]
Table 4-5 shows the powervc-restore command options.

Table 4-5 Options for the powervc-restore command

-h, --help: Show the help message and exit.
--noprompt: If specified, no user intervention is required during the execution of the restore process.
--targetdir LOCATION: Target location where the backup archive is located. The default value is /var/opt/ibm/powervc/backups/<most recent>.

Complete the following steps to recover PowerVC data:
1. Ensure that the pwrvcdb user has, at a minimum, read and execute permissions to the file structure for the target directory.
2. Open a CLI to the operating system on the VM on which PowerVC is installed.
3. Navigate to the /opt/ibm/powervc/bin/ directory.
4. Run the powervc-restore command with any necessary options. If prompts are not suppressed, respond to them as needed. The following example shows the command with a non-default target directory:
powervc-restore --targetdir=/powervcbkp
This command displays a prompt to confirm that you want to stop all of the services. Type y to accept and continue (see Example 4-8).

Important: During the recovery, most PowerVC services are stopped and all other users are logged off from PowerVC until the operation completes.

Example 4-8 Example of PowerVC recovery
Continuing with this operation will stop all PowerVC services.
Do you want to continue? (y/N):y
Stopping PowerVC services...
Backing up the NOVA database...
Backing up the QTM_IBM database...
Backing up the CINDER database...
Backing up the GLANCE database...
Backing up the NOSQL database...
Backing up the KEYSTONE database...
Backing up the data files...
Database and file backup completed. Backup data is in archive
/powervcbkp/20150615164334862394/powervc_backup.tar.gz
Starting PowerVC services...
PowerVC backup completed successfully.
[root@jay118 bin]# ./powervc-restore
Continuing with this operation will stop all PowerVC services and overwrite critical PowerVC data in both the database and the file system.
Do you want to continue? (y/N):y
Using archive /powervcbkp/20150615164334862394/powervc_backup.tar.gz for the restore.
Stopping PowerVC services...
Restoring the data files...
Restoring the KEYSTONE database...
Restoring the NOSQL database...
Restoring the GLANCE database...
Restoring the CINDER database...
Restoring the QTM_IBM database...
Restoring the NOVA database...
Starting PowerVC services...
PowerVC restore completed successfully.

When the restore operation completes, PowerVC runs with all of the data from the targeted backup file.

4.6.3 Status messages during backup and recovery

During the backup and recovery tasks, all PowerVC processes and databases are shut down. Any user that is working with PowerVC receives the maintenance message that is shown in Figure 4-1 and is logged out.

Figure 4-1 Maintenance message for logged-in users

Accessing PowerVC during the backup and recovery tasks is not allowed. Any user that attempts to log on to PowerVC receives the maintenance message that is shown in Figure 4-2.

Figure 4-2 Maintenance message

4.6.4 Considerations about backup and recovery

The PowerVC backup and recovery task must be part of a backup plan for your infrastructure. The PowerVC backup and recovery commands save only information that relates to PowerVC. We suggest that you save the management station's operating system by using the tool that you prefer at the same time that you back up PowerVC.
4.7 PowerVC command-line interface

PowerVC offers a CLI to perform tasks outside of the GUI. The CLI is used mainly for maintenance and for troubleshooting problems. Table 4-6 shows the PowerVC commands that are available for both of the following offerings:
- PowerVC Standard Edition for managing PowerVM
- PowerVC Standard Edition for managing PowerKVM

Table 4-6 PowerVC available commands

powervc-audit: View and edit the current audit configuration, and export previously collected audit data. This command is deprecated; use the powervc-config and powervc-audit-export commands instead. (https://ibm.biz/BdXKQi)
powervc-audit-export: Extract audit data. (https://ibm.biz/BdXKQi)
powervc-backup: Backs up essential PowerVC data so that you can restore to a working state in a data corruption situation or disaster. (https://ibm.biz/BdXKQj)
powervc-config: Facilitates PowerVC management node configuration changes. (https://ibm.biz/BdXKQY)
powervc-diag: Collects diagnostic data from your PowerVC installation. (https://ibm.biz/BdXKQz)
powervc-domainname: Sets a default domain name that PowerVC assigns to all newly deployed VMs. (https://ibm.biz/BdXKQf)
powervc-encrypt: Prompts the user for a string, then encrypts the string and returns it. Use the command to encrypt passwords, tokens, and strings that are stored by PowerVC. (https://ibm.biz/BdXKQP)
install: Installs PowerVC. (https://ibm.biz/BdXKQy)
powervc-keystone: Avoids Lightweight Directory Access Protocol (LDAP) user group conflicts. You can also use this command to list users, user groups, and roles. (https://ibm.biz/BdXKQM)
powervc-ldap-config: Configures PowerVC to work with an existing LDAP server. (https://ibm.biz/BdXK3S)
powervc-restore: Recovers PowerVC data that was previously backed up. (https://ibm.biz/BdXK3v)
powervc-services: Starts, stops, restarts, and views the status of PowerVC services. (https://ibm.biz/BdXKT2)
powervc-uninstall: Uninstalls PowerVC from your management server or host. (https://ibm.biz/BdXK3L)
powervc-validate: Validates that your environment meets certain hardware and software requirements. (https://ibm.biz/BdXK35)
powervc-volume-image-import: Creates a deployable image by using one or more volumes.
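As an illustration of day-to-day CLI use, the following sketch checks and restarts the management services. The subcommand form shown here is an assumption; consult the powervc-services help output for the exact syntax at your level:

# Show the status of all PowerVC services
/opt/ibm/powervc/bin/powervc-services status

# Restart the services, for example after a configuration change
/opt/ibm/powervc/bin/powervc-services restart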
Table 4-7 shows the additional PowerVC commands that are available for PowerVC Standard Edition for managing PowerKVM.

Table 4-7 Commands for PowerVC Standard for managing PowerKVM

powervc-iso-import: Imports ISO images into PowerVC. (https://ibm.biz/BdXK37)
powervc-log-management: View and modify the settings for log management for PowerVC. The default action is to view the current settings.
powervc-register: Register a storage provider that is supported by OpenStack.

4.7.1 Exporting audit data

IBM Power Virtualization Center provides auditing support for the OpenStack services. Use the powervc-audit-export command to export audit data to a specified file.

An audit record is a recording of the characteristics, including user ID, time stamp, activity, and location, of each request that is made by PowerVC. Reviewing audit records is helpful when you are trying to solve problems or resolve errors. For example, if a host was deleted and you need to determine the user who deleted it, the audit records show that information.
The powervc-audit-export command is in the /usr/bin directory. The syntax and options are shown in Example 4-9.

Example 4-9 powervc-audit-export command use
powervc-audit-export [-h] [-u <user name>] [-n <number of records>]
                     [-o <output file>] [-f <filter file>] [-x {json,csv}]

Table 4-8 explains the powervc-audit-export command options.

Table 4-8 Options for the powervc-audit-export command

-h, --help: Displays help information about the command.
-u <user name>, --user_name <user name>: The user that requests audit data. This flag is optional. The default is the logged-in user.
-n <number of records>, --top_n <number of records>: Upper limit for the number of audit records to return. The request and response audit records are returned in pairs. This flag is optional.
-o <output file>, --output <output file>: The file to contain the exported audit data. This flag is optional. The default file is export_audit.json or export_audit.csv, depending on the specified output format.
-f <filter file>, --filter <filter file>: The file that contains the filter records. The format of the records is JSON. See the PowerVC IBM Knowledge Center for examples of filter records. This flag is optional.
-x {json,csv}, --output_format {json,csv}: The format of the exported audit data. This flag is optional. The formats are json and csv. If not specified, the default is json.

Complete the following steps to export PowerVC audit data:
1. Open a CLI to the operating system of the VM on which PowerVC is installed.
2. Navigate to the /usr/bin directory.
3. Run the powervc-audit-export command with any necessary options:
- Export audit records in JSON format to the myexport_file file in the user's home directory by running this command:
/usr/bin/powervc-audit-export -o myexport_file
- Export audit records in CSV format to the myexport_file.csv file in the user's home directory by running this command:
/usr/bin/powervc-audit-export -o myexport_file.csv -x csv

For more information, see this website:
http://www-01.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_cli_hmc.html?lang=en

4.8 Virtual machines that are managed by PowerVC

This section provides recommendations for the operating system on the managed VMs.
4.8.1 Linux on Power virtual machines

If you plan to use Live Partition Mobility (LPM) or dynamic logical partitioning with your Linux VM, you must install the IBM Installation Toolkit, especially the Reliable Scalable Cluster Technology (RSCT) utilities and RSCT core tools.

Run the following command to start the IBM Installation Toolkit installation process, and follow the instructions:

[IBM Installation Toolkit directory]/install.sh

Example 4-10 shows the common installation output.

Example 4-10 IBM Installation Toolkit sample output
[root@linux01 mnt1]# ./install
Do you want to copy the repository of IBM packages to your machine? [y/n] y
Do you want to configure your machine to receive updates of IBM packages? [y/n] n
IBMIT needs the ports 4234 and 8080 to be accessed remotely. Would you like to open those ports? [y/n] y
The licenses BSD, GPL, ILAN and MIT must be accepted. You can read their text using the options below and then accept or decline them.
1) Read license: BSD
2) Read license: GPL
3) Read license: ILAN
4) Read license: MIT
5) I have read and accept all the licenses
6) I do not accept any of the licenses
#? 5
Configuring an installation repository for your Linux distribution
Where is the installation media to be used?
1) DVD
2) Network (HTTP or FTP)
3) Directory
4) I already have a repository configured. Skip.
5) I don't know
#? 1
Insert the DVD in the drive
Press Enter to continue
Verifying if there is a repository on DVD
Available DVD devices:
/dev/sr1
/dev/sr0
Checking /dev/sr1
Adding repository configuration to repository manager
Repository successfully configured
Package ibmit4linux was successfully installed

After you install the Installation Toolkit, install the ibm-power-managed-rhel6.ppc64 package by running the following command:

yum install -y ibm-power-managed-rhel6.ppc64

After the installation completes, check the Resource Monitoring and Control (RMC) status by running the following command:

lssrc -a
The output appears as shown in Example 4-11.

Example 4-11 RMC status
Subsystem          Group      PID     Status
 ctrmc             rsct       3916    active
 IBM.DRM           rsct_rm    3966    active
 IBM.ServiceRM     rsct_rm    4059    active
 IBM.HostRM        rsct_rm    4096    active
 ctcas             rsct               inoperative
 IBM.ERRM          rsct_rm            inoperative
 IBM.AuditRM       rsct_rm            inoperative
 IBM.SensorRM      rsct_rm            inoperative
 IBM.MgmtDomainRM  rsct_rm            inoperative

Note: PowerVC, PowerVM, and the HMC rely on the RMC services. When these services are down, most of the concurrent and dynamic tasks cannot be executed. Check the RMC status every time that you need to change a VM dynamically. For more information about RMC, see these IBM Redbooks publications:
- IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
- IBM Power Systems HMC Implementation and Usage Guide, SG24-7491

For more information about the toolkit, including installation information, see the IBM Installation Toolkit for Linux on Power web page:

https://www-304.ibm.com/webapp/set2/sas/f/lopdiags/installtools/home.html

4.8.2 IBM AIX virtual machines

For VMs that run the IBM AIX operating system, no additional setup is necessary. After the IP address is configured, an RMC connection is automatically created.

Tip: By default, AIX does not include SSH or SSL tools. We recommend that you install them if you want to access a managed machine with commands other than telnet.

4.8.3 IBM i virtual machines

PowerVC can also manage IBM i VMs. After you add the Power hosts, import the IBM i VMs. No unique requirements exist among IBM i, AIX, or Linux on Power VMs.

Note: The storage connection must be based on N_Port ID Virtualization (NPIV) or a shared storage pool (SSP).
Chapter 5. PowerVC Standard Edition for managing PowerVM

This chapter describes the general setup of IBM Power Virtualization Center Standard Edition (PowerVC) for managing PowerVM. In the following sections, we explain the discovery and configuration of the managed objects. We also describe the verification of the environment and the operations that can be performed on virtual machines (VMs) and images:
- 5.1, "PowerVC graphical user interface" on page 98
- 5.2, "Introduction to PowerVC setup" on page 99
- 5.3, "Connecting to PowerVC" on page 100
- 5.4, "Host setup" on page 101
- 5.5, "Host Groups setup" on page 106
- 5.6, "Hardware Management Console management" on page 107
- 5.7, "Storage and SAN fabric setup" on page 111
- 5.8, "Storage port tags setup" on page 115
- 5.9, "Storage connectivity group setup" on page 116
- 5.10, "Storage template setup" on page 120
- 5.11, "Storage volume setup" on page 123
- 5.12, "Network setup" on page 124
- 5.13, "Compute template setup" on page 126
- 5.14, "Environment verification" on page 128
- 5.15, "Management of virtual machines and images" on page 133
5.1 PowerVC graphical user interface

First, we briefly present the PowerVC graphical user interface (GUI) and explain how to access functions from the PowerVC Home page, as illustrated in Figure 5-1. The management functions of PowerVC are grouped by classes, which can be accessed from different locations. In all PowerVC windows, you can find hot links to several areas and components:
- User administration, environment configuration, and message logs at the top of the PowerVC window
- Management functions that relate to VM images, VMs, hosts, networks, and storage in the column of icons at the left of the window (which also includes a link to the Home page)

The hot links are highlighted in red in the illustration.

Figure 5-1 Home page access to a group of functions

In all PowerVC windows, most of the icons and text are hot links to groups of functions, so several ways exist to access a group of functions. The blue arrows in Figure 5-1 show, for example, the two hot links that can be used from the Home page to access the VM management functions.

Tips: In examples in this chapter, an instruction such as "click Virtual Machines" means that you can click either the icon or the link within the page. In several PowerVC windows, you might see a pencil icon; click it to edit values.
5.2 Introduction to PowerVC setup

Before you can start to perform tasks in PowerVC, you must discover and register the resources that you want to manage. You can register storage systems and hosts, and you can create networks to use when you deploy images. When you register resources with PowerVC, you make them available to the management functions of PowerVC (such as deploying a VM on a discovered host or storing images of captured VMs).

This discovery and registration mechanism is the key to the smooth deployment of PowerVC in an existing environment. For example, a host might already run several partitions when you deploy PowerVC. You first register the host without registering any of the hosted partitions. All PowerVC functions that relate to host management are available to you, but no objects exist yet on which to apply the partition management functions. You can then decide whether you want to manage all of the existing partitions with PowerVC. If you prefer a progressive adoption plan instead, start by managing only a subset of these partitions.

Ensure that the following preliminary steps are complete before you proceed to 5.3, "Connecting to PowerVC" on page 100:
1. Configuration of the IBM Power Systems environment to be managed through the Hardware Management Console (HMC).
2. Setup of user accounts with an administrator role on PowerVC. See 3.7, "Planning users and groups" on page 67 for details.
3. Setup of the host name, IP address, and an operator user ID for the HMC.
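Before you connect with a browser, you can optionally confirm from the management host's command line that the environment still meets the requirements, by using the powervc-validate command that is listed in Table 4-6. A sketch; the path shown assumes the installed location that is used by the other PowerVC commands:

# Validate that the management host meets the hardware and software requirements
/opt/ibm/powervc/bin/powervc-validate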
5.3 Connecting to PowerVC

After PowerVC is installed and started on a Linux partition, you can connect to the PowerVC management GUI by following these steps:
1. Open a web browser on your workstation and point it to the PowerVC address:
https://<ipaddress or hostname>/
2. Log in to PowerVC as an administrative user (Figure 5-2). The first time that you use PowerVC, this administrative user is root. We recommend that, after the initial setup of PowerVC, you define other user IDs and passwords rather than using the root user. For information about how to add, modify, or remove users, see 3.7.1, "User management" on page 67.

Figure 5-2 PowerVC Login window

3. Now, you see the IBM PowerVC Home page.

Important: Your environment must meet all of the hardware and software requirements and be configured correctly before you start to work with PowerVC and register your resources.
4. We recommend that your first action is to check the PowerVC installation by clicking Verify Environment, as shown in Figure 5-3. Then, you can click View Results to verify that PowerVC is installed correctly.

Figure 5-3 Initial system check

5.4 Host setup

The first step to perform is to enable PowerVC to communicate with the HMCs in the environment so that it can manage the hosts and their storage and networking devices. After hosts, storage, and networks are configured correctly in the PowerVC domain, you can add a VM. For more information about supported hosts, see 3.1.2, "PowerVC Standard Edition requirements" on page 30.
To discover the HMCs and the hosts that they manage, perform the following steps:
1. On the Home page (Figure 5-3 on page 101), click Add Hosts.
2. In the Add Hosts dialog window (Figure 5-4), provide the name and credentials for the HMC. In the Display name field, enter the string that PowerVC will use to refer to this HMC in all of its windows. Click Add Connection. PowerVC connects to the HMC and reads the host information.

Figure 5-4 HMC connection information

The user ID and password can be the default HMC hscroot administrator user ID combination. The ID can also be another ID with the hscsuperadmin role that you created to manage the HMC.

Note: We recommend that you do not specify hscroot for the user ID. Instead, create a user ID on the HMC with the hscsuperadmin role and use it for managing the HMC from PowerVC. This approach lets you identify whether actions on the HMC were initiated by a user who was logged in to the HMC or from the PowerVC management station. Also, if a security policy requires that the hscroot password is changed regularly, the use of a different user ID for the PowerVC credentials avoids breaking PowerVC's ability to connect to the HMC after a system administrator changes the hscroot password.

3. PowerVC might present a message that indicates that the HMC's certificate is untrusted or invalid. Review the certificate details to determine whether you are willing to override this warning. If you are willing to trust the certificate, click Connect to continue.
4. Next, you see information about all of the hosts that are managed by that HMC. Figure 5-5 shows the dialog for an HMC that manages three IBM Power System S824 servers that are based on POWER8 technology. To choose the hosts to manage with PowerVC, click their names. By holding down the Shift key while you click the host names, you can select several host names simultaneously. When the HMC manages many hosts, you can use the filter to select only the names that contain a particular character string.

Figure 5-5 PowerVC Add Hosts dialog window
5. After a few seconds, the Home page is updated and it shows the number of added objects. Figure 5-6 shows that two hosts were added.

Figure 5-6 Managed hosts

6. Click the Hosts tab to open a Hosts window that is similar to Figure 5-7, which shows the status of the discovered hosts.

Figure 5-7 PowerVC shows the managed hosts

You can add more hosts by clicking Add Host. The dialog windows to add a host are the same as the windows in step 2 on page 102 and step 4 on page 103.
7. Click one host name to see the detailed host information, as shown in Figure 5-8. The Manage Existing option is used for discovering pre-existing VMs in the environment. After hosts, storage, and networks are configured correctly in the PowerVC domain, you can add a VM by expanding the Virtual Machines section.

Figure 5-8 Host information
5.5 Host Groups setup

After you add hosts, you can group them into host groups for different business needs. For example, we added a host group for our test. As shown in Figure 5-9, open the Host Groups tab, and click Create.

Figure 5-9 Host Groups page
A pop-up page opens, as shown in Figure 5-10. Enter the host group name and the placement policy of the host group. Click Add to add hosts, and then click Create Host Group. For the placement policies that are supported by PowerVC, see 3.3.2, "Placement policies" on page 39.

Figure 5-10 Create Host Group

Note: Beginning with PowerVC version 1.2.3, placement policies are associated with host groups; they are no longer a global setting.

5.6 Hardware Management Console management

Beginning with PowerVC version 1.2.3, users can add redundant HMCs for Power Systems servers. If one HMC fails, the user can change the HMC to one of the redundant HMCs.
5.6.1 Add an HMC

With PowerVC version 1.2.3 or later, you can add redundant HMCs for Power Systems servers. To add an HMC, on the HMC Connections page, click Add HMC, as shown in Figure 5-11. Enter the HMC host name or IP address, display name, user ID, and password. Click Add HMC Connection. The new HMC is added. You can also click Remove HMC to remove an HMC.

Figure 5-11 Add HMC Connection
5.6.2 Changing HMC credentials

If you want to change the credentials that PowerVC uses to access the HMC, open the Hosts page and select the HMC Connections tab. Select the row for the HMC that you want to work with, and then click Edit. A pop-up window opens (Figure 5-12) where you can specify another user ID, which must already be defined on the HMC with the hscsuperadmin role.

Figure 5-12 Changing HMC credentials
5.6.3 Change the HMC

With PowerVC version 1.2.3 or later, you can add redundant HMCs for Power Systems servers, but PowerVC uses only one HMC for each server. If one HMC fails, you only need to change the management console to another HMC. As shown in Figure 5-13, on the Hosts page, select all of the servers that you want to change, click Change HMC, select the HMC that you want, and click OK.

Figure 5-13 Change HMC

The management console of the Power Systems servers changes to the new HMC, as shown in Figure 5-14.

Figure 5-14 Select the new HMC for hosts
5.7 Storage and SAN fabric setup

When you use external storage area network (SAN) storage, you need to prepare the storage controllers and Fibre Channel (FC) switches before they can be managed by PowerVC. PowerVC needs management access to the storage controller. When you use user authentication, the administrative user name and password for the storage controller must be set up. For IBM Storwize storage, another option is the use of cryptographic key pairs. For instructions about how to generate and use key pairs, see the documentation for your device.

To configure the storage controller and SAN switch, follow these preliminary steps:
1. Configure the FC SAN fabric for the PowerVC environment.
2. Connect the required FC ports that are owned by the Virtual I/O Server (VIOS) and the storage controllers to the SAN switches.
3. Set up the host names, IP addresses, and administrator user ID and password combination for the SAN switches.
4. Set up the host names, IP addresses, and administrator user ID and password combination for the storage controllers.
5. Create volumes for the initial VMs that are to be imported (installed) into PowerVC later.

For more information about supported storage in PowerVC Standard Edition, see 3.1.1, "Hardware and software requirements" on page 30.

Note: For EMC storage, more setup actions are needed before EMC storage can be registered in PowerVC. See the IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.standard.help.doc/powervc_planning_storage_hmc.html

Important: Pay attention to the correct setting of the name resolution of the host names of FC switches, storage controllers, the HMC, and the Virtual I/O Servers that will be managed by PowerVC. The host names of those components must match the names that are defined in the Domain Name Server (DNS). Both forward and reverse DNS resolution must work correctly before the initial setup of PowerVC; a quick check is sketched below.

Note: PowerVC creates VMs from an image. No image is provided with PowerVC. Therefore, you must manually configure at least one initial partition, from which you will create this image. The storage volumes for this initial partition must also be created manually. When PowerVC creates more partitions, it also creates the storage volumes for them.

Note: For PowerVC version 1.2.2 and higher, you can import an image (that you created earlier) from storage into PowerVC.
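A minimal sketch of the forward and reverse resolution check, with example host names and addresses:

# Forward lookup: name to address
host vios1.example.com

# Reverse lookup: the address must resolve back to the same name
host 10.1.1.21

# Repeat for every HMC, VIOS, FC switch, and storage controller
# in the PowerVC domain.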
5.7.1 Add a storage controller to PowerVC

The following steps guide you through setting up storage providers and the SAN fabric:
1. To add a storage controller, click the Add Storage link on the PowerVC Home page that is shown in Figure 5-3 on page 101. If a storage provider is already defined, the icon differs slightly: click the plus sign (+) to the right of Storage Providers, as shown in Figure 5-15.

Figure 5-15 Adding extra storage providers

2. The dialog window that is shown in Figure 5-16 requires this information:
- Type. Four types are supported: Storwize, IBM XIV Storage System, EMC VMAX, and EMC VNX. We selected Storwize for our IBM V7000 storage.
- Storage controller name or IP address, and display name.
- User ID and password, or Secure Shell (SSH) encryption key. (The encryption key option is only for IBM Storwize storage.)
3. Click Add Storage. PowerVC presents a message that indicates that the authenticity of the storage cannot be verified. Confirm that you want to continue. PowerVC connects to the storage controller and retrieves information.

Figure 5-16 Add Storage
4. PowerVC presents information about the storage pools that are configured on the storage controller. You must select the default pool where PowerVC creates logical unit numbers (LUNs) for this storage provider, as shown in Figure 5-17. Click Add Storage, and PowerVC finishes adding the storage controller.

Figure 5-17 PowerVC Standard Edition window to select a storage pool

Tip: For more information about storage templates, see 5.10, "Storage template setup" on page 120.

5.7.2 Add SAN fabric to PowerVC

Add the SAN fabric to PowerVC. After you add the storage, PowerVC automatically prompts you to add fabrics. Open the window that is shown in Figure 5-18, and click Add Fabric.

Figure 5-18 Add Fabric window
You must complete the following information about the first SAN switch to add under PowerVC control:
- Fabric type. For PowerVC 1.2.2 or later, Brocade and Cisco SAN switches are supported.
- Principal switch name or IP address, and display name.
- User ID and password.

In the Add Fabric window, click Add Fabric, and then confirm the connection in the pop-up window. PowerVC connects to the switch and retrieves the setup information. The dialog is shown in Figure 5-19.

Figure 5-19 PowerVC Standard Edition Add Fabric

Figure 5-20 shows the PowerVC Storage window after you successfully add the SAN storage controllers and SAN switches. The Storage Providers tab is selected. To show the managed SAN switches, click the Fabrics tab.

Figure 5-20 PowerVC Storage Providers tab
Additional storage controllers can be added by clicking Storage → the Storage Providers tab → Add Storage. The dialog window to add a storage controller is the same window that was used for the first storage controller in steps 1 and 2 in 5.7.1, "Add a storage controller to PowerVC" on page 112.

You can add SAN switches by clicking Storage → the Fabrics tab → Add Fabric. The dialog window to add a switch is the same window that was used for the first switch (fabric) in 5.7.2, "Add SAN fabric to PowerVC" on page 113.

Note: PowerVC version 1.2.3 supports a maximum of two fabrics.

5.8 Storage port tags setup

The next step to customize PowerVC is the FC port tag setup. This setting is optional. Individual FC ports in Virtual I/O Servers that are managed by PowerVC can be tagged with named labels. For more information about PowerVC tags and storage connectivity groups, see 3.5.3, "Storage connectivity groups and tags" on page 58.

Note: Tagging is optional. It is needed only when you want to partition the I/O traffic and restrict certain traffic to a subset of the available FC ports.

To set up tagging, start from the PowerVC Home page and select Configuration → Fibre Channel Port Configuration to open the dialog window that is shown in Figure 5-21 on page 116.
For each FC adapter in all Virtual I/O Servers that are managed by PowerVC, you can enter or select a port tag (an arbitrary name) and a switch (fabric) to which the port is connected. You can either double-click a Port Tag field and enter a new tag or use the drop-down menu to select a tag from a list of predefined tags. You can also set the tag to None or define your own tag. You can also select N_Port ID Virtualization (NPIV) or virtual SCSI (vSCSI) for the Connectivity field to restrict the port to that type of SAN access.

In this example, two sets of FC ports were defined, with Product and Test tags. Certain ports allow NPIV access only, and other ports allow vSCSI, or Any. Do not forget to click Save to validate your port settings, as shown in Figure 5-21.

Figure 5-21 PowerVC Fibre Channel port configuration

5.9 Storage connectivity group setup

Next, define the storage connectivity groups. A storage connectivity group is a set of Virtual I/O Servers with access to the same storage controllers. The storage connectivity group also controls whether the boot volumes and data volumes use NPIV or vSCSI storage access. For a detailed description, see 3.5.3, "Storage connectivity groups and tags" on page 58. Storage connectivity group setup is a mandatory step for the deployment of VMs on PowerVC.

Note: Situations exist where you add adapters to a host after PowerVC is installed and configured. Assign them to a VIOS, and enable the VIOS to discover them by using the cfgdev command (sketched below). PowerVC then discovers them automatically: if you open the Fibre Channel Port Configuration window, PowerVC shows the new adapters.
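A sketch of the VIOS discovery step from the note above; the VIOS host name is an example:

# Log in to the VIOS as the padmin user
ssh padmin@vios1

# Scan for and configure newly added devices (VIOS restricted shell)
$ cfgdev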
Follow these steps to set up a storage connectivity group:
1. Start from the PowerVC Home page. Select Configuration → Storage Connectivity Groups to open the dialog window that is shown in Figure 5-22.

Figure 5-22 PowerVC Storage Connectivity Groups dialog window

Default storage connectivity groups are defined for the following components:
- All ports of all Virtual I/O Servers that can access the storage providers by using NPIV.
- A vSCSI boot volume storage connectivity group, which is added if the environment meets the requirements of vSCSI SAN access.
- All Virtual I/O Servers that belong to the shared storage pools (SSPs) that PowerVC discovered, if an SSP was configured.

2. You can then create your own storage connectivity group. Click Create. In the next window, enter information or select predefined options for the new storage connectivity group:
- Name of the storage connectivity group.
- Boot and Data volume connectivity types: NPIV or vSCSI.
- "Automatically add applicable Virtual I/O Servers from newly registered hosts to this storage connectivity group". If checked, from now on, newly added Virtual I/O Servers are added to this group if they can access the same storage (fabrics and tags) as the other members of the group.
- "Allow deployments using this storage connectivity group (enable)". If checked, the storage connectivity group is enabled for deployment of VMs; otherwise, it is disabled. You can change this selection later, if necessary.
- Restrict image deployments to hosts with FC-tagged ports. This setting is optional. If you use tags, you can select a specific tag. VMs that are deployed to this storage connectivity group (with a selected tag) can access storage only through FC ports with the specified tag.
- NPIV Fabric Access Requirement. This setting controls how the FC paths are created when a VM is created. You can choose Any, Dual, Dual per VIOS, Fabric A, or Fabric B.

3. When the information is complete, click Add Member to open the window in Figure 5-23. Select which Virtual I/O Servers become members of the group. If a tag was previously selected, only eligible Virtual I/O Servers are available to select. After you select the Virtual I/O Servers, click Add Member. The selected Virtual I/O Servers are added to the storage connectivity group. Then, click Add Group, and the group is created. Now, the group is available for VM deployment.

Figure 5-23 PowerVC Add Member to storage connectivity group window
  • 143. Chapter 5. PowerVC Standard Edition for managing PowerVM 119 A storage connectivity group can be disabled to prevent deployment of VMs in this group. To disable a group, you must clear the check box for Allow deployments using storage connectivity group (enable) on the detailed properties page of the storage connectivity group, as shown in Figure 5-24. Figure 5-24 Disabling a storage connectivity group
  • 144. 120 IBM PowerVC Version 1.2.3: Introduction and Configuration 5.10 Storage template setup After you configure your storage connectivity group, you can also create storage templates. Storage templates provide predefined storage configuration to use when you create a disk. You must define different information on the storage templates for different types of storage. For example, as shown in Figure 5-25, this storage template is for the IBM XIV storage device. You do not need any configuration information except the template name and pool name. For a full description, see 3.5.2, “Storage templates” on page 56. Figure 5-25 IBM XIV storage template A default storage template is automatically created by PowerVC for each storage provider. However, if the storage contains several storage pools, create a storage template for each storage pool that you want to use. For IBM Storwize storage, you also need to create a storage template for each I/O group that you want to use, and each volume mirroring pool pair that you want to use. Figure 5-26 on page 121 shows the dialog window to create a storage template for IBM Storwize storage. To access it, from the PowerVC Home page, click Configuration → Storage Templates → Create. Then, complete these steps: 1. Select a storage provider. 2. Select a storage pool within the selected storage provider. 3. Provide the storage template name.
4. Select the type of provisioning:
– Generic means full space allocation (also known as thick provisioning).
– Thin-provisioned is self-explanatory. If you select thin-provisioned, the Advanced Settings option is available. Clicking Advanced Settings opens an additional dialog window (Figure 5-27 on page 122) that offers these options:
• I/O group
• Real capacity % of virtual storage
• Automatically expand
• Warning threshold
• Thin-provisioned grain size
• Use all available worldwide port names (WWPNs) for attachment
• Enable mirroring (you need to select another pool to enable mirroring)
For more information about how these settings affect PowerVC disk allocation, see 3.5.2, “Storage templates” on page 56.
– Compressed is for storage arrays that support compression.

Figure 5-26 PowerVC Create Storage Template window
  • 146. 122 IBM PowerVC Version 1.2.3: Introduction and Configuration Figure 5-27 shows the advanced settings that are available for thin-provisioned templates. The advanced settings can be configured only for storage that is backed by SAN-accessed devices. When the storage is backed by an SSP in thin-provisioning mode, PowerVC does not offer the option to specify these advanced settings. Figure 5-27 PowerVC Create Storage Template Advanced Settings 5. After you click Create, the storage template is created and it is available for use when you create storage volumes. The page that summarizes the available storage templates is shown in Figure 5-28. Figure 5-28 PowerVC Storage Templates page
  • 147. Chapter 5. PowerVC Standard Edition for managing PowerVM 123 5.11 Storage volume setup After you add storage providers and define storage templates, you can create storage volumes. When you create a volume, you must select a template that determines where (which storage controller and pool) and what the parameters are (thin or thick provisioning, grain size, and so on) for the volume to create. When you create a volume, you must select these elements: A storage template The new volume name A short description of the volume (optional) The volume size (GB) Enable sharing or not. If this option is selected, the volume can be attached to multiple VMs. This option is for PowerHA or similar solutions. When you create a volume, follow these steps: 1. From the PowerVC home page, click Storage Volumes → the Data Volumes tab → Create to open the window that is shown in Figure 5-29. Figure 5-29 PowerVC Create Volume window Note: Only data volumes need to be created manually. Boot volumes are handled by PowerVC automatically. When you deploy a partition as described in 5.15.6, “Deploy a new virtual machine” on page 159, PowerVC automatically creates the boot volumes and data volumes that are included in the images.
2. After you click Create Volume, the volume is created. A list of existing volumes is displayed, as shown in Figure 5-30. This figure shows that the provisioned disks are in the available state.

3. From the Storage page, you can manage volumes. Valid operations are the creation or deletion of already managed volumes, and the discovery of volumes that are defined on a storage provider but not yet managed by PowerVC. You can also edit the volumes to enable or disable sharing.

Figure 5-30 List of PowerVC storage volumes

5.12 Network setup

When you create a VM, you must select a network. If the network uses static IP assignment, you must also select a new IP address for the VM or let PowerVC select one from the IP pools. For a full description of network configuration in PowerVC, see 3.6, “Network management planning” on page 63.

Initially, PowerVC contains no network definition, so you need to create at least one. To create a network definition in PowerVC, from the Home page, click Networks → Add Network to open the dialog window that is shown in Figure 5-31 on page 125. You must provide the following data when you create a network:
 Network name
 Virtual LAN (VLAN) ID
 Maximum transmission unit (MTU) size in bytes
 IP address type: Dynamic or Static (select Dynamic if the IP address will be assigned automatically by a Dynamic Host Configuration Protocol (DHCP) server)
 Subnet mask
 Gateway
 Primary/Secondary DNS (optional if you do not use DNS)
 Starting IP address and ending IP address in the IP pool
 Shared Ethernet adapter mapping (select adapters within Virtual I/O Servers that have access to the specific network and that are configured with the correct VLAN ID)

After you click Add Network, the network is created. From the Networks page, you can also edit a network (change network parameters) and delete networks. Consider this factor: PowerVC detects the shared Ethernet adapter to use for each host. Verify that PowerVC made the correct choice. If PowerVC chooses the wrong shared Ethernet adapter for a specific host, you can change the shared Ethernet adapter later.

Figure 5-31 PowerVC network definition

Note: You cannot modify the IP pool after you create the network, so ensure that you enter the correct IP addresses. To update the IP addresses in an IP pool, you must delete the network and add it again.
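To double-check the shared Ethernet adapter that PowerVC detected for each host, you can list the SEA mappings directly on the Virtual I/O Servers. A minimal sketch, assuming the padmin restricted shell (the SEA device name ent8 is a placeholder):

lsmap -all -net
entstat -all ent8 | grep "Port VLAN ID"

The lsmap -all -net output lists each shared Ethernet adapter with its backing and virtual adapters, and entstat shows the PVID, which you can compare against the VLAN ID that you entered in the network definition.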
You can also check the IP address status in the IP pool on the IP Pool page, as shown in Figure 5-32.

Figure 5-32 IP Pool tab

5.13 Compute template setup

A compute template provides a predefined compute configuration to use when you create a VM. You can customize processor, memory, and other features. You select a compute template when you add a VM. To resize a VM, you can change the values that are set in the compute template that is associated with that VM. You can also create new compute templates on the Configuration page. For a full description of compute templates, see 3.3.4, “Information that is required for compute template planning” on page 42.

Figure 5-33 on page 127 shows the window that opens when you create a compute template. To access the compute template configuration from the PowerVC Home page, click Configuration → Compute Templates → Create Compute Template. You need to specify the following settings for images that are deployed with the compute template:
 For Template settings, select Advanced.
 Provide the compute template name.
 Provide the number of virtual processors.
 Provide the number of processing units.
 Provide the amount of memory.
 Select the compatibility mode.

Important: In the shared Ethernet adapter mapping list, the Primary VLAN column refers to the Port Virtual LAN Identifier (PVID) that is attached to the adapter. The VLAN number that you specify does not need to match the primary VLAN.
  • 151. Chapter 5. PowerVC Standard Edition for managing PowerVM 127 If you selected Advanced settings, additional information is required: Provide the minimum, desired, and maximum number of virtual processors. Provide the minimum, desired, and maximum number of processing units. Provide the minimum, desired, and maximum amounts of memory (MB). Enter the processor sharing type and weight (0 - 255). Enter the availability priority (0 - 255). Figure 5-33 PowerVC Create Compute Template
After you click Create Compute Template, the compute template is created and is available for use when you create a VM. The page that summarizes the available compute templates is shown in Figure 5-34.

Figure 5-34 PowerVC Compute Templates

5.14 Environment verification

After you add the hosts, storage providers, networks, and templates, we recommend that you verify your PowerVC environment before you try to capture, deploy, or onboard VMs. Virtualization management functions might fail when dependencies and prerequisite configurations are not met.
PowerVC reduces the complexity of virtualization and cloud management. It checks for almost all required dependencies and prerequisite configurations and clearly communicates any failures. Where possible, it also pinpoints the exact validation failure and suggests remediation actions.

Figure 5-35 shows the PowerVC Home interface, where you start the verification process by clicking Verify Environment. Access the verification report by clicking View Results.

Figure 5-35 PowerVC interface while environment verification in process

The validation of the PowerVC environment takes from a few seconds to a few minutes to complete. The environment validation architecture allows validators to be added and evolved to check solution-specific environment dependencies and prerequisite configurations. This architecture is intended to let the tool improve the performance, reliability, and scalability of validation execution as the number of endpoints, their configurations, and their interconnectivity grow.
  • 154. 130 IBM PowerVC Version 1.2.3: Introduction and Configuration 5.14.1 Verification report validation categories After the validation process finishes, you can access a report of the results, as shown in Figure 5-36. This report consists of a table with four columns where you see the following values: Status System Validation Category Description Figure 5-36 Verification Results view The following list shows the validation categories in this report and a description for the types of messages to expect from each of the categories: Access and Credentials Validation of reachability and credentials from the management server to the PowerVC domain, including user IDs, passwords, and SSH keys for all resources. File System, CPU and Memory on Management Server Minimum processing and storage requirements for the PowerVC management server. OS, services, database This category groups all messages that relate to the availability of the service daemons that are needed for the correct operation and message passing on the PowerVC domain. This category includes operating system services, OpenStack services, platform Enterprise Grid Orchestrator (EGO) services, and IBM DB2 database configuration.
  • 155. Chapter 5. PowerVC Standard Edition for managing PowerVM 131 HMC version Hardware Management Console software level and K2 services are up and running. HMC managed Power Systems server resources Power Systems hosts when they are managed by an HMC. Validation messages include the operating state, PowerVM Enterprise Edition enablement, PowerVM Live Partition Mobility (LPM) capabilities, ability to run a VIOS, maximum number of supported Power Systems servers, firmware level, and processor compatibility. This category is visible from PowerVC Standard Edition. Virtual I/O Server count, level and RMC state Minimum number of configured Virtual I/O Servers on each managed host, software level, Resource Monitoring and Control (RMC) connection and state to the HMC, license agreement state, and maximum number that is required for virtual adapter slots. This category is viewable from PowerVC Standard Edition. Virtual Network: Shared Ethernet adapter The shared Ethernet adapter is configured on the PowerVC management server network and in the Active state. The maximum number of required virtual slots. Virtual I/O Server shared Ethernet adapter count, state This category relates to the validation of at least one shared Ethernet adapter on one VIOS. You can view this category from PowerVC Standard Edition. Host storage LUN Visibility LUN visibility test. LUNs are created on storage providers and are visible to Virtual I/O Servers. Host storage FC Connectivity Messages that relate to the enabled access to the SAN fabric by the Virtual I/O Servers and the correct WWPN to validate that VIOS - Fabric - Storage connectivity is established. This category is viewable from PowerVC Standard Edition. Storage Model Type and Firmware Level Messages that relate to the minimum SAN Volume Controller and storage providers’ firmware levels and the allowed machine types and models (MTMs). Brocade Fabric Validations Validation for the switch presence, zoning enablement, and firmware level.
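When a validation in these categories fails, it can help to inspect the same data manually. A hedged sketch using standard HMC commands over SSH (the managed system name Server-828642A is a placeholder):

lssyscfg -r sys -F name,state
lssyscfg -r lpar -m Server-828642A -F name,state,rmc_state,rmc_ipaddr

The second command lists each partition with its RMC connection state and address, which corresponds to the "Virtual I/O Server count, level and RMC state" category that is described above.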
  • 156. 132 IBM PowerVC Version 1.2.3: Introduction and Configuration Figure 5-37 shows the depth of information that is provided by PowerVC. This example shows error messages and then confirmation of an acceptable configuration. By clicking or hovering the mouse pointer over each row of the verification report, you can see pop-up windows with extra information. In addition to the entry description, PowerVC suggests a solution to fix the cause of an error or an informational message. Figure 5-37 Example of a validation message for an error status
5.15 Management of virtual machines and images

The following sections describe the operations that can be performed on VMs and images by using the PowerVC management host:
 5.15.1, “Virtual machine onboarding” on page 134
 5.15.2, “Refresh the virtual machine view” on page 143
 5.15.3, “Start the virtual machine” on page 144
 5.15.4, “Stop the virtual machine” on page 144
 5.15.5, “Capture a virtual machine image” on page 145
 5.15.6, “Deploy a new virtual machine” on page 159
 5.15.7, “Add virtual Ethernet adapters for virtual machines” on page 165
 5.15.8, “Add collocation rules” on page 165
 5.15.9, “Resize the virtual machine” on page 167
 5.15.10, “Migration of virtual machines” on page 169
 5.15.11, “Host maintenance mode” on page 172
 5.15.12, “Restart virtual machines remotely from a failed host” on page 175
 5.15.13, “Attach a volume to the virtual machine” on page 180
 5.15.14, “Detach a volume from the virtual machine” on page 181
 5.15.15, “Reset the state of a virtual machine” on page 183
 5.15.16, “Delete images” on page 184
 5.15.17, “Unmanage a virtual machine” on page 185
 5.15.18, “Delete a virtual machine” on page 185
  • 158. 134 IBM PowerVC Version 1.2.3: Introduction and Configuration Most of these operations can be performed from the Virtual Machines window as shown on Figure 5-39. However, removing a VM, adding an existing VM, and attaching or detaching a volume from a VM are performed from other panels. Figure 5-39 Operations icons on the Virtual Machines view 5.15.1 Virtual machine onboarding PowerVC can manage VMs that were not created by PowerVC, such as VMs that were created before the PowerVC deployment. Follow these steps to add an existing VM: 1. From the PowerVC Home window, click the hosts icon within the main panel (host icon on the left) or click the Hosts link, as shown in Figure 5-40. Figure 5-40 Selecting a host window
  • 159. Chapter 5. PowerVC Standard Edition for managing PowerVM 135 2. Click the line of the host on which the VMs that you want to manage are deployed. The background color of the line changes to light blue. Click the host name in the Name column, as shown in Figure 5-41. Figure 5-41 Selected hosts window 3. The detailed host window opens. On Figure 5-42, the Information and Capacity sections are collapsed for improved viewing. To collapse and expand the sections, click the section names, and you will see the collapse and expand buttons. The Virtual Machines section is expanded, but it contains no data, because PowerVC does not yet manage any VM on this host. Figure 5-42 Collapse and expand sections 4. Under the Virtual Machines section (or in the home Hosts section), click Manage Existing to open a pop-up window with two options: – Manage all fully supported VMs that are not currently being managed by PowerVC. VMs that require preparation need to be selected individually. – Select specific VMs. 5. Check Select specific virtual machines.
6. After PowerVC loads data from the HMC, it displays a new page with two tabs. The Supported tab shows all of the VMs that can be added to be managed by PowerVC. Select one or more VMs that you want to add. The background color changes to light blue for the selected VMs, as shown in Figure 5-43.

Figure 5-43 Adding existing VMs

After you click Manage, PowerVC starts managing the selected VMs.

Note: Checking Manage any supported virtual machines that are not currently being managed by PowerVC and then clicking Manage adds all candidate VMs without asking for confirmation.

Note: If a VM does not meet all of the requirements, the VM appears on the Not supported tab. The tab also shows the reason why PowerVC cannot manage the VM.

Note: The detailed eligibility requirements to add a VM to a PowerVC managed PowerVM host are available in the IBM Knowledge Center: https://ibm.biz/BdXK6a
7. PowerVC displays a pop-up message in the lower-right corner during this process, as shown in Figure 5-44. These messages remain visible for a few seconds.

Figure 5-44 Example of an informational pop-up message

8. After you discover a VM, click the Virtual Machines icon to return to the Manage Existing window. Select the recently added VM. The background color changes to light blue. Double-click the recently added VM to display its detailed information. You can also access the VM’s details window by navigating Home → Hosts → host name → virtual machine name, where host name is the name of the server that contains the VM that you want to view and virtual machine name is the name of that VM.

Tip: You can display the messages again by clicking Messages on the black bar with the IBM logo at the top of the window.
  • 162. 138 IBM PowerVC Version 1.2.3: Introduction and Configuration 9. For improved viewing, you can collapse sections on the window. Figure 5-45 presents the detailed view of a VM with all sections collapsed. You can collapse and expand each section by clicking the section names: Information, Specifications, Network Interfaces, Collocation Rules, and Details. Figure 5-45 Virtual machine detailed view with collapsed sections 10.The Information section displays information about the VM status, health, and creation dates. Table 5-1 explains the fields in the Information section. Table 5-1 Information section fields Field Description Name The name of the VM. State The actual state for the VM. Health The actual health status for the VM. The following health statuses are valid: OK: The target resource, all related resources, and the PowerVC management services for the resources report zero problems. Warning: The target resource or a related resource requires user attention. Important: Nova or cinder host services that manage the resources report problems and require user attention. Critical: The target resource or a related resource is in an error state. Unknown: PowerVC is unable to determine the health status of the resource. ID This internal ID is used by PowerVC management hosts to uniquely identify the VM. Host Host server name where the VM is allocated. Created Creation date and time. Last updated Last update date and time. Note: Each host, network, VM, and any other resource that is created in the PowerVC management host has its own ID number. This ID uniquely identifies each resource to the PowerVC management host.
11. In Figure 5-46, the Information section is expanded to display details about the recently added VM.

Figure 5-46 Virtual machine detailed view of expanded Information section

12. Collapse the Information view and expand the Specifications section. This section contains information that relates to the VM capacity and resources. Table 5-2 describes the fields in the Specifications section.

Table 5-2 Specifications section fields
 Remote restart enabled: Whether remote restart is enabled.
 Remote restart state: Status of the remote restart.
 Memory: Amount of memory (expressed in MB).
 Processors: Amount of entitled processing capacity.
 Minimum memory (MB): Amount of minimum desired memory.
 Maximum memory (MB): Amount of maximum memory.
 Minimum processors: Amount of minimum virtual processor capacity.
 Maximum processors: Amount of maximum virtual processor capacity.
 Availability priority: Priority number for availability when a processor fails.
 Processor mode: Shared or dedicated processor mode selected.
 Minimum processing units: Amount of minimum entitled processing capacity.
 Maximum processing units: Amount of maximum entitled processing capacity.
 Sharing mode: Uncapped or capped mode selected.
 Shared weight: Weight to request shared resources.
 Processor compatibility mode: The processor compatibility mode that is in effect, determined when the instance is powered on.
 Desired compatibility mode: The processor compatibility mode that is wanted for the VM.
 Operating system: The name and level of the operating system that is installed on the partition.

13. Figure 5-47 provides an example of the Specifications section for the recently added VM.

Figure 5-47 Virtual machine detailed view of expanded Specifications section
  • 165. Chapter 5. PowerVC Standard Edition for managing PowerVM 141 14.Collapse the Specifications section and expand the Network Interfaces section. This section contains information that relates to the virtual network connectivity, as shown in Figure 5-48. Figure 5-48 Virtual machine detailed view of expanded Network Interfaces section
  • 166. 142 IBM PowerVC Version 1.2.3: Introduction and Configuration 15.Double-click Network Interfaces. Two tabs are shown. The Overview tab displays the Network detailed information, including the VLAN ID, the Virtual I/O Servers that are involved, the shared Ethernet adapters, and other useful information. The IP Pool tab displays the range of IP addresses that make up the IP pool (if you previously defined it). Figure 5-49 displays the Network Overview tab. Figure 5-49 Detailed Network Overview tab 16.The Collocation Rules section displays the collocation rules that are used to allocate the VM (if you configured collocation rules). 17.The last section of the Virtual Machine window is the Details section that presents the status and the hypervisor names for the VM as listed in Table 5-3. Table 5-3 Details section’s fields Field Description Power state Power status for the VM Task status Whether a task is running on the VM and the status of the task Disk config How the disk was configured into the VM Hypervisor host name The name of the host in the hypervisor and the HMC Hypervisor partition name The name of the VM in the hypervisor and the HMC
  • 167. Chapter 5. PowerVC Standard Edition for managing PowerVM 143 5.15.2 Refresh the virtual machine view Refresh will reload the information for the currently selected VM. Click Refresh to reload the information. Figure 5-50 shows the detailed Information section of the Overview tab for the selected VM. Figure 5-50 Virtual machine Refresh icon Out-of-band operations In the context of PowerVC, the term out-of-band operation refers to any operation on an object that is managed by PowerVC that is not performed from the PowerVC tool. For example, an LPM operation that is initiated directly from an HMC is considered an out-of-band operation. With the default polling interval settings, it might take several minutes for PowerVC to be aware of the change to the environment as a result of an out-of-band operation. Note: On many PowerVC windows, you can see a Refresh icon, as shown by the red highlighting in Figure 5-50. Most windows update asynchronously through long polling in the background. Refresh is available if you think that the window does not show the latest data from those updates. (You suspect something went wrong with a network connection, or you want to ensure that the up-to-date data displays.) By clicking the Refresh icon, a Representational State Transfer (REST) call is made to the PowerVC server to get the latest data that is available from PowerVC.
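Because PowerVC is built on OpenStack, the REST call behind the Refresh icon is an OpenStack-compatible API request. The following is a hedged sketch only; the host name, port, tenant ID, and token acquisition are assumptions, not values documented here:

curl -k -H "X-Auth-Token: $TOKEN" \
  https://powervc.example.com:8774/v2/$TENANT_ID/servers/$VM_ID

A successful call returns the latest VM attributes as JSON, which is the same data that the GUI renders after a refresh.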
  • 168. 144 IBM PowerVC Version 1.2.3: Introduction and Configuration 5.15.3 Start the virtual machine From the Virtual Machines window, you can use the Start option to power on the currently selected VM. After the VM finishes the startup process, the VM is available for operations that are performed through the PowerVC management host. The process takes more time than the boot process of the operating system. PowerVC waits until the RMC service is available to communicate with the VM. Even though the status field is Active (because the VM is powered on), the health field displays a message warning that is similar to “Reason: RMC state of virtual machine vmaix01 is Inactive”. Wait for a few minutes for the health field to display a status of OK before you manage the VM from PowerVC. Figure 5-51 displays the VM after it starts. Figure 5-51 Virtual machine fully started 5.15.4 Stop the virtual machine From the VM’s detailed window, click Stop to shut down the VM. Important: PowerVC presents a pop-up window that asks for confirmation that you want to shut down the machine before PowerVC acts.
  • 169. Chapter 5. PowerVC Standard Edition for managing PowerVM 145 When the VM completes the shutdown process, the state changes to Shutoff as shown in Figure 5-52. This process takes a few minutes to complete. Figure 5-52 Virtual machine powered off 5.15.5 Capture a virtual machine image You can capture an operating system image of a VM that you created or deployed. This image will then be used to install the operating system of the future VMs that are created from PowerVC. Before you capture the VM, you must first prepare and enable it. To enable a VM, you can use either the activation engine or the cloud-init technologies. Next, the steps to install each technology are described. Requirements for capture To be eligible for image capture, a VM must meet several requirements: The VM must use any of the operating system versions that are supported by PowerVC. Your PowerVC environment is configured. The host on which the VM executes is managed by PowerVC. The VM uses virtual I/Os and virtual storage; the network and storage devices are provided by the VIOS. Note: If an active RMC connection exists between PowerVC and the target VM, a shutdown of the operating system is triggered. If no active RMC connection exists, the VM is shut down without shutting down the operating system. Note: See the “Capture requirements” page in the IBM Knowledge Center to prepare the VM and to verify that all of the capture requirements are met: https://ibm.biz/BdXK6a
  • 170. 146 IBM PowerVC Version 1.2.3: Introduction and Configuration The /var directory on the PowerVC management hosts must have enough space (PowerKVM only). When you capture VMs that use local storage, the /var directory on the management server is used as the repository for storing the images. The file system that contains the /var directory needs to have enough space to store the captured images. This amount can be several GBs, depending on the VM to capture. If you plan for a Linux VM with multiple paths to storage, you must configure Linux for multipath I/O (MPIO) on the root device. If you want to capture an IBM i VM, multiple boot volumes are supported. The VM is powered off. When you power off a VM, the status will appear as Active until the VM completely shuts down. You can select the VM for capture even if the status is displayed as Active. Operating systems that use a Linux Loader (LILO) or Yaboot boot loader, such as SUSE Linux Enterprise Server (SLES) 10, SLES 11, RHEL 5, and RHEL 6, require special steps when you use VMs with multiple disks. These operating systems must be configured to use a Universally Unique Identifier (UUID) to reference their boot disk. SLES 11 virtual servers mount devices by using -id notation, by default, which means that they are represented by symbolic links. To address this issue, you need to perform one of the following configurations before you capture a SLES VM for the first time: – Configure Linux for MPIO on the root device on VMs that will be deployed to multiple Virtual I/O Servers or multipath environments. – Update /etc/fstab and /etc/lilo.conf to use UUIDs instead of symbolic links. Follow these steps to change the devices so that they are mounted by UUID: a. Search the file system table /etc/fstab for the presence of symbolic links. Symbolic links look like this example: /dev/disk/by-* b. Store the mapping of /dev/disk/by-* symlinks to their target devices in a scratch file and ensure that you use the device names in it, for example: ls -l /dev/disk/by-* > /tmp/scratchpad.txt c. The contents of the scratchpad.txt file are similar to Example 5-1. Example 5-1 scratchpad.txt file /dev/disk/by-id: total 0 lrwxrwxrwx 1 root root 9 Apr 10 12:07 scsi-360050768028180ee380000000000603c -> ../../sda lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-360050768028180ee380000000000603c-part1 -> ../../sda1 Tip: Because the default Red Hat Enterprise Linux (RHEL) configuration creates a restricted list for all WWPN entries, you must remove them to enable the deployment of a captured image. The following RHEL link describes how to remove them: https://ibm.biz/BdXapw Important: When you enable the activation engine, the VM is powered off automatically. When you use cloud-init, you must shut down the VM manually before the capture.
  • 171. Chapter 5. PowerVC Standard Edition for managing PowerVM 147 lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-360050768028180ee380000000000603c-part2 -> ../../sda2 lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-360050768028180ee380000000000603c-part3 -> ../../sda3 lrwxrwxrwx 1 root root 9 Apr 10 12:07 wwn-0x60050768028180ee380000000000603c -> ../../sda lrwxrwxrwx 1 root root 10 Apr 10 12:07 wwn-0x60050768028180ee380000000000603c-part1 -> ../../sda1 lrwxrwxrwx 1 root root 10 Apr 10 12:07 wwn-0x60050768028180ee380000000000603c-part2 -> ../../sda2 lrwxrwxrwx 1 root root 10 Apr 10 12:07 wwn-0x60050768028180ee380000000000603c-part3 -> ../../sda3 total 0 lrwxrwxrwx 1 root root 9 Apr 10 12:07 scsi-0:0:1:0 -> ../../sda lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part1 -> ../../sda1 lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part2 -> ../../sda2 lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part3 -> ../../sda3 /dev/disk/by-uuid: total 0 lrwxrwxrwx 1 root root 10 Apr 10 12:07 3cb4e486-10a4-44a9-8273-9051f607435e -> ../../sda2 lrwxrwxrwx 1 root root 10 Apr 10 12:07 c6a9f4e8-4e87-49c9-b211-89086c2d1064 -> ../../sda3 d. Edit the /etc/fstab file. Replace the /dev/disk/by-* entries with the device names to which the symlinks point, as laid out in your scratchpad.txt file. Example 5-2 shows how the lines look before you edit them. Example 5-2 scratchpad.txt file /dev/disk/by-id/scsi-360050768028180ee380000000000603c-part2 swap swap defaults 0 0 /dev/disk/by-id/scsi-360050768028180ee380000000000603c-part3 / ext3 acl,user_xattr 1 1 In this example, those lines are changed to refer to the specific device names. See Example 5-3. Example 5-3 Specific device names for the /etc/fstab file /dev/sda2 swap swap defaults 0 0 /dev/sda3 / ext3 acl,user_xattr 1 1 e. Edit the /etc/lilo.conf file so that the root lines correspond to the device UUID and the boot line corresponds to the device names. Example 5-4 shows how the lines look before you edit them. Example 5-4 /etc/lilo.conf file boot = /dev/disk/by-id/scsi-360050768028180ee380000000000603c-part1 root = /dev/disk/by-id/scsi-360050768028180ee380000000000603c-part3
  • 172. 148 IBM PowerVC Version 1.2.3: Introduction and Configuration In Example 5-5, those lines were changed to refer to the specific device names. Example 5-5 Specific devices names for the /etc/lilo.conf file boot = /dev/sda1 root = /dev/sda3 f. Run the lilo command. g. Run the mkinitrd command. Preparing a virtual machine with cloud-init The cloud-init script enables VM activation and initialization, and it is widely used for OpenStack. Before you capture a VM, install the cloud-init initialization package. This package is available at the /opt/ibm/powervc/images/cloud-init path in the PowerVC host. Follow these steps: 1. Before you install cloud-init, you must install the dependencies for cloud-init. These dependencies are not included with the operating systems: – For SLES, install the dependencies that are provided in the SLES repo: ftp://ftp.unicamp.br/pub/linuxpatch/cloud-init-ppc64/sles11 (or sles12) – For RHEL, add the EPEL yum repository for the latest level of the dependent RPMs: Use these commands for RHEL6, for example: wget http://dl.fedoraproject.org/pub/epel/6Server/ppc64/epel-release-6-8.noarch.rpm rpm -Uvh epel-release-6*.rpm Use these commands for RHEL7, for example: wget http://dl.fedoraproject.org/pub/epel/7/ppc64/e/epel-release-7-5.noarch.rpm rpm -Uvh epel-release-7*.rpm – For AIX, follow the instructions to download the cloud-init dependencies: ftp://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/cloudinit 2. Install the appropriate cloud-init RPM for your operating system that is available at /opt/ibm/powervc/images/cloud-init. However, if the VM already has an installed cloud-init RPM, you must uninstall the existing RPM first. – For RHEL, install the appropriate RPM from /opt/ibm/powervc/images/cloud-init/rhel: • RHEL6: cloud-init-0.7.4-5.el6.noarch.rpm • RHEL7: cloud-init-0.7.4-5.el7.noarch.rpm Important: If you are installing the cloud-init package to capture a VM on which the activation engine is already installed, you must first uninstall the activation engine. To check whether the activation engine Red Hat Package Managers (RPMs) are installed, run this command on the VM: # rpm -qa | grep activation
  • 173. Chapter 5. PowerVC Standard Edition for managing PowerVM 149 – For SLES, install the appropriate RPM from /opt/ibm/powervc/images/cloud-init/sles: • SLES 11: cloud-init-0.7.4-2.4.ppc64.rpm • SLES 12: cloud-init-0.7.5-8.10.ppc64le.rpm – For Ubuntu Linux, install the appropriate RPM from /opt/ibm/powervc/images/cloud-init/ubuntu: Ubuntu 15: cloud-init_0.7.7~bzr1091-0ubuntu1_all.deb – For AIX, download the AIX cloud-init RPM from this address: ftp://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/cloudinit 3. After you install cloud-init, modify the cloud.cfg file, which is available at /etc/cloud/cloud.cfg, by using the following values: – For RHEL, set the following values: disable_root: 0 ssh_pwauth: 1 ssh_deletekeys: 1 – For SLES, perform these tasks: • Remove the following field: users: -root • Add the following fields: ssh_pwauth: true ssh_deletekeys: true – For both RHEL and SLES, add the following new values to the cloud.cfg file: disable_ec2_metadata: True datasource_list: ['ConfigDrive'] – For SLES only, after you update and save the cloud.cfg file, run the following commands: • chkconfig -s cloud-init-local on • chkconfig -s cloud-init on • chkconfig -s cloud-config on • chkconfig -s cloud-final on – For RHEL 7.0 and 7.1, ensure that the following conditions are set on the VM that you are capturing: • Set SELinux to permissive or disabled on the VM that you are capturing or deploying. • The Network Manager must be installed and enabled. • Ensure that the net-tools package is installed. • Edit all of the /etc/sysconfig/network-scripts/ifcfg-eth* files to update their NM_CONTROLLED = no settings. Note: This package is not installed by default when you select the Minimal Install software option during the installation of RHEL 7.0 and 7.1 from an International Organization for Standardization (ISO) image.
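As an illustration of these steps on RHEL 6, the commands might look like the following minimal sketch (the PowerVC host name powervc.example.com is a placeholder; the package versions are the ones listed above):

wget http://dl.fedoraproject.org/pub/epel/6Server/ppc64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm
scp root@powervc.example.com:/opt/ibm/powervc/images/cloud-init/rhel/cloud-init-0.7.4-5.el6.noarch.rpm .
yum localinstall -y cloud-init-0.7.4-5.el6.noarch.rpm

After the edits in step 3, the relevant fragment of /etc/cloud/cloud.cfg on RHEL contains the following values (a sketch that shows only the lines discussed above, not a complete file):

disable_root: 0
ssh_pwauth: 1
ssh_deletekeys: 1
disable_ec2_metadata: True
datasource_list: ['ConfigDrive']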
4. Remove the Media Access Control (MAC) address information. For more information about how to remove the MAC address information, see the OpenStack page: http://docs.openstack.org/image-guide/content/ch_openstack_images.html

Important: The /etc/sysconfig/network-scripts file path that is mentioned in the previous OpenStack page about the HWADDR applies only to RHEL. For SLES, the HWADDR path is /etc/sysconfig/network. For example, for the ifcfg-eth0 adapter, on RHEL, remove the HWADDR line from /etc/sysconfig/network-scripts/ifcfg-eth0, and on SLES, remove the HWADDR line from /etc/sysconfig/network/ifcfg-eth0. The 70-persistent-net.rules and 75-persistent-net-generator.rules files are required to add or remove network interfaces on the VMs after deployment. Ensure that you save these files so that you can restore them after the deployment is complete. These rules files are not supported by RHEL 7.0 and 7.1. Therefore, after you remove the adapters, you must update the adapter configuration files manually on the VM to match the current set of adapters.

5. Enable and configure the modules (Table 5-4) and the host name behavior by modifying the cloud.cfg file:
– Linux: /etc/cloud/cloud.cfg
– AIX: /opt/freeware/etc/cloud/cloud.cfg
– We recommend that you enable reset-rmc and update-bootlist on Linux:
• reset-rmc: This module automatically resets RMC. This action is enabled by default on AIX. It can be enabled on Linux by adding - reset-rmc to the cloud_init_modules: section.
• update-bootlist: This module removes the temporary virtual optical device, which is used to send configuration information to the VM, from the VM’s bootlist. This action is enabled by default on AIX. It can be enabled on Linux by adding - update-bootlist to the cloud_init_modules: section.
– Host name: If you want to change the host name after the deployment, remove - update_hostname from the list of cloud_init_modules. If you do not remove it, cloud-init resets the host name to the originally deployed value when the system is restarted.

Table 5-4 Modules and descriptions
 restore_volume_group: This module restores non-rootVG volume groups when you deploy a new VM. Note: For AIX, run the /opt/freeware/lib/cloud-init/create_pvid_to_vg_mappings.sh command to save the information that is used to restore custom volume groups on all VMs that are deployed from the image that will be captured. Saving this information is useful if you have a multidisk VM that has a dataVG volume group defined. The module restores the dataVG after the deployment.
 set_multipath_hcheck_interval: Use this module to set the hcheck interval for multipath. If you deploy a multidisk VM and this module is enabled, you can specify a cloud-config data entry that is named "multipath_hcheck_interval" and give it an integer value in seconds. Post-deployment, each of the VM’s disks has its hcheck_interval property set to the value that was passed through the cloud-config data. Use the lsattr -El hdisk# -a hcheck_interval command for verification. If you do not specify the value within the cloud-config data, the module sets each disk’s value to 60 seconds.
 set_hostname_from_dns: Use this module to set your VM’s host name by using the host name values from your Domain Name Server (DNS). To enable this module, add - set_hostname_from_dns to the cloud_init_modules section. Then, remove the - set_hostname and - update_hostname lines.
 set_hostname_from_interface: Use this module to choose the network interface, and therefore the IP address, to be used for the reverse lookup. The valid values are interface names, such as eth0 and en1. On Linux, the default value is eth0. On AIX, the default value is en0.
 set_dns_shortname: This module specifies whether to use the short name to set the host name. Valid values are True to use the short name or False to use the fully qualified domain name. The default value is False.

6. You can also deploy with both static and Dynamic Host Configuration Protocol (DHCP) interfaces on SLES 11 and SLES 12:
– If you want cloud-init to set the host name, set the DHCLIENT_SET_HOSTNAME option in the /etc/sysconfig/network/dhcp file to no.
– If you want cloud-init to set the default route by using the first static interface, which is standard, set the DHCLIENT_SET_DEFAULT_ROUTE option in the /etc/sysconfig/network/dhcp file to no.
If you do not set these options to no and then deploy with both static and DHCP interfaces, the DHCP client software might overwrite the values that cloud-init sets for the host name and default route, depending on how long it takes to get DHCP leases for each DHCP interface.

7. For AIX, run the /opt/freeware/lib/cloud-init/create_pvid_to_vg_mappings.sh command to save the information that is used to restore custom volume groups on all VMs that are deployed from the image that will be captured.

8. Manually shut down the VM.

Preparing a virtual machine with the activation engine

Follow these steps to install and enable the activation engine:

1. Look for the vmc.vsae.tar activation engine package on the PowerVC management host in the /opt/ibm/powervc/activation-engine directory.

2. Copy the vmc.vsae.tar file to the VM that you will capture. This file can be stored in any directory that matches your environment’s guidelines.

3. On the VM that you will capture, extract the contents of the vmc.vsae.tar file.
4. For AIX, perform these tasks:
– Ensure that the JAVA_HOME environment variable is set and points at a Java runtime environment (JRE), for example:
# export JAVA_HOME=/usr/java5/jre
– Run the activation engine installation command: ./aix-install.sh

5. For Linux, run the following command, which was included in the vmc.vsae.tar file: linux-install.sh
When you run this command on Linux, you are asked whether the operating system is running on a kernel-based VM (KVM) hypervisor. Answer no to this question.

6. You can remove the .tar file and extracted files now, unless you want to remove the activation engine later.

Before you capture a VM, you must enable the activation engine that is installed on it. To enable the activation engine, follow these steps:

1. If you previously captured the VM and want to capture it again, run the commands that are shown in Example 5-6.

Example 5-6 Commands to enable the activation engine
rm /opt/ibm/ae/AP/*
cp /opt/ibm/ae/AS/vmc-network-restore/resetenv /opt/ibm/ae/AP/ovf-env.xml

Important: The following step will shut down the VM. Ensure that no users or programs are active and that the machine can be stopped before you execute this step.

2. Prepare the VM to be captured by running the following command: /opt/ibm/ae/AE.sh -R

Note: When this command finishes, the VM is powered off and ready to be captured.

3. Wait until the VM is powered off. See Example 5-7 for an example of the output of the command.

Example 5-7 Output from the /opt/ibm/ae/AE.sh -R command
# /opt/ibm/ae/AE.sh -R
JAVA_HOME=/usr/java5/jre
[2013-11-01 16:44:55,831] INFO: Looking for platform initialization commands
[2013-11-01 16:44:55,841] INFO: OS: AIX Version: 7.1
[2013-11-01 16:44:56,315] INFO: No initialization commands found....continuing
[2013-11-01 16:44:56,319] INFO: Base PA: /opt/ibm/ae/ovf-env-base.xml
[2013-11-01 16:44:56,322] INFO: VSAE Encryption Level: Disabled
[2013-11-01 16:44:56,323] INFO: CLI parameters are '['AE/ae.py', '-R']'
[2013-11-01 16:44:56,325] INFO: AE base directory is /opt/ibm/ae/
[2013-11-01 16:44:56,345] INFO: Activating system. AP file: None. Interactive: False
[2013-11-01 16:44:56,513] INFO: In activation
[2013-11-01 16:44:56,513] INFO: Activating products
[2013-11-01 16:44:56,515] INFO: Start to activate com.ibm.ovf.vmcontrol.system
0821-515 ifconfig: error loading /usr/lib/drivers/if_eth: A file or directory in the path name does not exist.
[2013-11-01 16:44:56,846] INFO: Start to activate com.ibm.ovf.vmcontrol.restore.network
0821-515 ifconfig: error loading /usr/lib/drivers/if_eth: A file or directory in the path name does not exist.
[2013-11-01 16:44:59,917] INFO: Activating the operating system
[2013-11-01 16:44:59,947] INFO: Cleaning AR and AP directories
[2013-11-01 16:44:59,957] INFO: Shutting down the system
SHUTDOWN PROGRAM
Fri Nov 1 16:45:01 CDT 2013
Broadcast message from root@vmaix01 (tty) at 16:45:01 ...
shutdown: PLEASE LOG OFF NOW !!!
System maintenance is in progress.
All processes will be killed now.
Broadcast message from root@vmaix01 (tty) at 16:45:01 ...
shutdown: THE SYSTEM IS BEING SHUT DOWN NOW
JAVA_HOME=/usr/java5/jre
[2013-11-01 16:45:10,040] INFO: Looking for platform initialization commands
[2013-11-01 16:45:10,049] INFO: OS: AIX Version: 7.1
[2013-11-01 16:45:10,424] INFO: No initialization commands found....continuing
[2013-11-01 16:45:10,428] INFO: Base PA: /opt/ibm/ae/ovf-env-base.xml
[2013-11-01 16:45:10,430] INFO: VSAE Encryption Level: Disabled
[2013-11-01 16:45:10,433] INFO: CLI parameters are '['AE/ae.py', '-d', 'stop']'
[2013-11-01 16:45:10,434] INFO: AE base directory is /opt/ibm/ae/
[2013-11-01 16:45:10,453] INFO: Stopping AE daemon.
[2013-11-01 16:45:10,460] INFO: AE daemon was not running.
0513-044 The sshd Subsystem was requested to stop.
Wait for '....Halt completed....' before stopping.
Error reporting has stopped.
  • 178. 154 IBM PowerVC Version 1.2.3: Introduction and Configuration If you need to uninstall the activation engine from a VM, log on to this VM command-line interface (CLI). Change your working directory to the directory where you unpacked (tar -x) the vmc.vsae.tar activation engine package. Run the following commands: For AIX, run this command: aix-install.sh -u For Linux, run this command: linux-install.sh -u Capture the virtual machine image Follow these steps to capture a VM image: 1. After you complete the previous steps to install and prepare the VM for capture, log on to the PowerVC GUI. Go to the Virtual Machines view. Select the VM that you want to capture, as shown in Figure 5-53. Click Continue. Figure 5-53 Capture window 2. Use PowerVC to choose the name for your future image and select the volumes (either boot volumes or data volumes) that you want to capture.
  • 179. Chapter 5. PowerVC Standard Edition for managing PowerVM 155 3. When you capture a VM, all volumes that belong to its boot set are included in the image that is generated by the capture. If the VM is brought into PowerVC management, the boot set consists of all volumes that are marked as the boot set when PowerVC manages the VM. If the VM is deployed from an image that is created within PowerVC, the boot set consists of all volumes that the user chooses as the boot set when the user creates the image. Unlike the volumes that belong to the VM’s boot set, the user can choose which data volumes to include in the image that is generated by the capture. Figure 5-54 shows an example of choosing to capture both boot volumes and data volumes. Click Capture. Figure 5-54 Capture boot and data volumes
4. PowerVC shows a confirmation page that lists all of the VM volumes that were chosen for capture. See Figure 5-55. Click Capture again to start the capture process.

Figure 5-55 Capture window confirmation

5. In Figure 5-56, the Task column displays a “Pre-capture processing started” message. In addition, a pop-up message, which states that PowerVC is taking a snapshot of the VM image, appears for a few seconds in the lower-right corner of the window.

Figure 5-56 Image snapshot in progress
6. If you open the Images window while an image capture is ongoing, the image state is displayed as Queued, as shown in Figure 5-57.

Figure 5-57 Image creation in progress

7. When the image capture is complete, the state in the Images view changes to Active.

8. Look at the Storage volumes window. You can see the storage volumes that were created to hold the VM images. For example, Figure 5-58 shows two volumes that contain the images that were captured from the same VM.

Figure 5-58 Storage volumes view

9. The PowerVC management host captures the image in the same way that it manages adding a volume to the system, but it adds information so that the volume can be used as an image. This information enables the image to appear in the Images view for deploying new VMs.
  • 182. 158 IBM PowerVC Version 1.2.3: Introduction and Configuration 10.Click the Images icon on the left bar to return to the Images view. Select the image to display its information in detail. Double-click the image to open a window that is similar to the window that is shown in Figure 5-59. Figure 5-59 Expanded information for a captured image 11.Table 5-5 explains each field in the Information section. Table 5-5 Description of the fields in the Information section Field Description Name Name of the image capture State Current state of the image capture ID Unique identifier number for the resource Description Quick description of the image Checksum Verification sum for the resource Captured VM Name of the VM that was used to create the image Created Created date and time Last updated Last updated date and time
12. Table 5-6 explains each field of the Specifications section.

Table 5-6 Description of the fields in the Specifications section
 Image type: Description of the image type
 Container format: Type of container for the data
 Disk format: The specific format for the disk
 Operating system: The operating system on the image
 Hypervisor type: The name of the hypervisor that is managing the image
 Architecture: Architecture of the image
 Endianness: Big endian or little endian

13. The Volumes section displays all of the storage information about the image.

14. The Virtual Machines section displays the list of VMs that were deployed by using this image. The Virtual Machines section is shown in Figure 5-60.

Figure 5-60 Volumes section and Virtual Machines section

5.15.6 Deploy a new virtual machine

You can deploy a new VM by reusing one of the images that was captured as described in 5.15.5, “Capture a virtual machine image” on page 145. You can deploy to a specific host, or the placement policy can choose the best location for the new VM. For more information about the placement policy functionality, see 3.3, “Placement policies and templates” on page 38.
PowerVC version 1.2.3 has the following limits on deployments:
 PowerVC supports a maximum of 50 concurrent deployments.
 We recommend that you do not exceed eight concurrent deployments for each host.
 Running more than 10 concurrent deployment operations might require additional memory and processor capacity on the PowerVC management host.
 If you use only SAN storage and you plan to batch-deploy over 100 VMs that are based on one image, you must make multiple copies of that image and deploy the VMs in batches of 10.

The following settings might increase the throughput and decrease the duration of deployments:
 Use the striping policy instead of the packing policy.
 Limit the number of concurrent deployments to match the number of hosts.

The host group and storage connectivity group that you select determine the hosts that are available as target hosts in the deployment operation. For more information, see 3.5.3, “Storage connectivity groups and tags” on page 58.

Important: Before you deploy an image, you can set a default domain name that PowerVC uses when it creates new VMs by using the powervc-domainname command. This domain name is used to create the fully qualified name of the new VM. If you set the domain name to ibm.com and you create a partition with the name new_VM, its fully qualified host name will be new_VM.ibm.com. If you do not set a default domain name in the nova.conf file, PowerVC uses the domain that is set for the VIOS on the host to which you are deploying. If PowerVC cannot retrieve that value, it uses the domain name of the PowerVC management host. If it cannot retrieve that value either, no domain name is set and you must set the domain name manually after you deploy the image. See 4.7, “PowerVC command-line interface” on page 92 for details about the PowerVC CLI and the powervc-domainname command.
  • 185. Chapter 5. PowerVC Standard Edition for managing PowerVM 161 You can initiate a new deployment from the Images window to list the available images. Follow these steps: 1. Select the image that you want to install on the VM that you create. The selected image background changes to light blue. Then, click Deploy, as shown in Figure 5-61. Figure 5-61 Image capture that is selected for deployment 2. PowerVC opens a new window where you need to define information about the new VM. Figure 5-62 on page 163 presents an example of this window. In advance, during the planning phase of the partition creation, you defined the following information: – VM name – Instances If you have a DHCP server or an IP pool that is configured, you can deploy several VMs simultaneously. – Host or host group Manually select the target host where the new VM will be deployed, or select the host group so that PowerVC selects the host based on the configured policy. See 3.3, “Placement policies and templates” on page 38 for details about the automatic placement of partitions. – Storage connectivity group Select one storage connectivity group for the new VM to access its storage. PowerVC can use a storage connectivity group to determine the use of vSCSI or NPIV to access SAN storage. See 3.5.3, “Storage connectivity groups and tags” on page 58 for details about the selection of the storage path and FC ports to use. – Compute template Select the compute template that you want to use to deploy the new VM with standard resource definitions. See 3.3.4, “Information that is required for compute template planning” on page 42 for detailed information about planning for CPU and memory resources by using templates. You can see on Figure 5-62 on page 163 that PowerVC displays the values pre-set in the template in fields that can be overwritten. You can change the amount of resources that you need for this new VM.
  • 186. 162 IBM PowerVC Version 1.2.3: Introduction and Configuration – Image volumes Since PowerVC version 1.2.3, you can capture a multiple-volume image. In this case, two volumes are included in the image. You need to select the storage template that you want for each volume to deploy the new VM with predefined storage capacity. You can select different storage templates for those volumes to meet your business needs. PowerVC presents a drop-down menu that lists the storage templates that are available in the storage provider in which the image volumes are stored. – New and existing volumes You can add new or existing volumes in addition to the volumes that are included in the image. To add volumes, click Add volume. The Add Volume page, where you attach a volume to the VM opens. – Network: • Primary network Select the network. If the selected network does not have a configured DHCP server, you must also manually provide an IP address or PowerVC selects an IP address from the IP pool. • Additional networks If two or more networks were defined in PowerVC, you can click the plus sign (+) icon to add more networks. Select the network. Get the IP address from the DHCP server, provide the IP address manually, or select one from the IP pool automatically. – Activation input: You can upload configuration scripts or add configuration data at the time of deploying a VM by using the activation input option. This script or data will automatically configure your VM according to your requirements, after it is deployed. For more information about the accepted data formats in cloud-init and examples of commonly used cloud configuration data formats, see the cloud-init documentation. For more information about activation input, see the IBM Knowledge Center: http://www-01.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.s tandard.help.doc/powervc_deploy_considerations.html Note: PowerVC verifies that the IP address that you provide is not already used for another VM, even if the IP address is used in a VM that is powered off. Note: The file or scripts that you upload and add here are used by the cloud-init initialization package and the activation engine (AE) for AIX VMs only. The activation engine for AIX VMs supports shell scripts that start with #! only, and it does not support the other cloud-init data formats. For any other operating system, the activation engine does not use the data that you upload for activation. Note: On the right part of the window, PowerVC displays the amount of available resources on the target host and the amount of additional resources that are requested for the new partition. So, you can see the amount of resources that are used and free on this host after the installation of the new partitions.
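For the activation input option described above, an AIX activation script must be a shell script that starts with #!. The following is a minimal illustrative example; its contents are an assumption, not a required format:

#!/bin/sh
# First-boot customization applied through the activation input mechanism
echo "Deployed by PowerVC on $(date)" >> /etc/motd
chfs -a size=+1G /tmp

For Linux VMs that are activated with cloud-init, the richer cloud-config data formats that are referenced in the cloud-init documentation can be used instead.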
Figure 5-62 shows the window where you define information about the new VM.

Figure 5-62 Information to deploy an image

3. Click Deploy on the lower part of the window to start the deployment of the new VM. This process might take a few minutes to finish.

Important: Storage devices from other vendors do not offer a technique like the IBM FlashCopy® service in IBM Storwize storage; they use LUN migration instead. A deployment might take an hour to complete. The amount of time depends on the volumes’ sizes and the storage device performance. Contact your storage administrator for more information before you design your PowerVC infrastructure.
4. When the deployment finishes, you can see a new VM in the Virtual Machines window. This new VM is a clone of the captured image. The new VM is already configured and powered on, as shown in Figure 5-63.

Figure 5-63 Newly deployed virtual machine

Tip: The new VM is a clone of the image, so you can log on to this VM with the same user ID and password combination that is defined in the VM from which the image was captured.
5.15.7 Add virtual Ethernet adapters for virtual machines

After the VM is deployed successfully, you can add more virtual Ethernet adapters to the VM if more networks are defined in PowerVC. PowerVC allows only one virtual Ethernet adapter per network in a VM. Follow these steps:
1. To add a virtual Ethernet adapter to a VM, select the VM name on the Virtual Machines page.
2. Then, go to the VM’s details page. As shown in Figure 5-64, in the Network Interfaces section, click Add.
3. Select the network that you want to connect. Assign an IP address, or PowerVC selects an IP address from the IP pool.
4. Click Add Interface. A new virtual Ethernet adapter is added to the VM.

Figure 5-64 Add an Ethernet adapter for a virtual machine

5.15.8 Add collocation rules

Use collocation rules to specify that selected VMs must always be kept on the same host (affinity) or that they can never be placed on the same host (anti-affinity). These rules are enforced when a VM is relocated. For example, in PowerHA scenarios, you need to force a pair of high availability (HA) VMs to run on different physical machines; otherwise, a single point of failure (SPOF) risk exists. Use the anti-affinity collocation rule to create this scenario.

Note: After you add the virtual Ethernet adapter, you must refresh the hardware list in the partition. For example, run the cfgmgr command in AIX to discover the new adapter, and then assign the IP address to it manually. A short example follows.
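For example, on AIX the refresh and the manual IP configuration might look like the following sketch (the interface name en1, the host name, and the addresses are hypothetical):

cfgmgr
mktcpip -h myvm -a 10.1.1.20 -m 255.255.255.0 -i en1 -g 10.1.1.1

The cfgmgr command discovers the new virtual Ethernet adapter, and mktcpip assigns the IP address, netmask, and default gateway to the corresponding interface.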
To create a new collocation rule, select Configuration → Collocation Rules → Create Collocation Rule, as shown in Figure 5-65. Enter the collocation rule name, select the policy type (either Affinity or Anti-Affinity), select the VMs, and click Create. The collocation rule creation is complete.

Figure 5-65 Create Collocation Rule

Important: When VMs are migrated or restarted remotely, one VM is moved at a time, which has the following implications for VMs in collocation rules that specify affinity:
– The VMs cannot be migrated or restarted remotely on another host.
– When you put a host into maintenance mode, if that host has multiple VMs in the same collocation rule, you cannot migrate active VMs to another host.
To migrate a VM or restart a VM remotely in these situations, first remove the VM from the collocation rule. After the VM is migrated or restarted remotely, add it back to the correct collocation rule.
5.15.9 Resize the virtual machine

The PowerVC management host can resize the managed VMs dynamically. Follow these steps:
1. From the Virtual Machines window, click Resize on the upper bar of the window, as shown in Figure 5-66.

Figure 5-66 Virtual Machine resize

2. In the next window (Figure 5-67), enter the new values for the resources or choose an existing compute template. Select the option that best fits your business needs.

Figure 5-67 VM Resize dialog window to select a compute template
When you enter a new value, it is verified against the minimum and maximum values that are defined in the partition profile. If the requested new values exceed these limits for the VM, PowerVC rejects the request, highlights the field with a red outline, and issues an error notice. See Figure 5-68.

Figure 5-68 Exceeded value for resizing

3. After you complete the information that is required in this window, click Resize to start the resizing process. You will see a pop-up message in the lower-right part of the window and a “complete” message in the message view.
4. The resize process can take a few minutes. When it finishes, you can see the new sizes in the Specifications section of the VM.

Tip: The PowerVC management server compares the entered values with the values in the profile of the selected VM. If you modify the VM profile, you must shut down and restart the VM for the changes to take effect.

Important: To refresh the profile, shut down and restart the VM rather than reboot it. Rebooting the VM keeps the current values rather than reading the new values that you set in the profile.

Note: With the PowerVC resize function, you can change only the current settings of the machine. You cannot use the resize function to change the minimum and maximum values that are set in the partition profile or to change a partition from shared to dedicated.
5.15.10 Migration of virtual machines

PowerVC can manage the Live Partition Mobility (LPM) feature. Use the LPM feature to migrate VMs from one host to another host.

Migration requirements
To migrate VMs by using the IBM PowerVC management server, ensure that the source and destination hosts and the VMs are configured correctly. To migrate a VM, the following requirements must be met:
– The VM is in Active status in the PowerVC management host.
– The PowerVM Enterprise Edition or PowerVM for IBM Linux on Power hardware feature is activated on your hosts. This feature enables the use of the LPM feature.
– The networks for both source and target hosts must be mapped to shared Ethernet adapters by using the same virtual Ethernet switch.
– We recommend that the maximum number of virtual resources (virtual adapters) is set to at least 200 on all of the hosts in your environment. This value ensures that you can create enough VMs on your hosts.
– The logical memory block size on the source host and the destination host must be the same.
– Both the source and destination hosts must have an equivalent configuration of Virtual I/O Servers that belong to the same storage connectivity group.
– The processor compatibility mode of the VM that you want to migrate must be supported by the destination host.
– The VM must have an enabled Resource Monitoring and Control (RMC) connection.
– To migrate a VM with a vSCSI attachment, the destination VIOS must be zoned to the backing storage.
– At least one pair of VIOS VMs must be storage-ready and members of the storage connectivity group. Each of these VIOS VMs must have at least two physical FC ports ready. Each of the two physical FC ports must be connected to a distinct fabric, and the fabric must be set correctly on the FC ports’ Configuration pages.

The following restrictions apply when you migrate a VM:
– You cannot migrate a VM to a host that is a member of a different host group.
– If the VM is running a little endian guest, the target host must support little endian guests.
– If the VM was created as remote restart-capable, the target host must support remote restart.
– Certain IBM Power System servers can run only Linux workloads. When you migrate an AIX or IBM i VM, these hosts are not considered for placement.

Note: If the source host has two Virtual I/O Servers and the target host has only one VIOS, it is not possible to migrate a partition that accesses its storage through both Virtual I/O Servers on the source. However, if a partition on the source host uses only one VIOS to access its storage, it can be migrated (assuming that other requirements, such as port tagging, are met).
– You cannot exceed the maximum number of simultaneous migrations that are designated for the source and destination hosts. The maximum number of simultaneous migrations depends on the number of migrations that are supported by the Virtual I/O Servers that are associated with each host.
– A source host in a migration operation cannot serve concurrently as a destination host in a separate migration operation.
– If you deployed a VM with a processor compatibility mode of POWER7 and later changed the mode to POWER6, you cannot migrate the VM to a POWER6 host. The MAC address for a POWER7 VM is generated by PowerVC during the deployment. To migrate to a POWER6 host, the MAC address of the VM must be generated by the HMC. To migrate from a POWER7 host to a POWER6 host, you must initially deploy to a POWER7 system with the processor compatibility mode set to a POWER6 derivative, or you must initially deploy to a POWER6 host.
– PowerVM does not support the migration of a VM whose attachment type will change its multipathing solution between the source and destination Virtual I/O Servers. For example, a VM on a path control module (PCM)-attached VIOS can be successfully migrated only to a PCM-attached VIOS. However, PowerVM does not enforce this requirement. To avoid unsupported migrations, create separate storage connectivity groups for PCM and PowerPath multipathing solutions.
– Collocation rules are enforced during migration:
• If the VM is a member of a collocation rule that specifies affinity and multiple VMs are in that collocation rule, you cannot migrate it; otherwise, the affinity rule would be broken. To migrate a VM in this case, remove it from the collocation rule, and then add it to the correct group after the migration.
• If the VM is a member of a collocation rule that specifies anti-affinity, you cannot migrate it to a host that has a VM that is a member of the same collocation rule. For example, assume the following scenario: Virtual Machine A is on Host A; Virtual Machine B is on Host B; Virtual Machine A and Virtual Machine B are in a collocation rule that specifies anti-affinity. Then, Virtual Machine A cannot be migrated to Host B.
• Only one migration or remote restart at a time is allowed for VMs in the same collocation rule. Therefore, if you try to migrate a VM or restart a VM remotely while any other VM in the same collocation rule is being migrated or restarted remotely, that request fails.
Migrate the virtual machine
Follow these steps to migrate a VM:
1. Open the Virtual Machines window, and then select the VM that you want to migrate. The background changes to light blue.
2. Click Migrate, as shown in Figure 5-69.

Figure 5-69 Migrate a selected virtual machine

3. You can select the target host, or the placement policy can determine the best target, as shown in Figure 5-70.

Figure 5-70 Select target server before the migration
4. Figure 5-71 shows that during the migration, the Virtual Machines window displays the partition with the state and task both set to Migrating.

Figure 5-71 Virtual machine migration in progress

5. After the migration completes, you can check the Virtual Machines window to verify that the partition is now hosted on the target host, as shown in Figure 5-72 (the figure shows the source host before the migration and the target host after the migration).

Figure 5-72 Virtual machine migration finished

Note: A warning message in the Health column is normal. It takes a few minutes to change to OK.

5.15.11 Host maintenance mode

You move a host to maintenance mode to perform maintenance activities on it, such as updating firmware or replacing hardware.
Maintenance mode requirements
Before you move the host into maintenance mode, check whether the following requirements are met:
– If the request was made to migrate active VMs when the host enters maintenance mode, the following conditions must also be true:
• The hypervisor must be licensed for LPM.
• The VMs on the host cannot be in the error, paused, or building states.
• On all active VMs, the health must be OK and the RMC connections must be active.
• All requirements for live migration must be met. See “Migration requirements” on page 169 for details.
– The host’s hypervisor state must be operating. If it is not, VM migrations might fail.
– If the request was made to migrate active VMs when the host enters maintenance mode, the request fails if either of the following conditions is true:
• A VM on the host is a member of a collocation rule that specifies affinity and has multiple members.
• The collocation rule has a member that is already undergoing a migration or is being restarted remotely.

Put the host in maintenance mode
If all of the requirements are met, you can put a host in maintenance mode by following these steps:
1. On the Hosts window, select the host that you want to put into maintenance mode, and click Enter Maintenance Mode, as shown in Figure 5-73.

Figure 5-73 Enter Maintenance Mode
2. If you want to migrate the VMs to other hosts, select Migrate active virtual machines to another host, as shown in Figure 5-74. This option is unavailable if no hosts are available for the migration.

Figure 5-74 Migrate virtual machines to other hosts

3. Click OK.

After maintenance mode is requested, the host’s maintenance state is Entering Maintenance while the VMs are migrated to another host, if requested. This status changes to Maintenance On after the migration is complete and the host is fully in the maintenance state.

To remove a host from maintenance mode, select the host and select Exit Maintenance Mode. Click OK on the confirmation window, as shown in Figure 5-75.

Figure 5-75 Exit Maintenance Mode

You can add VMs to the host again after it is brought out of maintenance mode.
Tip: You can edit the period after which the migration operation times out and maintenance mode enters an error state by running the following command:
/usr/bin/openstack-config --set /etc/nova/nova.conf DEFAULT prs_ha_timeout_seconds <duration_in_seconds>
For example, to set the timeout to two hours, run this command:
/usr/bin/openstack-config --set /etc/nova/nova.conf DEFAULT prs_ha_timeout_seconds 7200
Then, restart the openstack-nova-ibm-ego-ha-service service:
service openstack-nova-ibm-ego-ha-service restart

5.15.12 Restart virtual machines remotely from a failed host

PowerVC can restart VMs remotely from a failed host on another host. To successfully restart VMs remotely by using PowerVC, you must ensure that the source host and destination host are configured correctly.

Remote restart requirements
To restart a VM remotely, the following requirements must be met:
– The source and destination hosts must have access to the storage that is used by the VMs.
– The source and destination hosts must have all of the virtual switches that are required by the networks on the VM.
– The hosts must be running firmware level 820 or later.
– The HMC must be running HMC 820 Service Pack (SP) 1 or later, with the latest program temporary fix (PTF).
– The hosts must support the simplified remote restart capability.
– Both hosts must be managed by the same HMC.
– The service processors must be running and connected to the HMC.
– The source host must be in the Error, Power Off, or Error - dump in progress state on the HMC.
– The VM must be created with the simplified remote restart capability enabled.
– The remote restart state of the VM must be Remote restartable.
– Shared storage pools are not officially supported through PowerVM simplified remote restart.
Restart a virtual machine remotely
Before you can restart a VM on PowerVM remotely, you must deploy or configure the VM with the remote restart capability. You can do this in two ways:
– Create a compute template with the remote restart capability enabled and deploy a VM with that compute template, as shown in Figure 5-76.

Figure 5-76 Create a compute template with enabled remote restart capability
– Modify the remote restart property after the VM is deployed.

In Figure 5-77, you can see a VM with the correct remote restart state, which is Remote restartable.

Figure 5-77 Correct remote restart state under the Specifications section

Note: You can change the remote restart capability of a VM only while the VM is shut off.

Important: A VM can be restarted remotely in PowerVM only if its remote restart state is Remote restartable. When a VM is deployed initially, the HMC needs to collect partition and resource configuration information, and the remote restart state transitions from Invalid through several intermediate states. When it changes to Remote restartable, PowerVC can initiate the remote restart operation for that VM.

The Remote Restart task is available under the Hosts view, as shown in Figure 5-78.

Figure 5-78 Remotely Restart Virtual Machines option
To restart a VM remotely, select the failed host and then select Remotely Restart Virtual Machines. You can then select to restart either a specific VM or all of the VMs on the failed host remotely, as shown in Figure 5-79.

Figure 5-79 Remotely Restart Virtual Machines

The scheduler can choose a destination host automatically based on the placement policy, or you can choose a destination host (Figure 5-80).

Figure 5-80 Destination host

A notification on the user interface indicates that a VM was successfully restarted remotely.
When you select to restart all VMs on a failed host remotely, the host goes through several transitions. Table 5-7 shows the host states during the transition.

Table 5-7 Host states during the transition

Remote Restart Started: PowerVC is preparing to rebuild the VMs. This process can take up to one minute.
Remote Restart Rebuilding: PowerVC is rebuilding the VMs. After the VMs are restarted remotely on the destination host, the source host goes back to displaying its state.
Remote Restart Error: An error occurred while one or more VMs were moved to the destination host. You can check the reasons for the failure in the corresponding compute log file in the /var/log/nova directory.
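To investigate a Remote Restart Error, you might scan the compute logs on the PowerVC management server for recent errors, as in this sketch (the exact log file names vary by installation):

grep -i error /var/log/nova/*.log | tail -20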
5.15.13 Attach a volume to the virtual machine

The PowerVC management server can handle storage volumes. By using the management server, you can attach a new or existing volume to a VM. Follow these steps:
1. Click the Virtual Machines icon on the left, and then select the VM to which you want to add a volume. The background color changes to light blue.
2. Click Attach Volume. In the pop-up window that opens, you can attach an existing volume, or you can create a volume and attach it in one step. In the example in Figure 5-81, PowerVC will create a disk.

Figure 5-81 Attaching a new volume to a virtual machine

3. Select the storage template to choose the backing device, enter the volume name, and choose the volume size in GB. You can add a short description for the new volume. The Storage bar on the right side of the window changes dynamically when you change the size. Click Attach. PowerVC creates a volume, attaches it to the VM, and then displays a message at the bottom of the window to confirm the creation of the disk.

Note: You can select the Enable sharing check box so that other VMs can also use the volume, if needed.
4. To see the new volume, open the VM’s detailed information window and select the Attached Volumes tab. This tab displays the volumes that are currently attached to the VM, as shown in Figure 5-82.

Figure 5-82 Attached Volumes tab view

5. To complete the process, you must execute the correct command on the VM command line:
– For IBM AIX operating systems, execute this command as root:
cfgmgr
– For Linux operating systems, execute this command as root, where host_N is the controller that manages the disks on the VM:
echo "- - -" > /sys/class/scsi_host/host_N/scan

Note: The Attached Volumes tab displays only volumes that were attached to the machine after its creation or import. This tab does not display the boot volume of the partition.

5.15.14 Detach a volume from the virtual machine

To detach a volume from the VM, you must first remove it from the operating system.

Remove the volume from the operating system
For the IBM AIX operating system, execute this command as root, where hdisk_N is the disk that you want to remove:
rmdev -dl hdisk_N
For the Linux operating system, reboot after you detach the volume.

Note: We recommend that you cleanly unmount all file systems from the disk, remove the logical volume, and remove the disk from AIX before you detach the disk from PowerVC. A sketch of this cleanup follows.
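For example, an AIX cleanup before a detach might look like the following sketch. The file system, volume group, and disk names are hypothetical, and the sequence assumes that hdisk2 is the only disk in its volume group:

umount /appdata
varyoffvg appvg
exportvg appvg
rmdev -dl hdisk2

The umount command releases the file system, varyoffvg and exportvg deactivate and remove the volume group definition, and rmdev deletes the disk device from AIX so that the volume can be detached safely in PowerVC.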
Detach the volume from a virtual machine
The PowerVC management server can handle storage volumes. By using the PowerVC management server, you can detach an existing volume from a VM:
1. Click the Virtual Machines icon, and then double-click the VM from which you want to detach a volume.
2. Click the Attached Volumes tab to display the list of volumes that are attached to this VM. Select the volume that you want to detach. The background color changes to light blue.
3. Click Detach, as shown in Figure 5-83.

Figure 5-83 Detach a volume from a virtual machine

4. PowerVC displays a confirmation window. See Figure 5-84.

Figure 5-84 Confirmation window

5. You will see a Detaching status in the State column. When the process finishes, the volume is detached from the VM. The detached volume is still managed by the PowerVC management host. You can see the volume in the Storage window.
5.15.15 Reset the state of a virtual machine

In certain situations, a VM becomes unavailable, or its state is not recognized by the PowerVC management server. When these situations occur, you can execute a Reset State procedure. This process sets the machine back to an active state.

Figure 5-85 shows a VM’s detailed information window with a Reset State link on the State line of the Information section. Click Reset State to start the reset process.

Figure 5-85 Resetting the virtual machine’s state

Note: No changes are made to the connection or database.
The PowerVC management server displays a confirmation window. Click OK to continue. See Figure 5-86.

Figure 5-86 State reset confirmation window

Note: This process can take a few minutes to complete. If the state does not change, try to restore the VM or deploy the VM again from an image.

5.15.16 Delete images

To delete an image that is not in use, open the Images window, and then select the image that you want to delete. The background color changes to light blue. Then, click Delete, as shown in Figure 5-87.

Figure 5-87 Image selected
The PowerVC management server displays a confirmation window, as shown in Figure 5-88. If you want to delete the image from the storage permanently, select the check box and click OK. Otherwise, the volume that contains the image remains in the storage pool, but it is no longer usable to deploy an image. This function is specific to PowerVM.

Figure 5-88 Delete an image confirmation window

PowerVC opens a pop-up window with a message that indicates that the image is being deleted.

5.15.17 Unmanage a virtual machine

The Unmanage function removes a VM from PowerVC management. After a VM becomes unmanaged, it is no longer listed in the Virtual Machines window, but the VM still exists: the VM and its resources remain configured on the host, the VM can still be managed from the HMC, and the VM remains up and running.

To unmanage a VM, open the Virtual Machines window, and select the VM that you want to remove from PowerVC. The Unmanage option becomes enabled. Click Unmanage to remove this VM from the PowerVC environment. Figure 5-89 shows the Unmanage option.

Figure 5-89 Unmanage an existing virtual machine

5.15.18 Delete a virtual machine

PowerVC can delete VMs completely from your systems.

Important: By deleting a VM, you completely remove the VM from the host system and from the HMC, and PowerVC no longer manages it.
To remove a VM, open the Virtual Machines window and select the VM that you want to remove. The background color changes to light blue. Click Delete, as shown in Figure 5-90.

Figure 5-90 Delete a virtual machine

The PowerVC management server displays a confirmation window (Figure 5-91). To permanently delete the VM, click OK. PowerVC then confirms the deletion.

Figure 5-91 Confirmation window to delete a virtual machine

When PowerVC deletes storage, it behaves differently, depending on how the volumes were created:
– Volumes that were created by PowerVC (the boot volumes) are deleted and removed from the VIOS and storage back-ends.
– Volumes that were attached to the partition are detached only during the partition deletion.
The zoning to storage is removed by the deletion operation.

Important: You can delete a VM while it is running. The process stops the running VM and then deletes it.
Chapter 6. PowerVC Standard Edition for managing PowerKVM

Using IBM Power Virtualization Center Standard Edition (PowerVC) to manage PowerKVM requires special considerations for the setup, the storage management, and the way that PowerVC handles the capture of International Organization for Standardization (ISO) images. In this chapter, we cover the installation and setup specifics and the basic steps to import, capture, and deploy ISO images:
6.1, “Install PowerVC Standard to manage PowerKVM” on page 188
6.2, “Set up PowerVC Standard managing PowerKVM” on page 188
6.3, “Host group setup” on page 201
6.4, “Import ISO images” on page 201
6.5, “Capture a virtual machine” on page 212
6.6, “Deploy images” on page 220
6.7, “Resize virtual machines” on page 223
6.8, “Suspend and resume virtual machines” on page 224
6.9, “Restart a virtual machine” on page 224
6.10, “Migrate virtual machines” on page 225
6.11, “Restarting virtual machines remotely” on page 226
6.12, “Delete virtual machines” on page 228
6.13, “Create and attach volumes” on page 229
6.14, “Attach volumes” on page 229

For configuration and use, see Chapter 5, “PowerVC Standard Edition for managing PowerVM” on page 97.
6.1 Install PowerVC Standard to manage PowerKVM

This section outlines the slight differences between the installation of PowerVC Standard Edition for managing PowerKVM and the installation of PowerVC Standard Edition for managing PowerVM.

Before you install PowerVC, a Linux installation must be ready, as described in Chapter 4, “PowerVC installation” on page 77. We do not cover the Linux installation in this section because it does not differ from the Linux installation for managing PowerVM. For the installation details, see 4.2, “Installing PowerVC” on page 82.

After the Linux installation is ready, follow these steps:
1. From the Linux command-line interface (CLI), change the working directory to the location of the installation script.
2. Start the PowerVC installation by using this command:
./install
3. Select the offering type to install from the following options:
– 1 - Standard managing PowerVM
– 2 - Standard managing PowerKVM
– 9 - Exit
Enter 2 to install PowerVC Standard managing PowerKVM.

The rest of the installation process is the same for all versions. For more information, see 4.2, “Installing PowerVC” on page 82.

6.2 Set up PowerVC Standard managing PowerKVM

In this section, we cover the steps to add a PowerKVM host, a storage provider, and a network.
6.2.1 Add the PowerKVM host

Follow these steps:
1. In the PowerVC GUI, type your user ID and password, and click Log In (Figure 6-1).

Figure 6-1 PowerVC Login window

Figure 6-2 PowerVC Home page

Note: The Home page (Figure 6-2) does not offer the option to add a fabric.
2. Click Add Host to add the PowerKVM host, as shown in Figure 6-3.

Figure 6-3 PowerVC Add Host window

During the Add Host task, a package is transferred to and installed on the PowerKVM host. As Figure 6-4 shows, messages appear in the lower-right side of the browser.

Figure 6-4 Informational messages

After the host is added, you see the message in Figure 6-5.

Figure 6-5 Host added successfully
3. To review the messages, click the black menu bar at the top of the browser. Figure 6-6 shows the Home page with the available PowerKVM hosts.

Figure 6-6 PowerVC managing PowerKVM hosts

4. For a detailed view of the added PowerKVM host, click the Hosts icon in the left navigation panel (highlighted in Figure 6-6).
5. Figure 6-7 displays the new PowerKVM hosts.

Figure 6-7 Detailed Hosts view
6. Click a PowerKVM host to display more information, as shown in Figure 6-8.

Figure 6-8 PowerKVM host information and capacity section
You can expand and collapse any of the sections. The information that is displayed about virtual switches and virtual machines (VMs) is shown in Figure 6-9.

Figure 6-9 PowerKVM Virtual Switches and Virtual Machines sections
6.2.2 Add storage

Follow these steps to add storage:
1. Add the storage by clicking the Add Storage plus sign (+) in the center of the PowerVC Home page. Figure 6-10 shows a pop-up window to specify the storage array IP address and credentials. In our lab environment, we use an IBM SAN Volume Controller (SVC). Enter the name, user ID, and password. Click Connect.

Figure 6-10 Add a storage device to PowerVC
2. After you provide the IP connection settings and credentials, specify the SAN Volume Controller storage pool that is assigned to your environment. In Figure 6-11, the SVC shows three pools. We selected DS4800_site2_p02. Click Add Storage.

Figure 6-11 SVC storage pool choice
After you add the SVC and storage pool successfully, a new storage provider appears on the PowerVC Home page, as shown in Figure 6-12 (Storage Providers: 1). The storage provider does not have a managed volume yet.

Figure 6-12 The new SVC storage provider
6.2.3 Add a network

Follow these steps to add a network:
1. Add a network by clicking Add Network to open the window that is shown in Figure 6-13.
2. Add the network name, virtual LAN (VLAN) ID, subnet mask, default gateway, Domain Name Server (DNS), and the address deployment choice (Dynamic Host Configuration Protocol (DHCP) or Static). The configured virtual switch is automatically retrieved from the PowerKVM configuration.

Figure 6-13 Add a network to the PowerVC configuration
3. After you add the network to the configuration, the Home page is updated, as shown in Figure 6-14.

Figure 6-14 Network is configured now
Managing virtual switches
PowerVC Standard for managing PowerKVM can manage multiple virtual switches to accommodate your business requirements. Follow these steps:
1. To edit the virtual switch configuration, from the PowerVC Home page, click the Hosts icon, and then double-click the host that you want to use. Expand the Virtual Switches section, if it is not already expanded. The virtual switches that are defined on the host are shown in Figure 6-15.

Figure 6-15 List of virtual switches

2. Select the switch that you need to edit, and click Edit Switch. From the list of available components, select the physical component that you want to link to the virtual switch, and click Save, as shown in Figure 6-16.

Figure 6-16 Edit virtual switch window
3. The message that is shown in Figure 6-17 appears. Verify that no other activity is running on the host, and click OK.

Figure 6-17 Message about conflicts with the updated virtual switch selections

4. After the process finishes, the component is shown in the Components column. Click View Components to see the details that are shown in Figure 6-18.

Figure 6-18 Details of the virtual switch components

Environment verification
Check the overall PowerVC configuration by clicking Verify Environment.

Note: This verification is the same procedure for all PowerVC versions. For more information, see 5.14.1, “Verification report validation categories” on page 130.
6.3 Host group setup

With PowerVC version 1.2.3 or later, you can group hosts into host groups and set a different placement policy for each host group.

To create a new host group, select Hosts → Host Groups and click Create Host Group, as shown in Figure 6-19. Enter the host group name, and select the placement policy and the hosts. Click Create Host Group at the bottom of the window.

Figure 6-19 Create a host group

6.4 Import ISO images

PowerVC Standard managing PowerKVM offers you the option to use ISO images to create Linux VMs. The setup differs slightly from PowerVC Standard managing PowerVM. After the environment is verified, you can import ISO images to the PowerVC domain.
6.4.1 Importing ISO images by using the command-line interface

The first step to import an ISO image to PowerVC is to transfer the file to the PowerVC host. Then, you can run the powervc-iso-import command to add the ISO to PowerVC. Example 6-1 shows an example of importing a Red Hat Enterprise Linux (RHEL) ISO image by using the command-line interface (CLI).

Example 6-1 Importing a Red Hat ISO image

[admin@powerkvm bin]# powervc-iso-import --name rhel65dvd2 --os rhel --location /softimg/rhel-server-6.5-ppc64-dvd.iso
Password
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| Property 'architecture'    | ppc64                                |
| Property 'hw_vif_model'    | virtio                               |
| Property 'hypervisor_type' | qemu                                 |
| Property 'os_distro'       | rhel                                 |
| checksum                   | 66bb956177d7b55946a5602935e67013     |
| container_format           | bare                                 |
| created_at                 | 2014-05-27T21:14:57.012159           |
| deleted                    | False                                |
| deleted_at                 | None                                 |
| disk_format                | iso                                  |
| id                         | a898e706-c835-42c6-87c2-e53d8efb98ae |
| is_public                  | True                                 |
| min_disk                   | 0                                    |
| min_ram                    | 0                                    |
| name                       | rhel65dvd2                           |
| owner                      | 9c03022ea2a146b78c495cc9a00b0487     |
| protected                  | False                                |
| size                       | 3347902464                           |
| status                     | active                               |
| updated_at                 | 2014-05-27T21:15:47.330608           |
| virtual_size               | None                                 |
+----------------------------+--------------------------------------+
6.4.2 Importing ISO images by using the GUI

Follow these steps to import ISO images by using the graphical user interface (GUI):
1. To import ISO or qcow2 images into PowerVC by using the GUI, click Images on the left navigation panel in PowerVC. Then, click Upload. Enter the image name, operating system, and image type, as shown in Figure 6-20. Click Browse to navigate to the ISO image, and select it. Finally, click Upload.

Figure 6-20 Upload Image window

Note: This process takes a few seconds to a few minutes, depending on the network bandwidth and the size of the image.
2. After the ISO image is successfully imported, it appears on the left navigation panel of the PowerVC Home page, as shown in Figure 6-21.

Figure 6-21 ISO images that were imported to PowerVC

3. The status of the ISO images can be verified by clicking the Images icon on the left navigation panel to open the Images view that is shown in Figure 6-22.

Figure 6-22 Status of the imported ISO image
4. Click the rhel65dvd2 image to get details, such as the ID, as shown in Figure 6-23.

Figure 6-23 RHEL ISO image details

The images are stored in the /var/lib/glance/images/ directory. Example 6-2 displays the ISO image file, which is named after the ID that is shown in the Images interface in Figure 6-23.

Example 6-2 ISO image location and naming in PowerVC

[admin@dpowervckvm ~]$ ls /var/lib/glance/images
a898e706-c835-42c6-87c2-e53d8efb98ae
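To cross-check an imported image against its metadata, you can list the file by its ID; the byte count should match the size property that was reported at import time (the ID here is taken from Examples 6-1 and 6-2):

ls -l /var/lib/glance/images/a898e706-c835-42c6-87c2-e53d8efb98ae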
6.4.3 Deploying an RHEL ISO image

After an ISO image is imported, you can deploy it to a VM. This VM will be a base that is ready for future image captures and the automatic deployments of other VMs. Follow these steps:
1. From the Images window, on the left navigation panel (Figure 6-24), select the image and click Deploy.

Figure 6-24 Select the image for deployment
2. After the image is selected for deployment, you must specify the following parameters for the target VM before any deployment can start (Figure 6-25):
– VM name
– Target host or host group
– Compute template. The following default values can be overridden when they are available:
• Processors
• Processor units
• Memory size
• Disk size
– Network template
– VM’s IP address, or PowerVC can select an IP address automatically from the IP pool

Figure 6-25 Virtual machine deployment parameters

3. Complete the required information, and click Deploy to start the VM’s deployment. During the deployment process, PowerVC displays several messages. Figure 6-26 shows the deployment in-progress message.

Figure 6-26 Deployment in-progress message
4. Figure 6-27 shows the successful deployment message.

Figure 6-27 Successful deployment verification message

5. The VM’s deployment can also be monitored from the left navigation area, as shown in Figure 6-28.

Figure 6-28 Virtual Machines view with highlighted State and Health columns
6. Click the name to see the detailed Information and Specifications sections about the deployed image, as shown in Figure 6-29.

Figure 6-29 Detailed information
7. The sections can be collapsed and expanded as needed. Figure 6-30 shows the expanded Network Interfaces and Details sections and the collapsed Information and Specifications sections.

Figure 6-30 Detailed information with expanded or collapsed sections

8. The Active status and OK health mean that the VM is deployed. Although this status seems definitive, you still must perform the initial Linux installation manually.
9. The machine is now prepared and ready for the operating system (OS) installation, and a shutdown is required. Select the deployed VM and click Stop, as shown in Figure 6-31.

Figure 6-31 Stopping the virtual machine
Linux installation for the virtual machine
The following steps describe the manual installation of a Linux VM by using an ISO image:
1. Start the VM by clicking Start in PowerVC. When the VM is started, its state is Active, as shown in Figure 6-32.

Figure 6-32 Virtual machine started and active

Note: This extra manual installation step is necessary only for ISO image deployment, not for captured VMs.

Tip: When you select the VM, the action buttons become active. If no VM is selected, all of the buttons remain inactive (gray).

2. After the VM status is Active and Health is OK, proceed with the manual installation steps.

Note: The Health status might remain in the Warning state for several minutes.

3. Open a remote console connection from the PowerKVM command line to the VM by using the virsh console command. First, list all of the VMs by running the virsh list --all command. Example 6-3 shows the output.

Example 6-3 virsh list --all output

[admin@powerkvm ~]# virsh list --all
 Id    Name                         State
----------------------------------------------------
 -     linux20-36d9ca31-00000017    shut off

[admin@powerkvm ~]#

4. Copy the name of the VM and run the following command:
virsh console [virtual_machine_name]
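For the VM that is listed in Example 6-3, the command is:

virsh console linux20-36d9ca31-00000017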
5. This command opens a remote virtual console to the selected VM. Press any key to get the initial input. You see the “Disc Found” message after RHEL boots, as shown in Example 6-4.

Example 6-4 Virtual console that shows the Disc Found message

Welcome to Red Hat Enterprise Linux for ppc64

     +-----------| Disc Found |-----------+
     |                                    |
     | To begin testing the media before  |
     | installation press OK.             |
     |                                    |
     | Choose Skip to skip the media test |
     | and start the installation.        |
     |                                    |
     |     +----+            +------+     |
     |     | OK |            | Skip |     |
     |     +----+            +------+     |
     |                                    |
     +------------------------------------+

<Tab>/<Alt-Tab> between elements | <Space> selects | <F12> next screen

6. Follow the instructions to complete the Linux installation. When the installation finishes, the VM is ready to be captured and deployed several times.

6.5 Capture a virtual machine

A VM can be captured when it is in the Active state or in a powered-off state. This section describes how to capture a running VM that is managed by PowerVC. The capture involves the following preparation steps:
1. Install cloud-init on the VM that you want to capture. You need to perform this step only the first time that you capture a VM.
2. Perform any pre-capture preparations, such as deleting or cleaning up log files, on the VM. For SLES VMs, change the devices so that they are mounted by device name or Universally Unique Identifier (UUID).

Before you can capture a VM, you must ensure that the following requirements are met:
– Your PowerVC environment is configured as described in 6.2, “Set up PowerVC Standard managing PowerKVM” on page 188.
– The host on which the VM is configured is registered in PowerVC.
– When you capture VMs that use local storage, the /var/lib/glance/images/ directory on the PowerVC management server is used as the repository for storing the qcow2 and ISO images. The file system that contains the /var/lib/glance/images/ directory must have enough space to store the captured images.
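A quick way to check the available space on that file system is the df command, as in this sketch:

df -h /var/lib/glance/images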
6.5.1 Install cloud-init on the virtual machine

The cloud-init script enables VM activation and initialization, and it is widely used with OpenStack. Before you capture a VM, install the cloud-init initialization package. This package is available at /opt/ibm/powervc/images/cloud-init on the PowerVC management server.

Install the required dependencies
Before you install cloud-init, you must install its dependencies, such as the following examples, from a repository:
– Python boto
– Yellowdog Updater, Modified (YUM)
– Extra Packages for Enterprise Linux (EPEL)
– Any other package manager
Not all dependencies are available in the regular RHEL repository.

For SLES, install the dependencies that are provided at this location:
ftp://ftp.unicamp.br/pub/linuxpatch/cloud-init-ppc64/sles11

For RHEL 6 and 7, follow these steps:
1. Install the dependencies from the FTP location:
ftp://ftp.unicamp.br/pub/linuxpatch/cloud-init-ppc64
2. Add the EPEL YUM repository to get the dependent Red Hat Package Managers (RPMs):
– Run the following commands to set up the repository for RHEL 6:
wget http://dl.fedoraproject.org/pub/epel/6Server/ppc64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6*.rpm
– Run the following commands to set up the repository for RHEL 7:
wget http://dl.fedoraproject.org/pub/epel/7/ppc64/e/epel-release-7-5.noarch.rpm
rpm -Uvh epel-release-7*.rpm

Important: If you are installing the cloud-init package to capture a VM and the activation engine is installed, you must uninstall the activation engine first. To uninstall the activation engine, see “Preparing a virtual machine with activation-engine” on page 151.

Note: The EPEL RPM packages might be renamed when they are updated. You can obtain the new versions from the following page with the correct version selected:
http://dl.fedoraproject.org/pub/epel/

Install cloud-init
Install the appropriate cloud-init RPM for your OS from /opt/ibm/powervc/images/cloud-init:
– For RHEL 6, install cloud-init-0.7.4-*.el6.noarch.rpm.
– For RHEL 7, install cloud-init-0.7.4-*.el7.noarch.rpm from the /opt/ibm/powervc/images/cloud-init/rhel location.
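For example, on RHEL 6 the local installation might look like the following sketch; the exact RPM file name depends on the cloud-init build that is shipped with your PowerVC level, and yum resolves the dependencies from the EPEL repository that you configured earlier:

cd /opt/ibm/powervc/images/cloud-init
yum localinstall cloud-init-0.7.4-*.el6.noarch.rpm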
Modify the cloud.cfg file
After you install cloud-init, modify the cloud.cfg file that is available at /etc/cloud/cloud.cfg with the following values, according to your OS.

For RHEL, update the cloud.cfg file with the following values:
disable_root: 0
ssh_pwauth: 1
ssh_deletekeys: 1

For SLES, edit the following fields in the cloud.cfg file:
1. Remove the following field:
users: -root
2. Add the following fields:
ssh_pwauth: true
ssh_deletekeys: true

For both RHEL and SLES, add the following new values to the cloud.cfg file:
disable_ec2_metadata: True
datasource_list: ['ConfigDrive']

For SLES only, after you update and save the cloud.cfg file, run the following commands:
chkconfig -s cloud-init-local on
chkconfig -s cloud-init on
chkconfig -s cloud-config on
chkconfig -s cloud-final on

For RHEL 7, ensure that the following conditions are met on the VM that you are capturing or deploying:
– SELinux is set to permissive or disabled on the VM that you are capturing or deploying.
– The Network Manager must be installed and enabled.
– Ensure that the net-tools package is installed.

Note: This package is not installed by default when you select the Minimal Install software option during the installation of RHEL 7 from an ISO image.

– Edit all of the /etc/sysconfig/network-scripts/ifcfg-eth* files and set NM_CONTROLLED=no in them.

Remove the MAC address information
After you install the cloud-init initialization package, remove the Media Access Control (MAC) address information:
1. Replace /etc/udev/rules.d/70-persistent-net.rules with an empty file. (The .rules file contains network persistence rules, including the MAC address.)
2. Replace /lib/udev/rules.d/75-persistent-net-generator.rules with an empty file. (This file generates the .rules file.)

Note: The recommended action is to replace the previous files with empty files rather than deleting them (a sketch follows this note). If you delete the files, you might receive a udev kernel warning at boot time.
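One way to replace the rules files with empty files, rather than deleting them, is the following sketch (run as root):

cat /dev/null > /etc/udev/rules.d/70-persistent-net.rules
cat /dev/null > /lib/udev/rules.d/75-persistent-net-generator.rules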
3. Remove the HWADDR line from Fedora-based images, for example, from /etc/sysconfig/network-scripts/ifcfg-eth0.

Tip: The /etc/sysconfig/network-scripts file path for the HWADDR line applies to RHEL only. For example, for the ifcfg-eth0 adapter on RHEL, remove the HWADDR line from /etc/sysconfig/network-scripts/ifcfg-eth0. For SLES, the HWADDR path is /etc/sysconfig/network; remove the HWADDR line from /etc/sysconfig/network/ifcfg-eth0.

Important: You must remove the network persistence rules in the image because they cause the network interface in the instance to come up as an interface other than eth0. Your image has a record of the MAC address of the network interface card when it was first installed, and this MAC address is different each time that the instance boots.

6.5.2 Change devices to be mounted by name or UUID

For SLES virtual servers, use literal device names rather than symbolic links. By default, devices are mounted by using -id, which means that they are represented by symbolic links. You must change the devices so that they are mounted by device name or UUID rather than by -id. You must perform this task before you capture a SLES VM for the first time. After you capture a SLES VM for the first time, you can capture and deploy an image of the resulting VM without performing this task again.

To change the devices so that they are mounted by device name or UUID, complete the following steps:
1. Search the file system table /etc/fstab for the presence of symbolic links. Symbolic links look like /dev/disk/by-*.
2. Store the mapping of the /dev/disk/by-* symbolic links to their target devices in a scratch file by running this command:
ls -l /dev/disk/by-* > /tmp/scratchpad.txt
The contents of the scratchpad.txt file might look like Example 6-5.

Example 6-5 Symbolic links mapping

/dev/disk/by-id:
total 0
lrwxrwxrwx 1 root root  9 Apr 10 12:07 scsi-360050768028180ee380000000000603c -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-360050768028180ee380000000000603c-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-360050768028180ee380000000000603c-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-360050768028180ee380000000000603c-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Apr 10 12:07 wwn-0x60050768028180ee380000000000603c -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 10 12:07 wwn-0x60050768028180ee380000000000603c-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 10 12:07 wwn-0x60050768028180ee380000000000603c-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07 wwn-0x60050768028180ee380000000000603c-part3 -> ../../sda3

/dev/disk/by-path:
total 0
lrwxrwxrwx 1 root root  9 Apr 10 12:07 scsi-0:0:1:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part3 -> ../../sda3

/dev/disk/by-uuid:
total 0
lrwxrwxrwx 1 root root 10 Apr 10 12:07 3cb4e486-10a4-44a9-8273-9051f607435e -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07 c6a9f4e8-4e87-49c9-b211-89086c2d1064 -> ../../sda3

3. Edit /etc/fstab, replacing the /dev/disk/by-* entries with the device names that the symbolic links point to, as laid out in your scratchpad.txt file.

Important: For the following steps, ensure that you use the device names in your own scratchpad.txt file. The following values are merely examples.

Example 6-6 shows what these lines might look like before you edit them.

Example 6-6 Sample device names before the change

/dev/disk/by-id/scsi-360050768028180ee380000000000603c-part2 swap swap defaults 0 0
/dev/disk/by-id/scsi-360050768028180ee380000000000603c-part3 / ext3 acl,user_xattr 1 1

Example 6-7 shows what these lines might look like after you edit them.

Example 6-7 Sample device names after the change

/dev/sda2 swap swap defaults 0 0
/dev/sda3 / ext3 acl,user_xattr 1 1

4. Edit the /etc/lilo.conf file so that the boot and root lines correspond to the device names. Example 6-8 shows what these lines might look like before you edit them.

Example 6-8 lilo.conf file before the change

boot = /dev/disk/by-id/scsi-360050768028180ee380000000000603c-part1
root = /dev/disk/by-id/scsi-360050768028180ee380000000000603c-part3

Example 6-9 shows what these lines might look like after you edit them.

Example 6-9 lilo.conf file after the change

boot = /dev/sda1
root = /dev/sda3
5. Run the lilo command to rewrite the boot loader with the updated configuration.
6. Run the mkinitrd command to rebuild the initial RAM disk.

6.5.3 Capture the virtual machine

Before you can capture a VM, the VM must meet specific requirements. If you do not prepare the VM before you capture it, you might get errors when you deploy the resulting image. The following steps describe how to capture a VM by using the cloud-init initialization package:
1. Install cloud-init on the VM that you want to capture. You perform this step only the first time that you capture a VM. For more information about how to install cloud-init, see 6.5.1, “Install cloud-init on the virtual machine” on page 213.
2. If the VM that you want to capture is running the SUSE Linux Enterprise Server (SLES) operating system, change the device mounting. For more information, see 6.5.2, “Change devices to be mounted by name or UUID” on page 215.
3. Perform any pre-capture preparation, such as deleting or cleaning up log files, on the VM.
4. From the PowerVC home window, click Virtual Machines, select the VM to capture, and click Capture.
5. When the message that is shown in Figure 6-33 appears, click Continue to proceed.

Figure 6-33 Warning message before you capture the VM

Note: The installation steps for cloud-init might change with updates to cloud-init or PowerVC. Check the latest information about the cloud-init installation at the IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.help.doc/powervc_install_cloudinit_kvm.html
6. Name the new image. Figure 6-34 shows a text box to enter the name, and it displays the required default resources for this image.

Figure 6-34 Capture window

Note: You can override the amount of required resources when you deploy a new VM with this image.

7. Click Capture to continue. PowerVC starts to capture the VM and presents the message that is shown in Figure 6-35.

Figure 6-35 Snapshot in-progress message
The process can take from a few seconds to a few minutes. To see the status of the capture operation, click Virtual Machines, and then check the Task column to see the status of the snapshot, as shown in Figure 6-36.

Figure 6-36 Status from the Virtual Machines view

You can also see the capture status by clicking Images, as shown in Figure 6-37.

Figure 6-37 Snapshot status from the Images view

Important: It is not necessary to shut down the VM that you want to capture. You can capture images dynamically from running VMs, but you might need to review and check for any inconsistency in the data or applications outside of the operating system.
6.6 Deploy images

The process to create a VM from an existing image is simple and completely automated by PowerVC. Follow these steps to deploy a new VM:
1. Click Images, select the image that you want to deploy, and then click Deploy. Complete the requested information. Figure 6-38 displays the first two sections of the Deploy window.

Figure 6-38 General and network sections of the window to deploy a VM
2. Figure 6-39 shows the expanded Activation Input section. In this section, you can upload scripts or add configuration data. After the VM is deployed, the script or data automatically configures the VM according to your requirements.

Figure 6-39 Activation Input section of the window to deploy a virtual machine

After you click Deploy, PowerVC displays a message similar to the one that is shown in Figure 6-40.

Figure 6-40 Deployment is started message
3. When the deployment is complete, you can click Virtual Machines to see the newly deployed image, as shown in Figure 6-41.

Figure 6-41 Virtual Machines view

Note: The network is configured automatically by PowerVC during the task to build the VM. When the deployment task finishes, the VM is up, running, and connected to the network.
6.7 Resize virtual machines

PowerVC Standard Edition managing PowerKVM can resize VMs with a simple procedure. Follow these steps to resize your VMs:
1. From the page that lists all VMs, select the VM to resize.
2. Click Resize to open the window that is shown in Figure 6-42.

Figure 6-42 Resize virtual machine window

Note: You can select a compute template to populate the required resource values, or you can edit each field manually.

Important: If you change the size of the disk, ensure that you go into the OS of the VM and complete the required steps so that the OS can use the new space that was configured on the disk. For more information, see your OS documentation; a sketch follows.
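For example, on a Linux guest, growing the partition and the file system after a disk-size increase typically looks like the following sketch. The device names are illustrative, and the exact tools depend on your distribution and file system:

# grow partition 1 of /dev/vda to fill the enlarged disk
# (growpart is provided by the cloud-utils-growpart package)
growpart /dev/vda 1

# grow an ext3 or ext4 file system to the new partition size
resize2fs /dev/vda1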
6.8 Suspend and resume virtual machines

PowerVC can suspend and resume a running VM. To suspend a VM, select it and then click Suspend. Two methods exist to suspend a VM, as shown in Figure 6-43.

Figure 6-43 Suspend or pause a virtual machine

After you select an option, click OK. The VM state changes to paused or suspended. To resume the VM, select it and click Resume.

6.9 Restart a virtual machine

PowerVC can restart a VM. Follow these steps:
1. To restart a VM, select the VM and click Restart.
2. As the Restart window shows (Figure 6-44), you can select either a soft restart or a hard restart.

Figure 6-44 Restart a virtual machine

Important: VMs that are in a suspended or paused state cannot be soft restarted. The only available option for them is a hard restart.
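Because PowerKVM hosts are managed through libvirt, these PowerVC actions map roughly to libvirt operations on the host: pause keeps the VM's memory resident, suspend writes the VM's state to disk, and the soft and hard restarts differ in whether the guest OS cooperates. A sketch of the approximate host-side equivalents, with an illustrative domain name:

virsh suspend linux20       # pause: VM memory stays resident on the host
virsh resume linux20        # resume a paused VM
virsh managedsave linux20   # suspend: save VM state to disk and stop it
virsh start linux20         # resume from the saved state
virsh reboot linux20        # soft restart: the guest OS shuts down cleanly
virsh reset linux20         # hard restart: reset without guest cooperation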
6.10 Migrate virtual machines

PowerVC also supports the migration of VMs between PowerKVM hosts if the VM meets the migration requirements; for example, Network File System (NFS) shared storage must be configured for the PowerKVM hosts. For the detailed requirements of VM migration, see the IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.help.doc/powervc_relocation_reqs_kvm.html
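As a sketch of the shared-storage prerequisite, an /etc/exports entry on an NFS server that exports one directory to both PowerKVM hosts might look like the following lines. The export path is hypothetical; the host names are from our lab:

# /etc/exports on the NFS server (path is hypothetical)
/export/powerkvm  kvm-171(rw,sync,no_root_squash)  kvm-175(rw,sync,no_root_squash)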
Follow these steps to migrate a VM:
1. Go to the Virtual Machines page, select the VM to migrate, and click Migrate. Select the destination host, as shown in Figure 6-45, and then click Migrate.

Figure 6-45 Migrate a virtual machine

2. The VM is migrated live to the destination host, as shown in Figure 6-46.

Figure 6-46 Migrating a virtual machine

6.11 Restarting virtual machines remotely

With PowerVC version 1.2.3 or later, you can restart VMs remotely if a PowerKVM host fails. Follow these steps:
1. After a PowerKVM host fails, go to the Hosts page, select the failed host, and click Remotely Restart Virtual Machines, as shown in Figure 6-47.

Figure 6-47 Remotely Restart Virtual Machines option
2. Select the VM, or all VMs, and select the destination host, as shown in Figure 6-48. Then, click Remote Restart.

Figure 6-48 Select virtual hosts to restart remotely
3. The selected VMs are restarted remotely on the destination PowerKVM host, as shown in Figure 6-49. The remote restart function provides a new way to enhance the availability of applications.

Figure 6-49 Virtual machines that were restarted remotely

Note: Before you use the remote restart function, you must set up PowerVC to meet the requirements. For the detailed remote restart requirements, see the IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/SSXK2N_1.2.3/com.ibm.powervc.kvm.help.doc/powervc_recovery_reqs_kvm.html

6.12 Delete virtual machines

PowerVC can delete a VM. The process deletes the VM and the associated storage. Follow these steps to delete a VM:
1. Select the VM and click Delete.
2. When a confirmation message that is similar to the message in Figure 6-50 appears, verify that the correct machine is shown, and click OK.

Figure 6-50 Delete a virtual machine
6.13 Create and attach volumes

PowerVC can create volumes in the available storage providers. These volumes can be assigned to a VM later, or you can create and attach volumes in a single step.

To create a volume, click Storage Volumes, and then click Create. A window that is similar to the window that is shown in Figure 6-51 opens.

Figure 6-51 Create Volume window

You can attach the volume later to an existing VM.
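Underneath PowerVC, volume operations are handled by the OpenStack block storage layer (Cinder). If you use the OpenStack-compatible interfaces instead of the GUI, creating and attaching a volume is conceptually similar to this sketch; the names and size are illustrative:

# create a 20 GB volume (name is an example)
cinder create --display-name datavol01 20

# attach the volume to a VM; the guest device name is chosen by the hypervisor
nova volume-attach linux20 <volume-uuid>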
6.14 Attach volumes

PowerVC can attach a volume to existing VMs. It is also possible to create the volume and attach it in the same operation. To attach volumes, click Virtual Machines, select the VM, and click Attach Volume.

In the Attach Volume window (Figure 6-52), click Attach a new volume to this virtual machine to add a new volume. Enter the storage template, volume name, description, and size (GB). Click Attach.

Figure 6-52 Attaching a new volume to a virtual machine
To attach an existing volume, click Attach an existing volume to this virtual machine. A list of volumes is displayed, as shown in Figure 6-53.

Figure 6-53 Attach an existing volume to this virtual machine

It is possible to attach volumes to paused and suspended VMs.

Note: When you attach volumes to Linux VMs, additional work is required for the OS to discover the volumes. For more information, check the documentation for your Linux distribution; a minimal sketch follows.
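On many Linux distributions, the additional work is a SCSI bus rescan. The following sketch assumes that the new disk arrives on the first SCSI host; the host number is illustrative, and some distributions provide a rescan-scsi-bus.sh helper instead:

# force a rescan of SCSI host 0 (repeat for other hosts as needed)
echo "- - -" > /sys/class/scsi_host/host0/scan

# verify that the new disk is visible
lsblk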
Chapter 7. PowerVC lab environment

This chapter describes the test environment that we used to write this book, to demonstrate the IBM Power Virtualization Center Standard Edition (PowerVC) features, and to capture the screen examples. We installed, configured, and used several environments to share our experience with this IBM software.

This chapter includes the following topics:
- 7.1, "PowerVC Standard Edition lab environment for managing PowerVM" on page 234
- 7.2, "PowerVC Standard managing PowerKVM lab" on page 243
7.1 PowerVC Standard Edition lab environment for managing PowerVM

This section describes the hardware components that were used in the Standard Edition lab environment for managing PowerVM.

Figure 7-1 shows the lab environment that was used for PowerVC. It includes the real host names that were used on the PowerVC domain.

Figure 7-1 PowerVC Standard Edition hardware lab for managing PowerVM

The PowerVC management station (labeled RHEL7.1LE in Figure 7-1) that was used for lab tests is deployed on one of the IBM POWER8 S824 servers that is managed by PowerVC. However, this virtual machine (VM) is not managed by PowerVC.

7.1.1 Hardware Management Console

Table 7-1 shows the hardware specifications of the Hardware Management Console (HMC) that is used to manage the Power Systems infrastructure for the lab environment.

Table 7-1 HMC that was used

Hardware   Type   Model   Version                             Release
HMC        7042   CR7     Version 8, Build Level 20150602.1   Release 8.3.0, Service Pack 0
7.1.2 Power Systems hardware

Table 7-2 shows the IBM Power Systems servers that were used in the PowerVC Standard Edition lab environment for managing PowerVM.

Table 7-2 Hardware test environment

Host name   Hardware          Type   Model   Firmware level
P8_9        IBM POWER8 S824   8286   42A     FW830.00 (TV830.38)
P8_10       IBM POWER8 S824   8286   42A     FW830.00 (TV830.38)

7.1.3 Storage infrastructure

This section describes the storage components that were used for testing.

Storage SAN switch

Table 7-3 shows the specifications of the two storage area network (SAN) switches that were used in this test lab.

Table 7-3 Storage switch specifications

Manufacturer   Type       Fabric operating system version
IBM            2498-B40   v7.0.2a

IBM SAN Volume Controller

Table 7-4 lists the specifications of the IBM SAN Volume Controller storage that was used in the original book test lab.

Table 7-4 IBM SAN Volume Controller specifications

Manufacturer   Type       SAN Volume Controller operating system version
IBM            2145-8G4   -[GFE145AUS-1.15]-

7.1.4 Storage configuration

This book covers multiple versions of PowerVC, as explained in 2.1, "Previous versions and milestones" on page 10. The environment that is described next was used in the original book about PowerVC versions 1.2.0 and 1.2.1. This section also describes the lab environment that was used in the previous publication.

SAN configuration for PowerVC versions 1.2.0 and 1.2.1 tests

This section describes the storage device configuration that was used in the PowerVC Standard Edition lab environment for managing PowerVM.
Figure 7-2 on page 237 shows the layers of physical and logical devices. Physical storage devices are managed by the SAN Volume Controller. The test environment includes one IBM DS8300 storage device and one IBM DS4800 storage device that are attached to the SAN Volume Controller. The SAN Volume Controller manages the external storage and creates physical disk pools. It also provides protection and thin-provisioning features.

The DS8300 is configured with two storage pools, which are named SSP_powervc and DS8300_site2_p01. The DS4800 is configured with one storage pool, which is named DS4800_site2_p02.

Storage pools are groups of physical storage devices. They can be partitioned into units of storage that are called logical unit numbers (LUNs), and these LUNs can be mapped to a host. The storage provider layer therefore aggregates physical storage devices into storage pools and carves the storage pools into LUNs. For more information about the IBM SAN Volume Controller, see Implementing the IBM System Storage SAN Volume Controller V7.4, SG24-7933.

The Virtual I/O Servers add a logical layer between the storage and the VMs. The Virtual I/O Servers map a virtual disk of a virtual I/O (VIO) client to any of these objects:
- An entire LUN
- A part of a LUN or a group of LUNs, by using volume groups and logical volumes that are defined on the VIOS
- A file, by using a file-backed device

The Virtual I/O Servers can map devices to the VMs by using virtual Small Computer System Interface (vSCSI) or N_Port ID Virtualization (NPIV). For more information about PowerVM storage virtualization, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940, and IBM PowerVM Enhancements What is New in 2013, SG24-8198.
As shown in Figure 7-2, the Virtual I/O Server (VIOS) accesses the SAN Volume Controller pools' LUNs by using NPIV, and the SSP logical units are accessed by using vSCSI.

Figure 7-2 Physical to logical management layers

Virtual I/O Servers support shared storage pools. Shared storage pools are groups of hdisks (LUNs) that are accessed simultaneously by several Virtual I/O Servers to create a common storage space. Any VIOS member of the SSP can create a logical unit in this space. This logical unit is visible from all Virtual I/O Servers in the SSP and can be mapped to a VM as a vSCSI device.
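PowerVC drives this mapping automatically, but it can be useful to see what the equivalent manual operation looks like on a VIOS. The following single mkbdsp command is a minimal sketch that creates an SSP logical unit and maps it to a client in one step; the cluster name is from this lab, while the pool, LU, size, and vhost adapter names are illustrative:

$ mkbdsp -clustername powervc_cluster -sp default_pool 20G -bd lu_vm01 -vadapter vhost0

The resulting logical unit becomes visible to every VIOS in the cluster and is presented to the client LPAR as a vSCSI disk.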
As Figure 7-3 shows, the lab contains an SSP that is named powervc_cluster, which is stored in the DS8300 that is managed by the SAN Volume Controller. The DS8300 LUNs are accessed by all Virtual I/O Servers and are used to create an SSP. Logical units are created in this SSP and mapped to the VMs by using vSCSI.

Figure 7-3 Shared storage pools

For the VM operating system, access to the storage does not require any special device or driver configuration other than the standard configuration for vSCSI disk devices.

PowerVC is the tool that integrates all of these layers and creates a centralized environment to manage the storage and its options. As shown in Figure 7-2 on page 237, PowerVC can manage the SAN Volume Controller configuration and the SSP configuration, and it can create NPIV or vSCSI connections between the storage and the VMs.

SAN configuration for PowerVC versions 1.2.2 and 1.2.3 tests

These PowerVC versions introduce significant changes to storage support. EMC storage is now supported. The newest versions of the IBM XIV Storage System and IBM Storwize are also supported.
Figure 7-4 shows the storage configuration that was used for this book. For this lab, it was not necessary to test the SSPs because no new features or functions were announced for these versions.

Figure 7-4 Storage configuration that was set for this publication

7.1.5 Storage connectivity groups and port tagging

Storage connectivity groups and port tagging are part of the new features that were introduced in PowerVC version 1.2.1. No new functions or updates were announced for PowerVC versions 1.2.2 and 1.2.3. The lab that is described here corresponds to the tests that were performed on PowerVC version 1.2.1.
Figure 7-5 shows the Fibre Channel (FC) adapters and the tags that were used in the lab. Port tagging is useful only for storage that is accessed through NPIV. Port tagging is not used for storage that is backed by an SSP.

In Figure 7-5, the yellow adapters do not support NPIV. Therefore, they are not tagged, and they are used for SSP access only. The three green adapters support NPIV. We defined two tags to partition the ports into a development environment and a production environment. In the figure, the red striped ports are tagged as Prod, and the blue striped ports are tagged as Dev.

By combining storage connectivity groups and tags, you can dedicate Virtual I/O Servers and FC ports to classes of traffic.

Figure 7-5 Storage groups and tagged ports configuration lab
The lab contains four storage connectivity groups, as shown in Figure 7-6. Two of the storage connectivity groups are defined by PowerVC by default:
- One storage connectivity group contains all of the Virtual I/O Servers of all of the managed hosts that access the storage controllers.
- One storage connectivity group contains all of the Virtual I/O Servers of all of the hosts that belong to the SSP.

We also defined two storage connectivity groups for NPIV-accessed storage: Dev and Prod. These storage connectivity groups contain the same three Virtual I/O Servers (those Virtual I/O Servers that use NPIV-compatible adapters). One storage connectivity group uses the ports that are tagged as Dev, and the other storage connectivity group uses the ports that are tagged as Prod.

Figure 7-6 Storage connectivity groups in the lab
Figure 7-7 shows the ports without tags, because they do not support NPIV. Ports that are tagged as Prod or Dev are also shown.

Figure 7-7 Fibre Channel port tags that are used in the lab

7.1.6 Software stack for PowerVC lab environment

Table 7-5 shows the levels of software that were used to test PowerVC.

Table 7-5 Software versions and releases

Software                          Operating system or firmware version
Virtual I/O Server                2.2.3.52
Red Hat Enterprise Linux (RHEL)   7.1
PowerVC                           1.2.3
IBM AIX operating system          7.1 TL 3
IBM i                             7.2
Storage SAN switch                7.0.2a
SAN Volume Controller             6.4.1.4 (build 75.3.1303080000)

Note: No specific requirements existed for the network switch, so we did not update its configuration during the lab tests.
7.2 PowerVC Standard managing PowerKVM lab

This section describes all of the components that were used during the PowerVC Standard Edition version 1.2.3 for managing PowerKVM labs, including the installation and setup.

Figure 7-8 shows the lab environment that was created for PowerVC. The lab environment includes the real host names that were used on the PowerVC domain.

Figure 7-8 PowerVC Standard managing PowerKVM lab setup

Important: PowerVC Standard Edition supports only internal or Internet SCSI (iSCSI) disks when it manages PowerKVM. See 3.1.2, "PowerVC Standard Edition requirements" on page 30.
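As a sketch of the iSCSI side of this setup, a PowerKVM host would attach an iSCSI LUN with the open-iscsi tools roughly as follows; the portal address and the target IQN are hypothetical:

# discover the targets that the storage portal offers
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# log in to one of the discovered targets
iscsiadm -m node -T iqn.2015-10.com.example:storage.pool1 -p 192.0.2.10:3260 --login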
Abbreviations and acronyms

ABI  application binary interface
AC  alternating current
ACL  access control list
AFPA  Adaptive Fast Path Architecture
AIO  Asynchronous I/O
APAR  authorized program analysis report
API  application programming interface
ARP  Address Resolution Protocol
ASMI  Advanced System Management Interface
BFF  Backup File Format
BIND  Berkeley Internet Name Domain
BIST  Built-In Self-Test
BLV  Boot Logical Volume
BOOTP  Bootstrap Protocol
BOS  Base Operating System
BSD  Berkeley Software Distribution
CA  certificate authority
CATE  Certified Advanced Technical Expert
CD  compact disc
CD-R  compact disc recordable
CD-ROM  compact-disc read-only memory
CDE  Common Desktop Environment
CEC  central electrical complex
CHRP  Common Hardware Reference Platform
CLI  command-line interface
CLVM  Concurrent LVM
CPU  central processing unit
CRC  cyclic redundancy check
CSM  Cluster Systems Management
CUoD  Capacity Upgrade on Demand
CVUT  Czech Technical University
DCM  Dual Chip Module
DES  Data Encryption Standard
DGD  Dead Gateway Detection
DHCP  Dynamic Host Configuration Protocol
DLPAR  dynamic LPAR
DMA  direct memory access
DNS  Domain Name Server
DR  dynamic reconfiguration
DRM  dynamic reconfiguration manager
DVD  digital versatile disc
EC  EtherChannel
ECC  error correction code
EGO  Enterprise Grid Orchestrator
EOF  end-of-file
EPOW  emergency power-off warning
ERRM  Event Response resource manager
IBM ESS  IBM Enterprise Storage Server®
FC  Fibre Channel
FC-AL  Fibre Channel Arbitrated Loop
FDX  full duplex
FLOP  floating point operation
FRU  field-replaceable unit
FTP  File Transfer Protocol
IBM GDPS®  IBM Geographically Dispersed Parallel Sysplex™
GID  group ID
IBM GPFS  IBM General Parallel File System
GUI  graphical user interface
IBM HACMP™  IBM High Availability Cluster Multiprocessing
HBA  host bus adapter
HMC  Hardware Management Console
HTML  Hypertext Markup Language
HTTP  Hypertext Transfer Protocol
Hz  hertz
I/O  input/output
IBM  International Business Machines
ID  identifier
IDE  Integrated Device Electronics
IEEE  Institute of Electrical and Electronics Engineers
IP  Internet Protocol
IPAT  IP address takeover
IPL  initial program load
IPMP  IP network multipathing
iSCSI  Internet SCSI
ISV  independent software vendor
ITSO  International Technical Support Organization
IVM  Integrated Virtualization Manager
IaaS  Infrastructure as a Service
JFS  journaled file system
JRE  Java runtime environment
KVM  kernel-based virtual machine
L1  Level 1
L2  Level 2
L3  Level 3
LA  Link Aggregation
LACP  Link Aggregation Control Protocol
LAN  local area network
LDAP  Lightweight Directory Access Protocol
LED  light-emitting diode
LMB  Logical Memory Block
LPAR  logical partition
LPM  Live Partition Migration
LPP  licensed program product
LU  logical unit
LUN  logical unit number
LV  logical volume
LVCB  Logical Volume Control Block
LVM  Logical Volume Manager
MAC  Media Access Control
MBps  megabytes per second
MCM  multiple chip module
ML  Maintenance Level
MP  Multiprocessor
MPIO  Multipath I/O
MTU  maximum transmission unit
Mbps  megabits per second
NFS  Network File System
NIB  Network Interface Backup
NIC  network interface controller
NIM  Network Installation Management
NIMOL  NIM on Linux
NPIV  N_Port Identifier Virtualization
NVRAM  nonvolatile random access memory
N_PORT  Node Port
ODM  Object Data Manager
OS  operating system
OSPF  Open Shortest Path First
PCI  Peripheral Component Interconnect
PCI Express  Peripheral Component Interconnect Express
PCM  path control module
PIC  Pool Idle Count
PID  process ID
PKI  public key infrastructure
PLM  Partition Load Manager
POST  power-on self-test
POWER  Performance Optimization with Enhanced RISC (Architecture)
PPC  Physical Processor Consumption
PPFC  Physical Processor Fraction Consumed
PTF  program temporary fix
PTX  Performance Toolbox
PURR  Processor Utilization Resource Register
PV  physical volume
PVID  Port Virtual LAN Identifier
PoE  Proof of Entitlement
QoS  quality of service
RAID  Redundant Array of Independent Disks
RAM  random access memory
RAS  reliability, availability, and serviceability
RBAC  role-based access control
RCP  Remote Copy
RDAC  Redundant Disk Array Controller
RDO  Red Hat OpenStack
REST  Representational State Transfer
RHEL  Red Hat Enterprise Linux
RIO  remote input/output
RIP  Routing Information Protocol
RISC  reduced instruction-set computer
RMC  Resource Monitoring and Control
RPC  Remote Procedure Call
RPL  Remote Program Loader
RPM  Red Hat Package Manager
RSA  Rivest-Shamir-Adleman algorithm
RSCT  Reliable Scalable Cluster Technology
RSH  Remote Shell
SAN  storage area network
SCG  storage connectivity group
SCSI  Small Computer System Interface
SDD  Subsystem Device Driver
SDDPCM  Subsystem Device Driver Path Control Module
SEA  shared Ethernet adapter
SLES  SUSE Linux Enterprise Server
SMIT  System Management Interface Tool
SMP  symmetric multiprocessor
SMS  system management services
SMT  simultaneous multithreading
SP  Service Processor
SPOT  Shared Product Object Tree
SRC  System Resource Controller
SRN  service request number
SSA  Serial Storage Architecture
SSH  Secure Shell
SSL  Secure Sockets Layer
SSP  shared storage pool
SUID  Set User ID
SVC  SAN Volume Controller
TCP/IP  Transmission Control Protocol/Internet Protocol
TL  Technology Level
TLS  Transport Layer Security
UDF  Universal Disk Format
UDID  Universal Disk Identification
VSAE  Virtual Solutions Activation Engine
VG  volume group
VGDA  Volume Group Descriptor Area
VGSA  Volume Group Status Area
VIOS  Virtual I/O Server
VIPA  virtual IP address
VLAN  virtual local area network
VM  virtual machine
VP  virtual processor
VPD  vital product data
VPN  virtual private network
vSCSI  virtual SCSI
VRRP  Virtual Router Redundancy Protocol
VSD  Virtual Shared Disk
WLM  Workload Manager
WWN  worldwide name
WWPN  worldwide port name
Related publications

The publications that are listed in this section are considered particularly suitable for a more detailed discussion of the topics that are covered in this book.

IBM Redbooks

The following IBM Redbooks publications provide additional information about the topics in this document. Note that some publications that are referenced in this list might be available in softcopy only.
- IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
- IBM PowerVM Virtualization Managing and Monitoring, SG24-7590
- IBM Power Systems HMC Implementation and Usage Guide, SG24-7491
- Implementing the IBM System Storage SAN Volume Controller V7.4, SG24-7933
- IBM PowerVM Enhancements What is New in 2013, SG24-8198

You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, drafts, and additional materials at the following website:
ibm.com/redbooks

Online resources

These websites are also relevant as further information sources:
- Information about IBM Platform Resource Scheduler:
  http://www.ibm.com/systems/platformcomputing/products/rs/
- Latest PowerVC Standard Edition requirements:
  http://ibm.co/1jC4Xx0
- IBM Knowledge Center:
  http://www.ibm.com/support/knowledgecenter/
- OpenStack:
  http://www.openstack.org/foundation/
  https://wiki.openstack.org/wiki/Main_Page

Help from IBM

IBM Support and downloads:
ibm.com/support
IBM Global Services:
ibm.com/services