ibm.com/redbooks
International Technical Support Organization

IBM PowerVM Live Partition Mobility

March 2009
SG24-7460-01
Note: Before using this information and the product it supports, read the information in "Notices" on page xv.
Second Edition (March 2009)

This edition applies to AIX Version 6.1, AIX 5L Version 5.3 TL7, HMC Version 7.3.2 or later, and POWER6 technology-based servers, such as the IBM Power System 570 (9117-MMA) and the IBM Power System 550 Express (8204-E8A).
© Copyright International Business Machines Corporation 2007, 2009. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Figures  ix
Tables  xiii
Notices  xv
Trademarks  xvi
Preface  xvii
The team that wrote this book  xvii
Become a published author  xix
Comments welcome  xix
Chapter 1. Overview  1
1.1 Introduction  2
1.2 Partition migration  3
1.3 Cross-system flexibility is the requirement  3
1.4 Live Partition Mobility is the answer  5
1.4.1 Inactive migration  5
1.4.2 Active migration  6
1.5 Architecture  6
1.5.1 Hardware infrastructure  7
1.5.2 Components involved  11
1.6 Operation  12
1.6.1 Inactive migration  12
1.6.2 Active migration  13
1.7 Combining mobility with other features  14
1.7.1 High availability clusters  14
1.7.2 AIX Live Application Mobility  16
Chapter 2. Live Partition Mobility mechanisms  19
2.1 Live Partition Mobility components  20
2.1.1 Other components affecting Live Partition Mobility  22
2.2 Live Partition Mobility prerequisites  23
2.2.1 Capability and compatibility  23
2.2.2 Readiness  24
2.2.3 Migratability  25
2.3 Partition migration high-level workflow  26
2.4 Inactive partition migration  27
2.4.1 Introduction  27
2.4.2 Validation phase  28
2.4.3 Migration phase  29
2.4.4 Migration completion phase  31
2.4.5 Stopping an inactive partition migration  31
2.5 Active partition migration  31
2.5.1 Active partition state  32
2.5.2 Preparation  32
2.5.3 Validation phase  33
2.5.4 Partition migration phase  36
2.5.5 Migration completion phase  39
2.5.6 Virtual I/O Server selection  40
2.5.7 Source and destination mover service partitions selection  41
2.5.8 Stopping an active migration  41
2.6 Performance considerations  42
2.7 AIX and active migration  43
2.8 Linux and active migration  44
Chapter 3. Requirements and preparation  45
3.1 Introduction  46
3.2 Skill considerations  46
3.3 Requirements for Live Partition Mobility  47
3.4 Live Partition Mobility preparation checks  53
3.5 Preparing the systems for Live Partition Mobility  54
3.5.1 HMC  54
3.5.2 Logical memory block size  54
3.5.3 Battery power  55
3.5.4 Available memory  56
3.5.5 Available processors to support Live Partition Mobility  58
3.6 Preparing the HMC for Live Partition Mobility  61
3.7 Preparing the Virtual I/O Servers  63
3.7.1 Virtual I/O Server version  64
3.7.2 Mover service partition  64
3.7.3 Synchronize time-of-day clocks  65
3.8 Preparing the mobile partition for mobility  66
3.8.1 Operating system version  66
3.8.2 RMC connections  66
3.8.3 Disable redundant error path reporting  68
3.8.4 Virtual serial adapters  69
3.8.5 Partition workload groups  70
3.8.6 Barrier-synchronization register  72
3.8.7 Huge pages  74
3.8.8 Physical or dedicated I/O  76
3.8.9 Name of logical partition profile  78
3.8.10 Mobility-safe or mobility-aware  79
3.8.11 Changed partition profiles  79
3.9 Configuring the external storage  79
3.10 Network considerations  87
3.11 Distance considerations  88
Chapter 4. Basic partition migration scenario  89
4.1 Basic Live Partition Mobility environment  90
4.1.1 Minimum requirements  91
4.1.2 Inactive partition migration  92
4.1.3 Active partition migration  93
4.2 Virtual I/O Server attributes  93
4.2.1 Mover service partition  93
4.2.2 Virtual Asynchronous Services Interface device  93
4.2.3 Time reference  94
4.3 Preparing for an active partition migration  94
4.3.1 Enabling the mover service partition  94
4.3.2 Enabling the Time reference  98
4.4 Migrating a logical partition  99
4.4.1 Performing the validation steps and eliminating errors  99
4.4.2 Inactive or active migration  103
4.4.3 Migrating a mobile partition  104
Chapter 5. Advanced topics  119
5.1 Dual Virtual I/O Servers  120
5.1.1 Dual Virtual I/O Server and client mirroring  121
5.1.2 Dual Virtual I/O Server and multipath I/O  124
5.1.3 Single to dual Virtual I/O Server  126
5.2 Multiple concurrent migrations  128
5.3 Dual HMC considerations  130
5.4 Remote Live Partition Mobility  130
5.4.1 Requirements for remote migration  132
5.4.2 HMC considerations  135
5.4.3 Remote validation and migration  138
5.4.4 Command-line interface enhancements  145
5.5 Multiple shared processor pools  147
5.5.1 Shared processor pools in migration and validation GUI  147
5.5.2 Processor pools on command line  148
5.6 Migrating a partition with physical resources  149
5.6.1 Overview  149
5.6.2 Configure a Virtual I/O Server on the source system  150
5.6.3 Configure a Virtual I/O Server on the destination system  152
5.6.4 Configure storage on the mobile partition  153
5.6.5 Configure network on the mobile partition  157
5.6.6 Remove adapters from the mobile partition  160
5.6.7 Ready to migrate  161
5.7 The command-line interface  162
5.7.1 The migrlpar command  163
5.7.2 The lslparmigr command  166
5.7.3 The lssyscfg command  173
5.7.4 The mkauthkeys command  173
5.7.5 A more complex example  174
5.8 Migration awareness  177
5.9 Making applications migration-aware  178
5.9.1 Migration phases  179
5.9.2 Making programs migration aware using APIs  179
5.9.3 Making applications migration-aware using scripts  182
5.10 Making kernel extension migration aware  185
5.11 Virtual Fibre Channel  187
5.11.1 Basic virtual Fibre Channel Live Partition Mobility preparation  190
5.11.2 Migration of a virtual Fibre Channel based partition  193
5.11.3 Dual Virtual I/O Server and virtual Fibre Channel multipathing  195
5.11.4 Live Partition Mobility with Heterogeneous I/O  198
5.12 Processor compatibility modes  205
5.12.1 Verifying the processor compatibility mode of mobile partition  208
Chapter 6. Migration status  213
6.1 Progress and reference code location  214
6.2 Recovery  216
6.3 A recovery example  218
Chapter 7. Integrated Virtualization Manager for Live Partition Mobility  221
7.1 Migration types  222
7.2 Requirements for Live Partition Mobility on IVM  222
7.3 How active Partition Mobility works  225
7.4 How inactive Partition Mobility works  226
7.5 Validation for active Partition Mobility  227
7.6 Validation for inactive Partition Mobility  231
7.7 Preparation for partition migration  232
7.7.1 Preparing the source and destination servers  232
7.7.2 Preparing the management partition for Partition Mobility  238
7.7.3 Preparing the mobile partition for Partition Mobility  239
7.7.4 Preparing the virtual SCSI configuration for Partition Mobility  244
7.7.5 Preparing the virtual Fibre Channel configuration  248
7.7.6 Preparing the network configuration for Partition Mobility  253
7.7.7 Validating the Partition Mobility environment  257
7.7.8 Migrating the mobile partition  257
Appendix A. Error codes and logs  259
SRCs, current state  260
SRC error codes  261
IVM source and destination systems error codes  262
Operating system error logs  266
Abbreviations and acronyms  267
Related publications  271
IBM Redbooks  271
Other publications  272
Online resources  273
How to get IBM Redbooks  274
Help from IBM  274
Index  275
Figures
1-1 Hardware infrastructure enabled for Live Partition Mobility  9
1-2 A mobile partition during migration  10
1-3 The final configuration after a migration is complete  11
1-4 Migrating all partitions of a system  15
1-5 AIX Workload Partition example  17
2-1 Live Partition Mobility components  20
2-2 Inactive migration validation workflow  28
2-3 Inactive migration state flow  29
2-4 Inactive migration workflow  30
2-5 Active migration validation workflow  34
2-6 Migration phase of an active migration  36
2-7 Active migration partition state transfer path  37
3-1 Activation of Enterprise Edition  48
3-2 Enter activation code  48
3-3 Checking the current firmware level  49
3-4 Checking and changing LMB size with ASMI  55
3-5 Checking the amount of memory of the mobile partition  56
3-6 Available memory on destination system  57
3-7 Checking the number of processing units of the mobile partition  59
3-8 Available processing units on destination system  60
3-9 Checking the version and release of HMC  61
3-10 Upgrading the Hardware Management Console  62
3-11 Install Corrective Service to upgrade the HMC  63
3-12 Enabling mover service partition  65
3-13 Synchronizing the time-of-day clocks  66
3-14 Disable redundant error path handling  69
3-15 Verifying the number of serial adapters on the mobile partition  70
3-16 Disabling partition workload group - Other tab  71
3-17 Disabling partition workload group - Settings tab  72
3-18 Checking the number of BSR arrays on the mobile partition  73
3-19 Setting number of BSR arrays to zero  74
3-20 Checking if huge page memory equals zero  75
3-21 Setting Huge Page Memory to zero  76
3-22 Checking if there are required resources in the mobile partition  77
3-23 Logical Host Ethernet Adapter  78
3-24 Virtual SCSI client adapter  82
3-25 Virtual SCSI server adapters  83
3-26 Checking free virtual slots  85
3-27 The Virtual SCSI Topology of the mobile partition  86
4-1 Basic Live Partition Mobility configuration  90
4-2 Hardware Management Console Workplace  95
4-3 Create LPAR Wizard window  96
4-4 Changing the Virtual I/O Server partition property  97
4-5 Enabling the Mover service partition attribute  97
4-6 Enabling the Time reference attribute  98
4-7 Validate menu on the HMC  100
4-8 Selecting the Remote HMC and Destination System  101
4-9 Partition Validation Errors  102
4-10 Partition Validation Warnings  102
4-11 Validation window after validation  103
4-12 System environment before migrating  104
4-13 Migrate menu on the HMC  105
4-14 Migration Information  106
4-15 Specifying the profile name on the destination system  107
4-16 Optionally specifying the Remote HMC of the destination system  108
4-17 Selecting the destination system  109
4-18 Sample of Partition Validation Errors/Warnings  110
4-19 Selecting mover service partitions  111
4-20 Selecting the VLAN configuration  112
4-21 Selecting the virtual SCSI adapter  113
4-22 Specifying the shared processor pool  114
4-23 Specifying wait time  115
4-24 Partition Migration Summary panel  116
4-25 Partition Migration Status window  116
4-26 Migrated partition  117
5-1 Dual VIOS and client mirroring to dual VIOS before migration  122
5-2 Dual VIOS and client mirroring to dual VIOS after migration  123
5-3 Dual VIOS and client mirroring to single VIOS after migration  124
5-4 Dual VIOS and client multipath I/O to dual VIOS before migration  125
5-5 Dual VIOS and client multipath I/O to dual VIOS after migration  126
5-6 Single VIOS to dual VIOS before migration  127
5-7 Single VIOS to dual VIOS after migration  128
5-8 Live Partition Mobility infrastructure with two HMCs  133
5-9 Live Partition Mobility infrastructure using private networks  134
5-10 One public and one private network migration infrastructure  135
5-11 Network ping successful to remote HMC  136
5-12 HMC option for remote command execution  137
5-13 Remote command execution window  137
5-14 Remote migration information entered for validate task  139
5-15 Validation window after destination system refresh  140
5-16 Validation window after validation  141
5-17 Local HMC environment before migrating  142
5-18 Remote HMC selection window in Migrate task  143
5-19 Remote migration summary window  144
5-20 Remote HMC view after remote migration success  145
5-21 Shared processor pool selection in migration wizard  147
5-22 Shared processor pool selection in Validate task  148
5-23 The mobile partition is using physical resources  149
5-24 The source Virtual I/O Server is created and configured  151
5-25 The destination Virtual I/O Server is created and configured  153
5-26 The storage devices are configured on the mobile partition  154
5-27 The root volume group extends on to virtual disks  155
5-28 The root volume group of the mobile partition is on virtual disks only  156
5-29 The mobile partition has a virtual network device created  158
5-30 The mobile partition has unconfigured its physical network interface  159
5-31 The mobile partition with only virtual adapters  161
5-32 The mobile partition on the destination system  162
5-33 Basic NPIV virtual Fibre Channel infrastructure before migration  187
5-34 Basic NPIV virtual Fibre Channel infrastructure after migration  188
5-35 Client partition virtual Fibre Channel adapter WWPN properties  190
5-36 Virtual Fibre Channel adapters in the Virtual I/O Server  191
5-37 Virtual I/O Server Fibre Channel adapter properties  191
5-38 Selecting the virtual Fibre Channel adapter  193
5-39 Virtual Fibre Channel migration summary window  194
5-40 Migrated partition  195
5-41 Dual VIOS and client multipath I/O to dual NPIV before migration  196
5-42 Dual VIOS and client multipath I/O to dual VIOS after migration  197
5-43 The mobile partition using physical resources  198
5-44 Virtual Fibre Channel server adapter properties  199
5-45 Virtual Fibre Channel client adapter properties  200
5-46 The mobile partition using physical and virtual resources  202
5-47 The mobile partition using virtual resources  203
5-48 The mobile partition on the destination system  204
5-49 Processor compatibility mode options of the mobile partition  209
5-50 Current processor compatibility mode of the mobile partition  210
6-1 Partition reference codes  214
6-2 Migration progress window  215
6-3 Recovery menu  217
6-4 Recovery pop-up window  218
6-5 Interrupted active migration status  219
7-1 Checking release level of the Virtual I/O Server  223
7-2 More Tasks menu  229
7-3 Validation task for migration  230
7-4 Checking LMB size with the IVM  233
7-5 Checking the amount of memory of the mobile partition  234
7-6 Checking the amount of memory on the destination server  235
7-7 Checking the amount of processing units of the mobile partition  236
7-8 Checking the amount of processing units on the destination server  237
7-9 Enter PowerVM Edition key on the IVM  239
7-10 Processor compatibility mode on the IVM  241
7-11 Checking the partition workload group participation  243
7-12 Checking if the mobile partition has physical adapters  244
7-13 View/Modify Virtual Fibre Channel window  249
7-14 Virtual Fibre Channel Partition Connections window  250
7-15 Partition selected shows Automatically generate  250
7-16 Virtual Fibre Channel on source system  251
7-17 Virtual Fibre Channel on destination system  252
7-18 Selecting physical adapter to be used as a virtual Ethernet bridge  254
7-19 Create virtual Ethernet adapter on the mobile partition  255
7-20 Create a virtual Ethernet adapter on the management partition  256
7-21 Partition is migrating  258
Tables
1-1 PowerVM Live Partition Mobility Support  16
3-1 Supported migration matrix  50
3-2 Preparing the environment for Live Partition Mobility  53
3-3 Virtual SCSI adapter worksheet  84
5-1 Dynamic reconfiguration script commands for migration  183
5-2 Processor compatibility modes supported by server type  206
A-1 Progress SRCs  260
A-2 SRC error codes  261
A-3 Source system generated error codes  262
A-4 Destination system generated error codes  264
A-5 Operating system error log entries  266
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurement may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written.

These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

AIX 5L, AIX, BladeCenter, DB2, Enterprise Storage Server, GDPS, Geographically Dispersed Parallel Sysplex, GPFS, HACMP, i5/OS, IBM, Parallel Sysplex, POWER Hypervisor, Power Systems, POWER4, POWER5, POWER6+, POWER6, POWER7, PowerHA, PowerVM, POWER, Redbooks, Redpapers, Redbooks (logo), System p, Tivoli, Workload Partitions Manager
The following terms are trademarks of other companies: SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and other countries. Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Preface
Live Partition Mobility is the next step in the IBM Power Systems virtualization continuum. It can be combined with other virtualization technologies, such as logical partitions, Live Workload Partitions, and the SAN Volume Controller, to provide a fully virtualized computing platform that offers the degree of system and infrastructure flexibility required by today's production data centers.

This IBM Redbooks publication discusses how Live Partition Mobility can help technical professionals, enterprise architects, and system administrators:

- Migrate entire running AIX and Linux partitions and hosted applications from one physical server to another without disrupting services and loads.
- Meet stringent service-level agreements.
- Rebalance loads across systems quickly, with support for multiple concurrent migrations.
- Use a migration wizard for single partition migrations.

This book can help you understand, plan, prepare, and perform partition migration on IBM Power Systems servers that are running AIX.

Note: Minor updates and technical corrections are marked by change bars such as the ones in the left margin on this page. A 2010 update was made to include POWER7 servers.
Thomas Prokop is a Consulting Certified IT Specialist working as a Field Technical Sales Specialist in IBM US Sales & Distribution, supporting clients, IBM sales, and Business Partners. He also provides pre-sales consultation and implementation of IBM POWER and AIX high-end system environments. He has 18 years of experience with IBM Power Systems and has experience in the fields of virtualization, performance analysis, PowerVM, and complex implementations.

Guido Somers is a Cross Systems Certified Senior Enterprise Infrastructure Architect working for the IBM Global Technology Services organization in Belgium. His focus is on server consolidation, IT optimization, and virtualization. He has 13 years of experience in the Information Technology field, ten years of which were within IBM. He holds degrees in Biotechnology, Business Administration, Chemistry, and Electronics, and did research in the field of Theoretical Physics. His areas of expertise include AIX, Linux, system performance and tuning, logical partitioning, virtualization, PowerHA, SAN, IBM Power Systems servers, and other IBM hardware offerings. He is an author of many IBM Redbooks publications.

The authors of the first edition of the IBM System p Live Partition Mobility Redbook are: Mitchell Harding, Narutsugu Itoh, Peter Nutt, Guido Somers, Federico Vagnini, and Jez Wain.

The project that produced this document was managed by: Scott Vetter, PMP

Thanks to the following people for their contributions to this project:

John E. Bailey, John Banchy, Kevin J. Cawlfield, Eddie Chen, Steven J. Finnes, Matthew Harding, Mitchell P. Harding, Tonya L. Holt, David Hu, Robert C. Jennings, Anil Kalavakolanu, Timothy Marchini, Josh Miers, Kasturi Patel, Timothy Piasecki, Steven E. Royer, Elizabeth A. Ruth, Maneesh Sharma, Luc R. Smolders, John D. Spangenberg, Ravindra Tekumallah, Vasu Vallabhaneni, Jonathan R. Van Niewaal, Dean S. Wilcox
IBM USA

Nigel A. Griffiths, James Lee, Chris Milsted, Dave Williams
IBM U.K.

Jun Nakano
IBM Japan
Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this book or other Redbooks in one of the following ways:

- Use the online Contact us review Redbooks form found at: ibm.com/redbooks
- Send your comments in an e-mail to: redbooks@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400
Chapter 1. Overview
In this chapter, we provide an overview of Live Partition Mobility with a high-level description of its features. This chapter contains the following topics:

- 1.1, "Introduction" on page 2
- 1.2, "Partition migration" on page 3
- 1.3, "Cross-system flexibility is the requirement" on page 3
- 1.4, "Live Partition Mobility is the answer" on page 5
- 1.5, "Architecture" on page 6
- 1.6, "Operation" on page 12
- 1.7, "Combining mobility with other features" on page 14
1.1 Introduction
Live Partition Mobility allows you to migrate partitions that are running AIX and Linux operating systems, and their hosted applications, from one physical server to another without disrupting infrastructure services. The migration operation, which takes just a few seconds, maintains complete system transactional integrity. The migration transfers the entire system environment, including processor state, memory, attached virtual devices, and connected users.

IBM Power Systems servers are designed to offer the highest stand-alone availability in the industry. Even so, enterprises must occasionally restructure their infrastructure to meet new IT requirements. By letting you move running production applications from one physical server to another, Live Partition Mobility enables system maintenance or modification that is nondisruptive to your users. It mitigates the impact on partitions and applications that was formerly caused by the occasional need to shut down a system.

Today, even small IBM Power Systems servers frequently host many logical partitions. As the number of hosted partitions increases, finding a maintenance window acceptable to all becomes increasingly difficult. Live Partition Mobility allows you to move partitions off a machine so that you can perform previously disruptive operations on it at your convenience, rather than at whatever time causes the least inconvenience to the users.

Live Partition Mobility helps you meet increasingly stringent service-level agreements (SLAs) because it allows you to proactively move running partitions and applications from one server to another.

The ability to move running partitions from one server to another also lets you balance workloads and resources. If a key application's resource requirements peak unexpectedly to a point where there is contention for server resources, you might move it to a more powerful server, or move other, less critical partitions to different servers and use the freed-up resources to absorb the peak.

Live Partition Mobility may also be used as a mechanism for server consolidation, because it provides an easy path to move applications from individual, stand-alone servers to consolidation servers. If you have partitions whose workloads have widely fluctuating resource requirements over time (for example, a peak workload at the end of the month or the end of the quarter), you can use Live Partition Mobility to consolidate those partitions onto a single server during the off-peak period, allowing you to turn off unused servers, and then move them back to their own, adequately configured servers just prior to the peak. This approach also offers energy savings by reducing the power needed to run machines, and to keep them cool, during off-peak periods.
Live Partition Mobility can be automated and incorporated into system management tools and scripts. Support for multiple concurrent migrations allows you to free up system resources very quickly. For single-partition, point-in-time migrations, the Hardware Management Console (HMC) and the Integrated Virtualization Manager (IVM) interfaces offer easy-to-use migration wizards.

Live Partition Mobility contributes to the goal of continuous availability, as follows:

- It reduces planned downtime by dynamically moving applications from one server to another.
- It responds to changing workloads and business requirements by letting you move workloads from heavily loaded servers to servers that have spare capacity.
- It reduces energy consumption by allowing you to easily consolidate workloads and turn off unused servers.

Live Partition Mobility is the next step in the IBM PowerVM continuum. It can be combined with other virtualization technologies, such as logical partitions, Live Workload Partitions, and the SAN Volume Controller, to provide a fully virtualized computing platform offering the degree of system and infrastructure flexibility required by today's production data centers.
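Because migrations can be driven entirely from the HMC command line, a scripted migration can be as simple as the following sketch. The migrlpar command is described later in this book (see 5.7.1, "The migrlpar command"); the system and partition names used here are examples only.

   # Validate that the partition can be migrated (run on the HMC)
   migrlpar -o v -m SourceSystem -t DestinationSystem -p mobile_lpar

   # If validation succeeds, start the migration
   migrlpar -o m -m SourceSystem -t DestinationSystem -p mobile_lpar

Wrapping these two commands in a script, with one invocation per partition, is one way to drive multiple concurrent migrations.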
the new requirements in a very short time, but also with minimal or no impact on the service level. Configuration changes must be applied in a simple and secure way, with limited administrator intervention, to reduce change-management costs and the related risk.

The Advanced POWER Virtualization feature introduced on POWER5-based systems provides excellent flexibility within each system. The virtualization of processor capacity and the granular distribution of memory, combined with network and disk virtualization, enable administrators to create multiple fine-grained logical partitions within a single system. Computing power can be distributed among partitions automatically and in real time, depending on real application needs, with no user action. System configuration changes are made by policy-based controls or by administrators with very simple and secure operations that do not interrupt service.

Although single-system virtualization greatly improves the flexibility of an IT solution, the service requirements of clients often demand a more comprehensive view of the entire infrastructure. In many instances, applications are distributed across multiple systems, ensuring isolation, optimization of global system resources, and adaptability of the infrastructure to new workloads.

One of the most time-consuming activities in a complex environment is the transfer of a workload from one system to another. Workloads are migrated for many reasons, including:

- Resource balancing: A system does not have enough resources for the workload while another system does.
- New system deployment: A workload running on an existing system must be migrated to a new, more powerful one.
- Availability requirements: When a system requires maintenance, its hosted applications must not be stopped and can be migrated to another system.

Without a way to migrate a partition, all these activities require careful planning and highly skilled people, and often cause significant downtime. In some cases, an SLA may be so strict that planned outages are not tolerated.
communicate with the source and destination servers and their respective Virtual I/O Servers. It is executed in a controlled way and with minimal administrator interaction, so that it can be safely and reliably performed in a very short time frame. When the service provided by the partition cannot be interrupted, its relocation can be performed with no loss of service by using the active migration feature.
1.5 Architecture
Live Partition Mobility requires a specific hardware infrastructure, and several platform components are involved. Live Partition Mobility is controlled by the Hardware Management Console (HMC) or the Integrated Virtualization Manager (IVM). This section describes the HMC-based architecture. Chapter 7, "Integrated Virtualization Manager for Live Partition Mobility" on page 221 describes IVM-based Live Partition Mobility in detail.
Live Partition Mobility requires a specific hardware and microcode configuration that is currently available on POWER6 technology-based systems only. The procedure that performs the migration identifies the resource configuration of the mobile partition on the source system and then reconfigures both the source and destination systems accordingly. Because the focal point of hardware configuration is the HMC, it has been enhanced to coordinate the process of migrating partitions.

The mobile partition's configuration is not changed during the migration. The destination system must be able to host the mobile partition and must have enough free processor and memory resources to satisfy the partition's requirements before the migration is started. No limitation exists on the size of the mobile partition; it can even use all the resources of the source system offered by the Virtual I/O Server.

The operating system and application data must reside on disks external to the source system, because the mobile partition's disk data must be available after the migration to the destination system is completed. An external, shared-access storage subsystem is therefore required.

The mobile partition must not own any physical adapters and must use the Virtual I/O Server for both network and external disk access. External disks may be presented to the mobile partition as virtual SCSI resources, virtual Fibre Channel resources, or both. Because the mobile partition's external disk space must be available to the Virtual I/O Servers on both the source and destination systems, you cannot use storage pools. Each Virtual I/O Server must create its virtual target devices using physical disks, not logical volumes.

Virtual network connectivity must be established before the partition migration task is activated, while the virtual disk setup is performed by the migration process itself. Both the source and the target system must have an appropriate shared Ethernet adapter environment to host a moving partition. All virtual networks in use by the mobile partition on the source system must be available as virtual networks on the destination system.

VLANs defined by port virtual IDs (PVIDs) on the Virtual I/O Server have no meaning outside an individual server, because all packets are bridged untagged. It is therefore possible for VLAN 1 on CEC 1 to be part of the 192.168.1 network while VLAN 1 on CEC 2 is part of the 10.1.1 network.
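As a concrete illustration of the disk rule above, the following Virtual I/O Server commands check and clear the SCSI reservation on a LUN so that both Virtual I/O Servers can open it, and then back a virtual target device with the whole physical disk rather than a logical volume. This is a sketch only: the device names hdisk4 and vhost0 are examples, and the full storage preparation is covered in 3.9, "Configuring the external storage".

   $ lsdev -dev hdisk4 -attr reserve_policy            # check the current reservation policy
   $ chdev -dev hdisk4 -attr reserve_policy=no_reserve # allow shared access from both Virtual I/O Servers
   $ mkvdev -vdev hdisk4 -vadapter vhost0              # map the whole physical disk to the client adapter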
Because the two networks can differ, verifying that VLAN 1 exists on both servers is not sufficient: you must check whether VLAN 1 maps to the same network on both servers.

Figure 1-1 shows a basic hardware infrastructure enabled for Live Partition Mobility that uses a single HMC. Each system is configured with a single Virtual I/O Server partition. The mobile partition has only virtual access to network and disk resources. The Virtual I/O Server on the destination system is connected to the same network and is configured to access the same disk space used by the mobile partition. For illustration purposes, the device numbers are all shown as zero, but in practice they can vary considerably.
[Figure 1-1 Hardware infrastructure enabled for Live Partition Mobility: two POWER6 systems, each with a Virtual I/O Server bridging the VLAN and a LUN on a shared storage subsystem; the mobile AIX client partition on system #1 uses only virtual devices (vscsi0, ent0).]
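One way to confirm that an equivalent shared Ethernet adapter environment exists on both sides is to inspect the network mappings on each Virtual I/O Server. A minimal sketch, assuming a shared Ethernet adapter named ent2 (names vary by configuration):

   $ lsmap -net -all          # list shared Ethernet adapters and their backing devices
   $ lsdev -dev ent2 -attr    # show the SEA attributes, including the bridged virtual adapters

Remember that matching VLAN IDs alone prove nothing; the networks bridged behind them must be the same.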
The migration process creates a new logical partition on the destination system. This new partition uses the destination's Virtual I/O Server to access the mobile partition's network and disks. During an active migration, the state of the mobile partition is copied, as shown in Figure 1-2.
[Figure 1-2 A mobile partition during migration: the partition state is copied from the AIX client partition on POWER6 System #1 to a new AIX client partition on POWER6 System #2, while both Virtual I/O Servers access the same LUN on the storage subsystem.]
When the migration is complete, the source Virtual I/O Server is no longer configured to provide access to the external disk data. The destination Virtual I/O Server is set up to allow the mobile partition to use the storage. The final configuration is shown in Figure 1-3.
[Figure 1-3 The final configuration after a migration is complete: the mobile partition now runs on POWER6 System #2, with its virtual SCSI device (vtscsi0 on vhost0) and shared Ethernet adapter provided by the destination Virtual I/O Server.]
Memory management of an active migration is assigned to a mover service partition on each system. During an active partition migration, the source mover service partition extracts the mobile partition's state from the source system and sends it over the network to the destination mover service partition, which in turn updates the memory state on the destination system. Any Virtual I/O Server partition can be configured as a mover service partition.

Live Partition Mobility places no specific requirements on the mobile partition's memory size or on the type of network connecting the mover service partitions. The memory transfer does not interrupt the mobile partition's activity, but it might take a long time when a large memory configuration is involved on a slow network. Use a high-bandwidth connection, such as 1 Gbps Ethernet or faster.
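A back-of-the-envelope estimate (our own illustration, not a measurement from this book) shows why bandwidth matters for active migrations:

   memory to move:       32 GB = 256 Gb
   effective throughput: ~0.8 Gb/s on a 1 Gbps link
   first-pass time:      256 Gb / 0.8 Gb/s = 320 seconds

Because the partition keeps running, pages modified during the transfer must be sent again, so the total time also grows with the partition's memory write rate.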
1.6 Operation
Partition migration can be performed either as an inactive or an active operation. This section describes HMC-based partition migration. See Chapter 7, "Integrated Virtualization Manager for Live Partition Mobility" on page 221 for details regarding IVM-based migration.
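Whether a given server supports inactive and active migration can be checked from the HMC command line. A minimal sketch; the lssyscfg command is covered in 5.7.3, and the system name is an example:

   # Show the capabilities of a managed system
   lssyscfg -r sys -m SourceSystem -F capabilities

On a mobility-capable system, the returned capability list includes entries for active and inactive partition mobility.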
c. It updates the source Virtual I/O Server to remove the resources that provided virtual SCSI access, virtual Fibre Channel access, or both to the mobile partition's disk resources.
d. It removes the mobile partition configuration on the source system.

5. When the migration is complete, the mobile partition can be activated on the destination system.

The steps executed are similar to those an administrator would follow when performing a manual migration. These actions normally require accurate planning and system-wide knowledge of the configuration of the two systems, because virtual adapters and virtual target devices have to be created on the destination system by following the virtualization configuration rules. The inactive migration task takes care of all planning and validation and performs the required activities without user action. This mitigates the risk of human error and executes the movement in a timely manner.
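While a migration is in flight, its state can be queried from the HMC command line. A minimal sketch, with SourceSystem as a placeholder name; the lslparmigr command is described in 5.7.2, "The lslparmigr command":

   # List the migration state of the partitions on the source system
   lslparmigr -r lpar -m SourceSystem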
Active migration performs steps similar to those of inactive migration, but it also copies physical memory to the destination system. It keeps applications running, regardless of the size of the memory used by the partition: the service is not interrupted, I/O continues accessing the disks, and network connections keep transferring data.
This case is shown in Figure 1-4, where system A has to be shut down. The production database partition is actively migrated to system B, while the production Web application partition is actively migrated to system C. The test environment is not considered vital and is shut down during the outage.
[Figure 1-4 Migrating all partitions of a system: the production database partition moves from POWER6 System A to System B, the production Web application partition moves to System C, and the test environment on System A is shut down.]
Live Partition Mobility is a reliable procedure for system reconfiguration, and it may be used to improve overall system availability. High availability environments also require the definition of automated procedures that detect software and hardware events and activate recovery plans to restart a failed service as soon as possible. Live Partition Mobility increases global availability, but it is not a high availability solution: it requires that both the source and destination systems be operational and that the partition is not in a failed state. In addition, it does not monitor operating system and application state, and it is, by default, a user-initiated action. Unplanned outages still require specific actions that are normally executed by cluster solutions such as IBM PowerHA.
IBM PowerHA for AIX, also known as High Availability Cluster Multiprocessing (HACMP) for AIX, supports Live Partition Mobility for all IBM POWER6 technology-based servers. Table 1-1 provides support details.
Table 1-1 PowerVM Live Partition Mobility support

                 PowerHA v5.3           PowerHA v5.4.1         PowerHA v5.5
AIX 5.3          HACMP # IZ07791        HACMP # IZ02620        AIX 5.3 TL9
                 AIX 5.3.7.1            AIX 5.3.7.1            RSCT 2.4.10.0
                 RSCT 2.4.5.4           RSCT 2.4.5.4
AIX 6.1          HACMP # IZ07791        HACMP # IZ02620        AIX 6.1 TL2 SP1
                 AIX 6.1.0.1            AIX 6.1.0.1            RSCT 2.5.2.0
                 RSCT 2.5.0.0           RSCT 2.5.0.0
Cluster software and Live Partition Mobility provide different functions that can be used together to improve the availability and uptime of applications. They can simplify administration, reducing the related cost.
The Workload Partition migration function does not require the configuration of virtual devices on the source and destination systems. AIX keeps running on both systems and continues to use its allocated resources. It is the system administrator's task to perform a dynamic partition reconfiguration operation to reduce the footprint of the source partition and enlarge the destination partition. Workload Partition migration also requires the destination partition to exist and be running before the migration is started. Figure 1-5 shows an example of live Workload Partition migration. System B is a system with three different workloads; each of them can be migrated to another AIX Version 6.1 image, even if the images run on different hardware platforms.
Figure 1-5 Workload Partition migration: Web, Test, and DB workloads moving between AIX 6 images on POWER5 system A, POWER6 system B, and POWER6 system C over a common file system
Live Partition Mobility and AIX Live Application Mobility have different scopes but have similar characteristics. They can be used in conjunction to provide even higher flexibility in a POWER6 or POWER7 environment.
Chapter 2. Live Partition Mobility mechanisms
Figure: Live Partition Mobility components - the HMC communicates through RMC with the DLPAR resource managers in the mobile partition (AIX/Linux) and in the Virtual I/O Server; the mover service in the Virtual I/O Server uses the VASI interface to the POWER Hypervisor, which holds the partition profiles and sits above the service processor
These components and their roles are described in the following list.

Hardware Management Console (HMC)
The HMC is the central point of control. It coordinates administrator initiation and setup of the subsequent migration command sequences that flow between the various partition migration components. The HMC provides both a graphical user interface (GUI) wizard and a command-line interface to control migration. The HMC interacts with the service processors and POWER Hypervisor on the source and destination servers, the mover service partitions, the Virtual I/O Server partitions, and the mobile partition itself.

Resource Monitoring and Control (RMC)
RMC is a distributed framework and architecture that allows the HMC to communicate with a managed logical partition.

Dynamic LPAR Resource Manager
This component is an RMC daemon that runs inside AIX, Linux, and Virtual I/O Server partitions. The HMC uses this capability to remotely execute partition-specific commands.
Mover service partition (MSP)
The mover service partition is an attribute of the Virtual I/O Server partition. It enables the Virtual I/O Server partition to provide the function that asynchronously extracts, transports, and installs partition state. Two mover service partitions are involved in an active partition migration: one on the source system and one on the destination system. Mover service partitions are not used for inactive migrations.

Virtual asynchronous services interface (VASI)
The source and destination mover service partitions use this virtual device to communicate with the POWER Hypervisor to gain access to partition state. The VASI device is included on the Virtual I/O Server, but is only used when the server is declared as a mover service partition.

POWER Hypervisor
Active partition migration requires server hypervisor support to process both informational and action requests from the HMC and to transfer partition state through the VASI devices in the mover service partitions.

Virtual I/O Server
Only virtual adapters can be migrated with a partition. The physical resources that back the mobile partition's virtual adapters must be accessible by the Virtual I/O Servers on both the source and destination systems.

Partition profiles
The HMC copies all of the mobile partition's profiles without modification to the target system as part of the migration process. The HMC creates a new migration profile containing the partition's current state. Unless you specify a profile name when the migration is started, this profile replaces the existing profile that was last used to activate the partition. If you specify an existing profile name, the HMC replaces that profile with the new migration profile. Therefore, if you do not want the migration profile to replace any of the partition's existing profiles, you must specify a new, unique profile name when starting the migration. All profiles belonging to the mobile partition are deleted from the source server after the migration has completed. If the mobile partition's profile is part of a system profile on the source server, it is automatically removed after the source partition is deleted; it is not automatically added to a system profile on the target server.
Time reference
Time reference is an attribute of partitions, including Virtual I/O Server partitions. This attribute is only supported on managed systems that are capable of active partition migration. Synchronizing the time-of-day clocks of the source and destination Virtual I/O Server partitions is optional for both active and inactive partition migration; however, it is a recommended step for active partition migration. If you choose not to complete this step, the source and destination systems synchronize the clocks while the mobile partition is moving from the source system to the destination system. The time reference partition (TRP) setting enables the POWER Hypervisor to synchronize the mobile partition's time-of-day as it moves from one system to another. It uses Coordinated Universal Time (UTC) derived from a common Network Time Protocol (NTP) server, with NTP clients on the source and destination systems. More than one TRP can be specified per system; the POWER Hypervisor uses the longest-running time reference partition as the provider of authoritative system time. The setting can be changed through the POWER Hypervisor while the partition is running.
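The time reference attribute can also be inspected and set from the HMC command line. A minimal sketch, assuming a hypothetical managed system name and Virtual I/O Server partition name, and assuming the attribute is exposed as time_ref (as on recent HMC levels):

lssyscfg -r lpar -m SOURCE_SYS -F name,time_ref
chsyscfg -r lpar -m SOURCE_SYS -i "name=VIOS1,time_ref=1"

The first command lists the current setting for each partition; the second enables the named partition as a time reference partition.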
Barrier synchronization registers (BSR)
Barrier synchronization registers provide fast, lightweight barrier synchronization between CPUs. This facility is intended for use by application programs that are structured in a single instruction, multiple data (SIMD) manner. Such programs often proceed in phases in which all tasks synchronize processing at the end of each phase; the BSR is designed to accomplish this efficiently. Barrier synchronization registers cannot be migrated or reconfigured dynamically.
- A migratable, ready partition to be moved from the source system to the destination system. For an inactive migration, the partition must be powered down, but must be capable of booting on the destination system.
- For active migrations, a mover service partition on the source and destination systems.
- One or more storage area networks (SANs) that provide connectivity to all of the mobile partition's disks for the Virtual I/O Server partitions on both the source and destination servers. The mobile partition accesses all migratable disks through virtual Fibre Channel, virtual SCSI, or a combination of these devices. The LUNs used for virtual SCSI must be zoned and masked to the Virtual I/O Servers on both systems. Virtual Fibre Channel LUNs should be configured as described in Chapter 2 of PowerVM Virtualization on IBM System p: Managing and Monitoring, SG24-7590. Hardware-based iSCSI connectivity may be used in addition to a SAN. SCSI reservation must be disabled.
- The mobile partition's virtual disks, which must be mapped to LUNs and cannot be part of a storage pool or logical volume on the Virtual I/O Server.
- One or more physical IP networks (LANs) that provide the necessary network connectivity for the mobile partition through the Virtual I/O Server partitions on both the source and destination servers. The mobile partition accesses all migratable network interfaces through virtual Ethernet devices.
- An RMC connection to manage inter-system communication.

Before initiating the migration of a partition, the HMC verifies the capability and compatibility of the source and destination servers, and the characteristics of the mobile partition, to determine whether a migration is possible. The hardware, firmware, Virtual I/O Server, mover service partition, operating system, and HMC versions required for Live Partition Mobility, along with the system compatibility requirements, are described in Chapter 3, Requirements and preparation on page 45.
2.2.2 Readiness
Migration readiness is a dynamic partition property that changes over time.
Server readiness
A server that is running on battery power is not ready to receive a mobile partition; it cannot be selected as a destination for partition migration. A server that is running on battery power may be the source of a mobile partition; indeed, that it is running on battery power may be the impetus for starting the migration.
Infrastructure readiness
A migration operation requires a SAN and a LAN to be configured with their corresponding virtual SCSI, virtual Fibre Channel, VLAN, and virtual Ethernet devices. For active migrations, at least one Virtual I/O Server on both the source and destination systems must be configured as a mover service partition. The HMC must have RMC connections to the Virtual I/O Servers and a connection to the service processors on the source and destination servers. For an active migration, the HMC also needs RMC connections to the mobile partition and the mover service partitions.
2.2.3 Migratability
The term migratability refers to a partition's ability to be migrated and is distinct from partition readiness. A partition may be migratable but not ready. A partition that is not migratable may be made migratable with a configuration change. For active migration, consider whether a shutdown and reboot is required. When considering a migration, also verify the following prerequisites (a command-line spot check is sketched after this list).

General prerequisites:
- The memory and processor resources required to meet the mobile partition's current entitlements must be available on the destination server.
- The partition must not have any required dedicated physical adapters.
- The partition must not have any logical host Ethernet adapters.
- The partition is not a Virtual I/O Server.
- The partition is not designated as a redundant error path reporting partition.
- The partition does not have any of its virtual SCSI disks defined as logical volumes in any Virtual I/O Server. All virtual SCSI disks must be mapped to LUNs visible on a SAN or through iSCSI.
- The partition has virtual Fibre Channel disks configured as described in 5.11, Virtual Fibre Channel on page 187.
- The partition is not part of an LPAR workload group. A partition can be dynamically removed from a group.
- The partition has a unique name. A partition cannot be migrated if any partition with the same name exists on the destination server.

In an inactive migration only, the partition:
- Is in the Not Activated state
- May use huge pages
- May use the barrier synchronization registers
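Several of these prerequisites can be spot-checked from the HMC command line before attempting a move. A minimal sketch, with a hypothetical managed system name and a few illustrative fields:

lssyscfg -r lpar -m SOURCE_SYS -F name,state,lpar_env,work_group_id

A promising candidate reports lpar_env=aixlinux (not vioserver) and no workload group membership.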
In an active migration only, the two default server serial adapters, which are automatically created and assigned to a partition when the partition is created, are automatically recreated on the destination system by the migration process.
2.4.1 Introduction
The HMC is the central point of control, coordinating administrator actions and migration command sequences. Because the mobile partition is powered off, only the static partition state (definitions and configuration) is transferred from source to destination. The transfer is performed by the controlling HMC, the service processors, and the POWER Hypervisor on the two systems; there is no dynamic state, so mover service partitions are not required. The HMC creates a migration profile for the mobile partition on the destination server corresponding to its current configuration. All profiles associated with the mobile partition are moved to the destination server after the partition definition has been created on the destination server.

Note: Because the HMC always migrates the latest activated profile, an inactive partition that has never been activated is not migratable. To meet this requirement, booting to an operating system is unnecessary; booting to the SMS menu is sufficient. Any changes to the latest activated profile after power-off are not preserved; to save such changes, the mobile partition must be reactivated and shut down.
The inactive migration validation process performs the following operations:
- Checks the Virtual I/O Server and hypervisor migration capability and compatibility on the source and destination
- Checks that resources (processors, memory, and virtual slots) are available to create a shell partition on the destination system with the exact configuration of the mobile partition
- Verifies the RMC connections to the source and destination Virtual I/O Servers
- Ensures that the partition name is not already in use at the destination
- Checks for virtual MAC address uniqueness
- Checks that the partition is in the Not Activated state
- Ensures that the mobile partition is an AIX or Linux partition, is not an alternate path error logging partition, is not a service partition, and is not a member of a workload group
- Ensures that the mobile partition has an active profile
- Checks the number of current inactive migrations against the number of supported inactive migrations
- Checks that all required I/O devices are connected to the mobile partition through a Virtual I/O Server, that is, there are no required physical adapters
- Verifies that the virtual SCSI disks assigned to the partition are accessible to the Virtual I/O Servers on the destination system
- Creates the virtual adapter migration map that associates adapters on the source Virtual I/O Servers with adapters on the destination Virtual I/O Servers
- Ensures that no virtual SCSI disks are backed by logical volumes and that no virtual SCSI disks are attached to internal disks (not on the SAN)
Figure: the HMC drives an inactive migration by communicating with the POWER Hypervisor on the source system and on the destination system
Figure: inactive migration sequence over time - validation, new LPAR creation on the destination, virtual storage adapter setup, virtual storage removal and LPAR removal on the source, and notification of completion, across the source and destination systems, their Virtual I/O Servers, and the mobile partition
The HMC performs the following workflow steps:
1. Inhibits any changes to the source system and the mobile partition that might invalidate the migration.
2. Extracts the virtual device mappings from the source Virtual I/O Servers and uses them to generate a source-to-destination virtual adapter migration map. This map ensures no loss of multipath I/O capability for virtual SCSI, virtual Fibre Channel, and virtual Ethernet. The HMC fails the migration request if the device migration map is incomplete.
3. Creates a compatible partition shell on the destination system.
4. Creates a migration profile for the mobile partition's current (last-activated) profile. If the mobile partition was last activated with profile my_profile and resources were moved into or out of the partition before the partition was shut down, the migration profile differs from my_profile.
5. Copies over the partition profiles. Copying includes all existing profiles associated with the mobile partition on the source system and the migration profile. The existing partition profiles are not modified during the migration; the virtual devices are not re-mapped to the new system.
6. Creates the required adapters (virtual SCSI, virtual Fibre Channel, or both) in the Virtual I/O Servers on the destination system and completes the logical unit number (LUN) to virtual SCSI adapter mapping, as well as the NPIV-enabled adapter to virtual Fibre Channel adapter mapping.
7. On completion of the transfer of state, the HMC sets the migration state to completed and informs the POWER Hypervisor on both the source and destination.
2.5.2 Preparation
After you have created the Virtual I/O Servers and enabled the mover service partitions, you must prepare the source and destination systems for migration:
1. Synchronize the time-of-day clocks on the mover service partitions by using an external time reference, such as the Network Time Protocol (NTP). This step is optional; it increases the accuracy of time measurement during migration but is not required by the migration mechanisms. Even if this step is omitted, the migration process correctly adjusts the partition time: time never goes backward on the mobile partition during a migration.
2. Prepare the partition for migration:
a. Use dynamic reconfiguration on the HMC to remove all dedicated I/O, such as PCI slots, GX slots, virtual optical devices, and Integrated Virtual Ethernet, from the mobile partition.
b. Remove the partition from any partition workload group.
3. Prepare the destination Virtual I/O Server:
a. Configure the shared Ethernet adapter as necessary to bridge VLANs.
b. Configure the SAN so that the requisite storage devices are available.
4. Initiate the partition migration by selecting the following items, with either the graphical user interface (GUI) or the command-line interface (CLI) on the HMC:
- The partition to migrate
- The destination system
- Optionally, the mover service partitions on the source and destination systems. If there is only one active mover service partition on the source or the destination server, the mover service partition selection is automatic. If there are multiple active mover service partitions on one or both, you can either specify which ones to use, or let the HMC choose for you.
- Optionally, the virtual device mappings in the destination Virtual I/O Server.
See 5.7, The command-line interface on page 162 for details.
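From the HMC CLI, the equivalent of this selection is a single migrlpar invocation. The system and partition names below are hypothetical:

migrlpar -o m -m SOURCE_SYS -t DEST_SYS -p mobile_lpar

The -o m option requests the migration itself; the HMC performs validation first and, if only one active mover service partition exists on each side, selects the mover service partitions automatically.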
The workflow for the active migration validation is shown in Figure 2-5.
Figure 2-5 Active migration validation sequence across the source and destination systems, the source and destination Virtual I/O Servers, and the mobile partition
Configuration checks
The HMC performs the following configuration checks:
- Checks the source and destination systems, POWER Hypervisor, Virtual I/O Servers, and mover service partitions for active partition migration capability and compatibility
- Checks that the RMC connections to the mobile partition and to the source and destination Virtual I/O Servers, and the connection between the source and destination mover service partitions, are established
- Checks that there are no required physical adapters in the mobile partition and that there are no required virtual serial slots higher than slot 2
- Checks that no client virtual SCSI disks on the mobile partition are backed by logical volumes and that no disks map to internal disks
- Checks the mobile partition, its operating system, and its applications for active migration capability. An application registers its capability with AIX and may block migrations
- Checks that the logical memory block size is the same on the source and destination systems
- Checks that the mobile partition is an AIX or Linux partition and that it is neither an alternate error logging partition nor a mover service partition
- Checks that the mobile partition is not configured with barrier synchronization registers
- Checks that the mobile partition is not configured with huge pages
- Checks that the partition state is active or running
- Checks that the mobile partition is not in a partition workload group
- Checks the uniqueness of the mobile partition's virtual MAC addresses
- Checks that the mobile partition's name is not already in use on the destination server
- Checks the number of current active migrations against the number of supported active migrations
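The same checks can be run stand-alone from the HMC CLI before committing to a migration, using the validate operation of migrlpar; the names below are hypothetical:

migrlpar -o v -m SOURCE_SYS -t DEST_SYS -p mobile_lpar

The -o v option performs validation only, reporting errors and warnings without moving the partition.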
Figure: active migration sequence over time - validation, mover service partition (MSP) setup, new LPAR creation, memory copy, LPAR removal on the source, and notification of completion, across the source and destination systems, their Virtual I/O Servers, and the mobile partition
For active partition migration, the transfer of partition state follows this path:
1. From the mobile partition to the source system's hypervisor.
2. From the source system's hypervisor to the source mover service partition.
3. From the source mover service partition to the destination mover service partition.
4. From the destination mover service partition to the destination system's hypervisor.
5. From the destination system's hypervisor to the partition shell on the destination.
Figure: partition state transfer path (steps 1 through 5) through the VASI devices and the POWER Hypervisor on the source and destination systems
The migration process consists of the following steps:
1. The HMC creates a compatible partition shell on the destination system. This shell partition is used to reserve the resources required to receive the inbound mobile partition. The pending values of the mobile partition (changes made to the partition's profile since activation) are not preserved across the migration; the current values of the partition on the source system become both the pending and the current values of the partition on the destination system. The configuration of the partition on the source system includes:
- Processor configuration: dedicated or shared processors, processor counts, and entitlements (minimum, maximum, and desired)
- Memory configuration (minimum, maximum, and desired)
- Virtual adapter configuration
The creation of the partition shell on the destination system ensures that all required resources are available for the mobile partition and cannot be stolen during the migration. The current partition profile associated with the mobile partition is created on the destination system.
2. The HMC configures the mover service partitions on the source and destination systems. These two movers establish:
- A connection to their respective POWER Hypervisor through the VASI adapter
- A private, full-duplex communications channel between themselves, over a standard TCP/IP connection, for transporting the moving partition's state
3. The HMC issues a prepare-for-migration event to the migrating operating system (still on the source system), giving the mobile partition the opportunity
to get ready to be moved. The operating system passes this event to registered kernel extensions and applications so that they can take any necessary actions, such as reducing memory footprint, throttling workloads, or adjusting heartbeats and other timeout thresholds. The operating system inhibits access to the PMAPI registers and zeroes internal counters upon receipt of this event. If the partition is not ready to perform a migration at this time, it returns a failure indicator to the HMC, which cancels the migration and rolls back all changes.
4. The HMC creates the virtual target devices and the virtual Fibre Channel and virtual SCSI server adapters in each of the Virtual I/O Servers on the destination system that will host the virtual SCSI and virtual Fibre Channel client adapters of the mobile partition. This step uses the virtual adapter migration map created during the validation phase. Migration stops if an error occurs.
5. The mover on the source system starts sending the partition state to the mover on the destination system, copying the mobile partition's physical pages to the physical memory reserved by the partition shell on the destination.
6. Because the mobile partition is still active, with running applications, its state continues to change while the memory is being moved from one system to the other. Memory pages that are modified during the transfer of state are marked modified, or dirty. After the first pass, the source mover re-sends all the dirty pages. This process is repeated until the number of pages marked dirty at the end of each loop no longer decreases, is considered sufficiently small, or a timeout is reached. Based on the total number of pages associated with the partition state and the number of pages left to transmit, the mover service partition instructs the hypervisor on the source system to suspend the mobile partition.
7. The mobile partition confirms the suspension by quiescing all its running threads.
If the partition references a page that has not yet been migrated, the page is demand-paged from the source system. This technique significantly reduces the length of the pause during which the partition is unavailable.
10. The mobile partition recovers I/O, retrying all pending I/O requests that were not completed while on the source system. It also sends a gratuitous ARP request on all VLAN virtual adapters to update the ARP caches in the various switches and systems in the external network.
The partition is now active and visible again; this marks the end of the suspend window.
11. When the destination mover service partition receives the last dirty page from the source system, the migration is complete. The suspend window (from the end of step 7 through the end of step 10) lasts only a few seconds.
Both the possible and the suggested HMC-selected Virtual I/O Servers, if they exist, can be viewed on the HMC through the GUI and through the lslparmigr -r virtualio CLI command, as displayed in Example 2-1.
Example 2-1 Sample output of the lslparmigr -r virtualio command
$ lslparmigr -r virtualio -m 9117-MMA-SN100F6A0-L9 \
-t 9117-MMA-SN101F170-L10 --filter lpar_names=PROD
possible_virtual_scsi_mappings=30/VIOS1_L10/1,\
suggested_virtual_scsi_mappings=30/VIOS1_L10/1,\
possible_virtual_fc_mappings=none,\
suggested_virtual_fc_mappings=none

$ lslparmigr -r msp -m 9117-MMA-SN100F6A0-L9 -t 9117-MMA-SN101F170-L10 \
--filter lpar_names=PROD
source_msp_name=VIOS1_L9,source_msp_id=1,dest_msp_names=VIOS1_L10,\
dest_msp_ids=1,ipaddr_mappings=9.3.5.3//1/VIOS1_L10/9.3.5.111/

If either of the chosen mover service partitions determines that its VASI cannot handle a migration, or if the HMC receives a VASI device error from a mover service partition, the HMC stops the migration with an error.
If the source or destination server is powered down after the HMC has enabled suspension on the mobile partition, the HMC must stop the migration and roll back all reversible changes. When the hypervisor resumes operation, the partitions come back in the powered-off state, with a migration state of invalid.
Chapter 3. Requirements and preparation
3.1 Introduction
The requirements and preparation tasks described in this chapter must be fulfilled whether you perform an inactive or an active partition migration. As previously described:
- Inactive partition migration allows you to move a powered-off logical partition, including its operating system and applications, from one system to another.
- Active partition migration is the ability to move a running logical partition, including its operating system and applications, from one system to another without disrupting the operation of that logical partition.
When you have ensured that all requirements are satisfied and all preparation tasks are completed, the HMC verifies and validates the Live Partition Mobility environment. If this validation is successful, you can initiate the partition migration by using the wizard on the HMC graphical user interface (GUI) or through the HMC command-line interface (CLI).

Note: Information about preparation and requirements with the Integrated Virtualization Manager can be found in Chapter 7, Integrated Virtualization Manager for Live Partition Mobility on page 221.
v. Click Close.
vi. If the Enterprise Edition code is not activated, repeat the first three steps and then select Enter Activation Code to enable Live Partition Mobility, as shown in Figure 3-2.
Both source and destination systems must be at firmware level 01Ex320 or later, where x is S for BladeCenter, L for Entry servers (such as the Power 520, Power 550, and Power 560), M for Midrange servers (such as the Power 570), or H for Enterprise servers (such as the Power 595). To upgrade the firmware, see the firmware fixes Web site:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/ipha5/fix_serv_firm_kick.htm
The current firmware level can be checked by completing the following steps on the HMC:
i. In the navigation area, open Systems Management and select the system.
ii. Select Updates in the task list.
iii. Select View system information; a new window called Specify LIC Repository appears.
iv. Select None - Display current values in this window and click OK. The current firmware level then appears in the View system information window, as shown in Figure 3-3.
If the version is not at the required level for Live Partition Mobility, you have to perform an update through the HMC by selecting Upgrade Licensed Internal Code to a new release from Updates in the task list.
Note: You can also check the firmware level by executing the lslic command on the HMC. Although there is a minimum required firmware level, each system may run a different level of firmware; the source system firmware level must be compatible with the destination firmware level.
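A sketch of such a check from the HMC command line, with a hypothetical managed system name and illustrative field names:

lslic -m SOURCE_SYS -t sys -F activated_level,ecnumber

This reports the activated Licensed Internal Code level and EC number of the managed system, which can then be compared against the matrix in Table 3-1.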
Table 3-1 gives an overview of the supported mixed firmware levels. For a current list of firmware level compatibilities, and for how to migrate, see Live Partition Mobility Support for Power Systems:
http://www14.software.ibm.com/webapp/set2/sas/f/pm/migrate.html
Or check the IBM Prerequisite Web site for POWER6 and POWER7 compatibility:
https://www-912.ibm.com/e_dir/eserverprereq.nsf
Note: On the IBM Prerequisite Web site:
- Choose the Software tab.
- In the OS/Firmware dropdown, select Live Partition Mobility between POWER6 and POWER7.
- In the Product dropdown, select Live Partition Mobility.
- For the Function, select ALL Functions.
Table 3-1 Supported migration matrix

From \ To             EM320_031  EM320_040  EM320_046  EM320_061   EM330_028   EM340_039
                                                       or higher   or higher   or higher
EM320_031             Blocked    Blocked    Blocked    Blocked     Blocked     Blocked
EM320_040             Blocked    Blocked    Blocked    Blocked     Blocked     Blocked
EM320_046             Blocked    Blocked    Supported  Supported   Supported   Supported
EM320_061 or higher   Blocked    Blocked    Supported  Supported   Supported   Supported
EM330_028 or higher   Blocked    Blocked    Supported  Supported   Supported   Supported
EM340_039 or higher   Blocked    Blocked    Supported  Supported   Supported   Supported
Source and destination Virtual I/O Server requirements
At least one Virtual I/O Server at release level 1.5.1.1 or higher must be installed on both the source and destination systems. A partition attribute, called the mover service partition, has been defined; it enables you to indicate whether a mover-capable Virtual I/O Server partition should be considered during the selection of the MSP for a migration. By default, all Virtual I/O Server partitions have this attribute set to FALSE.
In addition to having the mover service partition attribute set to TRUE, the source and destination mover service partitions must be able to communicate with each other over the network. On both the source and destination servers, the Virtual Asynchronous Services Interface (VASI) device provides communication between the mover service partition and the POWER Hypervisor. To determine the current release of the Virtual I/O Server and to see whether an upgrade is necessary, use the ioslevel command. More technical information about the Virtual I/O Server, and the latest downloads, are available on the Virtual I/O Server Web site:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/download/home.html
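A minimal check on each Virtual I/O Server (the level shown is illustrative):

$ ioslevel
1.5.2.1-FP-11.1

Any reported level of 1.5.1.1 or higher satisfies the requirement stated above.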
Operating system requirements
The operating system running in the mobile partition must be AIX or Linux. A Virtual I/O Server logical partition or a logical partition running the IBM i operating system cannot be migrated. The operating system must be at one of the following levels:
- AIX 5L Version 5.3 Technology Level 7 or later (the required level is 5300-07-01)
- AIX Version 6.1 or later (the required level is 6100-00-01)
- Red Hat Enterprise Linux Version 5 (RHEL5) Update 1 or later (with the required kernel security update)
- SUSE Linux Enterprise Server 10 (SLES 10) Service Pack 1 or later (with the required kernel security update)
To download the Linux kernel security updates:
http://www14.software.ibm.com/webapp/set2/sas/f/pm/component.html
Previous versions of AIX and Linux can participate in inactive partition migration if the operating systems support virtual devices and IBM Power Systems POWER6- and POWER7-based servers.
Note: Ensure that the target hardware supports the operating system you are migrating.
Storage requirements
For a list of supported disks and optical devices, see the data sheet for the Virtual I/O Server:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html
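To confirm that an AIX mobile partition meets the operating system levels listed above, the oslevel command can be used; the output shown is illustrative:

# oslevel -s
5300-07-01-0748

The first three fields identify the version, technology level, and service pack.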
Network requirements
The migrating partition uses a virtual LAN (VLAN) for network access. Each VLAN (if there is more than one, then every one of them) must be bridged to a physical network by using a shared Ethernet adapter in the Virtual I/O Server partition. Your LAN must be configured so that the migrating partition can continue to communicate with the necessary clients and servers after the migration is completed.
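On the Virtual I/O Server, the bridge is a shared Ethernet adapter created with the mkvdev command; the adapter names below are hypothetical:

$ mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1

Here ent0 is the physical adapter, ent1 is the virtual trunk adapter being bridged, and 1 is the default PVID.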
Table note 1: For inactive migration, you perform fewer preparatory tasks on the Virtual I/O Server because:
- You do not have to enable the mover service partition on either the source or destination Virtual I/O Server.
- You do not have to synchronize the time-of-day clocks.
Table note 2: For inactive migration, you perform fewer preparatory tasks on the mobile partition:
- RMC connections are not required.
- The mobile partition can have dedicated I/O. These dedicated I/O devices are removed automatically from the partition before the migration occurs.
- Barrier synchronization registers can be used in the mobile partition.
- The mobile partition can use huge pages.
- The applications do not have to be migration-aware or migration-safe.
Certain settings can be changed dynamically (partition workload groups, mover service partitions, and time reference), but others have to be changed statically (barrier synchronization registers and redundant error path reporting).
3.5.1 HMC
Ensure that the source and destination systems are managed by the same HMC (or a redundant HMC pair).
Note: HMC Version 7 Release 3.4 introduces an additional migration scenario, in which the source server is managed by one HMC and the destination server by a different HMC. Additional requirements include:
- Both HMCs must be connected to the same network so that they can communicate with each other.
- Secure Shell must be set up correctly between the source and the destination HMC with the mkauthkeys command.
For more information about this HMC migration scenario, see 5.4, Remote Live Partition Mobility on page 130.
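A minimal sketch of the key exchange, run on one HMC and assuming a hypothetical remote HMC host name of hmc2:

mkauthkeys -u hscroot --ip hmc2

The command prompts for the remote user's password and sets up the SSH authentication keys between the two HMCs.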
Figure 3-4 shows how the size of the logical memory block can be modified in the Performance Setup menu of the Advanced System Management Interface (ASMI). The ASMI can be launched through the Operations section in the task list on the HMC.
2. Determine the memory available on the destination system:
a. In the contents area, select the destination system and select Properties in the task list.
b. Select the Memory tab.
c. Record the Available memory and the Current memory available for partition usage.
d. Click OK.
Figure 3-6 shows the result of these actions.
3. Compare the values from the previous steps:
- If the destination system has enough available memory to support the mobile partition, skip the rest of this procedure and continue with the other preparation tasks.
- If the destination system does not have enough available memory to support the mobile partition, you must dynamically free up memory (or use the Capacity on Demand (CoD) feature to activate additional memory, where available) on the destination system before the actual migration can take place.
Figure 3-7 shows the result of these actions. Note: In recent HMC levels, p6 appears as POWER6 (see Figure 3-7).
Figure 3-7 Checking the number of processing units of the mobile partition
2. Determine the processors available on the destination system:
a. In the contents area, select the destination system and select Properties in the task list.
b. Select the Processors tab.
c. Record the available processors for partition usage.
d. Click OK.
3. Compare the values from the previous steps:
- If the destination system has enough available processors to support the mobile partition, skip the rest of this procedure and continue with the remaining preparation tasks for Live Partition Mobility.
- If the destination system does not have enough available processors to support the mobile partition, you must dynamically free up processors (or use the CoD feature, where available) on the destination system before the actual migration can take place.
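Both the memory and the processor checks can also be performed from the HMC command line with lshwres; the managed system name is hypothetical:

lshwres -r mem -m DEST_SYS --level sys -F curr_avail_sys_mem
lshwres -r proc -m DEST_SYS --level sys -F curr_avail_sys_proc_units

The first command reports the currently available system memory (in MB), and the second the currently available processing units on the destination system.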
Note: Live Partition Mobility requires HMC Version 7 Release 3.2 or later. In this publication, we used the latest Version 7 Release 3.4 of the HMC software (see Figure 3-9 on page 61). You can also verify the current HMC version, release, and service pack level with the lshmc command. When using Live Partition Mobility with an HMC managing at least one POWER7-based server, HMC V7R710 or later is required.
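A quick check from the HMC command line, with illustrative output:

lshmc -V
version= Version: 7
 Release: 3.4.0
 Service Pack: 0

Compare the reported release against the minimum levels given in the note above.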
If the HMC is not at the correct version and release, an upgrade is required. Select Updates (1) and then click Update HMC (2), as shown in Figure 3-10. Also see Figure 3-11 on page 63.
For more information about upgrading the Hardware Management Console, see: http://www14.software.ibm.com/webapp/set2/sas/f/hmc/home.html
After you click OK, the window shown in Figure 3-11 opens.
If the source and destination Virtual I/O Servers do not meet the requirements, perform an upgrade.
Note: After the Virtual I/O Server infrastructure is configured, a backup of the Virtual I/O Servers is recommended; this approach produces an established checkpoint prior to migration.
RMC can be configured to monitor resources and perform an action in response to a defined condition. The flexibility of RMC enables you to configure response actions or scripts that manage general system conditions with little or no involvement from the system administrator. To establish an RMC connection for the mobile partition, you must be a super administrator (a user with the HMC hmcsuperadmin role, such as hscroot) on the HMC and complete the following steps:
1. Sign on to the operating system of the mobile partition with root authority.
2. From the command line, enter the following command to check whether the RMC connection is established:
lsrsrc IBM.ManagementServer
This command is shown in Example 3-2.
Example 3-2 Checking the IBM.ManagementServer resource

# lsrsrc IBM.ManagementServer
Resource Persistent Attributes for IBM.ManagementServer
resource 1:
        Name             = "9.3.5.180"
        Hostname         = "9.3.5.180"
        ManagerType      = "HMC"
        LocalHostname    = "9.3.5.115"
        ClusterTM        = "9078-160"
        ClusterSNum      = ""
        ActivePeerDomain = ""
        NodeNameList     = {"mobile"}
resource 2:
        Name             = "9.3.5.128"
        Hostname         = "9.3.5.128"
        ManagerType      = "HMC"
        LocalHostname    = "9.3.5.115"
        ClusterTM        = "9078-160"
        ClusterSNum      = ""
        ActivePeerDomain = ""
        NodeNameList     = {"mobile"}
#
If the command output includes ManagerType = "HMC", the RMC connection is established; you can skip step 3 on page 68 and continue with the additional preparation tasks in 3.8.3, Disable redundant error path reporting on page 68. If you received a message indicating that there is no IBM.ManagementServer resource, or that ManagerType does not equal HMC, continue with the next step.
3. Establish the RMC connection specifically for your operating system:
- For AIX, see Configuring Resource Monitoring and Control (RMC) for the Partition Load Manager, found at:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/iphbk/iphbkrmc_configuration.htm
- For Linux, install the RSCT utilities. Download these tools from the Service and productivity tools Web site (and select the appropriate HMC- or IVM-managed servers link):
http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html
- Red Hat Enterprise Linux: install the additional software (RSCT utilities) for Red Hat Enterprise Linux on HMC-managed servers.
- SUSE Linux Enterprise Server: install the additional software (RSCT utilities) for SUSE Linux Enterprise Server on HMC-managed servers.
4. Select the active logical partition profile and select Edit from the Actions menu.
5. Select the Virtual Adapters tab.
6. If more than two virtual serial adapters are listed, ensure that the adapters in slots 2 and higher are not selected as Required.
7. Click OK.
Figure 3-15 shows the result of these steps.
Figure 3-15 Verifying the number of serial adapters on the mobile partition
3. Select the mobile partition and select Properties.
4. Click the Other tab.
5. In the Workload group field, select (None).
6. In the contents area, open the mobile partition and select Configuration → Manage Profiles.
7. Select the active logical partition profile and select Edit from the Actions menu.
8. Click the Settings tab.
9. In the Workload Management area, select (None) and click OK.
10. Repeat the last three steps for all partition profiles associated with the mobile partition.
Figure 3-16 and Figure 3-17 on page 72 show the tabs for disabling the partition workload group (both in the partition and in the partition profiles).
5. Click the Hardware tab.
6. Click the Memory tab. If the number of BSR arrays equals zero, the mobile partition can participate in inactive or active migration, as shown in Figure 3-18. You can now continue with the additional preparatory tasks for the mobile partition.
Figure 3-18 Checking the number of BSR arrays on the mobile partition
If the number of BSR arrays is not equal to zero, take one of the following actions:
- Perform an inactive migration instead of an active migration. Skip the remaining steps and see 2.4, Inactive partition migration on page 27.
- Click OK and continue to the next step to prepare the mobile partition for an active migration.
7. In the contents area, open the mobile partition and select Configuration → Manage Profiles.
8. Select the active logical partition profile and select Edit from the Actions menu.
9. Click the Memory tab.
10. Enter 0 in the BSR arrays for this profile field and click OK. This is shown in Figure 3-19 on page 74.
11. Because BSR settings cannot be modified dynamically, you have to shut down the mobile partition and then power it on by using the profile with the BSR modifications.
To configure huge pages for the mobile partition using the HMC, you must be a super administrator and complete the following steps:
1. Open the source system and select Properties.
2. Click the Advanced tab. If the current huge page memory equals zero (0), as shown in Figure 3-20, skip the remaining steps of this procedure and continue with the additional preparatory tasks for the mobile partition in 3.8.8, Physical or dedicated I/O on page 76.
If the current huge page memory is not equal to 0, take one of the following actions:
- Perform an inactive migration instead of an active migration. Skip the remaining steps and see 2.4, Inactive partition migration on page 27.
- Click OK and continue with the next step to prepare the mobile partition for an active migration.
3. In the contents area, open the mobile partition and select Configuration → Manage Profiles.
4. Select the active logical partition profile and select Edit from the Actions menu.
5. Click the Memory tab.
6. Enter 0 in the field for desired huge page memory, and click OK. This is shown in Figure 3-21.
7. Because huge page settings cannot be changed dynamically, you have to shut down the mobile partition and then power it on by using the profile with the modifications.
Before performing an active migration, the required or physical I/O configuration must be verified. Physical I/O marked as desired can be removed dynamically with a dynamic LPAR operation. To remove required I/O from the mobile partition using the HMC, you must be a super administrator and complete the following steps:
1. In the navigation area, expand Systems Management.
2. Select Servers and select the source system.
3. In the contents area, open the mobile partition and select Configuration → Manage Profiles.
4. Select the active logical partition profile and select Edit from the Actions menu.
5. Click the I/O tab (see Figure 3-22). If Required is not selected for any resource, skip the remainder of this procedure and continue with the additional preparatory tasks for the mobile partition in 3.8.9, Name of logical partition profile on page 78.
Figure 3-22 Checking if there are required resources in the mobile partition
If Required is selected for any resource, take one of the following actions:
- Perform an inactive migration instead of an active migration. Skip the remaining steps and see 2.4, Inactive partition migration on page 27.
- Continue with the next step to prepare the mobile partition for an active migration.
6. For each resource that is selected as Required, deselect Required and click OK.
7. Shut down the mobile partition, then power it on by using the profile with the required I/O resource modifications.
Note: You must also verify that no logical Host Ethernet Adapter (LHEA) devices are configured, because these are also considered physical I/O. Neither inactive nor active migration is possible if any LHEAs are configured. Figure 3-23 shows how to verify whether an LHEA is configured for the mobile partition: select an IVE physical port that defines an LHEA, and then check whether there are logical port IDs. If no logical port ID appears in this column, no logical Host Ethernet Adapter is configured for this partition. More information about Integrated Virtual Ethernet adapters can be found in Integrated Virtual Ethernet Adapter Technical Overview and Introduction, REDP-4340.
When the migration starts, the HMC creates a new migration profile that, unless you specify otherwise, replaces the existing profile that was last used to activate the partition. If you specify an existing profile name, the HMC replaces that profile with the new migration profile. If you do not want the migration profile to replace any of the partition's existing profiles, you must specify a unique profile name. The new profile contains the partition's current configuration and any changes that are made during the migration.
To list the attributes of hdiskX, type the following command:
lsdev -dev hdiskX -attr
If reserve_policy is not set to no_reserve, use the following command:
chdev -dev hdiskX -attr reserve_policy=no_reserve
3. Verify that the physical volume has a unique identifier, a physical identifier, or an IEEE volume attribute. One of these identifiers is required in order to export a physical volume as a virtual device. To list the disks that have a unique identifier (UDID):
i. Type the oem_setup_env command on the Virtual I/O Server CLI.
ii. Type the odmget -qattribute=unique_id CuAt command to list the disks that have a UDID. See Example 3-3.
Example 3-3 Output of odmget command
CuAt:
        name = "hdisk6"
        attribute = "unique_id"
        value = "3E213600A0B8000291B080000520E023C6B8D0F1815      FAStT03IBMfcp"
        type = "R"
        generic = "D"
        rep = "nl"
        nls_index = 79
CuAt:
        name = "hdisk7"
        attribute = "unique_id"
        value = "3E213600A0B8000114632000073244919ADCA0F1815      FAStT03IBMfcp"
        type = "R"
        generic = "D"
        rep = "nl"
        nls_index = 79
To list disks with a physical identifier (PVID):
i. Type the lspv command to list the devices with a PVID. See Example 3-4. If the second column has a value of none, the physical volume does not have a PVID. A recommendation is to put a PVID on the physical volume before it is exported as a virtual device.
Example 3-4 Output of the lspv command

$ lspv
NAME        PVID                 VG       STATUS
hdisk0      00c1f170d7a97dec     rootvg   active
hdisk6      00c0f6a0915fc126     None
hdisk7      00c0f6a08de5008b     None
ii. Type the chdev command to put a PVID on the physical volume, in the following format:
chdev -dev physicalvolumename -attr pv=yes -perm
To list disks with an IEEE volume attribute identifier, issue the following command (in the oem_setup_env shell):
lsattr -El hdiskX
4. Verify that the mobile partition has access to a source Virtual I/O Server virtual SCSI adapter. You have to verify the configuration of the virtual SCSI adapters on the mobile partition and on the source Virtual I/O Server logical partition to ensure that the mobile partition has access to storage. You must be a super administrator (such as hscroot) to complete the following steps:
a. Verify the virtual SCSI adapter configuration of the mobile partition:
i. In the navigation area, open Systems Management.
ii. Click Servers.
iii. In the contents area, open the source system.
iv. Select the mobile partition and click Properties.
v. Click the Virtual Adapters tab.
vi. Record the Slot ID and Remote Slot ID for each virtual SCSI adapter.
vii. Click OK.
b. Verify the virtual SCSI adapter configuration of the source Virtual I/O Server:
i. In the navigation area, open Systems Management.
ii. Click Servers.
iii. In the contents area, open the source system.
iv. Select the Virtual I/O Server logical partition and click Properties.
v. Click the Virtual Adapters tab.
vi. Verify that the Slot ID corresponds to the Remote Slot ID that you recorded (in step vi on page 81) for the virtual SCSI adapter on the mobile partition.
vii. Verify that the Remote Slot ID is either blank or corresponds to the Slot ID that you recorded (in step vi on page 81) for the virtual SCSI adapter on the mobile partition.
viii. Click OK.
c. If the values are incorrect, plan the slot assignments and connection specifications for the virtual SCSI adapters by using a worksheet similar to the one in Table 3-3.
Table 3-3 Virtual SCSI adapter worksheet

Virtual SCSI adapter                                          Slot number   Connection specification
Source Virtual I/O Server virtual SCSI adapter
Destination Virtual I/O Server virtual SCSI adapter
Mobile partition virtual SCSI adapter on source system
Mobile partition virtual SCSI adapter on destination system
When all virtual SCSI adapters on the source Virtual I/O Server logical partition allow access to virtual SCSI adapters of every logical partition (not only the mobile partition), you have two options:
- Create a new virtual SCSI server adapter on the source Virtual I/O Server and allow only the virtual SCSI client adapter on the mobile partition to access it.
- Change the connection specification of a virtual SCSI server adapter on the source Virtual I/O Server so that it allows access only to the virtual SCSI adapter on the mobile partition. This means that the virtual SCSI adapter of the client logical partition that currently has access to the virtual SCSI adapter on the source Virtual I/O Server will no longer have access to it.
5. Verify that the destination Virtual I/O Server has sufficient free virtual slots to create the virtual SCSI adapters required to host the mobile partition after it moves to the destination system. To verify the virtual SCSI configuration using the HMC, you must be a super administrator (such as hscroot) and complete the following steps:
a. In the navigation area, open Systems Management.
b. Select Servers.
c. In the contents area, open the destination system.
d. Select the destination Virtual I/O Server logical partition and click Properties.
e. Select the Virtual Adapters tab and compare the number of virtual adapters to the maximum number of virtual adapters, as shown in Figure 3-26.
If the maximum number of virtual adapters is higher than or equal to the number of virtual adapters plus the number of virtual SCSI adapters required to host the migrating partition, you can continue with the additional preparatory tasks at step 6 on page 86. If the maximum virtual adapter value does not allow the addition of the required virtual SCSI adapters for the mobile partition, you have to modify its partition profile by completing the following steps:
i. In the navigation area, open Systems Management.
ii. Select Servers.
iii. In the contents area, open the destination system.
iv. Select the destination Virtual I/O Server logical partition.
v. In the task area, click Configuration → Manage Profiles.
vi. Select the active logical partition profile and select Edit from the Actions menu.
vii. Click the Virtual Adapters tab and increase the number of maximum virtual adapters. You must shut down and restart the logical partition for the change to take effect.
6. Verify that the mobile partition has access to the same physical storage on the storage area network from both the source and destination environments. This requirement must be fulfilled for Live Partition Mobility to be successful. In the source environment, check that the following connections exist:
- A virtual SCSI client adapter on the mobile partition must have access to a virtual SCSI server adapter on the source Virtual I/O Server logical partition.
- That virtual SCSI server adapter must have access to a remote storage adapter on the source Virtual I/O Server logical partition.
- That remote storage adapter must be connected to a storage area network and have access to the physical storage in the network.
In the destination environment, check that a remote storage adapter on the destination Virtual I/O Server logical partition has access to the same physical storage as the source Virtual I/O Server logical partition. To verify the virtual adapter connections by using the HMC, you must be a super administrator and complete the following steps:
a. Select Systems Management.
b. Select Servers.
c. In the contents area, open the source system.
d. Select the mobile partition, then select Hardware Information → Virtual I/O Adapters → SCSI.
e. Verify all the information and click OK. The result is shown in Figure 3-27.
- If the information is correct, go to step f on page 87.
- If the information is incorrect, return to the beginning of this section and complete the task associated with the incorrect information.
f. In the contents area, open the destination system.
g. Select the destination Virtual I/O Server logical partition.
h. Select Hardware Information → Virtual I/O Adapters → SCSI.
i. Verify the information and click OK.
7. Verify that the mobile partition does not have physical or required I/O adapters and devices. This is only an issue for active partition migration. If you want to perform an active migration, you must remove the physical or required I/O from the mobile partition, as explained in 3.8.8, Physical or dedicated I/O on page 76.
8. All changes to the mobile partition's profile must be activated before starting the migration so that the new values take effect:
a. If the partition is not activated, it must be powered on. It is sufficient to activate the partition to the SMS menu.
b. If the partition is active, shut it down and power it on again by using the changed logical partition profile.
Perform the following steps on the source and destination Virtual I/O Servers:
1. Ensure that the source and destination Virtual I/O Servers and the shared Ethernet adapter are connected to the network.
2. Configure virtual Ethernet adapters for the source and destination Virtual I/O Server partitions. If virtual switches are used, make sure that the virtual Ethernet adapters on the source Virtual I/O Server are configured on a virtual switch that has the same name as the virtual switch used on the destination Virtual I/O Server.
3. Ensure that the mobile partition has a virtual Ethernet adapter created by using the HMC GUI.
4. Activate the mobile partition to establish communication between its virtual Ethernet adapter and the Virtual I/O Server's virtual Ethernet adapter.
5. Verify that the operating system on the mobile partition sees the new Ethernet adapter by using the following command:
lsdev -Cc adapter
6. Check that the client partition can access the external network. To do so, configure the TCP/IP connection for the virtual adapter from the client partition's operating system (AIX or Linux) by using the following command:
mktcpip -h hostname -a IPaddress -i interface -g gateway
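For example, a hypothetical AIX mobile partition named mobile could be configured as follows (all addresses are illustrative):

mktcpip -h mobile -a 9.3.5.115 -i en0 -g 9.3.5.1 -m 255.255.255.0

A subsequent ping to a host on the external network confirms that the virtual Ethernet adapter is bridged correctly.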
Chapter 4.
Figure: basic Live Partition Mobility configuration - the mobile partition's virtual SCSI (vscsi0) and virtual Ethernet (ent0) adapters map through the POWER Hypervisor to a virtual target device (vtscsi0) and a shared Ethernet adapter (ent2 SEA over the ent1 virtual adapter and the en2 interface) in the Virtual I/O Server on each system, backed by physical Fibre Channel (fcs0) and Ethernet (ent0) adapters, with shared SAN disks (hdiskX) and the HMC attached to the Ethernet network
For a migration of running partitions (active migration), both the source and destination Virtual I/O Server partitions must be able to communicate with each other to transfer the mobile partition's state. We suggest you use a dedicated network that has 1 Gbps bandwidth, or more.
Shared disks
One or more shared disks must be connected to the source and destination Virtual I/O Server partitions. At least one physical volume that is mapped by the Virtual I/O Server to a LUN on external SAN storage must be attached to the mobile partition. The reserve_policy attribute of all the physical volumes belonging to the mobile partition must be set to no_reserve on the source and destination Virtual I/O Server partitions. When using virtual SCSI disks, you change this attribute by using the chdev command on the Virtual I/O Server partition:
$ chdev -dev hdiskX -attr reserve_policy=no_reserve
Power supply
The destination system must be running on a regular power source. If the destination system is running on battery power, return the system to its regular power source before migrating a partition.
Important: Configuring the VASI device is not required. The VASI device is automatically created and configured when the Virtual I/O Server is installed.
2. In the Tasks pane, expand Configuration → Create Logical Partition, and select VIO Server, as shown in Figure 4-2, to start the Create LPAR Wizard.
3. Enter the partition name, change the ID if you want to, and check the Mover service partition box on the Create LPAR Wizard window. See Figure 4-3.
4. The mover service partition will be activated with the partition. Proceed with the remaining steps of the Virtual I/O Server partition creation.
You can also set the mover service partition attribute dynamically for an existing Virtual I/O Server partition while the partition is in the Running state:
1. In the navigation pane, expand Systems Management → Servers, and select the desired system.
2. In the Contents pane (the top right of the Hardware Management Console Workplace), select the Virtual I/O Server for which you want to enable the mover service partition attribute.
3. Click the view popup menu button and select Configuration → Properties, as shown in Figure 4-4.
4. Check the Mover service partition box on the General tab in the Partition Properties window, and click OK. See Figure 4-5.
3. Click the view popup menu button and select Operations → Mobility → Validate, as shown in Figure 4-7, to start the validation process.
4. Select the destination system, specify Destination profile name and Wait time, and then click the Validate button (Figure 4-8).
If you are proceeding with this step when the mobile partition is in the Not Activated state, the destination and source mover service partition and wait time entries do not appear, because they are not required for an inactive partition migration.
Note: Figure 4-8 on page 101 shows the option of entering a remote HMC's information. This step applies only to a remote migration between systems managed by different HMCs. Our example shows migration of a partition between systems managed by a single HMC. See 5.4, Remote Live Partition Mobility on page 130 for more details on remote migration.
5. Check for errors or warnings in the Partition Validation Errors/Warnings window, and eliminate any errors. If any errors occur, check the messages in the window and the prerequisites for the migration. You cannot perform the migration steps with any errors.
For example, if you proceed with the validation steps on a mobile partition that has physical adapters and is in the Running state (active migration), you get the error shown in Figure 4-9.
If the mobile partition is in the Not Activated state, a warning message is reported, as shown in Figure 4-10.
6. After you close the Partition Validation Errors/Warnings window, the validation window (Figure 4-11) opens again. If there were no errors in the previous step, you may perform the migration at this point by clicking the Migrate button.
2. In the contents pane, select the partition to migrate to the destination system, that is, the mobile partition.
3. Click the view popup menu button and select Operations → Mobility → Migrate, as shown in Figure 4-13, to start the Partition Migration wizard.
4. Check the Migration Information of the mobile partition in the Partition Migration wizard.
If the mobile partition is powered off, Migration Type is Inactive. If the partition is in the Running state, it is Active, as shown in Figure 4-14.
5. You can specify the New destination profile name in the Profile Name panel, as shown in Figure 4-15 on page 107.
If you leave the name blank or do not specify a unique profile name, the profile on the destination system will be overwritten.
6. Optionally enter the Remote HMC network address and Remote User. In our example, we use a single HMC. See Figure 4-16. Click Next.
Figure 4-16 Optionally specifying the Remote HMC of the destination system
7. Select the destination system and click Next. See Figure 4-17.
8. Check for errors or warnings in the Partition Validation Errors/Warnings panel (Figure 4-18), and eliminate any errors. If errors exist, you cannot proceed to the next step. If only warnings exist, you may proceed.
9. If you are performing an inactive migration, skip this step and go to step 10 on page 112. If you are performing an active migration, select the source and the destination mover service partitions to be used for the migration. See Figure 4-19.
In this basic scenario, one Virtual I/O Server partition is configured on the destination system, so the wizard window shows only one mover service partition candidate. If you have more than one Virtual I/O Server partition on the source or on the destination system, you can select which mover server partitions to use.
10.Select the VLAN configuration.
11.Select the virtual storage adapter assignment. See Figure 4-21. In this case, one Virtual I/O Server partition is configured on each system, so this wizard window shows one candidate only. If you have more than one Virtual I/O Server partition on the destination system, you may choose which Virtual I/O Server to use as the destination.
12.Select the shared processor pool from the list of shared processor pools matching the source partition's shared processor pool configuration. See Figure 4-22.
Note: If there is only one shared processor pool, this option might not appear. See 5.5, Multiple shared processor pools on page 147 for more information about shared processor pools and Live Partition Mobility.
13.Specify the wait time in minutes (Figure 4-23 on page 115). The wait time value is passed to the commands that are invoked on the HMC and that perform migration-related operations on the relevant partitions by using Remote Monitoring and Control (RMC). For example, the drmgr command, which installs and configures dynamic logical partitioning (dynamic LPAR) scripts, has the following syntax:
drmgr {-i script_name [-w minutes] [-f] | -u script_name} [-D hostname]
The wait time value is used as the argument of the -w option. If you specify 5 minutes as the wait time, as shown in Figure 4-23, the drmgr command is executed with -w 5.
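As an illustration, a dynamic LPAR script installed with a 5-minute timeout would be registered as follows; the script path is hypothetical, and the syntax follows the drmgr usage shown above:

# drmgr -i /tmp/migrate.sh -w 5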
14.Check the settings that you have specified for this migration on the Summary panel, and then click Finish to begin the migration. See Figure 4-24.
15.The migration status and progress are shown in the Partition Migration Status panel, as shown in Figure 4-25.
16.When the Partition Migration Status window indicates that the migration is 100% complete, verify that the mobile partition is in the Running state on the destination system, as shown in Figure 4-26.
17.If you keep a record of the virtual I/O configuration of the partitions, check and record the migrating partition's configuration on the destination system. Although the migrating partition retains the same slot numbers as on the source system, the server virtual adapter slot numbers can be different between the source and destination Virtual I/O Servers. Also, the virtual target device name might change during migration.
Chapter 5. Advanced topics
This chapter discusses various advanced topics relating to Live Partition Mobility. The chapter assumes you are familiar with the information in the preceding chapters. This chapter contains the following topics:
- 5.1, Dual Virtual I/O Servers on page 120
- 5.2, Multiple concurrent migrations on page 128
- 5.3, Dual HMC considerations on page 130
- 5.4, Remote Live Partition Mobility on page 130
- 5.5, Multiple shared processor pools on page 147
- 5.6, Migrating a partition with physical resources on page 149
- 5.7, The command-line interface on page 162
- 5.8, Migration awareness on page 177
- 5.9, Making applications migration-aware on page 178
- 5.10, Making kernel extension migration aware on page 185
- 5.11, Virtual Fibre Channel on page 187
- 5.12, Processor compatibility modes on page 205
Tip: The best practice is to always perform a validation before performing a migration. The validation checks the configuration of the involved Virtual I/O Servers and shows you the configuration that will be applied. Use the validation menu on the GUI or the lslparmigr command described in 5.7.2, The lslparmigr command on page 166. In this section, we describe three different migration scenarios where the source and destination systems provide disk access either with one or two Virtual I/O Servers using virtual SCSI adapters. More information about virtual Fibre Channel adapters can be found in 5.11, Virtual Fibre Channel on page 187.
If the destination system has two Virtual I/O Servers, one of them should be configured to access the disk space provided by the first storage subsystem; the other must access the second subsystem, as shown in Figure 5-1.
Figure 5-1 Dual VIOS and client mirroring to dual VIOS before migration
The migration process automatically detects which Virtual I/O Server has access to which storage and configures the virtual devices to keep the same disk access topology.
When migration is complete, the logical partition has the same disk configuration that it had on the previous system, still using two Virtual I/O Servers, as shown in Figure 5-2.
Figure 5-2 Dual VIOS and client mirroring to dual VIOS after migration
If the destination system has only one Virtual I/O Server, the migration is still possible and the same virtual SCSI setup is preserved at the client side. The destination Virtual I/O Server must have access to all disk spaces and the process creates two virtual SCSI adapters on the same Virtual I/O Server, as shown in Figure 5-3.
Figure 5-3 Dual VIOS and client mirroring to single VIOS after migration
The migration is possible only if the destination system is configured with two Virtual I/O Servers that can provide the same multipath setup. They both must have access to the shared disk data, as shown in Figure 5-4.
Figure 5-4 Dual VIOS and client multipath I/O to dual VIOS before migration
When migration is complete, on the destination system, the two Virtual I/O Servers are configured to provide the two paths to the data, as shown in Figure 5-5.
Figure 5-5 Dual VIOS and client multipath I/O to dual VIOS after migration
If the destination system is configured with only one Virtual I/O Server, the migration cannot be performed. The migration process would create two paths using the same Virtual I/O Server, but this setup is not allowed, because having two virtual target devices that map the same backing device on different virtual SCSI server devices is not possible. To migrate the partition, you must first remove one path from the source configuration before starting the migration. The removal can be performed without interfering with the running applications. The configuration becomes a simple single Virtual I/O Server migration.
If access to all disk data required by the partition is provided by only one Virtual I/O Server on the destination system, after migration the partition will use just that Virtual I/O Server. If no destination Virtual I/O Server provides all disk data, the migration cannot be performed. When both destination Virtual I/O Servers have access to all the disk data, the migration can select either one or the other. When you start the migration, you have the option of choosing a specific Virtual I/O Server. The HMC automatically makes a selection if you do not specify the server. The situation is shown in Figure 5-6.
[Figure 5-6 (diagram): the mobile partition's single vscsi0 adapter served by vhost0 on the source Virtual I/O Server; the destination system has two Virtual I/O Servers]
When the migration is performed using the GUI on the HMC, a list of possible Virtual I/O Servers to pick from is provided. By default, the command-line interface makes the automatic selection if no specific option is provided.
After migration, the configuration is similar to the one shown in Figure 5-7.
[Figure 5-7 (diagram): after migration, the mobile partition's vscsi0 adapter is served by one of the two destination Virtual I/O Servers]
Several practical considerations should be taken into account when planning for multiple migrations, especially when the time required by the migration process has to be evaluated. For each mobile partition, you must use an HMC GUI wizard or an HMC command. While a migration is in progress, you can start another one. When the number of migrations to be executed grows, the setup time using the GUI can become long, so consider using the CLI instead. The migrlpar command may be used in scripts to start multiple migrations in parallel (a sketch follows this list).
An active migration requires more time to complete than an inactive migration because the system performs additional activities to keep applications running while the migration is in progress. Consider the following information:
- The time required to complete an active migration depends on the size of the memory to be migrated and on the mobile partition's workload.
- The Virtual I/O Servers selected as mover service partitions are loaded by memory moves and network data transfer: high-speed network transfers can become processor-intensive workloads, and at most four concurrent active migrations can be managed by the same mover service partition.
- The active migration process has been designed to handle any partition memory size and is capable of managing any memory workload. Applications can update memory with no restriction during migration and all memory changes are taken into account, so elapsed migration time can vary with the workload. Although the algorithm is efficient, planning the migration during low-activity periods can help to reduce migration time.
- Virtual I/O Servers selected as mover service partitions are involved in partition memory migration and must manage high network traffic. Network management can cause high CPU usage, and the usual performance considerations apply: use uncapped Virtual I/O Servers and add virtual processors if the load increases. Alternatively, create dedicated Virtual I/O Servers on the source and destination systems that provide the mover service function, separating the virtualization service traffic from the migration network traffic. You can combine or separate virtualization functions and mover service functions to suit your requirements.
- If multiple mover service partitions are available on either the source or destination systems, we suggest distributing the load among them by explicitly selecting the mover service partitions, either by using the GUI or the CLI. Each mover service partition can manage up to four concurrent active migrations; explicitly using multiple Virtual I/O Servers avoids queuing of requests.
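The following sketch shows the parallel CLI approach; the system and partition names are hypothetical, and the loop assumes no more than four active migrations per mover service partition:

# Launch several migrations in parallel from the HMC command line.
# srcSystem, destSystem, and the partition names are placeholders.
for LPAR in lpar1 lpar2 lpar3 lpar4
do
   migrlpar -o m -m srcSystem -t destSystem -p $LPAR &
done
wait   # wait for all background migrations to return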
The following list indicates the high-level prerequisites for remote migration. If any of the following elements are missing, a migration cannot occur:
- A ready source system that is migration-capable
- A ready destination system that is migration-capable
- Compatibility between the source and destination systems
- A destination system managed by a remote HMC
- Network communication between the local and remote HMC
- A migratable, ready partition to be moved from the source system to the destination system. For an inactive migration, the partition must be turned off, but must be capable of booting on the destination system.
- For active migrations, an MSP on the source and destination systems
- One or more SANs that provide connectivity from all of the mobile partition's disks to the Virtual I/O Server partitions on both the source and destination servers. The mobile partition accesses all migratable disks through virtual devices (virtual Fibre Channel, virtual SCSI, or both). The LUNs used for virtual SCSI must be zoned and masked to the Virtual I/O Servers on both systems. Virtual Fibre Channel LUNs should be configured as described in Chapter 2 of PowerVM Virtualization on IBM System p: Managing and Monitoring, SG24-7590. Hardware-based iSCSI connectivity may be used in addition to SAN. SCSI reservation must be disabled. The mobile partition's virtual disks must be mapped to LUNs; they cannot be part of a storage pool or logical volume on the Virtual I/O Server.
- One or more physical IP networks (LANs) that provide the necessary network connectivity for the mobile partition through the Virtual I/O Server partitions on both the source and destination servers. The mobile partition accesses all migratable network interfaces through virtual Ethernet devices.
- An RMC connection to manage inter-system communication. Remote migration operations require that each HMC has RMC connections to its individual system's Virtual I/O Servers and a connection to its system's service processors. The HMC does not have to be connected to the remote system's RMC connections to its Virtual I/O Servers, nor does it have to connect to the remote system's service processor.
The remote active and inactive migrations follow the same workflow as described in Chapter 2, Live Partition Mobility mechanisms on page 19. The local HMC, which manages the source server in a remote migration, serves as the controlling HMC. The remote HMC, which manages the destination server, receives requests from the local HMC and sends responses over a secure network channel.
Figure 5-8 displays the Live Partition Mobility infrastructure involving the two remote HMCs and their respective managed systems.
[Figure 5-8 (diagram): remote migration infrastructure — the source POWER6 system hosts the mobile partition (hdisk0, vscsi0, ent0) and is managed by the local HMC; the destination POWER6 system is managed by the remote HMC; both systems connect through their service processors, POWER Hypervisors, and VLANs to the Ethernet network, and the partition's disk is a LUN on the storage subsystem reached over the storage area network]
Figure 5-9 displays the infrastructure involving private networks that link each service processor to its HMC. Each HMC contains a second network interface that is connected to the public network.
[Figure 5-9 (diagram): two-private-network migration infrastructure — each service processor connects to its own HMC over a private network, and each HMC has a second interface on the public Ethernet network; storage connectivity is unchanged]
Figure 5-10 shows the situation where one POWER6 system communicates with its HMC on a private network, and the destination server communicates by using the public network.
Figure 5-10 One public and one private network migration infrastructure
If the HMC is capable, the attribute remote_lpar_mobility_capable displays a value of 1; if the HMC is incapable, the attribute displays a value of 0.
To enable remote command execution (see Figure 5-12):
1. In the navigation area, select HMC Management.
2. In the Administration section of the contents area, select Remote Command Execution.
3. In the Remote Command Execution window, enable the check box to Enable remote command execution using the ssh facility, as shown in Figure 5-13. Click OK.
Use the mkauthkeys command in the CLI to retrieve authentication keys from the current HMC managing the mobile partition. You must be logged in as a user with hmcsuperadmin privileges, such as the hscroot user, and authenticate to the remote HMC by using a remote user ID with hmcsuperadmin privileges. Authentication to a remote system (in our case, 9.3.5.180) using RSA authentication is displayed in Example 5-1. For details about the mkauthkeys command, see 5.7.4, The mkauthkeys command on page 173.
Example 5-1 mkauthkeys command execution
hscroot@hmc1:~> mkauthkeys --ip 9.3.5.180 -u hscroot -t rsa Enter the password for user hscroot on the remote host 9.3.5.180:
4. Enter the Remote HMC IP address or host name and the Remote User ID information, which was used for authentication, and then click the Refresh Destination System button. See Figure 5-14. All migration-ready systems managed by the remote HMC are listed. If your local HMC manages any other migration-ready systems, you will see a list of those in the Destination system listing before the refresh.
If the destination systems refresh properly, continue to step 5 on page 140. If you encounter an error, check the following items:
a. SSH authentication was configured properly.
b. Network communication to the remote HMC is available.
c. The remote HMC address and remote user ID were entered correctly.
d. Migration-ready systems exist on the remote HMC.
5. Select the remote Destination system. You have the option of also specifying the Destination profile name and Wait time. Click the Validate button (Figure 5-15). If you are proceeding with this step when the mobile partition is in the Not Activated state, the destination and source mover service partition and wait time entries do not appear, because these are not required for the inactive partition migration.
6. If errors or warnings occur, the Partition Validation Errors/Warnings window opens. Perform the following steps:
Note: If the window does not appear, you have no errors or warnings.
a. Check the messages in the window and the prerequisites for the migration:
- For error messages: You cannot perform the migration steps if errors exist. Eliminate any errors.
- For warning messages: If only warnings occur (no errors), you may migrate the partition after the validation steps.
b. Close the Partition Validation Errors/Warnings window. A validation window opens again, as shown in Figure 5-16. If you had warning messages only (no error messages), you may click the Migrate button.
At this point, you can see the mobile partition is on the source system managed by the local HMC in Figure 5-17 and that only the source system is available on the local HMC for this scenario.
2. In the contents pane, select the partition that you will migrate to the destination system, that is, the mobile partition.
3. Click the view popup menu button and select Operations → Mobility → Migrate to start the Partition Migration wizard.
4. Check the Migration Information of the mobile partition in the Partition Migration wizard. If the mobile partition is powered off, the Migration Type is inactive. If the partition is in the Running state, the Migration Type is active. You can specify the New destination profile name in the Profile Name window. If you leave the name blank or do not specify a unique profile name, the profile on the destination system will be overwritten.
5. Select Remote Migration, enter the Remote HMC and Remote User information, as shown in Figure 5-18, and then click Next.
6. Select the destination system and click Next. The HMC validates the partition migration environment.
7. Check for errors or warnings in the Partition Validation Errors/Warnings window, and eliminate any errors. If there are any errors, you cannot proceed to the next step. You may proceed if there are warnings only.
8. If you are performing an inactive migration, skip this step and go to step 9. If you are performing an active migration, select the source and the destination mover service partitions to be used for the migration.
9. Select the VLAN configuration.
10.Select the virtual storage adapter assignment.
11.Specify the wait time in minutes.
12.Check the settings that you have specified for this migration on the Summary window, and then click Finish to begin the migration. See Figure 5-19.
13.After migration is complete, check that the mobile partition is on the destination system on the remote HMC.
You can see the mobile partition is on the destination system, as shown in Figure 5-20.
14.If you keep a record of the virtual I/O configuration of the partitions, check the migrating partition's configuration on the destination system. Although the migrating partition retains the same slot numbers as on the source system, the server virtual adapter slot numbers can be different between the source and destination Virtual I/O Servers. Also, the virtual target device name can change during migration.
lslparmigr -r msp --ip 9.3.5.180 -u hscroot -m 9117-MMA-SN100F6A0-L9 \
-t 9117-MMA-SN101F170-L10 --filter lpar_names=PROD
source_msp_name=VIOS1_L9,source_msp_id=1,dest_msp_names=VIOS1_L10,
dest_msp_ids=1,ipaddr_mappings=9.3.5.3//1/VIOS1_L10/9.3.5.111/
Example 5-3 The migrlpar command with remote options
hscroot@hmc1:~> migrlpar -o v --ip 9.3.5.180 -u hscroot -m 9117-MMA-SN100F6A0-L9 -t 9117-MMA-SN101F170-L10 -p mobile Warnings: HSCLA295 As part of the migration process, the HMC will create a new migration profile containing the partition's current state. The default is to use the current profile, which will replace the existing definition of this profile. While this works for most scenarios, other options are possible. You may specify a different existing profile, which would be replaced with the current partition definition, or you may specify a new profile to save the current partition state.
If you use the CLI, the migration operation will fail if the arrival of the migrating partition would cause the maximum processors in the chosen shared pool on the destination to be exceeded.
The ability to select a specific shared processor pool is also presented during the Validate task after an error-free validation has occurred, as shown in Figure 5-22.
If the migration is initiated after a change has occurred on the destination system where the selected processor pool can no longer accommodate the client partition, the migration will fail.
5.6.1 Overview
Three types of adapters cannot be present in a partition that is participating in an active migration: physical adapters, Integrated Virtual Ethernet adapters, and non-default virtual serial adapters. A non-default virtual serial adapter is a virtual serial adapter other than the two automatically created virtual serial adapters in slots 0 and 1. If a partition has non-default virtual serial adapters, you must deconfigure them; for the other adapter types, you might have to switch from physical to virtual resources. For this scenario, we assume you are beginning with a mobile partition that uses a single physical Ethernet adapter and a single physical SCSI adapter. See Figure 5-23.
[Figure 5-23 (diagram): the mobile partition on the source system with a physical Ethernet adapter (ent0) and a physical SCSI adapter (sisioa0); rootvg spans hdisk0 and hdisk1 in a SCSI enclosure]
If the mobile partition has any adapters that cannot be migrated, they must be removed from the mobile partition before it can participate in an active migration. If these adapters are marked as desired in the active profile, remove them by using dynamic logical partitioning. If these adapters are marked as required in the active profile, activate the partition with a profile that does not have them marked as required. The process described in this section covers both the case where the mobile partition does not have such required adapters, and the case where it does. Before proceeding, verify that the requirements for Live Partition Mobility are met, as outlined in Chapter 3, Requirements and preparation on page 45. However, in that chapter, ignore the requirement to check that the adapters cannot be migrated, because this exception is discussed throughout 5.6, Migrating a partition with physical resources on page 149.
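As a sketch of that dynamic removal, the HMC lshwres command can list the physical I/O slots assigned to the partition, and chhwres can release one; the system name, partition name, and DRC index below are hypothetical:

$ lshwres -r io --rsubtype slot -m srcSystem -F drc_index,lpar_name,description
$ chhwres -r io -m srcSystem -o r -p mobile -l 21010207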
On the source Virtual I/O Server, set the reserve_policy on the disks to no_reserve by using the chdev command:
$ chdev -dev hdisk5 -attr reserve_policy=no_reserve
Assign the hdisks as targets of the virtual SCSI server adapter that you created, using the mkvdev command (see the sketch after Figure 5-24). Do not create volume groups and logical volumes on the hdisks within the Virtual I/O Server.
3. Configure shared Ethernet adapters for each physical network interface that is configured on the mobile partition.
4. Ensure that the Mover service partition box is checked in the Virtual I/O Server partition properties.
Figure 5-24 shows the created and configured source Virtual I/O Server.
Figure 5-24 The source Virtual I/O Server is created and configured
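A minimal sketch of step 2 on the source Virtual I/O Server, assuming hdisk5 is one of the SAN disks and vhost0 is the virtual SCSI server adapter (the device and virtual target names are illustrative):

$ lsdev -dev hdisk5 -attr reserve_policy      # confirm that no_reserve took effect
$ mkvdev -vdev hdisk5 -vadapter vhost0 -dev vtscsi_h5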
Figure 5-25 shows the created and configured destination Virtual I/O Server. In the figure, the hdisk numbers on the destination Virtual I/O Server differ from those on the source Virtual I/O Server. The hdisk numbers may be different, but they are the same LUNs on the storage subsystem.
Figure 5-25 The destination Virtual I/O Server is created and configured
3. Configure the virtual SCSI devices on the mobile partition, as follows:
a. Run the cfgmgr command on the mobile partition.
b. Verify that the virtual SCSI adapters are in the Available state by using the lsdev command:
# lsdev -t IBM,v-scsi
c. Verify that the virtual SCSI disks are in the Available state by using the lsdev command:
# lsdev -t vdisk
Figure 5-26 shows the configured storage devices on the mobile partition.
Figure 5-26 The storage devices are configured on the mobile partition
4. On the mobile partition, move rootvg from physical disks to virtual disks. For example, assume hdisk0 and hdisk1 are the physical disks in rootvg, and that hdisk7 and hdisk8 are the virtual disks you created, whose sizes are at least as large as hdisk0 and hdisk1. Move rootvg as follows:
a. Extend rootvg onto the virtual disks by using the extendvg command:
# extendvg rootvg hdisk7 hdisk8
If the extendvg command fails, depending on the size of the disks, you might have to change the maximum physical-partition factor of the volume group by using the chvg command before extending onto the new disks (do not use the chvg command unless the extendvg command fails):
# chvg -t 10 rootvg
Figure 5-27 shows rootvg extended onto the virtual disks.
[Figure 5-27 (diagram): rootvg on the mobile partition extended onto the virtual disks provided through vscsi0]
b. Migrate the physical partitions off the physical disks in rootvg onto the virtual disks in rootvg by using the migratepv command:
# migratepv hdisk0 hdisk7
# migratepv hdisk1 hdisk8
c. Set the boot list to a virtual disk in rootvg by using the bootlist command:
# bootlist -m normal hdisk7 hdisk8
d. Run the bosboot command on a virtual disk in rootvg:
# bosboot -ad /dev/hdisk7
e. Remove the physical disks from rootvg by using the reducevg command:
# reducevg rootvg hdisk0 hdisk1
5. Repeat the previous step (excluding the bootlist and bosboot commands) for all other volume groups on the mobile partition.
Figure 5-28 shows rootvg on the mobile partition now residing wholly on the virtual disks.
Figure 5-28 The root volume group of the mobile partition is on virtual disks only
Figure 5-29 shows the mobile partition with a virtual network device created.
Figure 5-29 The mobile partition has a virtual network device created
Now that the virtual network adapters are configured, stop using the physical network adapters and begin using the virtual network adapters. To move to virtual networks on the mobile partition, use new or existing IP addresses. Both procedures, discussed in this section, affect network connectivity differently. Understand how all running applications use the networks; take appropriate actions before proceeding.
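As an illustration of the existing-address procedure on AIX, the physical interface is brought down and detached, and the same address is then configured on the virtual interface; the interface names, host name, and addresses are hypothetical:

# ifconfig en0 down                                # stop traffic on the physical interface
# ifconfig en0 detach                              # remove it from the protocol stack
# mktcpip -h mobile1 -a 9.3.5.115 -i en1 -g 9.3.5.1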
Figure 5-30 The mobile partition has unconfigured its physical network interface
Figure 5-31 shows the mobile partition with only virtual adapters.
[Figure 5-31 (diagram): the mobile partition with only virtual adapters (vscsi0 and ent1)]
Figure 5-32 shows the mobile partition migrated to the destination system.
[Figure 5-32 (diagram): the mobile partition running on the destination system after migration]
The HMC commands can be launched either locally on the HMC or remotely by using the ssh -l <user> <hmc> <hmc_command> command.
Tip: Use the ssh-keygen command to create the public and private key pair on your client. Then add these keys to the HMC user's key-chain by using the mkauthkeys --add command on the HMC.
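A minimal sketch of that key exchange from an administration workstation, assuming the hypothetical HMC host name hmc1 and the hscroot user:

$ ssh-keygen -t rsa                           # generate the key pair on the client
$ KEY=$(cat ~/.ssh/id_rsa.pub)
$ ssh hscroot@hmc1 "mkauthkeys --add '$KEY'"  # add the public key to the HMC key-chain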
Command conventions
The commands follow the HMC command conventions:
- Single-character parameters are preceded by a single dash (-).
- Multiple-character parameters are preceded by a double dash (--).
- All filter and attribute names are lowercase, with underscores joining words, for example, vios_lpar_id.
The flags used in this command are:
-o <operation>         The operation to perform, which can be:
                       m - validate and migrate
                       r - recover
                       s - stop
                       v - validate
-m <managed system>    The source managed system's name
-t <managed system>    The destination managed system's name
-p <partition name>    The partition on which to perform the operation
--ip <IP address>      The IP address or host name of the target managed system's HMC
-u <user ID>           The user ID to use on the target managed system's HMC
--id <partition ID>    The ID of the partition on which to perform the operation
-n <profile name>      The name of the partition profile to be created on the destination
-f <input data file>   The name of the file containing input data for this command. Use either of the following formats:
                       attr_name1=value,attr_name2=value,...
                       attr_name1=value1,value2,...
-i <input data>        The input data for this command, typically the virtual adapter mapping from source to destination or the destination shared-processor pool. The format is the same as the input data file of the -f option.
-w <wait time>         The time, in minutes, to wait for any operating system command to complete
-d <detail level>      The level of detail requested from operating system commands; values range from 0 (none) to 5 (highest)
-v                     Verbose mode
--force                Force the recovery. This option should be used with caution.
--help                 Prints a help message
The data specified with the virtual_scsi_mappings attribute consists of one or more source virtual SCSI adapter to destination virtual SCSI adapter mappings in the format: client_virtual_slot_num/dest_vios_lpar_name/dest_vios_lpar_id The data format specified with the virtual_fc_mappings attribute mirrors the format of the virtual_scsi_mappings attribute as it relates to virtual Fibre Channel adapter mappings for N_Port ID Virtualization (NPIV).
Examples
To migrate the partition myLPAR from the system srcSystem to the destSystem using the default MSPs and adapter maps, use the following command:
$ migrlpar -o m -m srcSystem -t destSystem -p myLPAR
In an environment with multiple mover service partitions on the source and destination, you can specify which mover service partitions to use in a validation
or migration operation. The following command validates the migration in the previous example with specific mover service partitions. Note that you can use both partition names and partition IDs on the same command:
$ migrlpar -o v -m srcSystem -t destSystem -p myLPAR \
-i source_msp_id=2,dest_msp_name=VIOS2_L10
When the destination system has multiple shared-processor pools, you can stipulate to which shared-processor pool the moving partition will be assigned at the destination with either of the following commands:
$ migrlpar -o m -m srcSystem -t destSystem -p myLPAR -i "shared_proc_pool_id=1"
$ migrlpar -o m -m srcSystem -t destSystem -p myLPAR -i "shared_proc_pool_name=DefaultPool"
The capacity of the chosen shared-processor pool must be sufficient to accommodate the migrating partition; otherwise, the migration operation will fail.
The syntax to stop a partition migration is:
$ migrlpar -o s -m srcSystem -p myLPAR
The syntax to recover a failed migration is:
$ migrlpar -o r -m srcSystem -p myLPAR
You can use the --force flag on the recover command, but you should do so only when the partition migration fails and leaves the partition definition on both the source and destination systems.
The flags used in this command are:
-r <resource type>     The type of resources for which to list information:
                       lpar - partition
                       manager - Hardware Management Console (HMC)
                       msp - mover service partitions
                       procpool - shared processor pool
                       sys - managed system (CEC)
                       virtualio - virtual I/O
-m <managed system>    The source managed system's name
-t <managed system>    The destination managed system's name
--ip <IP address>      The IP address or host name of the target managed system's HMC
-u <user ID>           The user ID to use on the target managed system's HMC
--filter <filter data> Filters the data to be listed. Use either of the following formats:
                       filter_name1=value,filter_name2=value,...
                       filter_name1=value1,value2,...
                       Valid filter names are lpar_ids and lpar_names; the filters are mutually exclusive. This parameter is not valid with -r sys, is optional with -r lpar, and is required with -r msp and -r virtualio. With -r msp and -r virtualio, exactly one partition name or ID must be specified, and the partition must be an AIX or Linux partition.
-F [<attribute names>] Comma-separated list of the names of the attributes to be listed in CSV format. If no attribute names are specified, all attributes are listed.
--header               Prints a header of attribute names when -F is also specified
--help                 Prints a help message
Examples
The following examples illustrate how this command is used.
$ lslparmigr -r sys -m mySystem
This command produces:
inactive_lpar_mobility_capable=1,num_inactive_migrations_supported=40,num_inactive_migrations_in_progress=1,active_lpar_mobility_capable=1,num_active_migrations_supported=40,num_active_migrations_in_progress=0
In this example, we can see that the system is capable of both active and inactive migration and that there is one inactive partition migration in progress.
By using the -F flag, the same information is produced in a CSV format:
$ lslparmigr -r sys -m mySystem -F
This command produces:
1,40,1,1,40,0
These attribute values are the same as in the preceding example, without the attribute identifiers. This format is appropriate for parsing or for importing into a spreadsheet.
Adding the --header flag prints column headers on the first line:
$ lslparmigr -r sys -m mySystem -F --header
This command produces:
inactive_lpar_mobility_capable,num_inactive_migrations_supported,
num_inactive_migrations_in_progress,active_lpar_mobility_capable,
num_active_migrations_supported,num_active_migrations_in_progress
1,40,1,1,40,0
On a terminal, the header is printed on a single line.
If you are interested in only specific attributes, you can specify them as options to the -F flag. For example, if you want to know just the number of active and inactive migrations in progress, use the following command:
$ lslparmigr -r sys -m mySystem -F \
num_active_migrations_in_progress,num_inactive_migrations_in_progress
This command produces the following results, which indicate that there are no active migrations and one inactive migration running:
0,1
If you want a space instead of a comma to separate values, surround the attributes with double quotation marks.
$ lslparmigr -r manager
This option produces:
remote_lpar_mobility_capable=1
Here, we see that the command supplies only one attribute for the user on the HMC from which it is executed. The attribute remote_lpar_mobility_capable displays a value of 1 if the HMC has the ability to perform migrations to a remote HMC. Conversely, a value of 0 indicates that the HMC is incapable of remote migrations.
You can also use the -F flag followed by the attribute name to limit the output of the command to the value:
$ lslparmigr -r manager -F remote_lpar_mobility_capable
This command produces:
1
Here, the output information is limited to the partition with ID=3, which is the one performing the inactive migration, and we see that the dest_lpar_id has now been chosen. You can use the -F flag to generate the same information in CSV format or to limit the output:
$ lslparmigr -r lpar -m mySystem --filter lpar_ids=3 -F
This command produces:
PROD,3,Migration Starting,inactive,9117-MMA-SN101F170-L10,
9117-MMA-SN100F6A0-L9,3,7,,unavailable,,unavailable
Here the -F flag, without additional parameters, has printed all the attributes. In the example, the last four fields of output pertain to the MSPs; because the partition in question is undergoing an inactive migration, no MSPs are involved and these fields are empty. You can use the --header flag with the -F flag to print a line of column headers at the start of the output.
The command produces:
1,0
This output indicates that shared-processor pool IDs 1 and 0 are capable of hosting the client partition called TEST. The command requires the -m, -t, and --filter flags. The --filter flag requires that you use either the lpar_ids or lpar_names attribute to identify the client partition. You may specify only one client partition at a time.
The command can also be used to identify shared-processor pools available in remote HMC migrations with the --ip and -u flags, which specify the remote HMC and remote user ID, respectively. Also, without the -F flag you are given detailed output of the attributes and values, as shown in the following example, with the addition of the remote HMC specification and using lpar_ids to specify the client partition:
$ lslparmigr -r procpool -m srcSystem --ip 9.3.5.180 \
-u hscroot -t destSystem --filter lpar_ids=2
This command communicates with an HMC at IP address 9.3.5.180 by using the HMC user ID hscroot. The command then checks the remote HMC's destSystem managed system for possible processor pools. It produces the output:
"shared_proc_pool_ids=1,0","shared_proc_pool_names=SharedPool01,DefaultPool"
Here, the system is showing that two shared-processor pools are possible destinations for the client partition.
The flags used in this command are:
-a, --add            Adds an SSH key as an authorized key
-g                   Gets the user's SSH public key
-r, --remove         Removes an SSH key from the user's authorized key list
--test               Verifies authentication to a remote HMC
--ip <IP address>    The IP address or host name of a remote HMC with which to exchange authentication keys
-u <user ID>         The ID of a user whose authentication keys are to be managed
--passwd <password>  The password to use to log on to the remote HMC. If this parameter is omitted, you will be prompted for the password.
-t <key type>        The type of SSH authentication keys:
                     rsa - RSA authentication
                     dsa - DSA authentication
Examples
To get the remote HMC user's SSH public key, use the -g flag:
$ mkauthkeys --ip rmtHostName -u hscroot -g
You can also specify a preferred authentication method. To choose between DSA and RSA authentication keys, use the -t flag:
$ mkauthkeys --ip rmtHostName -u hscroot -t rsa
$ mkauthkeys --ip rmtHostName -u hscroot -t dsa
In some cases, you might choose to remove the authentication keys, which you can do by using the mkauthkeys command with the -r flag:
$ mkauthkeys -r ccfw@rmtHostName
The HMC stores the key under a user called ccfw; it is not stored under the user ID that you specified in the steps to retrieve the authentication keys. Also note that the remote HMC's host name has to be specified in this command. If DNS cannot resolve the host name and you used the IP address to configure the authentication, use the actual IP address in place of rmtHostName.
The --test flag allows you to check whether authentication is properly configured to the remote HMC:
$ mkauthkeys --ip rmtHostName -u hscroot --test
The command returns the following error if the keys were not configured properly:
HSCL3653 The Secure Shell (SSH) communication configuration between the source and target Hardware Management Consoles has not been set up properly for user hscroot. Please run the mkauthkeys command to set up the SSH communication authentication keys.
The algorithm starts by listing all the partitions on SRC_SERVER. It then filters out any Virtual I/O Server partitions and partitions that are already migrating. For each remaining partition, it invokes a migration operation to DEST_SERVER. In this example, the migrations take place sequentially. Running them in parallel is acceptable if there are no more than four concurrent active migrations per mover service partition. This is an exercise left to the reader.
How it works
The script starts by checking that both the source and destination systems are mobility-capable. For this, it uses the new attributes reported by the lssyscfg command. It then uses the lslparmigr command to list all the partitions on the system and uses this list as an outer loop for the rest of the script. The program performs a number of elementary checks:
- The source and destination must be capable of mobility. The lssyscfg command shows the mobility capability attributes.
- Only partitions of type aixlinux can be migrated. The script uses the lssyscfg command to ascertain the partition type.
- A partition that is already migrating must not be migrated again. The script reuses the lslparmigr command for this.
- The partition migration must validate successfully. The script uses migrlpar -o v and checks the return code.
If all the checks pass, the migration is launched with the migrlpar command. The code snippet does some elementary error checking. If migrlpar returns a non-zero value, a recovery is attempted by using the migrlpar -o r command. See Example 5-4 on page 176.
Example 5-4 Script fragment to migrate all partitions on a system
#
# Get the mobility capabilities of the source and destination systems
#
SRC_CAP=$(lssyscfg -r sys -m $SRC_SERVER \
   -F active_lpar_mobility_capable,inactive_lpar_mobility_capable)
DEST_CAP=$(lssyscfg -r sys -m $DEST_SERVER \
   -F active_lpar_mobility_capable,inactive_lpar_mobility_capable)
#
# Make sure that they are both capable of active and inactive migration
#
if [ $SRC_CAP = $DEST_CAP ] && [ $SRC_CAP = "1,1" ]
then
   #
   # List all the partitions on the source system
   #
   for LPAR in $(lslparmigr -r lpar -m $SRC_SERVER -F name)
   do
      #
      # Only migrate aixlinux partitions. VIO servers cannot be migrated
      #
      LPAR_ENV=$(lssyscfg -r lpar -m $SRC_SERVER \
         --filter lpar_names=$LPAR -F lpar_env)
      if [ $LPAR_ENV = "aixlinux" ]
      then
         #
         # Make sure that the partition is not already migrating
         #
         LPAR_STATE=$(lslparmigr -r lpar -m $SRC_SERVER \
            --filter lpar_names=$LPAR -F migration_state)
         if [ "$LPAR_STATE" = "Not Migrating" ]
         then
            #
            # Perform a validation to see if there's a good chance of success
            #
            migrlpar -o v -m $SRC_SERVER -t $DEST_SERVER -p $LPAR
            RC=$?
            if [ $RC -ne 0 ]
            then
               echo "Validation failed. Cannot migrate partition $LPAR"
            else
               #
               # Everything looks good, let's do it...
               #
               echo "migrating $LPAR from $SRC_SERVER to $DEST_SERVER"
               migrlpar -o m -m $SRC_SERVER -t $DEST_SERVER -p $LPAR
               RC=$?
               if [ $RC -ne 0 ]
               then
                  #
                  # Something went wrong, let's try to recover
                  #
                  echo "There was an error RC = $RC . Attempting recovery"
                  migrlpar -o r -m $SRC_SERVER -p $LPAR
                  break
               fi
            fi
         fi
      fi
   done
fi
The just-in-time compiler of the IBM Java Virtual Machine is also optimized for the cache-line size of the processor on which it was launched.
Performance analysis, capacity planning, and accounting tools and their agents should also be made migration-aware because the processor performance counters may change between the source and destination servers, as may the processor type and frequency. Additionally, tools that calculate an aggregate system load based on the sum of the loads in all hosted partitions, such as workload managers (WLM), must be aware that a partition has left the system or that a new partition has arrived.
An application that is migration-aware might perform the following actions:
- Keep track of changes to system characteristics, such as cache-line size or serial numbers, and modify tuning or behavior accordingly.
- Terminate the application on the source system and restart it on the destination.
- Reroute workloads to another system.
- Clean up system-specific buffers and logs.
- Refuse new incoming requests or delay pending operations.
- Increase time-out thresholds, such as the PowerHA heartbeat.
- Block the sending of partition shutdown requests.
- Refuse a partition migration in the check phase to prevent a non-migratable application from being migrated.
3. Uses the dr_reconfig() system call, through the signal handler, to determine the nature of the reconfiguration event and other pertinent information. For the check phase, the application should pass DR_RECONFIG_DONE to accept a migration or DR_EVENT_FAIL to refuse it. Only applications with root authority may refuse a migration.
The dr_reconfig() system call has been modified to support partition migration. The returned dr_info structure includes two new bit-fields, migrate and partition, which represent the new migration action and the partition object that is the subject of the action. The code snippet in Example 5-5 shows how dr_reconfig() might be used. This code would run in a signal-handling thread.
Example 5-5 SIGRECONFIG signal-handling thread
#include <signal.h>
#include <sys/dr.h>
: :
struct dr_info drInfo;        // For event-related information
struct sigset_t signalSet;    // The signal set to wait on
int signalId;                 // Identifies signal was received
int reconfigFlag;             // For accepting or refusing the DR
int rc;                       // return code

// Initialise the signal set
SIGINITSET(signalSet);
// Add the SIGRECONFIG to the signal set
SIGADDSET(signalSet, SIGRECONFIG);
// loop forever
while (1) {
   // Wait on signals in signal set
   sigwait(&signalSet, &signalId);
   if (signalId == SIGRECONFIG) {
      if (rc = dr_reconfig(DR_QUERY, &drInfo)) {
         // handle the error
      } else {
         if (drInfo.migrate) {
            if (drInfo.check) {
               /*
                * If migration OK reconfigFlag = DR_RECONFIG_DONE
                * If migration NOK reconfigFlag = DR_EVENT_FAIL
                */
               rc = dr_reconfig(reconfigFlag, &drInfo);
            } else if (drInfo.pre) {
               /*
                * Prepare the application for migration
                */
               rc = dr_reconfig(DR_RECONFIG_DONE, &drInfo);
            } else if (drInfo.post) {
               /*
                * We're being woken up on the destination.
                * Check the new environment and resume normal service
                */
            } else {
               // Handle the error cases
            }
         } else {
            // It's not a migration. Handle or ignore the DR
         }
      }
   }
}
You can use the sysconf() system call to check the system configuration on the destination system. The _system_configuration structure has been modified to include the following fields:
icache_size     Size of the L1 instruction cache
icache_asc      Associativity of the L1 instruction cache
dcache_size     Size of the L1 data cache
dcache_asc      Associativity of the L1 data cache
L2_cache_size   Size of the L2 cache
L2_cache_asc    Associativity of the L2 cache
itlb_size       Instruction translation look-aside buffer size
itlb_asc        Instruction translation look-aside buffer associativity
dtlb_size       Data translation look-aside buffer size
dtlb_asc        Data translation look-aside buffer associativity
tlb_attrib      Translation look-aside buffer attributes
slb_size        Segment look-aside buffer size
These fields are updated after the partition has arrived at the destination system to reflect the underlying physical processor characteristics. In this fashion, applications that are moved from one processor architecture to another can dynamically adapt themselves to their execution environment. All new processor features, such as the single-instruction multiple-data (SIMD) and decimal floating point instructions, are exposed through the _system_configuration structure and the lpar_get_info() system call.
The lpar_get_info() call returns two capabilities, defined in <sys/dr.h>:
LPAR_INFO1_MSP_CAPABLE    If the partition is a Virtual I/O Server partition, this capability indicates that the partition is also a mover service partition.
LPAR_INFO1_PMIG_CAPABLE   Indicates whether the partition is capable of migration.
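The following short C sketch shows how such a capability check might look, under the assumption that the format-1 query of lpar_get_info() and the lpar_info_format1_t structure are used as provided by <sys/dr.h> on AIX:

#include <stdio.h>
#include <sys/dr.h>

int main(void)
{
    lpar_info_format1_t info;

    /* Query the format-1 partition information */
    if (lpar_get_info(LPAR_INFO_FORMAT1, &info, sizeof(info)) != 0) {
        perror("lpar_get_info");
        return 1;
    }
    if (info.lpar_flags & LPAR_INFO1_PMIG_CAPABLE)
        printf("Partition is migration capable\n");
    if (info.lpar_flags & LPAR_INFO1_MSP_CAPABLE)
        printf("Partition is a mover service partition\n");
    return 0;
}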
premigrate <resource>       Invoked before the partition state is moved, to prepare the resource for migration
postmigrate <resource>      Invoked after the partition has arrived on the destination system
undopremigrate <resource>   Invoked to undo the premigrate actions if the migration fails or is aborted
In addition to the script commands, a pmig resource type indicates a partition migration operation. The register command of your dynamic LPAR scripts can choose to handle this resource type. A script supporting partition migration should write out the name-value pair DR_RESOURCE=pmig when it is invoked with the register command. A dynamic LPAR script can be registered to support only partition migration. No new environment variables are passed to the dynamic LPAR scripts for Live Partition Mobility support. The code in Example 5-6 on page 184 shows a Korn shell script that detects the partition migration reconfiguration events. For this example, the script simply logs the called command to a file.
Example 5-6 Outline Korn shell dynamic LPAR script for Live Partition Mobility
#!/usr/bin/ksh
if [[ $# -eq 0 ]]
then
   echo "DR_ERROR=Script usage error"
   exit 1
fi
ret_code=0
command=$1
case $command in
   scriptinfo )
      echo "DR_VERSION=1.0"
      echo "DR_DATE=27032007"
      echo "DR_SCRIPTINFO=partition migration test script"
      echo "DR_VENDOR=IBM"
      echo "SCRIPTINFO" >> /tmp/migration.log;;
   usage )
      echo "DR_USAGE=$0 command [parameter]"
      echo "USAGE" >> /tmp/migration.log;;
   register )
      echo "DR_RESOURCE=pmig"
      echo "REGISTER" >> /tmp/migration.log;;
   checkmigrate )
      echo "CHECK_MIGRATE" >> /tmp/migration.log;;
   premigrate )
      echo "PRE_MIGRATE" >> /tmp/migration.log;;
   postmigrate )
      echo "POST_MIGRATE" >> /tmp/migration.log;;
   undopremigrate )
      echo "UNDO_CHECK_MIGRATE" >> /tmp/migration.log;;
   * )
      echo "*** UNSUPPORTED *** : $command" >> /tmp/migration.log
      ret_code=10;;
esac
exit $ret_code
If the file name of the script is migrate.sh, register it with the dynamic reconfiguration infrastructure by using the following command:
# drmgr -i ./migrate.sh
Use the drmgr -l command to confirm the script registration, as shown in Example 5-7. In this example, you can see the output from the scriptinfo, register, and usage commands of the shell script.
Example 5-7 Listing the registered dynamic LPAR scripts
# drmgr -l
DR Install Root Directory: /usr/lib/dr/scripts
Syslog ID: DRMGR
------------------------------------------------------------
/usr/lib/dr/scripts/all/migrate.sh       partition migration test script
      Vendor:IBM, Version:1.0, Date:27032007
      Script Timeout:10, Admin Override Timeout:0
      Memory DR Percentage:100
      Resources Supported:
            Resource Name: pmig
            Resource Usage: /usr/lib/dr/scripts/all/migrate.sh command [parameter]
------------------------------------------------------------
The interface to the handler is:
int handler(void* event, void* h_arg, long long action, void* resource_info);
The action parameter indicates the specific reconfiguration operation being performed, for example, DR_MIGRATE_PRE. The resource_info parameter maps to the following structure for partition migration:
struct dri_pmig {
    int       version;
    int       destination_lpid;
    long long streamid;
};
The version number is changed if additional parameters are added to this structure. The destination_lpid and streamid fields are not available during the check phase. The interfaces to the reconfig_unregister() and reconfig_complete() kernel services are not changed by Live Partition Mobility.
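A skeletal migration-aware handler might look like the following sketch. Only the handler interface and the dri_pmig structure above are taken from this section; the DR_MIGRATE_CHECK and DR_MIGRATE_POST action names are assumed to follow the same pattern as DR_MIGRATE_PRE, and registration of the handler is omitted:

/* Sketch only: a DR handler for a kernel extension */
int handler(void *event, void *h_arg, long long action, void *resource_info)
{
    struct dri_pmig *pmig = (struct dri_pmig *)resource_info;

    switch (action) {
    case DR_MIGRATE_CHECK:      /* assumed action name */
        /* Return non-zero here to veto the migration */
        break;
    case DR_MIGRATE_PRE:
        /* Quiesce activity; pmig->destination_lpid and
         * pmig->streamid are available in this phase */
        break;
    case DR_MIGRATE_POST:       /* assumed action name */
        /* Re-probe the environment and resume normal service */
        break;
    default:
        break;
    }
    return 0;   /* 0 accepts the event */
}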
Figure 5-33 Basic NPIV virtual Fibre Channel infrastructure before migration
ent1 virt
ent1 virt
187
After migration, the configuration is similar to the one shown in Figure 5-34.
Figure 5-34 Basic NPIV virtual Fibre Channel infrastructure after migration (POWER6 System #1 is the source system; POWER6 System #2 is the destination system)
Standard multipathing software for the storage subsystem is installed on the mobile partition. Multipathing software is not installed into the Virtual I/O Server partition to manage virtual Fibre Channel disks. The absence of the software provides system administrators with familiar configuration commands and problem determination processes in the client partition. Partitions can take advantage of standard multipath features, such as load balancing across multiple virtual Fibre Channel adapters presented from dual Virtual I/O Servers.
Required components
The mobile partition must meet the requirements described in Chapter 2, Live Partition Mobility mechanisms on page 19. In addition, the following components must be configured in the environment:
- An NPIV-capable SAN switch
- An NPIV-capable physical Fibre Channel adapter on the source and destination Virtual I/O Servers
- HMC Version 7 Release 3.4, or later
- Virtual I/O Server Version 2.1 with Fix Pack 20.1, or later
- AIX 5.3 TL9, or later
- AIX 6.1 TL2 SP2, or later
- Each virtual Fibre Channel adapter on the Virtual I/O Server mapped to an NPIV-capable physical Fibre Channel adapter
- Each virtual Fibre Channel adapter on the mobile partition mapped to a virtual Fibre Channel adapter in the Virtual I/O Server
- At least one LUN mapped to the mobile partition's virtual Fibre Channel adapter

Mobile partitions may have both virtual SCSI and virtual Fibre Channel LUNs. Migration of LUNs between virtual SCSI and virtual Fibre Channel is not supported at the time of publication. See Chapter 2 in PowerVM Virtualization on IBM System p: Managing and Monitoring, SG24-7590 for details about virtual Fibre Channel and NPIV configuration.
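A quick way to confirm the adapter and switch requirements from the Virtual I/O Server command line is the lsnports command, which lists the physical Fibre Channel ports that can host virtual Fibre Channel adapters. The following output is an illustrative sketch (the port and WWPN counts vary by environment):

$ lsnports
name   physloc                      fabric tports aports swwpns awwpns
fcs3   U789D.001.DQDYKYW-P1-C6-T2   1      64     63     2048   2046

A fabric value of 1 indicates that the port is attached to an NPIV-capable switch.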
Figure 5-35 Client partition virtual Fibre Channel adapter WWPN properties
Figure 5-36 shows an example of virtual Fibre Channel properties for the Virtual I/O Server.
Figure 5-36 Virtual Fibre Channel adapters in the Virtual I/O Server
Figure 5-37 shows an example of the properties of a virtual Fibre Channel adapter on the Virtual I/O Server, called a Server Fibre Channel Adapter.
The Virtual I/O Server lsdev and lsmap commands can be used to query the virtual Fibre Channel configuration and mapping to the mobile partition, as shown in Example 5-8 on page 192.
Example 5-8 Virtual I/O Server commands lsmap and lsdev virtual Fibre Channel output

$ lsmap -all -npiv
Name          Physloc                            ClntID ClntName       ClntOS
============= ================================== ====== ============== =======
vfchost0      U9117.MMA.101F170-V1-C16                2 mobile2        AIX

Status:LOGGED_IN
FC name:fcs3                      FC loc code:U789D.001.DQDYKYW-P1-C6-T2
Ports logged in:2
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs0

$ lsdev -dev vfchost*
name     status
vfchost0 Available
Proceed with the remaining migration steps at step 12 on page 114 as described in 4.3, Preparing for an active partition migration on page 94. The Summary panel will look similar to Figure 5-39. Verify the settings you have selected and then click Finish to begin the migration.
When the migration is complete, verify that the mobile partition is running on the destination system, as shown in Figure 5-40.
5.11.3 Dual Virtual I/O Server and virtual Fibre Channel multipathing
With multipath I/O, the logical partition accesses the same storage data by using two different paths, each provided by a separate Virtual I/O Server.

Note: With NPIV-based disks, both paths can be active.

For NPIV and virtual Fibre Channel, the storage multipath code is loaded into the mobile partition. The multipath capabilities depend on the storage subsystem type and the multipath code deployed in the mobile partition.
The migration is possible only if the destination system is configured with two Virtual I/O Servers that can provide the same multipath setup. They both must have access to the shared disk data, as shown in Figure 5-41.
Figure 5-41 Dual VIOS and client multipath I/O to dual NPIV before migration
When migration is complete, on the destination system, the two Virtual I/O Servers are configured to provide the two paths to the data, as shown in Figure 5-42.
Figure 5-42 Dual VIOS and client multipath I/O to dual VIOS after migration
If the destination system is configured with only one Virtual I/O Server, the migration cannot be performed. The migration process would otherwise create two paths through the same Virtual I/O Server, and a setup in which one Virtual I/O Server maps the same LUNs on different virtual Fibre Channel adapters is not recommended. To migrate the partition, you must first remove one path from the source configuration before starting the migration. The removal can be performed without interfering with the running applications. The configuration then becomes a simple single Virtual I/O Server migration.
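A minimal sketch of the path removal on the mobile partition, assuming that the path to be dropped is served by the virtual Fibre Channel adapter pair fcs1/fscsi1 (the device names here are illustrative, not from this chapter's examples):

# lspath -l hdisk0                 # identify the parent adapter of each path
# rmpath -l hdisk0 -p fscsi1 -d    # delete the paths provided through fscsi1
# rmdev -dl fcs1 -R                # remove the adapter and its child devices

The corresponding virtual Fibre Channel client adapter can then be removed from the partition with dynamic LPAR before the migration is started.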
Before proceeding, verify that the environment meets the requirements for Live Partition Mobility with NPIV and virtual Fibre Channel, as outlined in 5.11, Virtual Fibre Channel on page 187. This scenario describes how to handle a mobile partition that has physical adapters assigned to it, so you may disregard the requirement of having no physical adapters assigned to your mobile partition. In this case, the physical adapter should be configured as desired (rather than required) in the partition profile. You will use dynamic logical partitioning (dynamic LPAR) to remove the adapter from the mobile partition prior to migration.
2. Use dynamic LPAR to add a virtual Fibre Channel client adapter, with the same properties from the previous step, to the activated mobile partition.
Figure 5-45 shows the virtual Fibre Channel client adapter properties.
Record the virtual Fibre Channel client adapter's slot number and WWPN pair for use when configuring the storage subsystem in step 4.

3. Save the changes made to the mobile partition to a new profile name to preserve the generated WWPNs for future use by the mobile partition.

Important: Similar to virtual SCSI, you do not have to create virtual Fibre Channel server adapters for your mobile partition on the destination Virtual I/O Server. They are created automatically for you during the migration.

4. By using standard SAN configuration techniques, assign the mobile partition's storage to the virtual Fibre Channel adapters that use the WWPN pair generated in step 2 on page 199, and properly zone the virtual Fibre Channel WWPNs with the storage subsystem's WWPN.

5. On the source Virtual I/O Server, run the cfgdev command to discover the newly added virtual Fibre Channel server adapter (vfchost0). The lsdev command shows the changes, as seen in Example 5-9.
Example 5-9 Show the virtual Fibre Channel server adapter

$ lsdev -dev vfchost*
name     status
vfchost0 Available
6. Execute the vfcmap command to associate the virtual Fibre Channel server adapter to the physical Fibre Channel adapter.
As shown in Example 5-10, the adapter port in use on the NPIV Fibre Channel adapter is fcs1.
Example 5-10 Virtual Fibre Channel mappings created and listed

$ vfcmap -vadapter vfchost0 -fcp fcs1
vfchost0 changed
$ lsmap -all -npiv
Name          Physloc                            ClntID ClntName       ClntOS
============= ================================== ====== ============== =======
vfchost0      U9117.MMA.100F6A0-V1-C70                2 mobile2        AIX

Status:LOGGED_IN
FC name:fcs1                      FC loc code:U789D.001.DQDWWHY-P1-C1-T2
Ports logged in:2
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs2
7. Record the existing physical Fibre Channel adapter and disk configuration. Use these details when you remove the physical adapter from the partition.
8. Run the cfgmgr command on the mobile partition to configure the new virtual Fibre Channel client adapter. The lsdev command shows the new adapter as fcs2 in Example 5-11. Our physical adapter is a dual-port adapter, listed as fcs0 and fcs1. The mobile partition's LUNs are attached through the fcs1 port.
Example 5-11 Fibre Channel device listing on the mobile partition

# lsdev|egrep 'fcs*|fscs*'
fcs0    Available 00-00     4Gb FC PCI Express Adapter
fcs1    Available 00-01     4Gb FC PCI Express Adapter
fcs2    Available 70-T1     Virtual Fibre Channel Client
fscsi0  Available 00-00-01  FC SCSI I/O Controller Protocol
fscsi1  Available 00-01-01  FC SCSI I/O Controller Protocol
fscsi2  Available 70-T1-01  FC SCSI I/O Controller Protocol
9. Verify that the partition's disks are enabled on the new virtual Fibre Channel adapters by using the lspath command, as shown in Example 5-12. Because our storage subsystem uses active and passive controller paths, two paths are shown for each disk. Other storage subsystems might use different commands to list available paths and show different output.
Example 5-12 lspath output from the mobile partition

# lspath
Enabled hdisk0 fscsi1
Enabled hdisk0 fscsi1
Enabled hdisk0 fscsi2
Enabled hdisk0 fscsi2
Figure 5-46 shows the mobile partition using a virtual and physical path to disk.
Figure 5-46 The mobile partition using physical and virtual resources
Verify that you are using only the virtual Fibre Channel path to the disk as displayed in Example 5-14.
Example 5-14 Remaining paths after physical adapter has been removed
# lspath
Enabled hdisk0 fscsi2

2. Use your HMC to remove all physical adapter slots from the mobile partition by using dynamic LPAR.
3. Remove all virtual serial adapters in slots 2 and above from the mobile partition by using dynamic LPAR.

Figure 5-47 shows the mobile partition using only virtual resources.
Figure 5-47 The mobile partition using only virtual resources
Ready to migrate
The mobile partition is now ready to be migrated. Close any virtual terminals on the mobile partition, because they will lose connection when the partition migrates to the destination system. Virtual terminals can be reopened when the partition is on the destination system. After the migration is complete, consider adding physical resources back to the mobile partition, if they are available on the destination system. Note: The active mobile partition profile is created on the destination system without any references to any physical I/O slots that were present in your profile on the source system. Any other mobile partition profiles are copied unchanged. Figure 5-48 shows the mobile partition migrated to the destination system.
Figure 5-48 The mobile partition migrated to the destination system
environment supports the corresponding non-enhanced mode, then the hypervisor assigns the enhanced mode to the logical partition when you activate the logical partition. Logical partitions in the POWER6 enhanced processor compatibility mode can run only on POWER6 technology-based servers.

You cannot dynamically change the current processor compatibility mode of a logical partition. To change the current processor compatibility mode, you must change the preferred processor compatibility mode, shut down the logical partition, and restart the logical partition. The hypervisor attempts to set the current processor compatibility mode to the preferred mode that you specified.

A POWER6 processor cannot emulate all features of a POWER5 processor. For example, certain types of performance monitoring might not be available for a logical partition if the current processor compatibility mode of the logical partition is set to the POWER5 mode.

When you move an active logical partition between servers that have different processor types, both the current and preferred processor compatibility modes of the logical partition must be supported by the destination server. When you move an inactive logical partition between servers that have different processor types, only the preferred mode of the logical partition must be supported by the destination server. Table 5-2 lists the current and preferred processor compatibility modes supported on each server type.
Table 5-2 Processor compatibility modes supported by server type

Server processor type                                Supported current modes                       Supported preferred modes
Refreshed POWER6 technology-based server (POWER6+)   POWER5, POWER6, POWER6+, POWER6+ enhanced     default, POWER6, POWER6+, POWER6+ enhanced
POWER6 technology-based server                       POWER5, POWER6, POWER6 enhanced               default, POWER6, POWER6 enhanced
POWER7 technology-based server                       POWER5, POWER6, POWER6+, POWER7               default, POWER6, POWER6+, POWER7
For example, suppose that you want to move an active logical partition from a POWER6 technology-based server to a Refreshed POWER6 technology-based server so that the logical partition can take advantage of the additional capabilities available with the Refreshed POWER6 processor. You set the preferred processor compatibility mode to the default mode, and when you activate the logical partition on the POWER6 technology-based server, it runs in the POWER6 mode.

When you move the logical partition to the Refreshed POWER6 technology-based server, both the current and preferred modes remain unchanged for the logical partition until you restart it. When you restart the logical partition on the Refreshed POWER6 technology-based server, the hypervisor evaluates the configuration. Because the preferred processor compatibility mode is set to the default mode and the logical partition now runs on a Refreshed POWER6 technology-based server, the highest mode available is the POWER6+ mode, and the hypervisor changes the current processor compatibility mode to the POWER6+ mode.

When you want to move the logical partition back to the POWER6 technology-based server, you must change the preferred mode from the default mode to the POWER6 mode (because the POWER6+ mode is not supported on a POWER6 technology-based server) and restart the logical partition on the Refreshed POWER6 technology-based server. When you restart the logical partition, the hypervisor evaluates the configuration. Because the preferred mode is set to POWER6, the hypervisor does not set the current mode to a higher mode than POWER6. Remember, the hypervisor first determines whether it can set the current mode to the preferred mode. If not, it determines whether it can set the current mode to the next highest mode, and so on. In this case, the operating environment supports the POWER6 mode, so the hypervisor sets the current mode to the POWER6 mode, and you can then move the logical partition back to the POWER6 technology-based server.

The easiest way to maintain this flexibility of moving back and forth between different processor types is to determine the processor compatibility modes supported on both the source and destination servers, and then set the preferred processor compatibility mode of the logical partition to the highest mode supported by both servers. In this example, you set the preferred processor compatibility mode to the POWER6 mode, which is the highest mode supported by both POWER6 technology-based servers and Refreshed POWER6 technology-based servers.

The same logic from the previous examples applies to inactive migrations, except that inactive migrations do not require the current processor compatibility mode of the logical partition, because the logical partition is inactive. After you move an inactive logical partition to the destination server and activate it there, the hypervisor evaluates the configuration and sets the current mode for the logical partition, just as it does when you restart a logical partition after an active migration. The hypervisor attempts to set the current mode to the preferred mode. If it cannot, it checks the next highest mode, and so on.

If you specify the default mode as the preferred mode for an inactive logical partition, you can move that inactive logical partition to a server of any processor type. Remember, when you move an inactive logical partition between servers with different processor types, only the preferred mode of the logical partition must be supported by the destination server. Because all servers support the default processor compatibility mode, you can move an inactive logical partition with a preferred mode of default to a server with any processor type. When the inactive logical partition is activated on the destination server, the preferred mode remains set to default, and the hypervisor determines the current mode for the logical partition.
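The modes supported by a given server can also be listed directly from the HMC command line. A sketch, assuming a managed system named dest_system (the returned list is illustrative and varies by server type and firmware level):

$ lssyscfg -r sys -m dest_system -F lpar_proc_compat_modes
default,POWER5,POWER6,POWER6+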
3. If you plan to perform an inactive migration, skip this step and go to step 4 on page 210. If you plan to perform an active migration, identify the current processor compatibility mode of the mobile partition, as follows:
a. In the navigation area of the HMC that manages the source server, expand Systems Management → Servers and select the source server.
b. In the contents area, select the mobile partition and click Properties.
c. Select the Hardware tab and view the Processor Compatibility Mode, which is the current processor compatibility mode of the mobile partition. Record this value so that you can refer to it later.
4. Verify that the preferred and current processor compatibility modes that you identified in step 2 on page 208 and step 3 on page 209 are in the list of supported processor compatibility modes identified in step 1 on page 208 for the destination server. For active migrations, both the preferred and current processor compatibility modes of the mobile partition must be supported by the destination server. For inactive migrations, only the preferred processor compatibility mode must be supported by the destination server.

Attention: If the current processor compatibility mode of the mobile partition is the POWER5 mode, be aware that the POWER5 mode does not appear in the list of modes supported by the destination server. However, the destination server supports the POWER5 mode even though it does not appear in the list of supported modes.

5. If the preferred processor compatibility mode of the mobile partition is not supported by the destination server, use step 2 on page 208 to change the preferred mode to a mode that is supported by the destination server. For example, suppose that the preferred mode of the mobile partition is the POWER6+ mode and you plan to move the mobile partition to a POWER6 technology-based server. Although the POWER6 technology-based server does not support the POWER6+ mode, it does support the POWER6 mode. Therefore, you change the preferred mode to the POWER6 mode.
6. If the current processor compatibility mode of the mobile partition is not supported by the destination server, try the following solutions: a. If the mobile partition is active, a possibility is that the hypervisor has not had the opportunity to update the current mode of the mobile partition. Shut down and reactivate the mobile partition so that the hypervisor can evaluate the configuration and update the current mode of the mobile partition. b. If the current mode of the mobile partition still does not match the list of supported modes that you identified for the destination server, use step 2 on page 208 to change the preferred mode of the mobile partition to a mode that is supported by the destination server. Then, reactivate the mobile partition so that the hypervisor can evaluate the configuration and update the current mode of the mobile partition.
Chapter 6. Migration status
This chapter discusses topics related to migration status and the recovery procedures to be followed when errors occur during the migration of a logical partition. The chapter assumes that you have a working knowledge of Live Partition Mobility prerequisites and actions. This chapter contains the following topics:
- 6.1, Progress and reference code location on page 214
- 6.2, Recovery on page 216
- 6.3, A recovery example on page 218
The same information can be obtained from the HMC's CLI by using the lsrefcode and lslparmigr commands. See 5.7, The command-line interface on page 162 for details. Reference codes describe the progress of the migration. You can find a description of reference codes in SRCs, current state on page 260. When a reference code represents an error, a migration recovery procedure might be required.
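For example, assuming a managed system named source_system and a mobile partition named mobile (placeholder names), the reference code and the migration state might be queried as follows:

$ lsrefcode -r lpar -m source_system --filter lpar_names=mobile -F refcode
$ lslparmigr -r lpar -m source_system --filter lpar_names=mobile -F name,migration_state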
After a migration is issued on the HMC GUI, a progress window similar to the one shown in Figure 6-2 is displayed. The percentage indicates the completion of the memory state transfer during an active migration. In the case of an inactive migration, there is no memory state to transfer, and the value remains zero.
During an inactive migration, only the HMC is involved, and it holds all migration information. An active migration requires the coordination of the mobile partition and the two Virtual I/O Servers that have been selected as mover service partitions. All these objects record migration events in their error logs. You can find a description of partition-related error logs in Operating system error logs on page 266. The mobile partition records the start and the end of the migration process. You may extract the data by using the errpt command, as shown in Example 6-1.
Example 6-1 Migration log on the mobile partition

[mobile:/]# errpt
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME
A5E6DB96   1118164408 I S pmig
08917DC6   1118164408 I S pmig
...
Migration information is also recorded on the Virtual I/O Servers that acted as mover service partitions. To retrieve it, use the errlog command.
Example 6-2 shows the data available on the source mover service partition. The chronologically first event in the log records when the mobile partition was suspended on the source system and activated on the destination system; the second records the successful end of the migration.
Example 6-2 Migration log on the source mover service partition

$ errlog
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
3EB09F5A   1118164408 I S Migration      Migration completed successfully
6CB10B8D   1118164408 I S unspecified    Client partition suspend issued
...
On the destination mover service partition, the error log registers only the end of the migration, as shown in Example 6-3.
Example 6-3 Migration log on the destination mover service partition

$ errlog
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
3EB09F5A   1118164408 I S Migration      Migration completed successfully
...
The error logs on the mobile partition and the Virtual I/O Servers also record events that prevent the migration from succeeding, such as user interruption or network problems. They can be used to trace all migration events on the system.
6.2 Recovery
Live Partition Mobility is designed to verify whether a requested migration can be executed and to monitor all migration processes. If a running migration cannot be completed, a rollback procedure is executed to undo all configuration changes applied. A partition migration might be prevented from running for two main reasons: The migration is not valid and does not meet prerequisites. An external event prevents a migration component from completing its job. The migration validation described in 4.4.1, Performing the validation steps and eliminating errors on page 99 takes care of checking all prerequisites. It can be explicitly executed at any moment and it does not affect the mobile partition. Perform a validation before requesting any migration. The migration process, however, performs another validation before starting any configuration changes.
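From the command line, the validation can be run on its own with the validate operation of the migrlpar command. A sketch with placeholder system and partition names:

$ migrlpar -o v -m source_system -t dest_system -p mobile

Any errors or warnings are reported without any configuration change being made.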
After the inactive or active migration begins, the HMC manages the configuration changes and monitors the status of all involved components. If any error occurs, recovery actions begin automatically. When the HMC cannot perform a recovery, administrator intervention is required to perform problem determination and issue the final recovery steps. This situation might occur when the HMC cannot contact a migration component (for example, the mobile partition, a Virtual I/O Server, or a system service processor) because of a network problem or an operator error. After a timeout, an error message is provided, requesting a recovery.

When a recovery is required, the mobile partition name can appear on both the source and the destination system. The partition is either powered down (inactive migration) or actually running on only one of the two systems (active migration). Configuration cleanup is performed during recovery. While a mobile partition requires a recovery, its configuration cannot be changed; this prevents any attempt to modify its state before it is returned to normal operation. Activating the same partition on two systems is not possible.

Recovery is performed by selecting the migrating partition and then selecting Operations → Mobility → Recover, as shown in Figure 6-3.
A pop-up window opens, similar to the one shown in Figure 6-4, requesting recovery confirmation. Click Recover to start a recovery.

Note: Use the Force recover check box only when:
- The HMC cannot contact one of the migration components that require a new configuration, or the migration was started by another HMC.
- A normal recovery does not succeed.
The same actions performed on the GUI can be executed with the migrlpar command on the HMC's command line. See 5.7, The command-line interface on page 162 for details. After a successful recovery, the partition returns to the normal operation state, and changes to its configuration are again allowed. If the migration is executed again, the validation phase will detect the component that prevented the migration and will either select alternate elements or provide a validation error.
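A sketch of the corresponding recover operation, again with placeholder names:

$ migrlpar -o r -m source_system -p mobile

An equivalent of the Force recover check box is available as a force option of the command; check the migrlpar documentation for your HMC level.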
destination system. Then, it is briefly suspended on the source and immediately reactivated on the destination. We unplugged the network connection of one mover service partition in the middle of a state transfer. We had to perform several tests in order to create this scenario, because the migration of the partition (2 GB of memory) was extremely fast.

In the HMC GUI, the migration process fails and an error message is displayed. Because the migration stopped in the middle of the state transfer, the partition configuration on the two involved systems is kept in the migrating status, waiting for the administrator to identify the problem and decide how to continue. In the HMC, the migrating partition, mobile, is present in both systems, while it is active only on the source system. On the destination system, only the shell of the partition is present. The situation can be viewed by expanding Systems Management → Custom Groups → All partitions. In the content area, a situation similar to Figure 6-5 is shown.
The applications running on the partition have not been affected by the network outage and are running on the source system. The only visible effect is in the partition's error log, which shows the start and the abort of the migration, as described in Example 6-4. No action is required on the partition.
Example 6-4 Migrating partition's error log after an aborted migration

[mobile]# errpt
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
5E075ADF   1118180308 I S pmig           Client Partition Migration Aborted
08917DC6   1118180208 I S pmig           Client Partition Migration Started
Both Virtual I/O Servers that acted as mover service partitions recorded the event in their error logs. On the Virtual I/O Server where the cable was unplugged, we see both the physical network error and the mover service partition communication error, as indicated in Example 6-5.
Example 6-5 Mover service partition with network outage

$ errlog
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
427E17BD   1118181908 P S Migration      Migration aborted: MSP-MSP connection do
0B41DD00   1118181708 I H ent4           ADAPTER FAILURE
...
The other Virtual I/O Server shows only the communication error of the mover service partition, because no physical error occurred on it, as indicated in Example 6-6.
Example 6-6 Mover service partition with communication error

$ errlog
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
427E17BD   1118182108 P S Migration      Migration aborted: MSP-MSP connection do
...
To recover from an interrupted migration, select the mobile partition and then select Operations → Mobility → Recover, as shown in Figure 6-3 on page 217. A pop-up window similar to the one shown in Figure 6-4 on page 218 opens. Click the Recover button, and the partition state is cleaned up (normalized). The mobile partition is then present only on the source system, where it is running, and it is removed from the destination system, where it never executed. After the network outage is resolved, the migration can be issued again. Wait for the RMC protocol to reset communication between the HMC and the Virtual I/O Server that had the network cable unplugged.
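One way to confirm from the HMC command line that RMC communication is back before retrying the migration, assuming the lssyscfg partition attribute rmc_state (the output is illustrative):

$ lssyscfg -r lpar -m source_system -F name,rmc_state
vios1,active
vios2,active
mobile,active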
Chapter 7.
The ioslevel command can be executed from the CLI on the Virtual I/O Server to determine the current version and fix pack level of the Virtual I/O Server and to see whether an upgrade is necessary. The output of this command is shown in Example 7-1.
Example 7-1 The output of the ioslevel command

$ ioslevel
2.1.0.1-FP-20.0
$

Note: On servers that are managed by the Integrated Virtualization Manager, the source and destination Virtual I/O Server logical partitions might also be referred to as the source and destination management partitions.
Note: IVM has reserved virtual slots: slots 0-9 are reserved on the Virtual I/O Server, and slots 0-3 on client partitions. These slots cannot be part of a migration.
Storage requirements
For a list of supported disks and optical devices, see the data sheet available on the Virtual I/O Server support Web site:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html
Network requirements
The migrating partition uses the virtual LAN for network access. The VLAN must be bridged to a physical network by using a Shared Ethernet Adapter in the Virtual I/O Server partition. Your LAN must be configured so that migrating partitions can continue to communicate with other necessary clients and servers after the migration is completed.
b. The destination mover service partition receives the logical partition state information and installs it on the destination server. 6. The Integrated Virtualization Manager suspends the mobile partition on the source server. The source mover service partition continues to transfer the logical partition state information to the destination mover service partition. 7. The hypervisor resumes the mobile partition on the destination server. 8. The Integrated Virtualization Manager completes the migration. All resources that were consumed by the mobile partition on the source server are reclaimed by the source server. The Integrated Virtualization Manager removes the virtual SCSI adapters and the virtual Fibre Channel adapters (that were connected to the mobile partition) from the source Virtual I/O Server logical partitions. 9. You perform post-requisite tasks, such as adding dedicated I/O adapters to the mobile partition or adding the mobile partition to a partition workload group.
of recommended virtual adapter mappings for the mobile partition on the destination server. 5. The Integrated Virtualization Manager prepares the source and destination environments for Partition Mobility. 6. The Integrated Virtualization Manager transfers the partition state from the source environment to the destination environment. 7. The Integrated Virtualization Manager completes the migration. All resources that were consumed by the mobile partition on the source server are reclaimed by the source server. The Integrated Virtualization Manager removes the virtual SCSI adapters and the virtual Fibre Channel adapters (that were connected to the mobile partition) from the source Virtual I/O Server logical partitions. 8. You activate the mobile partition on the destination server. 9. You perform post-requisite tasks, such as establishing virtual terminal connections or adding the mobile partition to a partition workload group.
applications and kernel extensions that have registered to be notified of dynamic reconfiguration events. The operating system either accepts or rejects the migration.
- That the logical memory block size is the same on the source and destination servers
- That the operating system on the mobile partition is AIX or Linux
- That the mobile partition is not the redundant error path reporting logical partition
- That the mobile partition is not configured with barrier synchronization registers (BSR)
- That the mobile partition is not configured with huge pages
- That the mobile partition does not have a Host Ethernet Adapter (or Integrated Virtual Ethernet)
- That the mobile partition state is Active or Running
- That the mobile partition is not in a partition workload group
- The uniqueness of the mobile partition's virtual MAC addresses
- That the required virtual LAN IDs are available on the destination Virtual I/O Server
- That the mobile partition's name is not already in use on the destination server
- The number of current active migrations against the number of supported active migrations
- That the necessary resources (processors and memory) are available to create a shell logical partition on the destination system

During validation, the Integrated Virtualization Manager extracts the device description for each virtual adapter on the Virtual I/O Server logical partition on the source server. The Integrated Virtualization Manager uses the extracted information to determine whether the Virtual I/O Server logical partitions on the destination server can provide the mobile partition with the same virtual SCSI, virtual Ethernet, and virtual Fibre Channel configuration that exists on the source server. This includes verifying that the Virtual I/O Server logical partitions on the destination server have enough available slots to accommodate the virtual adapter configuration of the mobile partition.

To initiate the validation through the Integrated Virtualization Manager:
1. In Partition Management, select View/Modify Partitions.
2. Select the mobile partition in the Partition Details section and select the More Tasks menu.
Figure 7-3 shows the Migrate Partition panel. 4. Ensure that the Remote IVM address, Remote IVM user ID and Remote IVM password are filled in to perform the validation before the actual migration. 5. Click Validate.
Note: Figure 7-3 gives the impression that you can migrate from an IVM-managed system to a remote IVM- or HMC-managed system. However, at the time of this publication, migration between IVM-managed and HMC-managed systems is not supported.
Ethernet, and virtual Fibre Channel configuration that exists on the source server. This includes verifying that the Virtual I/O Server logical partitions on the destination server have enough available slots to accommodate the virtual adapter configuration of the mobile partition.
3. Ensure that the destination server has enough available memory to support the mobile partition: a. Determine the amount of memory that the mobile partition requires: i. From the Partition Management menu, click View/Modify Partitions. The View/Modify Partitions panel opens. ii. Select the mobile partition. iii. From the More Tasks menu, select Properties. A new window named Partition Properties opens. iv. Click the Memory tab. v. Record the minimum, assigned, and maximum memory settings. vi. Click OK.
b. Determine the amount of memory that is available on the destination server: i. From the Partition Management menu, click View/Modify System Properties. The View/Modify System Properties panel opens. ii. Click the Memory tab. iii. From the General tab, record the Current memory available and the Reserved firmware memory.
c. Compare the values from the mobile partition and the destination server. Notes: Keep in mind that when you move the mobile partition to the destination server, the destination server requires more reserved firmware memory to manage the mobile partition. If necessary, you may add more available memory to the destination server to support the migration by dynamically removing memory from the other logical partitions. Use any role other than View Only to modify the memory. Users with the Service Representative (SR) role cannot view or modify storage values.
4. Ensure that the destination server has enough available processors to support the mobile partition: a. Determine how many processors the mobile partition requires: i. From the Partition Management menu, click View/Modify Partitions. The View/Modify Partitions panel opens. ii. Select the logical partition for which you want to view the properties. iii. From the More Tasks menu, select Properties. A new window named Partition Properties opens. iv. Click the Processing tab and record the minimum, maximum, and available processing units settings. v. Click OK. The result of these steps is shown in Figure 7-7.
Figure 7-7 Checking the amount of processing units of the mobile partition
b. Determine the processors available on the destination server: i. From the Partition Management menu, click View/Modify System Properties. The View/Modify System Properties panel opens. ii. Select the Processing tab. iii. Record the Current processing units available.
Figure 7-8 Checking the amount of processing units on the destination server
c. Compare the values from the mobile partition and the destination server. If the destination server does not have enough available processors to support the mobile partition, use the Integrated Virtualization Manager to dynamically remove processors from the other logical partitions on the destination server.

Note: You must have a super administrator role to perform this task.

5. Verify that the source and destination Virtual I/O Servers can communicate with each other.
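As a command-line alternative to the GUI checks in steps 3 and 4, the same values can be read with the lshwres command on the Integrated Virtualization Manager. A sketch with a placeholder partition name (the attribute names are taken from the lshwres documentation and are worth verifying at your level):

$ lshwres -r mem --level sys -F curr_avail_sys_mem,sys_firmware_mem
$ lshwres -r mem --level lpar --filter lpar_names=mobile -F curr_min_mem,curr_mem,curr_max_mem
$ lshwres -r proc --level sys -F curr_avail_sys_proc_units
$ lshwres -r proc --level lpar --filter lpar_names=mobile -F curr_min_proc_units,curr_proc_units,curr_max_proc_units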
If Partition Mobility is not enabled and the feature was purchased with the system, obtain the activation code from the IBM Capacity on Demand (CoD) Web site:
http://www-912.ibm.com/pod/pod
Enter the system type and serial number on the CoD site and click Submit. A list of available activation codes (such as VET or Virtualization Technology Code, POD, or CUoD Processor Activation Code) or keys with a type and description is displayed. If PowerVM Enterprise Edition was not purchased with the system, it can be upgraded through the Miscellaneous Equipment Specification (MES) process. If necessary, enter the activation code in the Integrated Virtualization Manager, as follows:
1. From the IVM Management menu in the navigation area, click Enter PowerVM Edition Key. The Enter PowerVM Edition Key window opens.
2. Enter your activation code for PowerVM Edition and click Apply.
Figure 7-9 shows how to enter the key. When PowerVM Enterprise is enabled, a Mobility section is added to the More Tasks menu in the View/Modify Partitions view.
2. Ensure that the source and destination management partitions can communicate with each other.
3. Verify whether the processor compatibility mode of the mobile partition is supported on the destination server, and update the mode if necessary, so that you can successfully move the mobile partition to the destination server. To verify that the processor compatibility mode of the mobile partition is supported on the destination server by using the Integrated Virtualization Manager:
a. Identify the processor compatibility modes that are supported by the destination server by entering the following command in the command line of the Integrated Virtualization Manager on the destination server:
lssyscfg -r sys -F lpar_proc_compat_modes
Record these values so that you can refer to them later.
b. Identify the processor compatibility mode of the mobile partition on the source server:
i. From the Partition Management menu, click View/Modify Partitions. The View/Modify Partitions window is displayed.
ii. In the contents area, select the mobile partition.
iii. From the More Tasks menu, select Properties. The Partition Properties window opens.
iv. Select the Processing tab.
v. View the Current and Preferred processor compatibility mode values for the mobile partition. Record these values so that you can refer to them later.

Note: In versions earlier than 2.1 of the Integrated Virtualization Manager, the Integrated Virtualization Manager displays only the current processor compatibility mode for the mobile partition.
c. Verify that the processor compatibility modes (which you identified in step b on page 240) are in the list of supported processor compatibility modes (which you identified in step a on page 240) for the destination server. For active migrations, both the preferred and current modes of the mobile partition must be supported by the destination server. For inactive migrations, only the preferred mode must be supported by the destination server.

Note: If the current processor compatibility mode of the mobile partition is the POWER5 mode, be aware that the POWER5 mode does not appear in the list of modes supported by the destination server. However, the destination server does support the POWER5 mode even though it does not appear in the list of supported modes.

d. If the preferred processor compatibility mode of the mobile partition is not supported by the destination server, use step b on page 240 to change the preferred mode to a mode that is supported by the destination server. For example, suppose that the preferred mode of the mobile partition is the POWER6+ mode and you plan to move the mobile partition to a POWER6 technology-based server. The POWER6 technology-based server does not support the POWER6+ mode, but it does support the POWER6 mode. Therefore, you change the preferred mode to the POWER6 mode.
e. If the current processor compatibility mode of the mobile partition is not supported by the destination server, try the following solutions:
i. If the mobile partition is active, the hypervisor might not have had the opportunity to update the current mode of the mobile partition since the preferred mode was last changed. Restart the mobile partition so that the hypervisor can evaluate the configuration and update the current mode of the mobile partition.
ii. If the current mode of the mobile partition still does not appear in the list of supported modes that you identified for the destination server, use step b on page 240 to change the preferred mode of the mobile partition to a mode that is supported by the destination server. Then, restart the mobile partition so that the hypervisor can evaluate the configuration and update the current mode of the mobile partition.
For example, suppose that the mobile partition runs on a Refreshed POWER6 technology-based server and its current mode is the POWER6+ mode. You want to move the mobile partition to a POWER6 technology-based server, which does not support the POWER6+ mode. You change the preferred mode of the mobile partition to the POWER6 mode and restart the mobile partition. The hypervisor evaluates the configuration and sets the current mode to the POWER6 mode, which is supported on the destination server.
4. Ensure that the mobile partition is not part of a partition workload group. A partition workload group identifies a set of logical partitions that are located on the same physical system. A partition workload group is defined when you use the Integrated Virtualization Manager to configure a logical partition. The partition workload group is intended for applications that manage software groups. You must remove the mobile partition from a partition workload group by completing the following steps:
a. From the Partition Management menu, click View/Modify Partitions. The View/Modify Partitions window opens.
b. Select the logical partition that you want to remove from the partition workload group.
c. From the More Tasks menu, select Properties. A new window named Partition Properties opens.
d. In the General tab, clear the Partition workload group participant check box.
e. Click OK.
5. Ensure that the mobile partition does not have physical adapters, as follows:
a. From the Partition Management menu, click View/Modify Partitions. The View/Modify Partitions window opens.
b. Select the mobile partition.
c. From the More Tasks menu, select Properties. A new window named Partition Properties appears.
d. In the Physical Adapters tab, verify that no physical adapters are configured.
e. Click OK.
Note: During inactive migration, the Integrated Virtualization Manager removes physical I/O adapters that are assigned to the mobile partition. 6. Ensure that the applications running in the mobile partition are mobility-safe or mobility-aware. Most software applications running in AIX and Linux logical partitions do not require any changes to work correctly during active Partition Mobility. Certain applications might have dependencies on characteristics that change between the source and destination servers and other applications might have to adjust to support the migration.
The physical adapter on the source Virtual I/O Server logical partition connects to one or more virtual adapters on the source Virtual I/O Server logical partition. Similarly, the physical adapter on the destination Virtual I/O Server logical partition connects to one or more virtual adapters on the destination Virtual I/O Server logical partition. Each virtual adapter on the source Virtual I/O Server logical partition connects to at least one virtual adapter on a client logical partition, and likewise on the destination.

When you move the mobile partition to the destination server, the Integrated Virtualization Manager automatically creates and connects virtual adapters on the destination server, as follows:
- Creates virtual adapters on the destination Virtual I/O Server logical partition
- Creates virtual adapters on the mobile partition
- Connects the virtual adapters on the destination Virtual I/O Server logical partition to the virtual adapters on the mobile partition

Note: The Integrated Virtualization Manager automatically adds and removes virtual SCSI adapters to and from the management partition and the logical partitions when you create and delete a logical partition.

Verify that the destination server provides the same virtual SCSI configuration as the source server, so that the mobile partition can access its physical storage on the SAN after it moves to the destination server:
1. Verify that the physical storage that is used by the mobile partition is assigned to the management partition on the source server and to the management partition on the destination server.
2. Verify that the reserve_policy attributes on the physical volumes are set to no_reserve so that the mobile partition can access its physical storage on the SAN from the destination server. To set the reserve_policy attribute of the physical storage to no_reserve:
a. From either the Virtual I/O Server logical partition on the source server or the Virtual I/O Server on the destination server, list the disks to which the Virtual I/O Server has access:
lsdev -type disk
b. List the attributes of each disk, where hdiskX is the name of a disk that you identified in the previous step:
lsdev -dev hdiskX -attr
(The attribute listing shows, for each attribute, its current value, a description such as Reserve Policy, Persistent Reserve Key Value, or Algorithm, and a True or False flag indicating whether the attribute is user-settable.)
c. If the reserve_policy attribute is set to anything other than no_reserve, set it to no_reserve by running the following command, where hdiskX is the name of the disk for which you want to set the reserve_policy attribute:
chdev -dev hdiskX -attr reserve_policy=no_reserve
3. Verify that the virtual disks have the same unique identifier (UDID), physical identifier (PVID), or IEEE volume attribute, as follows:
a. To verify whether the virtual device has an IEEE volume attribute identifier, run the following command on the Virtual I/O Server:
lsdev -dev hdiskX -attr
If the output does not have the ieee_volname field, the virtual device has no IEEE volume attribute identifier.
b. To verify whether the virtual device has a UDID, type the following commands:
oem_setup_env
odmget -qattribute=unique_id CuAt
exit
Only disks that have a UDID are listed in the output.
c. To verify whether the virtual device has a PVID, run the following command:
lspv
The output shows the disks with their respective PVIDs.
d. If the virtual disks do not have a UDID, IEEE volume attribute identifier, or PVID, assign an identifier, as follows:
i. Upgrade your vendor software and repeat the procedure. Before upgrading, be sure to preserve any virtual SCSI devices that you created.
ii. If the upgrade does not produce a UDID or IEEE volume attribute identifier, run the following command to put a PVID on the physical volume:
chdev -dev hdiskX -attr pv=yes
4. Verify that the mobile partition has access to its physical storage from both the source and destination environments, as follows:
a. From the Virtual Storage Management menu, click View/Modify Virtual Storage.
b. On the Virtual Disk tab, verify that the logical partition does not own any virtual disk.
c. On the Physical Volumes tab, verify that the physical volumes that are mapped to the mobile partition are exportable. See step 3 on page 246, and see Example 7-4.
Example 7-4 The odmget command

$ oem_setup_env
# odmget -qattribute=unique_id CuAt
...
CuAt:
        name = "hdisk7"
        attribute = "unique_id"
        value = "3E213600A0B8000291B080000520C023C6B410F1815        FAStT03IBMfcp"
        type = "R"
        generic = "D"
        rep = "nl"
        nls_index = 79
...
You can now see the physical Fibre Channel adapters that are capable of being used for hosting virtual Fibre Channel adapters. 2. Select the physical adapter to use and click Modify Partition Connections.
The Virtual Fibre Channel Partition Connections window opens. See Figure 7-14.
3. You may now choose to add or remove virtual Fibre Channel adapter assignments for a partition. In this case, you will select the partition of your choice so that a virtual Fibre Channel adapter is created and WWPNs are generated for the client. After you select a partition, the phrase Automatically generate is displayed in the Worldwide Port Names column, as shown in Figure 7-15. Click OK. The WWPNs for the client partition are generated.
Next, verify that the destination server provides the same virtual Fibre Channel configuration as the source server so that the mobile partition can access its physical storage on the SAN after it moves to the destination server, as follows: 1. Verify, for each virtual Fibre Channel adapter on the mobile partition, that both WWPNs are assigned to the same physical storage on the SAN.
View and modify the properties of a logical partition, as follows: a. Select View/Modify Partitions under Partition Management. The View/Modify Partitions window opens. b. Select the logical partition for which you want to view or modify the properties. c. From the More Tasks menu, select Properties. A new window named Partition Properties appears. d. Select the Storage tab to view or to modify the logical partition storage settings. You can view and modify settings for virtual disks and physical volumes. e. Expand the Virtual Fibre Channel section. See Figure 7-16.
f. Click OK to save your changes. The View/Modify Partitions page is displayed. If the logical partition for which you changed the properties is inactive, the changes take effect when you next activate the partition. If the logical partition for which you changed the properties is active and is not capable of DLPAR, you must shut down and reactivate the logical partition before the changes take effect. 2. Verify that the switches to which the physical Fibre Channel adapters on both the source and destination management partitions are cabled support NPIV. 3. Verify that the management partition on the destination server provides a sufficient number of available physical ports for the mobile partition to
maintain access to its physical storage on the SAN from the destination server. In the management GUI on the destination system, you may use the View/Modify Virtual Fibre Channel option as described in step 1 on page 248. 4. Verify the number of physical ports that are available on the destination server, as follows: a. Determine the number of physical ports that the mobile partition uses on the source server: i. From the Partition Management menu, select View/Modify Partitions. The View/Modify Partitions panel opens. ii. Select the mobile partition. iii. From the More Tasks menu, click Properties. A new window called Partition Properties appears. iv. Click the Storage tab. v. Expand the Virtual Fibre Channel section. See Figure 7-17.
vi. Record the number of physical ports that are assigned to the mobile partition and click OK. b. Determine the number of physical ports that are available on the management partition on the destination server: i. From the I/O Adapter Management menu, select View/Modify Virtual Fibre Channel. The View/Modify Virtual Fibre Channel panel opens.
ii. Record the number of physical ports with available connections.
iii. Compare the information that you identified in step a on page 252 to the information that you identified in step b on page 252.

Note: You may also use the lslparmigr command to verify that the destination server provides enough available physical ports to support the virtual Fibre Channel configuration of the mobile partition.

5. You may now choose to validate and migrate the mobile partition to the destination server. After the migration is complete, notice the following points:
- The WWPNs assigned to the virtual Fibre Channel adapters on the partition do not change, but the adapters are now mapped to the physical adapter provided by the destination system.
- The number of partitions connected to the physical adapter increases, and the number of available ports decreases.
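One possible form of that check, based on the HMC syntax of the lslparmigr command (the exact options available on the Integrated Virtualization Manager may differ, so treat this invocation as illustrative):

$ lslparmigr -r virtualio -m source_system -t dest_system --filter lpar_names=mobile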
To prepare your network configuration for Partition Mobility:
1. Configure a virtual Ethernet bridge on the source and destination management partitions, as follows:
a. From the I/O Adapter Management menu, select View/Modify Virtual Ethernet. The View/Modify Virtual Ethernet panel opens.
b. Click the Virtual Ethernet Bridge tab.
c. Set each Physical Adapter field to the physical adapter that you want to use as the virtual Ethernet bridge for each virtual Ethernet network.
d. Click Apply for the changes to take effect. The result of these steps is shown in Figure 7-18.
You may assign a Host Ethernet Adapter (or Integrated Virtual Ethernet) port to a logical partition so that the logical partition can directly access the external network by completing the following steps: a. From the I/O Adapter Management menu, select View/Modify Host Ethernet Adapters. b. Select a port with at least one available connection and click Properties.
c. Select the Connected Partitions tab. d. Select the logical partition that you want to assign to the Host Ethernet Adapter port and click OK. In the Performance area of the General tab you may adjust the settings (such as speed, maximum transmission unit) for the selected Host Ethernet Adapter port. 2. Create at least one virtual Ethernet adapter on the mobile partition: a. From the Partition Management menu, select View/Modify Partitions. b. Select the logical partition to which you want to assign the virtual Ethernet adapter. c. From the More Tasks menu, select Properties. A new window named Partition Properties opens. d. Select the Ethernet tab. The result of these steps is shown in Figure 7-19.
   e. Create a virtual Ethernet adapter on the management partition:
      i. In the Virtual Ethernet Adapters section, click Create Adapter.
      ii. Enter the Virtual Ethernet ID and click OK to exit the Enter Virtual Ethernet ID window.
      iii. Click OK to exit the Partition Properties window.
The result of these steps is shown in Figure 7-20.
   f. Create a virtual Ethernet adapter on a client partition:
      Note: This step is not required for inactive migration.
      i. In the Virtual Ethernet Adapters section, select a virtual Ethernet for the adapter and click OK.
      ii. If no adapters are available, click Create Adapter to add a new adapter to the list, and then repeat the previous step.
3. Activate the mobile partition to establish communication between its virtual Ethernet adapter and the virtual Ethernet adapter of the management partition.
4. Verify that the operating system of the mobile partition recognizes the new Ethernet adapter, as shown in the sketch that follows.
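On an AIX mobile partition, a minimal way to perform this verification is to run the configuration manager and then list the Ethernet adapters; the adapter names in the output depend on your configuration:

   cfgmgr
   lsdev -Cc adapter | grep ent

The cfgmgr command discovers newly defined devices, and lsdev lists the adapters that the operating system currently recognizes.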
If necessary, perform the following optional post-requisite tasks to complete the migration of your logical partition:
1. If you performed an inactive partition migration, activate the mobile partition on the destination server.
2. Add physical adapters to the mobile partition on the destination server.
3. If any virtual terminal connections were lost during the migration, re-establish the connections on the destination server.
4. Assign the mobile partition to a logical partition group.
5. If mobility-unaware applications were terminated on the mobile partition prior to its movement, restart those applications on the destination. A quick state check, such as the one sketched after this list, confirms that the partition is running before you restart applications.
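For example, you can confirm from the destination command line that the partition completed the move and is running; this is a sketch, and the partition name mobile_lpar is a placeholder:

   lssyscfg -r lpar --filter "lpar_names=mobile_lpar" -F name,state

A state of Running indicates that the mobile partition is active on the destination system.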
Appendix A. Error codes
Source system generated error codes

Code             Meaning
VIOSE0104202F    The partition is not the source of the migration. The executed command must be run on the source.
VIOSE01042030    The partition is not in the process of a migration.
VIOSE01042037    An MSP on the source partition cannot communicate with any MSP on the destination managed system.
VIOSE0104204A    The partition with the given ID is not an MSP.
VIOSE01040104    A command run on the Virtual I/O Server failed.
VIOSE01042039    The migration of the partition has been stopped.
VIOSE0104203D    The partition cannot be migrated because it has a virtual Ethernet trunk adapter.
VIOSE01040F04    A warning that the partition has a physical I/O resource assigned to it that will be removed as part of the inactive migration.
VIOSE0104203F    The migrlpar process was unable to finish the migration on the source managed system because other tasks have not finished.
VIOSE01042042    Failed to lock the storage configuration on the source Virtual I/O Server.
VIOSE01042043, VIOSE01042044    Failed to start the transmission of partition data on the source MSP.
VIOSE01042047    The destination manager does not support remote partition migration.
VIOSE01042049    The migration has been stopped by the managed system.
VIOSE0104204D    The source Virtual I/O Server generated an error while processing a virtual LAN configuration.
VIOSE01040F05    The source Virtual I/O Server generated a warning while processing a virtual LAN configuration.
VIOSE01042032    RMC is not active with the migrating partition. RMC needs to be active to perform active migrations.
VIOSE0104203E    The partition cannot be migrated because it is already involved in a migration.
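When one of these source-side codes is reported, additional detail is usually recorded in the error log of the source Virtual I/O Server. As a sketch, from the Virtual I/O Server restricted shell you can list recent entries with:

   errlog

and correlate the newest entries with the time of the failed migration attempt.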
Table A-4 Destination system generated error codes

Code             Meaning
VIOSE01090001    The migration requires a capability that the destination manager does not support.
VIOSE01090003    Unable to find a Virtual I/O Server partition with the given ID on the destination managed system.
VIOSE01090004    Unable to find a Virtual I/O Server partition with the given name on the destination managed system.
VIOSE01090005    The destination managed system does not have access to the storage assigned to the migrating partition.
VIOSE01090006    The maximum number of migrations are already taking place on the destination managed system.
VIOSE01090008    The processor compatibility mode of the migrating partition is not supported by the destination managed system.
VIOSE01090009    The memory region size on the destination managed system is not the same as on the source managed system.
VIOSE0109000A    The target Virtual I/O Server does not support partition mobility.
VIOSE0109000B    A VLAN that is bridged on the source Virtual I/O Server is not bridged on the target. This is only a warning message and does not cause the migration to fail.
                 The name of the migrating partition is already in use on the destination managed system.
                 The destination managed system already has the maximum number of partitions.
                 A given partition name does not match the name of the partition with the given ID.
                 The specified partition is not a mover service partition on the target managed system.
                 This code appears only if the source makes a clean-up request to the target, but the partition on the target is not in the process of a migration.
                 An unhandled extended error was received from firmware.
                 The target managed system does not have enough available memory to create the partition.
                 The target managed system does not have enough available processing units to create the partition.
Code             Meaning
VIOSE01090017    The target managed system does not have enough available processors to create the partition.
VIOSE01090018    The availability priority of the mobile partition is higher than the target management partition.
VIOSE01090019    The maximum processors value exceeds the largest supported processor value on the target managed system.
VIOSE0109001B    A command called locally on the Virtual I/O Server failed.
VIOSE0109001C    The partition with the given ID on the target managed system is not a Virtual I/O Server.
VIOSE0109001D    A Virtual I/O Server partition with the given name does not exist on the target managed system.
VIOSE0109002E    The RMC connection to a partition (either Virtual I/O Server or MSP) is not active.
VIOSE01090030    The destination managed system was not found.
VIOSE01090032    Unable to find the specified IP address on the target MSP.
VIOSE01090033    Not enough memory is available for firmware to use with the new partition.
VIOSE01090034    The processor pool ID specified was not found on the target managed system.
VIOSE01090035    The processor pool name specified was not found on the target managed system.
VIOSE01090036    The command to set the storage configuration for the partition failed.
VIOSE01090037    The command to lock the storage configuration for the partition failed.
VIOSE01090038    The command to start data transmission on the destination MSP failed.
VIOSE01090039    The destination managed system was not able to clean up the migration because not all partitions involved have finished.
VIOSE0109003A    The destination managed system is not capable of taking part in a migration.
VIOSE0109003B    The partition with the specified name is not an MSP.
Code             Meaning
VIOSE0109003C    The destination Virtual I/O Server generated a warning while processing a virtual LAN configuration. This is only a warning and will not cause the migration to fail.
VIOSE0109003D    The destination Virtual I/O Server generated an error while processing a virtual LAN configuration.
VIOSE0109003E    The partition cannot be migrated because the target Virtual I/O Server has already reached its maximum number of virtual slots.
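Many destination-side failures leave the migration in a stopped state that must be recovered before it can be retried. As a sketch, the recover operation of the migrlpar command can be used; the names source_system and mobile_lpar are placeholders:

   migrlpar -o r -m source_system -p mobile_lpar

The -o r flag requests recovery, which returns the partition to a consistent state on the source managed system.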
FDX      full duplex
FLOP     floating point operation
FRU      field replaceable unit
FTP      file transfer protocol
GDPS     Geographically Dispersed Parallel Sysplex
GID      Group ID
GPFS     General Parallel File System
GUI      graphical user interface
HACMP    High-Availability Cluster Multi-Processing
HBA      Host Bus Adapters
HEA      Host Ethernet Adapter
HMC      Hardware Management Console
HPT      hardware page table
HTML     Hypertext Markup Language
HTTP     Hypertext Transfer Protocol
Hz       Hertz
I/O      input/output
IBM      International Business Machines Corporation
ID       Identification
IDE      Integrated Device Electronics
IEEE     Institute of Electrical and Electronics Engineers
IP       Internetwork Protocol
IPAT     IP Address Takeover
IPL      initial program load
IPMP     IP Multipathing
ISV      independent software vendor
ITSO     International Technical Support Organization
IVM      Integrated Virtualization Manager
JFS      journaled file system
JIT      just in time
L1       Level 1
L2       Level 2
L3       Level 3
LA       Link Aggregation
LACP     Link Aggregation Control Protocol
LAN      local area network
LDAP     Lightweight Directory Access Protocol
LED      light emitting diode
LHEA     Logical Host Ethernet Adapter
LMB      logical memory block
LPAR     logical partition
LPP      Licensed Program Product
LUN      logical unit number
LV       logical volume
LVCB     logical volume control block
LVM      Logical Volume Manager
MAC      Media Access Control
Mbps     megabits per second
MBps     megabytes per second
MCM      Multi-Chip Module
ML       maintenance level
MP       multiprocessor
MPIO     multipath I/O
MSP      mover service partition
MTU      maximum transmission unit
NFS      network file system
NIB      Network Interface Backup
NIM      Network Installation Management
NIMOL    NIM on Linux
NTP      Network Time Protocol
NVRAM    non-volatile random access memory
ODM      Object Data Manager
OSPF     Open Shortest Path First
PCI      Peripheral Component Interconnect
PIC      Pool Idle Count
PID      process ID
PKI      public key infrastructure
PLM      Partition Load Manager
PMAPI    Performance Monitor API
PMP      Project Management Professional
POST     power-on self-test
POWER    Performance Optimization with Enhanced Risc (Architecture)
PTF      program temporary fix
PTX      Performance Toolbox
PURR     Processor Utilization Resource Register
PV       physical volume
PVID     physical volume identifier
PVID     Port Virtual LAN Identifier
QoS      Quality of Service
RAID     Redundant Array of Independent Disks
RAM      random access memory
RAS      reliability, availability, and serviceability
RCP      remote copy
RDAC     redundant disk array controller
RIO      remote I/O
RIP      Routing Information Protocol
RISC     reduced instruction set computer
RMC      Resource Monitoring and Control
RPC      remote procedure call
RPL      remote program loader
RPM      Red Hat Package Manager
RSA      Rivest-Shamir-Adleman algorithm
RSCT     Reliable Scalable Cluster Technology
RSH      remote shell
SAN      storage area network
SCSI     Small Computer System Interface
SDD      Subsystem Device Driver
SEA      Shared Ethernet Adapter
SIMD     single-instruction, multiple-data
SMIT     System Management Interface Tool
SMP      symmetric multiprocessor
SMS      System Management Services
SMT      simultaneous multithreading
SP       service processor
SPOT     shared product object tree
SRC      System Resource Controller
SRN      service request number
SSA      Serial Storage Architecture
SSH      Secure Shell
SSL      Secure Sockets Layer
SUID     set user ID
SVC      SAN Virtualization Controller
TCP/IP   Transmission Control Protocol/Internet Protocol
TSA      Tivoli System Automation
UDF      Universal Disk Format
UDID     Universal Disk Identification
VASI     virtual asynchronous services interface
VIPA     virtual IP address
VG       volume group
VGDA     Volume Group Descriptor Area
VGSA     Volume Group Status Area
VLAN     virtual local area network
VP       virtual processor
VPD      vital product data
VPN      virtual private network
VRRP     Virtual Router Redundancy Protocol
VSD      virtual shared disk
WLM      workload manager
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.
IBM Redbooks
For information about ordering these publications, see How to get IBM Redbooks on page 274. Note that several documents referenced here might be available in softcopy only.
- AIX 5L Differences Guide Version 5.3 Edition, SG24-7463
- AIX 5L Practical Performance Tools and Tuning Guide, SG24-6478
- Effective System Management Using the IBM Hardware Management Console for pSeries, SG24-7038
- IBM System p Advanced POWER Virtualization (PowerVM) Best Practices, REDP-4194
- Implementing High Availability Cluster Multi-Processing (HACMP) Cookbook, SG24-6769
- Introduction to pSeries Provisioning, SG24-6389
- Linux Applications on pSeries, SG24-6033
- Managing AIX Server Farms, SG24-6606
- NIM from A to Z in AIX 5L, SG24-7296
- Partitioning Implementations for IBM eServer p5 Servers, SG24-7039
- A Practical Guide for Resource Monitoring and Control (RMC), SG24-6615
- Integrated Virtualization Manager on IBM System p5, REDP-4061
- PowerVM Virtualization on IBM System p: Managing and Monitoring, SG24-7590
- PowerVM Virtualization on IBM System p: Introduction and Configuration Fourth Edition, SG24-7940
- IBM BladeCenter JS12 and JS22 Implementation Guide, SG24-7655
- Integrated Virtual Ethernet Adapter Technical Overview and Introduction, REDP-4340
Other publications
These publications are also relevant as further information sources. Documentation available on the support and services Web site includes:
- User guides
- System management guides
- Application programmer guides
- All commands reference volumes
- Files reference
- Technical reference volumes used by application programmers
The support and services Web site is:
http://www.ibm.com/systems/p/support/index.html
- Virtual I/O Server and support for Power Systems (including the Advanced PowerVM feature):
https://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html
- Linux for pSeries installation and administration (SLES 9):
http://www.ibm.com/developerworks/systems/library/es-pinstall/
- Linux virtualization on POWER5: A hands-on setup guide:
http://www.ibm.com/developerworks/edu/dw-esdd-virtual-i.html
- POWER5 Virtualization: How to set up the IBM Virtual I/O Server:
http://www.ibm.com/developerworks/aix/library/au-aix-vioserver-v2/
- Latest Multipath Subsystem Device Driver User's Guide:
http://www.ibm.com/support/docview.wss?rs=540&context=ST52G7&uid=ssg1S7000303
Online resources
These Web sites are also relevant as further information sources:
- AIX and Linux on Power Systems Community
http://www.ibm.com/systems/power/community/
- Capacity on Demand
http://www.ibm.com/systems/p/advantages/cod/
- IBM PowerVM
http://www.ibm.com/systems/power/software/virtualization/
- IBM System p and AIX Information Center
http://publib16.boulder.ibm.com/pseries/index.htm
- IBM System Planning Tool
http://www.ibm.com/systems/support/tools/systemplanningtool/
- IBM Systems Hardware Information Center
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp
- IBM Systems Workload Estimator
http://www.ibm.com/systems/support/tools/estimator/index.html
- Novell SUSE Linux Enterprise Server information
http://www.novell.com/products/server/index.html
- SCSI Technical Committee T10
http://www.t10.org
- SDDPCM software download page
http://www.ibm.com/support/docview.wss?uid=ssg1S4000201
- SDD software download page
http://www.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1S4000065&loc=en_US&cs=utf-8&lang=en
- Service and productivity tools for Linux on POWER
http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html
- Virtual I/O Server support for Power Systems home
http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html
- Virtual I/O Server supported hardware
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html
- Virtual I/O Server downloads
http://www14.software.ibm.com/webapp/set2/sas/f/vios/download/home.html
Index
A
accounting, AIX 43 active migration 5, 25-26, 31 capability 23, 34 compatibility 23, 34 completion 39 concurrent 35 configuration checks 34 definition 5 differences 14 dirty page 38 entitlements 25 example 9 HMC 13 memory modification 38 migratability 25 migration phase 36 mover service partition 13 MSP selection 41 multiple concurrent migrations 128 preparation 32 prerequisites 25 processor compatibility mode 205-206 reactivation 39 recovery 218 remote 130 requirements 93, 129 shared Ethernet adapter 8 state 31 stopping 41 time 129 validation 216, 218 VIOS selection 40 workflow 6, 13, 34 active profile 28 adapters dedicated 25, 32, 34 physical 25, 29, 34 virtual 29 advanced accounting 43 Advanced POWER Virtualization 4 AIX 28, 35, 43 kernel extensions 39, 43 AIX 5L 51 AIX 6 Live Workload Partitions 16 alternate error log 34 applications check-migrate 35 migration capability 34 prepare for migration 38 reconfiguration notification 39 ARP 39 ASMI 55 attribute mover service partition 21, 93 reserve_policy 92 time reference 22, 94 VASI 93 availability 15 checks 35 requirements 4
B
barrier-synchronization register See BSR basic environments 90-91 battery power 24, 55 bootlist command 156 bosboot command 156 BSR 23, 25, 34, 72
C
capability 28 active migration 34 operating system 34 Capacity on Demand 57, 60, 238 cfgmgr command 154, 157, 201 changes non-reversible 31 reversible 31 rollback 31, 42 chdev command 80-81, 246-247 check-migrate request 35 chvg command 155 CLI 41, 162, 177
command line interface See CLI commands AIX bootlist 156 bosboot 156 cfgmgr 154, 157, 201 chvg 155 errpt 215, 220 extendvg 155 filemon 43 lsdev 154, 157, 201 migratepv 156 reducevg 156 ssh-keygen 163 topas 43 tprof 43 clients lsdev 88 mktcpip 88 HMC lslic 49 lslparmigr 41, 121, 128, 135, 145, 166, 172, 175, 214 lsrefcode 214 lsrsrc 67 lssyscfg 175 migrlpar 129, 145, 175, 218 mkauthkeys 138, 163, 173 ssh 163 IVM chdev 246-247 ioslevel 224 lsdev 245-246 lslparmigr 253 lspv 246-247 lssyscfg 240 lsvet 238 odmget 246 VIOS chdev 80-81, 151 errlog 215, 220 ioslevel 51, 64 lsattr 81 lsdev 79, 191 lslparmigr 172 lsmap 191 lspv 81 mkvdev 151
odmget 80 oem_setup_env 80 compatibility 28 active migration 34 completion active migration 39 configuration memory 37 processor 37 virtual adapter 37 configuration checks 34
D
dedicated adapters 25, 32 dedicated I/O 93 dedicated resources 149 demand paging 38 dirty memory pages 38, 42 disks, internal 34 distance, maximum 88 dynamic reconfiguration event check-migrate 35 post 39 post migration 35 prepare for migration 37
E
environments basic 90 errlog command 215, 220 error logging partition 28 error logs 266 error messages 101 errpt command 215, 220 EtherChannel 87 exclusive-use processor resource set (XRSET) 43 extendvg command 155
F
filemon command 43 firmware 48 supported migration matrix 50
G
gratuitous ARP 39
H
HACMP 16 Hardware Management Console See HMC hardware page table 32 HEA 32 IVE 22 heart-beat 38 help 274 High Availability Cluster Multiprocessing 16 HMC 20 configuration 8, 11 dual configuration 130 local 131 locking mechanism 130 migration progress window 215 preparation 61 recovery actions 217 redundant 23 reference code 214 refresh destination system 139 remote 131 requirements 47 RMC connection 25 roles hmcsuperadmin 56, 138 upgrade 62 hmcsuperadmin role 56, 138 HPT 32 hscroot user role 56 huge pages 25, 34, 74 hypervisor 21, 29, 34, 38, 42 processor compatibility mode 205
I
IEEE volume attribute 80 inactive migration 5, 25, 27 active profile 28 capability 23, 28 compatibility 23, 28 completion phase 31 dedicated I/O adapters 92 definition 5 example 9, 14 HMC 12 huge pages 25 migratability 25 migration phase 29
multiple concurrent migrations 128 partition profile 27, 30 processor compatibility mode 205, 208 remote 130 rollback 31 shared Ethernet adapter 8 stopping 31 validation 28, 216, 218 workflow 5, 12, 30 infrastructure flexibility 3 Integrated Virtual Ethernet 78 See IVE Integrated Virtualization Manager 221 activation of edition key 238 firmware 222 how active migration works 225 how inactive migration works 226 migrating 257 network 253 operating system requirements 224 partition workload group 242 physical adapters 243 preparation 232 processor compatibility mode 240 requirements 222 reserve policy 245 updates 223 validating 253, 257 validation for active migration 226-227, 231 virtual Fibre Channel 248 internal disks 34 invalid state 42 ioslevel command 51, 64, 224 iSCSI 24 IVE 22 HEA 22 LHEA 25
K
kernel extensions 39, 43 check-migrate 35 prepare for migration 38
L
large pages, AIX 43 LHEA 25, 78 Link Aggregation 87 Linux 28, 44, 51
Live Application Mobility 16 Live Partition Mobility high availability 15 PowerVM support 16 preparation 53 remote 130 Live Workload Partitions 16 LMB 34, 54 logical HEA See LHEA logical memory block See LMB logical unit number 31 logical volumes 24, 29 LPAR workload group 25, 28, 32 lsattr command 81 lsdev command 79, 88, 154, 157, 191, 201, 245-246 lslic command 49 lslparmigr command 41, 121, 128, 135, 145, 166, 172, 175, 214, 253 remote capability 168 lsmap command 191 lspv command 81, 246-247 lsrefcode command 214 lsrsrc command 67 lssyscfg command 175, 240 lsvet command 238 LUN mapping 31, 39
M
MAC address 28 uniqueness 35 memory affinity 43 available 56 configuration 37 dirty page 38 footprint 38 LPAR memory size 42 modification 38 pages 38 messages 101, 110 migratability 25 huge pages 25 redundant error path 25 versus partition readiness 25
migratepv command 156 migration active 31 inactive 27 messages 103 errors 101 warnings 101 mover service partition selection 111 processor compatibility mode 205 profile 27 remote 130 shared processor pool selection 114 specifying the destination profile 106 starting state 41 state 31 status window 116 steps 99 validation 110 virtual Fibre Channel 193 virtual SCSI adapter assignment 113 VLAN 112 workflow 26 migration phase active migration 36 migrlpar command 129, 145, 175, 218 example 165 migrate 165 recovery 165 stop 165 validate 165 minimal requirements 91 HMC 91 LMB 91 Network connection 91 partition 91 storage 92 VIOS 91 virtual SCSI 91 mkauthkeys command 138, 163, 173 mktcpip command 88 mkvdev command 151 mobility-aware 79 mobility-safe 79 mover service partition 24 See MSP MPIO 30, 35 MSP 21, 25, 32-33, 36-39, 64 configuration 96, 129, 143 definition 12
error log 215 information 168 lslparmigr command 168, 171 network 42 performance 42 selection 41
N
network performance 42 preparation 87 requirements 8, 12, 52, 131 state transfer 42 network time protocol See NTP new system deployment 4 non-volatile RAM 32 NPIV 7, 31, 39, 187 benefits 188 port enablement 190 switch 189 NTP 22, 32 NVRAM 27, 31-32
O
odmget command 80, 246 oem_setup_env command 80 operating system migration capability 34 requirements 51 version 66
P
pages demand paging 38 transmission 38 partition alternate error log 34 configuration 32 error log 215, 220 error logging 28 functional state 39 information 170 lslparmigr command 170 memory 32 memory size 42 migration capability 34
migration from single to dual VIOS 126 migration recovery 217 minimal requirements 91 mirroring on two VIOS 121 multipath on two VIOS MPIO 124 virtual Fibre Channel 195 name 25, 28, 35 preparation 66 profile 21, 26, 30, 79 quiescing 38 readiness versus migratability 25 recovery 220 redundant error path 25 requirements 7 resumption 38 service 28 shell 30, 37 state 26, 28, 31-32, 35 state transfer 38 type 34 validation 99, 109, 143 visibility 39 workload group 25, 32 partition workload groups 70 performance 42 performance monitor API 22 performance monitoring 43 physical adapters 25, 29, 34 requirements 7 physical I/O 76 physical identifier 80 physical resources 149 pinned memory 43 PMAPI 22, 38 post migration reconfiguration event 35 POWER Hypervisor 21, 34, 38, 42 powered off state 42 PowerVM 16 requirements 16 Workload Partitions Manager 16 PowerVM Enterprise Edition 47 enter activation code 48 view history log 47 prepare for migration event 37 prerequisites 23 processor compatibility mode 205 active migration 206 change 206
current 205, 209 default 208 enhanced 205 examples 206 inactive migration 208 non-enhanced 206 preferred 205, 208 supported 206 verification 208 processors available 58 binding 43 configuration 37 state 32 profile 21, 26 active 28 last activated 27, 30 name 26, 78 pending values 37
R
RAS tools 44 reactivation active migration 39 readiness 24 battery power 24 infrastructure 25 server 24 Red Hat Enterprise Linux 51 Redbooks Web site 274 Contact us xix reducevg command 156 redundant error path reporting 68 remote migration 130131 considerations 135 information 169 infrastructure 133 lslparmigr command 169 migration 141 network test 136 private network 132 requirements 132 workflow 131 required I/O 76 requirements active migration 93 adapters 25 battery power 24
capability 23 compatibility 23 example 9 hardware 7 huge pages 25 memory 25 name 25 network 8, 12, 129 partition 7 physical adapter 7 physical adapters 93 processors 25 redundant error path 25 RMC 93 storage 8, 92 synchronization 32 VASI 93 VIOS 7-8 virtual SCSI 91 workload group 25 reserve_policy attributes 79, 92 resource availability 35 resource balancing 4 Resource Monitoring and Control See RMC resource sets, AIX 43 resource state 32 RMC 20, 24-25, 28, 34, 66, 131 rollback 31, 42
S
SAN 24-25, 32, 131 SCSI reservation 24 SEA 8, 32, 87 server readiness 24 service partition 28 shared Ethernet adapter See SEA shared processor pool 147, 171 CLI 148 information 168 lslparmigr command 168 SIMD 23 SMS 27 SSH key authentication 132 key generation 136 ssh command 163
ssh-keygen command 163 state active partition 32 changes 35 invalid 42 migration starting 41 of resource 32 powered off 42 processor 32 transfer 29 transmission 38 virtual adapter 32 state transfer network 42 stopping active migration 41 inactive migration 31 storage preparation 79 requirements 8, 51 storage area network See SAN storage pool 24 SUSE Linux Enterprise Server 51 suspend window 39 synchronization 32 system preparation 54 reference codes 260 requirements 47 trace 43
T
throttling workload 38 time synchronization 32 time of day 32 time reference configuration 98 time reference partition (TRP) 22 time-of-day clocks 22 synchronization 65 topas command 43 tprof command 43 TRP 22
U
unique identifier 80 uniqueness MAC address 28, 35 partition name 25, 35 upgrade licensed internal code 49
V
validation 121 inactive migration 28 remote migration 138 workflow 28 VASI 21, 37, 39 VIOS configuration 129 dual 121 error log 215 minimal requirements 91 multiple 120 preparation 63 requirements 7-8, 50 See also Virtual I/O Server shared Ethernet failover 120 single to dual 126 VASI 93 version 64 virtual adapter 29 configuration 37 migration map 30, 35, 38, 40 slot numbering 120 state 32 virtual device mapping 33 virtual Ethernet 24 virtual Fibre Channel 24, 30, 35, 38-40, 120, 131, 187 basic configuration 187 benefits 188 migration 193 multipathing 196 preparation 190 requirements 189 worldwide port name (WWPN) 190 Virtual I/O Server 25, 29, 32, 34 information 168 lslparmigr command 168 See also VIOS 21 selection for active migration 40 virtual optical devices 32 virtual SCSI 24, 30, 34-35, 38-39, 120, 131 mappings 25, 165 reserve_policy 92
W
warning messages 101 warnings 103 workflow 26 active migration 34 inactive migration 30 validation 28 workload throttling 38 workload group 25, 28, 32 workload manager 43 workload partition See WPAR WPAR migration 16 requirements 17 WWPN 190
X
XRSET 43
Back cover
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.