IBM Flex System p260 and p460 Planning and Implementation Guide
David Watts
Jose Martin Abeleira
Kerry Anders
Alberto Damigella
Bill Miller
William Powell
ibm.com/redbooks
International Technical Support Organization
June 2012
SG24-7989-00
Note: Before using this information and the product it supports, read the information in
“Notices” on page ix.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
iv IBM Flex System p260 and p460 Planning and Implementation Guide
4.5 IBM POWER7 processor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.5.1 Processor options for Power Systems compute nodes. . . . . . . . . . . 77
4.5.2 Unconfiguring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.5.3 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.6 Memory subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.6.1 Memory placement rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.7 Active Memory Expansion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.8 Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.8.1 Storage configuration impact to memory configuration . . . . . . . . . . . 96
4.8.2 Local storage and cover options . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.8.3 Local drive connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.8.4 RAID capabilities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.9 I/O adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.9.1 I/O adapter slots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.9.2 PCI hubs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.9.3 Available adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.9.4 Adapter naming convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.9.5 IBM Flex System EN4054 4-port 10Gb Ethernet Adapter. . . . . . . . 102
4.9.6 IBM Flex System EN2024 4-port 1Gb Ethernet Adapter. . . . . . . . . 104
4.9.7 IBM Flex System FC3172 2-port 8Gb FC Adapter . . . . . . . . . . . . . 105
4.9.8 IBM Flex System IB6132 2-port QDR InfiniBand Adapter. . . . . . . . 107
4.10 System management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.10.1 Flexible Support Processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.10.2 Serial over LAN (SOL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.10.3 Anchor card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.11 Integrated features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.12 IBM EnergyScale. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.12.1 IBM EnergyScale technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.12.2 EnergyScale device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.13 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.14 Warranty and maintenance agreements . . . . . . . . . . . . . . . . . . . . . . . . 114
4.15 Software support and remote technical support . . . . . . . . . . . . . . . . . . 115
5.4.2 SAN and Fibre Channel redundancy . . . . . . . . . . . . . . . . . . . . . . . 133
5.5 Dual VIOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5.5.1 Dual VIOS on Power Systems compute nodes. . . . . . . . . . . . . . . . 136
5.6 Power planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
5.6.1 Power Systems compute node power supply features . . . . . . . . . . 138
5.6.2 Power Systems compute node PDU and UPS planning . . . . . . . . . 138
5.6.3 Chassis power supplies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
5.6.4 Power supply management policies . . . . . . . . . . . . . . . . . . . . . . . . 142
5.6.5 Power limiting and capping policies . . . . . . . . . . . . . . . . . . . . . . . . 144
5.6.6 Chassis power requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.7 Cooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
5.7.1 IBM Flex System Enterprise Chassis fan population . . . . . . . . . . . 149
5.7.2 Active Energy Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
5.7.3 Supported environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
5.8 Planning for virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
5.8.1 Virtual servers without VIOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.8.2 Virtual server with VIOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Chapter 7. Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
7.1 PowerVM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
7.1.1 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
7.1.2 POWER Hypervisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
7.1.3 Preparing to use the IBM Flex System Manager for partitioning. . . 284
7.2 Creating the VIOS virtual server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
7.2.1 Using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
7.2.2 Using the IBM Flex System Manager . . . . . . . . . . . . . . . . . . . . . . . 288
7.3 Modifying the VIOS definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
7.3.1 Using the IBM Flex System Manager . . . . . . . . . . . . . . . . . . . . . . . 305
7.3.2 Using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
7.4 Creating an AIX or Linux virtual server . . . . . . . . . . . . . . . . . . . . . . . . . . 308
7.5 Creating an IBM i virtual server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
7.6 Preparing for a native operating system installation . . . . . . . . . . . . . . . . 315
7.6.1 Creating a full node server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Intel Xeon, Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
The IBM Flex System p260 and p460 Compute Nodes are IBM Power
Systems™ servers optimized for virtualization, performance, and efficiency. The
nodes support IBM AIX®, IBM i, or Linux operating environments, and are
designed to run various workloads in IBM PureFlex System.
This book is for customers, IBM Business Partners, and IBM technical specialists
who want to understand the new offerings and to plan and implement an IBM
Flex System installation that involves the Power Systems compute nodes.
Kerry Anders is a Consultant for IBM POWER® systems and IBM PowerVM® in
Lab Services for the IBM Systems and Technology Group, based in Austin,
Texas. He is part of the Lab Service core team that implements IBM PureFlex
System and supports clients in implementing IBM Power Systems blades using
Virtual I/O Server, Integrated Virtualization Manager, and AIX. Previously, he
was the Systems Integration Test Team Lead for the IBM BladeCenter JS21
blade with IBM SAN storage using AIX and Linux. His prior work includes test
experience with the JS20 blade, and using AIX and Linux in SAN environments.
Kerry began his career with IBM in the Federal Systems Division supporting
NASA at the Johnson Space Center as a Systems Engineer. He transferred to
Austin in 1993. Kerry has authored four other IBM Redbooks publications, the
most recent being IBM BladeCenter PS703 and PS704 Technical Overview and
Introduction, REDP-4744.
Bill Miller is an IT Specialist in Lab Services Technical Training. He has been
with IBM since 1983. He has had an array of responsibilities, starting in
development, and then moving to roles as a Systems Engineer and IBM Global
Services consultant that focuses on AIX, IBM Tivoli® Storage Manager, and IBM
HACMP™ (IBM PowerHA®) planning and implementation. He is currently
responsible for course development, maintenance, and delivery for the PowerHA
and Flex System curriculums.
Will Powell has been a specialist in hardware and warranty support for
System x, BladeCenter, POWER blades, and IBM iDataPlex® since 2004 at the
IBM Technical Support Center in Atlanta, Georgia. He has particular expertise
and experience with integrated networking, storage, Fibre Channel, InfiniBand,
clustering, RAID, and high-availability computing. He is a corporate member of
the Technology Association of Georgia (TAG). He has provided technical
consulting to Rivers of the World since 2000. Will holds a Bachelor of Science in
Computer Science degree from North Georgia College & State University.
Figure 1 The team (l-r) - David, Martin, Kerry, Will, Alberto, Bill
Mike Easterly
Diana Cunniffe
Kyle Hampton
Botond Kiss
Shekhar Mishra
Justin Nguyen
Sander Kim
Dean Parker
Hector Sanchez
David Tareen
David Walker
Randi Wood
Bob Zuber
Now you can become a published author, too!
Here’s an opportunity to spotlight your skills, grow your career, and become a
published author—all at the same time! Join an ITSO residency project and help
write a book in your area of expertise, while honing your experience using
leading-edge technologies. Your efforts will help to increase product acceptance
and customer satisfaction, as you expand your network of technical contacts and
relationships. Residencies run from two to six weeks in length, and you can
participate either in person or as a remote resident working from your home
base.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the
IBM Redbooks weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Information technology (IT) is a constant part of business and of our lives. IBM
expertise in delivering IT solutions has helped the planet become smarter. As
organizational leaders seek to extract more real value from their data, business
processes, and other key investments, IT is moving to the strategic center
of business.
The IBM PureSystems offerings are optimized for performance and virtualized
for efficiency. These systems offer a no-compromise design with system-level
upgradeability. IBM PureSystems is built for cloud computing, containing
“built-in” flexibility and simplicity.
IBM PureFlex System recommends workload placement based on virtual
machine compatibility and resource availability. Using built-in virtualization
across servers, storage, and networking, the infrastructure system enables
automated scaling of resources and true workload mobility.
IBM PureFlex System combines advanced IBM hardware and software along
with patterns of expertise and integrates them into three optimized configurations
that are simple to acquire and deploy so you get fast time to value for
your solution.
                          Express               Standard              Enterprise
IBM Flex System Manager   IBM Flex System       IBM Flex System       IBM Flex System
software license          Manager with 1-year   Manager Advanced      Manager Advanced
                          service and support   with 3-year service   with 3-year service
                                                and support           and support
Chassis Management        2                     2                     2
Module
IBM Storwize® V7000       Yes (redundant        Yes (redundant        Yes (redundant
Disk System               controller)           controller)           controller)
IBM Storwize V7000        Base with 1-year      Base with 3-year      Base with 3-year
Software                  software maintenance  software maintenance  software maintenance
                          agreement             agreement             agreement
The fundamental building blocks of IBM PureFlex System solutions are the IBM
Flex System Enterprise Chassis, complete with compute nodes, networking,
and storage.
For more details about IBM PureFlex System, see Chapter 2, “IBM PureFlex
System” on page 15.
With the IBM PureApplication System, you can provision your own patterns of
software, middleware, and virtual system resources. You can provision these
patterns within a unique framework that is shaped by IT preferred practices and
industry standards that are culled from many years of IBM experience with
clients and from a deep understanding of smarter computing. These IT preferred
practices and standards are infused throughout the system.
You can use these patterns to achieve the following types of value:
Agility. As you seek to innovate to bring products and services to market
faster, you need fast time-to-value. You can use expertise built into a solution
to eliminate manual steps, automate delivery, and support innovation.
Efficiency. To reduce costs and conserve valuable resources, you must get
the most out of your systems with energy efficiency, simple management, and
fast, automated response to problems. With built-in expertise, you can
optimize your critical business applications and get the most out of
your investments.
IBM PureApplication System is outside the scope of this book. For more details
about it, see the following website:
http://ibm.com/expert
1.3.1 Management
IBM Flex System Manager is designed to optimize the physical and virtual
resources of the IBM Flex System infrastructure while simplifying and automating
repetitive tasks. From easy system set-up procedures with wizards and built-in
expertise, to consolidated monitoring for all of your resources (compute, storage,
networking, virtualization, and energy), IBM Flex System Manager provides core
management functionality along with automation. It is an ideal solution that you
can use to reduce administrative expense and focus your efforts on
business innovation.
1.3.3 Storage
You can use the storage capabilities of IBM Flex System to gain advanced
functionality with storage nodes in your system while taking advantage of your
existing storage infrastructure through advanced virtualization.
1.3.4 Networking
With a range of available adapters and switches to support key network
protocols, you can configure IBM Flex System to fit in your infrastructure while
still being ready for the future. The networking resources in IBM Flex System are
standards-based, flexible, and fully integrated into the system, so you get
no-compromise networking for your solution. Network resources are virtualized
and managed by workload. These capabilities are automated and optimized to
make your network more reliable and simpler to manage.
1.3.5 Infrastructure
The IBM Flex System Enterprise Chassis is the foundation of the offering,
supporting intelligent workload deployment and management for maximum
business agility. The 14-node, 10 U chassis delivers high-performance
connectivity for your integrated compute, storage, networking, and management
resources. The chassis is designed to support multiple generations of technology
and offers independently scalable resource pools for higher utilization and lower
cost per workload.
1.4 IBM Flex System overview
The expert integrated system of IBM PureSystems is based on a new hardware
and software platform called IBM Flex System.
The FSM provides a world-class user experience with a truly “single pane of
glass” approach for all chassis components. Featuring an instant
resource-oriented view of the Enterprise Chassis and its components, the FSM
provides vital information for real-time monitoring.
Beyond the physical world of inventory, configuration, and monitoring, IBM Flex
System Manager enables virtualization and workload optimization for a new
class of computing:
Resource utilization: Within the network fabric, FSM detects congestion,
applies notification policies, and relocates physical and virtual machines,
including their storage and network configurations.
Resource pooling: FSM pools network switching, with placement advisors
that consider VM compatibility, processor, availability, and energy.
Intelligent automation: FSM has automated and dynamic VM placement
based on utilization, energy, hardware predictive failure alerts, or
host failures.
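FSM's placement logic itself is internal to the product, but the kind of decision it automates can be illustrated with a short sketch. The example below is purely hypothetical: the Host fields, scoring weights, and function names are illustrative and are not FSM's actual API. It simply prefers lightly loaded, energy-efficient hosts and excludes any host with an active predictive failure alert (PFA), mirroring the criteria listed above.

```python
# Hypothetical sketch of utilization- and energy-aware VM placement.
# None of these names or weights come from FSM; they only illustrate the idea.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_utilization: float   # current load, 0.0 - 1.0
    energy_watts: float      # current power draw
    pfa_active: bool         # predictive failure alert raised

def placement_score(host: Host) -> float:
    """Lower is better: favor lightly loaded, energy-efficient hosts."""
    if host.pfa_active:
        return float("inf")  # never place a VM on a host predicted to fail
    # Illustrative weighting of utilization against energy draw
    return host.cpu_utilization * 100 + host.energy_watts / 10

def choose_host(hosts: list[Host]) -> Host:
    """Pick the candidate host with the best (lowest) score."""
    return min(hosts, key=placement_score)

hosts = [
    Host("node01", 0.85, 450.0, False),
    Host("node02", 0.30, 380.0, False),
    Host("node03", 0.10, 300.0, True),   # PFA active: excluded
]
print(choose_host(hosts).name)  # node02
```

In this sketch, node03 is the least loaded host but is excluded by its predictive failure alert, so the advisor settles on node02; a real placement advisor would also weigh VM compatibility and network configuration, as the bullets above note.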
The ground-up design of the Enterprise Chassis reaches new levels of energy
efficiency through innovations in power, cooling, and air flow. Smarter controls
and futuristic designs allow the Enterprise Chassis to break free of “one size fits
all” energy schemes.
Figure 1-2 shows the IBM Flex System Enterprise Chassis.
The chassis also provides future expansion capabilities for both existing and
new compute nodes.
Providing innovation, leadership, and choice in the I/O module portfolio uniquely
positions IBM Flex System to provide meaningful solutions to address
customer needs.
Here are the I/O Modules offered with IBM Flex System:
IBM Flex System Fabric EN4093 10Gb Scalable Switch
IBM Flex System EN2092 1Gb Ethernet Scalable Switch
IBM Flex System EN4091 10Gb Ethernet Pass-thru
IBM Flex System FC3171 8Gb SAN Switch
IBM Flex System FC3171 8Gb SAN Pass-thru
IBM Flex System FC5022 16Gb SAN Scalable Switch
IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch
IBM Flex System IB6131 InfiniBand Switch
Figure 1-4 shows the IBM Flex System Fabric EN4093 10Gb Scalable Switch.
Figure 1-4 IBM Flex System Fabric EN4093 10Gb Scalable Switch
This book is a comprehensive guide to IBM PureFlex System and the Power
Systems compute nodes. We introduce the offerings and describe the compute
nodes in detail. We then describe the management features of IBM PureFlex
System and describe partitioning and installing an operating system.
2.1 IBM PureFlex System Express
The tables in this section represent the hardware, software, and services that
make up IBM PureFlex System Express. We describe the following items:
Chassis
Top-of-Rack Ethernet switch
Top-of-Rack SAN switch
Compute nodes
IBM Flex System Manager
IBM Storwize V7000
Rack cabinet
Software
Services
To specify IBM PureFlex System Express in the IBM ordering system, specify the
indicator feature code listed in Table 2-1 for each machine type.
2.1.1 Chassis
Table 2-2 lists the major components of the IBM Flex System Enterprise Chassis
including the switches and options.
Feature codes: The tables in this section do not list all feature codes. Some
features are not listed here for brevity.
3593 A0TB IBM Flex System Fabric EN4093 10Gb Scalable Switch 1
ECB5 A1PJ 3m IBM Passive DAC SFP+ Cable 1 per EN4093 switch
2.1.3 Top-of-Rack SAN switch
If more than one chassis is configured, then a Top-of-Rack SAN switch is added
to the configuration. If only one chassis is configured, then the SAN switch is
optional. Table 2-4 lists the switch components.
Table 2-5 lists the major components of the IBM Flex System p260 Compute
Node.
Memory - 8 GB per core minimum with all DIMM slots filled with same memory type
Table 2-6 lists the major components of the IBM Flex System p24L Compute
Node.
Memory - 2 GB per core minimum with all DIMM slots filled with same memory type
Table 2-7 lists the major components of the IBM Flex System x240 Compute
Node.
1759 A1R1 IBM Flex System CN4054 10Gb Virtual Fabric Adapter 1
(select if x240 without embedded 10Gb Virtual Fabric is
selected - EN21/A1BD)
3767 A1AV 1TB 2.5” SATA 7.2K RPM hot-swap 6 Gbps HDD 1
a. In the AAS system, FC EM09 has pairs of DIMMs. In the XCC system, FC 8941 has single DIMMs.
The DIMMs are otherwise identical.
a. Select one PDU line item from this list. These items are mutually exclusive. Most of them have a
quantity of 2, except for the 16A PDU, which has a quantity of 4. The selection depends on the
customer’s country and utility power requirements.
2.1.8 Software
This section lists the software features of IBM PureFlex System Express.
Table 2-11 Software features for IBM PureFlex System Express with AIX and IBM i on Power
                   AIX V6            AIX V7            IBM i V6.1        IBM i V7.1
Operating system   5765-G62 AIX      5765-G98 AIX      5761-SS1          5770-SS1
                   Standard V6       Standard V7       IBM i V6.1        IBM i V7.1
                   5771-SWM          5771-SWM          5733-SSP          5733-SSP
                   1 yr SWMA         1 yr SWMA         1 yr SWMA         1 yr SWMA
Table 2-12 Software features for IBM PureFlex System Express with RHEL and SLES on Power
Red Hat Enterprise Linux (RHEL) SUSE Linux Enterprise Server (SLES)
Table 2-13 Software features for IBM PureFlex System Express on Intel Xeon based compute nodes
                   Intel Xeon based compute       Intel Xeon based compute
                   nodes (AAS)                    nodes (HVEC)
IBM Flex System    5765-FMX FSM Standard          94Y9782 FSM Standard,
Manager            5660-FMX 1-year software       1-year SWMA
                   maintenance
IBM Flex System    5765-FMS IBM Flex System       94Y9783 IBM Flex System
Manager Advanced   Manager Advanced               Manager Advanced
Operating system   5639-OSX RHEL for x86          5731RSI RHEL for x86 (L3 support only)
                   5639-W28 Windows 2008 R2       5731RSR RHEL for x86 (L1-L3 support)
                   5639-CAL Windows 2008          5731W28 Windows 2008 R2
                   Client Access                  5731CAL Windows 2008 Client Access
To specify IBM PureFlex System Standard in the IBM ordering system, specify
the indicator feature code listed in Table 2-14 for each machine type.
2.2.1 Chassis
Table 2-15 lists the major components of the IBM Flex System Enterprise
Chassis, including the switches and options.
Feature codes: The tables in this section do not list all feature codes. Some
features are not listed here for brevity.
3593 A0TB IBM Flex System Fabric EN4093 10Gb Scalable Switch 1
Table 2-18 lists the major components of the IBM Flex System p460 Compute
Node.
Memory - 8 GB per core minimum with all DIMM slots filled with same memory type
Table 2-19 lists the major components of the IBM Flex System x240 Compute
Node.
1759 A1R1 IBM Flex System CN4054 10Gb Virtual Fabric Adapter 1
(select if x240 without embedded 10Gb Virtual Fabric is
selected - EN21/A1BD)
3767 A1AV 1TB 2.5” SATA 7.2K RPM hot-swap 6 Gbps HDD 1
a. In the AAS system, FC EM09 has pairs of DIMMs. In the XCC system, FC 8941 has single DIMMs.
The DIMMs are otherwise identical.
Table 2-23 Software features for IBM PureFlex System Standard with AIX and IBM i on Power
                   AIX V6            AIX V7            IBM i V6.1        IBM i V7.1
Operating system   5765-G62 AIX      5765-G98 AIX      5761-SS1          5770-SS1
                   Standard V6       Standard V7       IBM i V6.1        IBM i V7.1
                   5773-SWM          5773-SWM          5773-SWM          5773-SWM
                   3 year SWMA       3 year SWMA       3 year SWMA       3 year SWMA
Cloud Software     Not applicable    Not applicable    Not applicable    Not applicable
(optional)
Table 2-24 Software features for IBM PureFlex System Standard with RHEL and SLES on Power
Red Hat Enterprise Linux (RHEL) SUSE Linux Enterprise Server (SLES)
Table 2-25 Software features for IBM PureFlex System Standard on Intel Xeon based compute nodes
                   Intel Xeon based compute       Intel Xeon based compute
                   nodes (AAS)                    nodes (HVEC)
IBM Flex System    5765-FMX FSM Standard          94Y9787 FSM Standard,
Manager            5662-FMX 3-year software       3-year SWMA
                   maintenance
IBM Flex System    5765-FMS IBM Flex System       94Y9783 IBM Flex System
Manager Advanced   Manager Advanced               Manager Advanced
Operating system   5639-OSX RHEL for x86          5731RSI RHEL for x86 (L3 support only)
                   5639-W28 Windows 2008 R2       5731RSR RHEL for x86 (L1-L3 support)
                   5639-CAL Windows 2008          5731W28 Windows 2008 R2
                   Client Access                  5731CAL Windows 2008 Client Access
2.2.9 Services
IBM PureFlex System Standard includes the following services:
Service & Support offerings:
– Software maintenance: 1 year 9x5 (9 hours per day, 5 days per week).
– Hardware maintenance: 3 years 9x5 Next Business Day service.
Maintenance and Technical Support (MTS) offerings:
– 3 years with one microcode analysis per year.
Lab Services:
– 5 days of on-site Lab services
– If the first compute node is a p260 or p460, 6911-300 is specified.
– If the first compute node is an x240, 6911-100 is specified.
To specify IBM PureFlex System Enterprise in the IBM ordering system, specify
the indicator feature code listed in Table 2-26 for each machine type.
Feature codes: The tables in this section do not list all feature codes. Some
features are not listed here for brevity.
3593 A0TB IBM Flex System Fabric EN4093 10Gb Scalable Switch 2
3596 A1EL IBM Flex System Fabric EN4093 10Gb Scalable Switch 2
Upgrade 1
3597 A1EM IBM Flex System Fabric EN4093 10Gb Scalable Switch 2
Upgrade 2
2.3.2 Top-of-Rack Ethernet switch
A minimum of two Top-of-Rack (TOR) Ethernet switches are required in the
Enterprise configuration. Table 2-28 lists the switch components.
Memory: 8 GB per core minimum, with all DIMM slots filled with the same memory type
Table 2-31 lists the major components of the IBM Flex System x240 Compute
Node.
1764 A2N5 IBM Flex System FC3052 2-port 8Gb FC Adapter 1 per
1759 A1R1 IBM Flex System CN4054 10Gb Virtual Fabric Adapter 1 per
(select if x240 without embedded 10Gb Virtual Fabric is
selected - EN21/A1BD)
AAS feature code   XCC feature code   Description                                        Minimum quantity
3767               A1AV               1TB 2.5” SATA 7.2K RPM hot-swap 6 Gbps HDD         1
a. In the AAS system, FC EM09 has pairs of DIMMs. In the XCC system, FC 8941 has single DIMMs.
The DIMMs are otherwise identical.
2.3.8 Software
This section lists the software features of IBM PureFlex System Enterprise.
AIX and IBM i
Table 2-35 lists the software features included with the Enterprise configuration
on POWER7 processor-based compute nodes for AIX and IBM i.
Table 2-35 Software features for IBM PureFlex System Enterprise with AIX and IBM i on Power
AIX 6 AIX 7 IBM i 6.1 IBM i 7.1
Operating system 5765-G62 AIX 5765-G98 AIX 5761-SS1 IBM 5770-SS1 IBM
Standard V6 Standard V7 i V6.1 i V7.1
5773-SWM 5773-SWM 5773-SWM 5773-SWM
3 year SWMA 3 year SWMA 3 year SWMA 3 year SWMA
Cloud Software Not applicable Not applicable Not applicable Not applicable
(optional)
Table 2-36 Software features for IBM PureFlex System Enterprise with RHEL and SLES on Power
Red Hat Enterprise Linux (RHEL) SUSE Linux Enterprise Server (SLES)
Table 2-37 Software features for IBM PureFlex System Enterprise on Intel Xeon based compute nodes

Software                  Intel Xeon based compute nodes (AAS)   Intel Xeon based compute nodes (HVEC)
IBM Flex System Manager   5765-FMX FSM Standard                  94Y9787 FSM Standard, 3 year SWMA
                          5662-FMX 3 year software maintenance
IBM Flex System Manager   5765-FMS IBM Flex System Manager       94Y9783 IBM Flex System Manager
Advanced                  Advanced                               Advanced
Operating system          5639-OSX RHEL for x86                  5731RSI RHEL for x86 - L3 support only
                          5639-W28 Windows 2008 R2               5731RSR RHEL for x86 - L1-L3 support
                          5639-CAL Windows 2008 Client Access    5731W28 Windows 2008 R2
                                                                 5731CAL Windows 2008 Client Access
2.3.9 Services
IBM PureFlex System Enterprise includes the following services:
Service & Support offerings:
– Software maintenance: 1 year 9x5 (9 hours per day, 5 days per week).
– Hardware maintenance: 3 years 9x5 Next Business Day service.
Maintenance and Technical Support (MTS) offerings:
– 3 years with one microcode analysis per year.
Lab Services:
– 7 days of on-site lab services
– If the first compute node is a p260 or p460, 6911-300 is specified.
– If the first compute node is a x240, 6911-100 is specified.
With SmartCloud Entry, you can build on your current virtualization strategies to
continue to gain IT efficiency, flexibility, and control.
You can use existing IBM server investments and virtualized environments to
deploy IBM SmartCloud Entry with the essential cloud infrastructure capabilities:
Reliably track images to ensure compliance and minimize security risks.
Optimize resources, reducing the number of virtualized images and the
storage required for them.
For more information about IBM SmartCloud Entry, go to the following website:
http://ibm.com/systems/cloud
Figure 3-1 IBM Flex System Enterprise Chassis - front and rear
The chassis provides locations for 14 half-wide nodes, four scalable I/O switch
modules, and two Chassis Management Modules. Current node configurations
include half-wide and full-wide options. The chassis supports other
configurations, such as full-wide by double-high. Power and cooling can be
scaled up in a modular fashion as additional nodes are added.
Maximum number of compute nodes supported: 14 half-wide (single bay), 7 full-wide (two bays), or
3 double-height full-wide (four bays). Mixing is supported.
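The bay accounting described above (14 half-wide bays; full-wide nodes take two bays and double-height full-wide nodes take four) can be sketched as follows. This is an illustrative Python sketch, not a configuration tool; the form-factor names are taken from the table.

```python
# Chassis bay accounting per the capacity table above.
BAYS = 14
BAYS_PER_NODE = {
    "half-wide": 1,                # single bay
    "full-wide": 2,                # two bays
    "double-height full-wide": 4,  # four bays
}

def fits(nodes):
    """nodes: list of form-factor strings; True if they fit in one chassis."""
    return sum(BAYS_PER_NODE[n] for n in nodes) <= BAYS

print(fits(["full-wide"] * 7))                 # True  (7 x 2 = 14 bays)
print(fits(["double-height full-wide"] * 4))   # False (4 x 4 = 16 > 14)
```

Mixing form factors is supported, so the check simply sums the bays that each node consumes.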
Feature Specifications
Management One or two Chassis Management Modules, for basic chassis management.
Two CMMs form a redundant pair; one CMM is standard in 8721-A1x. The
CMM interfaces with the integrated management module (IMM) or flexible
service processor (FSP) integrated in each compute node in the chassis.
There is an optional IBM Flex System Manager management appliance for
comprehensive management, including virtualization, networking, and
storage management.
I/O architecture Up to 8 lanes of I/O to an I/O adapter, with each lane capable of up to 16
Gbps of bandwidth. Up to 16 lanes of I/O to a half-wide node with two
adapters. A wide variety of networking solutions are available, including
Ethernet, Fibre Channel, FCoE, and InfiniBand.
Power supplies Six 2500-watt power modules that provide N+N or N+1 redundant power; two
power modules are standard in model 8721-A1x. The power supplies are
80 PLUS Platinum certified and provide 95% efficiency at 50% load and
92% efficiency at 100% load. Power capacity is 2500 W output rated at
200 VAC. Each power supply contains two independently powered
40 mm cooling fan modules.
Fan modules Ten fan modules (eight 80 mm fan modules and two 40 mm fan modules);
Four 80 mm and two 40 mm fan modules are standard in model 8721-A1x.
Figure 3-2 shows a front view of the chassis, with the bay locations identified and
several half-wide nodes installed.
(Figure 3-2 content: front view of the chassis showing node bays 1 - 14, arranged in two columns of
seven, and the chassis information panel.)
Compute nodes based on both Intel Xeon and IBM POWER7 processors offer options
for processor speeds, memory quantities, expansion cards, and internal disk types
and sizes.
3.2 I/O modules
The I/O modules, or switches, provide connectivity from the nodes to networks
outside the chassis and between the nodes within the chassis. These switches are
scalable in the number of internal and external ports that can be enabled, and in
how those ports can be aggregated for bandwidth or partitioned into virtual
switches within a physical switch. The number of internal and external physical
ports exceeds that of previous product generations, and these additional ports
and capabilities can be enabled as requirements grow.
The Enterprise Chassis can accommodate a total of four I/O modules, which are
installed in a vertical orientation into the rear of the chassis, as shown in
Figure 3-3.
(Diagram content: each node's I/O adapter connectors route to the four switch bays. A half-wide
node provides adapter connectors A1 and A2; a full-wide node in bays 13/14 provides A1 - A4, with
links to switch bays 1 - 4.)
Figure 3-4 Connectivity between I/O adapter slots and switch bays
The following Ethernet switches were announced at the time of writing:
IBM Flex System Fabric EN4093 10Gb Scalable Switch
– Total capacity: 42x internal 10 Gb ports, 14x external 10 Gb uplinks, and
2x external 40 Gb uplinks (convertible to 8x 10 Gb)
– Base switch: 10x external 10 Gb uplinks and 14x internal 10 Gb ports
– Upgrade 1: Adds 2x external 40 Gb uplinks and 14x internal 10 Gb ports
– Upgrade 2: Adds 4x external 10 Gb uplinks and 14x internal 10 Gb ports
IBM Flex System EN2092 1Gb Ethernet Scalable Switch
– 28 Internal ports, 20 x 1 Gb and 4 x 10 Gb uplinks
– Base: 14 internal 1 Gb ports, 10 external 1 Gb ports
– Upgrade 1: Adds 14 internal 1 Gb ports, 10 external 1 Gb ports
– Uplinks upgrade: Adds four external 10 Gb uplinks
IBM Flex System EN4091 10Gb Ethernet Pass-thru
– 14x 10 Gb internal server ports
– 14x 10 Gb external SFP+ ports
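The cumulative effect of the EN4093 upgrades listed above can be illustrated with a short sketch. This is illustrative Python, not product firmware logic; the per-upgrade port counts are taken from the list above.

```python
# Port counts enabled on the EN4093 as licenses are applied (cumulative).
UPGRADES = {
    "base":     {"internal": 14, "ext_10gb": 10, "ext_40gb": 0},
    "upgrade1": {"internal": 14, "ext_10gb": 0,  "ext_40gb": 2},
    "upgrade2": {"internal": 14, "ext_10gb": 4,  "ext_40gb": 0},
}

def enabled_ports(applied):
    """Sum the ports enabled by the applied upgrade set."""
    totals = {"internal": 0, "ext_10gb": 0, "ext_40gb": 0}
    for name in applied:
        for kind, count in UPGRADES[name].items():
            totals[kind] += count
    return totals

print(enabled_ports(["base", "upgrade1", "upgrade2"]))
# {'internal': 42, 'ext_10gb': 14, 'ext_40gb': 2}
```

With both upgrades applied, the totals match the switch's full capacity: 42 internal ports, 14 external 10 Gb uplinks, and 2 external 40 Gb uplinks.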
The following Fibre Channel switches were announced at the time of writing:
IBM Flex System FC3171 8Gb SAN Pass-thru
– 28 internal and six external ports: 2, 4, and 8 Gb capable
IBM Flex System FC3171 8Gb SAN Switch
– 28 internal and six external ports: 2, 4, and 8 Gb capable
IBM Flex System FC5022 16Gb SAN Scalable Switch and IBM Flex System
FC5022 24-port 16Gb ESB SAN Scalable Switch
– 28 internal and 20 external ports; 4, 8, and 16 Gb capable
– FC5022 16Gb SAN Scalable Switch: any 12 ports enabled
– FC5022 24-port 16Gb ESB SAN Scalable Switch: any 24 ports enabled
The private management network is the connection for all traffic related to the
remote presence of the nodes, delivery of firmware packages, and a direct
connection to the management controller on each component.
3.3.3 Chassis Management Module
The Chassis Management Module (CMM) is a hot-swap module that is central to
the management of the chassis and is required in each chassis. The CMM
automatically detects any installed modules in the chassis and stores vital
product data (VPD) from the modules. The CMM also acts as an aggregation
point for the chassis nodes and switches, including enabling all of the
management communications by Ethernet connection.
The CMM is also the key component that enables the internal management
network. The CMM has a multiport, L2, 1 Gb Ethernet switch with dedicated links
to all 14 node bays, the four switch bays, and the optional second CMM.
The second optional CMM provides redundancy in active and standby modes,
has the same internal connections as the primary CMM, and is aware of all
activity of the primary CMM through the trunk link between the two CMMs. This
situation ensures that the backup CMM is ready to take over in a
failover situation.
The FSM is available in two editions: IBM Flex System Manager and IBM Flex
System Manager Advanced.
The IBM Flex System Manager base feature set offers the following functionality:
Support for up to four managed chassis
Support for up to 5,000 managed elements
Auto-discovery of managed elements
Overall health status
Monitoring and availability
Hardware management
Security management
Administration
Network management (Network Control)
Storage management (Storage Control)
Virtual machine lifecycle management (VMControl Express)
The IBM Flex System Manager advanced feature set offers all the capabilities of
the base feature set plus:
Image management (VMControl Standard)
Pool management (VMControl Enterprise)
3.4 Power supplies
A minimum of two and a maximum of six power supplies can be installed in the
Enterprise Chassis (Figure 3-5). All power supply modules are combined into a
single power domain in the chassis, which distributes power to each of the
compute nodes and I/O modules through the Enterprise Chassis midplane.
(Figure 3-5 content: rear view of the chassis showing power supply bays 1 - 6, with bays 4 - 6 on
one side and bays 1 - 3 on the other.)
The power supplies are 80 PLUS Platinum certified and are rated at 2500 W
output at 200 VAC, with oversubscription to 3538 W output at 200 VAC. A C20
socket is provided for connection to a power cable, such as a C19-to-C20 cable.
N+N redundancy means that for N required power supplies there are N backup
supplies, so up to N supplies can fail while full power is maintained. N+1 means
that a single backup supply covers the N required supplies.
The redundancy options are configured from the Chassis Management Module
and can be changed nondisruptively. The five policies are shown in Table 3-2.
In addition to the redundancy settings, a power limiting and capping policy can
be enabled by the Chassis Management Module to limit the total amount of
power that a chassis requires.
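The difference between the redundancy policies described above can be sketched with a small calculation. This is an illustrative Python sketch under the stated definitions of N+N and N+1, not CMM logic; the 2500 W rating is from the power supply specification above.

```python
# Usable capacity of the six-bay power domain under each redundancy policy.
def redundant_capacity(installed, policy, supply_watts=2500):
    """Usable capacity (W) while honoring the redundancy policy."""
    if policy == "N+N":
        reserve = installed // 2   # half the supplies back up the other half
    elif policy == "N+1":
        reserve = 1                # one supply is held as the backup
    else:
        reserve = 0                # no redundancy
    return (installed - reserve) * supply_watts

print(redundant_capacity(6, "N+N"))  # 7500 W usable (three supplies reserved)
print(redundant_capacity(6, "N+1"))  # 12500 W usable (one supply reserved)
```

The sketch shows why N+1 yields more usable capacity from the same six supplies, at the cost of tolerating only a single supply failure.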
3.5 Cooling
The Enterprise Chassis uses a front-to-back cooling path: cool air is drawn in at
the front of the chassis and warm air is exhausted to the rear. Air movement is
controlled by hot-swap fan modules at the rear of the chassis and a series of
internal dampers.
The cooling is scaled up as required, based upon the number of nodes installed.
The number of cooling fan modules required for a number of nodes is described
in Table 3-3 on page 61.
With these inputs, each fan module has greater independent granularity in fan
speed control. This results in lower airflow volume (CFM) and lower cooling
energy spent at the chassis level for any configuration and workload.
(Figure content: rear view of the chassis showing fan bays 1 - 10, with fan bays 6 - 10 on one side
and fan bays 1 - 5 on the other.)
Figure 3-7 shows the node cooling zones and fan module locations.
Figure 3-7 Enterprise Chassis node cooling zones and fan module locations
When a node is not inserted in a bay, an airflow damper in the midplane closes to
prevent air from being drawn through the unpopulated bay. Inserting a node into a
bay opens the damper, allowing cooling of the node in that bay.
Table 3-3 shows the relationship between the number of fan modules and the
number of nodes supported.
Configuration    80 mm fan modules   Compute nodes supported
Base             4                   4
First option     6                   8
Second option    8                   14
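The fan-to-node relationship in Table 3-3 can be expressed as a simple lookup. This is an illustrative Python sketch built only from the table values above, not from chassis firmware.

```python
# Table 3-3 as a lookup: 80 mm fan modules -> compute node bays cooled.
FAN_CONFIG = {4: 4, 6: 8, 8: 14}

def fans_required(nodes):
    """Smallest fan configuration from Table 3-3 that cools `nodes` bays."""
    for fans, max_nodes in sorted(FAN_CONFIG.items()):
        if nodes <= max_nodes:
            return fans
    raise ValueError("more nodes than one chassis supports")

print(fans_required(4))   # 4  (base configuration)
print(fans_required(10))  # 8  (second option)
```

Because cooling scales with the installed nodes, a chassis can start with the base four fan modules and add pairs as bays are populated.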
Chassis area: The chassis area for the nodes is effectively one large chamber.
The nodes can be placed in any bay; however, as a preferred practice, place the
nodes as close together as possible so that they are in line with the
fan modules.
Figure 3-8 Enterprise Chassis switch module and Chassis Management Module cooling
zones 3 and 4
The integral power supply fans are not dependent upon the power supply being
functional. Rather, they are powered independently from the midplane.
In this chapter, we describe the server offerings and the technology used in their
implementation. We cover the following topics in this chapter:
Overview
Front panel
Chassis support
System architecture
IBM POWER7 processor
Memory subsystem
Active Memory Expansion
Storage
I/O adapters
System management
Integrated features
IBM EnergyScale
Operating system support
4.1 Overview
The Power Systems compute nodes for IBM Flex System have three variations
tailored to your business needs. They are shown in Figure 4-1.
IBM Flex System p24L Compute Node: A half-wide compute node
IBM Flex System p260 Compute Node: A half-wide compute node
IBM Flex System p460 Compute Node: A full-wide compute node
Figure 4-1 POWER7 based compute nodes - The IBM Flex System p260 Compute Node and IBM Flex
System p24L Compute Node (left) and the IBM Flex System p460 Compute Node (right)
The IBM Flex System p260 Compute Node has the following features:
Two processors with up to 16 POWER7 processing cores
Sixteen DDR3 memory DIMM slots that support IBM Active
Memory Expansion
Supports Very Low Profile (VLP) and Low Profile (LP) DIMMs
Two P7IOC I/O hubs
A RAID-capable SAS controller that supports up to two solid-state drives
(SSDs) or hard disk drives (HDDs)
Two I/O adapter slots
Flexible Support Processor (FSP)
System management alerts
IBM Light Path Diagnostics
USB 2.0 port
IBM EnergyScale technology
Figure 4-2 System board layout of the IBM Flex System p260 Compute Node
The IBM Flex System p460 Compute Node has the following features:
Four processors with up to 32 POWER7 processing cores
Thirty-two DDR3 memory DIMM slots that support IBM Active
Memory Expansion
Supports Very Low Profile (VLP) and Low Profile (LP) DIMMs
Four P7IOC I/O hubs
A RAID-capable SAS controller that supports up to two SSDs or HDDs
Four I/O adapter slots
Flexible Support Processor (FSP)
System management alerts
IBM Light Path Diagnostics
USB 2.0 port
IBM EnergyScale technology
Figure 4-3 shows the system board layout of the IBM Flex System p460 Compute
Node.
Figure 4-3 System board layout of the IBM Flex System p460 Compute Node
The IBM Flex System p24L Compute Node has the following features:
Up to 16 POWER7 processing cores
Sixteen DDR3 memory DIMM slots that support Active Memory Expansion
Supports Very Low Profile (VLP) and Low Profile (LP) DIMMs
Two P7IOC I/O hubs
A RAID-capable SAS controller that supports up to two SSDs or HDDs
Two I/O adapter slots
Flexible Support Processor (FSP)
System management alerts
IBM Light Path Diagnostics
USB 2.0 port
IBM EnergyScale technology
The system board layout for the IBM Flex System p24L Compute Node is
identical to the IBM Flex System p260 Compute Node and is shown in Figure 4-2
on page 66.
4.2 Front panel
The front panel of Power Systems compute nodes has the following common
elements, as shown in Figure 4-4:
One USB 2.0 port
Power button and light path, light-emitting diode (LED) (green)
Location LED (blue)
Information LED (amber)
Fault LED (amber)
The USB port on the front of the Power Systems compute nodes is useful for
various tasks, including out-of-band diagnostic tests, hardware RAID setup,
operating system access to data on removable media, and local OS installation.
It might be helpful to obtain a USB optical (CD or DVD) drive for these purposes,
in case the need arises.
The front panel of the p460 is similar and is shown in Figure 1-3 on page 12.
Tip: There is no optical drive in the IBM Flex System Enterprise Chassis.
The light path panel contains the following LEDs:
LP: Light Path panel power indicator
S BRD: System board LED (might indicate trouble with the processor or
memory as well)
MGMT: Flexible Support Processor (or management card) LED
D BRD: Drive (or Direct Access Storage Device (DASD)) board LED
DRV 1: Drive 1 LED (SSD 1 or HDD 1)
DRV 2: Drive 2 LED (SSD 2 or HDD 2)
ETE: Sidecar connector LED (not present on the IBM Flex System p460
Compute Node)
If problems occur, you can use the light path diagnostics LEDs to identify the
subsystem involved. To illuminate the LEDs with the compute node removed,
press the power button on the front panel. This action temporarily illuminates the
LEDs of the troubled subsystem to direct troubleshooting efforts towards
a resolution.
Typically, an administrator has already obtained this information from the IBM
Flex System Manager or Chassis Management Module before removing the
node, but having the LEDs helps with repairs and troubleshooting if on-site
assistance is needed.
For more information about the front panel and LEDs, see the IBM Flex System
p260 and p460 Compute Node Installation and Service Guide, available from:
http://www.ibm.com/support
4.2.2 Labeling
IBM Flex System offers several options for labeling your server inventory to track
your machines. It is important not to place stickers on the front of the server
across the bezel's grating, because doing so inhibits proper airflow to
the machine.
Pull-out labeling
Each Power Systems compute node has two pull-out tabs that can also
accommodate labeling for the server. The benefit of using these tabs is that
they are affixed to the node itself rather than the chassis, as shown in
Figure 4-8.
4.3 Chassis support
The Power Systems compute nodes can be used only in the IBM Flex System
Enterprise Chassis. They do not fit in the previous IBM modular systems, such as
IBM iDataPlex or IBM BladeCenter.
There is no onboard video capability in the Power Systems compute nodes. The
machines are designed to use Serial Over LAN (SOL) or the IBM Flex System
Manager (FSM).
For more information about the IBM Flex System Enterprise Chassis, see
Chapter 3, “Introduction to IBM Flex System” on page 47. For information about
FSM, see 6.4, “IBM Flex System Manager” on page 192.
(Block diagram content: two POWER7 processors, each with eight DDR3 DIMMs attached through SMI
memory buffers (4 bytes each); each processor connects through a 4-byte GX++ bus to a P7IOC I/O
hub, which provides PCIe 2.0 x8 links to I/O connectors 1 and 2 and to the ETE connector; a SAS
controller attaches the HDDs/SSDs; a PCIe USB controller serves the front panel; the FSP, with
256 MB DDR2 memory, flash, NVRAM, a BCM5387 Ethernet switch, and an Ethernet phy, connects to the
systems management connector; the TPMD and anchor card/VPD complete the service subsystem.)
Figure 4-9 IBM Flex System p260 Compute Node block diagram
In this diagram, you can see the two processor slots, with eight memory slots for
each processor. Each processor is connected to a P7IOC I/O hub, which
connects to the I/O subsystem (I/O adapters and local storage). At the bottom,
you can see a representation of the service processor (FSP) architecture.
The IBM Flex System p460 Compute Node shares many of the same
components as the IBM Flex System p260 Compute Node. The IBM Flex System
p460 Compute Node is a full-wide node, and adds additional processors and
memory along with two more adapter slots. It has the same local storage options
as the IBM Flex System p260 Compute Node.
The IBM Flex System p460 Compute Node system architecture is shown in
Figure 4-10.
(Block diagram content: four POWER7 processors, each with eight DDR3 DIMMs attached through SMI
memory buffers; processors 0 and 1 connect through P7IOC I/O hubs to I/O connectors 1 and 2, and
processors 2 and 3 connect through P7IOC I/O hubs to I/O connectors 3 and 4, each link being
PCIe 2.0 x8; a SAS controller attaches the HDDs/SSDs; a PCIe USB controller serves the front
panel; the FSP subsystem (256 MB DDR2, flash, NVRAM, BCM5387 Ethernet switch, Ethernet phy, TPMD,
and anchor card/VPD) connects to the systems management connector, with an FSPIO link between the
two halves of the node.)
Figure 4-10 IBM Flex System p460 Compute Node block diagram
(Diagram content: POWER7 processors 0, 1, 2, and 3 interconnected by SMP links, 4 bytes each.)
Figure 4-11 IBM Flex System p460 Compute Node processor connectivity
4.5.1 Processor options for Power Systems compute nodes
Table 4-1 defines the processor options for the Power Systems compute nodes.
4.5.2 Unconfiguring
You can order the p260 or p460 with feature code 2319, which reduces the
number of active processor cores in the compute node to lower software
licensing costs.
Feature code   Description                            Min   Max
2319           Factory deconfiguration of one core    0     1 less than the total number of cores
                                                            (for example, for EPR5, the maximum is 7)
The field core override option specifies the number of functional processor
cores that are active in the compute node, and can be used to increase or
decrease that number. The compute node firmware sets the number of active cores
to the entered value, which takes effect when the compute node is rebooted. The
value can be changed only while the compute node is powered off.
For detailed information about the field core override feature, go to the
following website:
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/topic/p7hby/fieldcore.htm
If the system board is replaced, transfer the anchor card from the old system
board to the new system board. If the anchor card is replaced, the information
is transferred from the system board to the new anchor card upon the
next boot.
If both the system board and the anchor card are replaced, then the field core
override option must be used to reset the core count back to the
previous value.
4.5.3 Architecture
IBM uses innovative methods to achieve the required levels of throughput and
bandwidth. Areas of innovation for the POWER7 processor and POWER7
processor-based systems include (but are not limited to) the following elements:
On-chip L3 cache implemented in embedded dynamic random access
memory (eDRAM)
Cache hierarchy and component innovation
Advances in memory subsystem
Advances in off-chip signaling
Figure 4-12 shows the POWER7 processor die layout with major areas identified:
eight POWER7 processor cores, L2 cache, L3 cache and chip power bus
interconnect, SMP links, GX++ interface, and memory controller.
(Die layout: eight POWER7 cores, each with an L2 cache and a 4 MB L3 segment, plus the memory
controller, memory buffers, GX++ bridge, and SMP links.)
The POWER7 processor chip is 567 mm2 and is built using 1,200,000,000
components (transistors). Eight processor cores are on the chip, each with 12
execution units, 256 KB of L2 cache, and access to up to 32 MB of shared
on-chip L3 cache.
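The per-core and per-chip cache figures quoted above are consistent with each other, as a quick arithmetic sketch shows (illustrative Python using only values from the text):

```python
# Cross-check of the POWER7 die figures: eight cores, 256 KB L2 per core,
# and 4 MB of local eDRAM L3 per core forming the shared on-chip L3.
CORES = 8
L2_KB_PER_CORE = 256
L3_MB_PER_CORE = 4

total_l3_mb = CORES * L3_MB_PER_CORE
total_l2_kb = CORES * L2_KB_PER_CORE

print(total_l3_mb)  # 32 (MB of shared on-chip L3)
print(total_l2_kb)  # 2048 (KB of L2 across the chip)
```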
POWER7 processor core
Each POWER7 processor core implements aggressive out-of-order (OoO)
instruction execution to drive high efficiency in the use of available execution
paths. The POWER7 processor has an instruction sequence unit that can
dispatch up to six instructions per cycle to a set of queues. Up to eight
instructions per cycle can be issued to the instruction execution units. The
POWER7 processor has a set of 12 execution units, as follows:
Two fixed-point units
Two load store units
Four double precision floating point units
One vector unit
One branch unit
One condition register unit
One decimal floating point unit
The caches that are tightly coupled to each POWER7 processor core are
as follows:
Instruction cache: 32 KB
Data cache: 32 KB
L2 cache: 256 KB, implemented in fast SRAM
L3 cache: 4 MB eDRAM
Simultaneous multithreading
An enhancement in the POWER7 processor is the addition of simultaneous
multithreading (SMT) mode, known as SMT4 mode, which enables four
instruction threads to run simultaneously in each POWER7 processor core.
Thus, the instruction thread execution modes of the POWER7 processor are
as follows:
SMT1: Single instruction execution thread per core
SMT2: Two instruction execution threads per core
SMT4: Four instruction execution threads per core
SMT4 mode enables the POWER7 processor to maximize the throughput of the
processor core by offering an increase in processor-core efficiency. SMT4 mode
is the latest step in an evolution of multithreading technologies introduced
by IBM.
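The SMT modes listed above determine how many hardware threads the operating system sees. The following is an illustrative Python sketch (not a product tool) combining the SMT modes with the core counts from section 4.1:

```python
# Hardware threads visible to the OS for each SMT mode.
SMT_THREADS = {"SMT1": 1, "SMT2": 2, "SMT4": 4}

def logical_cpus(cores, mode):
    """Logical processors presented for `cores` physical cores in `mode`."""
    return cores * SMT_THREADS[mode]

print(logical_cpus(16, "SMT4"))  # p260 with 16 cores -> 64 threads
print(logical_cpus(32, "SMT4"))  # p460 with 32 cores -> 128 threads
```

In SMT1 mode the same machines present only one thread per core, which suits workloads that need maximum single-thread performance.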
Multi-threading Evolution
The various SMT modes offered by the POWER7 processor provide flexibility,
where you can select the threading technology that meets a combination of
objectives, such as performance, throughput, energy use, and
workload enablement.
Intelligent threads
The POWER7 processor features intelligent threads, which can vary based on
the workload demand. The system automatically selects (or the system
administrator manually selects) whether a workload benefits from dedicating as
much capability as possible to a single thread of work, or if the workload benefits
more from having this capability spread across two or four threads of work. With
more threads, the POWER7 processor delivers more total capacity because
more tasks are accomplished in parallel. With fewer threads, workloads that
require fast, individual tasks get the performance they need for maximum benefit.
Memory access
The POWER7 processor chip in the compute nodes has one DDR3 memory
controller enabled (the second controller is not used, as shown in Figure 4-14),
with four memory channels. Each channel operates at 6.4 Gbps and can address
up to 32 GB of memory. Thus, the POWER7 processor used in these compute
nodes can address up to 128 GB of memory. Figure 4-14 gives a simple
overview of the POWER7 processor memory access structure.
Figure 4-14 POWER7 memory access structure: memory controller with eight high-speed 6.4 GHz channels, using low-power differential signalling to the DDR3 DRAMs
The POWER7 processor can be offered with a single active memory controller
with four channels for servers for which higher degrees of memory parallelism
are not required.
Similarly, the POWER7 processor can be offered with various SMP bus
capacities appropriate to the scaling-point of particular server models.
Figure 4-15 shows the physical packaging options that are supported with
POWER7 processors.
The on-chip L3 cache is organized into separate areas with differing latency
characteristics. Each processor core is associated with a Fast Local Region of L3
cache (FLR-L3), but also has access to other L3 cache regions as shared L3
cache. Additionally, each core can negotiate to use the FLR-L3 cache associated
with another core, depending on reference patterns. Data can also be cloned to
be stored in more than one core's FLR-L3 cache, again, depending on reference
patterns. This intelligent cache management enables the POWER7 processor to
optimize the access to L3 cache lines and minimize overall cache latencies.
The innovation of using eDRAM on the POWER7 processor die is significant for
several reasons:
Latency improvement
A six-to-one latency improvement occurs by moving the L3 cache on-chip,
compared to L3 accesses on an external (on-ceramic) application-specific
integrated circuit (ASIC).
Bandwidth improvement
A 2x bandwidth improvement occurs with on-chip interconnect. Frequency
and bus sizes are increased to and from each core.
No off-chip drivers or receivers
Removing drivers and receivers from the L3 access path lowers interface
requirements, conserves energy, and lowers latency.
Small physical footprint
The performance of eDRAM when implemented on-chip is similar to
conventional SRAM but requires far less physical space. IBM on-chip eDRAM
uses only one-third of the components used in conventional SRAM, which
has a minimum of six transistors to implement a 1-bit memory cell.
Low energy consumption
The on-chip eDRAM uses only 20% of the standby power of SRAM.
For more information about the POWER7 energy management features, see
Adaptive Energy Management Features of the POWER7 Processor, found at the
following website:
http://researcher.watson.ibm.com/researcher/files/us-lefurgy/hotchips22_power7.pdf
Table 4-5 lists the available memory options for the Power Systems compute
nodes.
There are 16 buffered DIMM slots on the p260 and p24L, as shown in
Figure 4-17. The p460 adds two more processors and 16 additional DIMM slots,
divided evenly (eight memory slots) per processor.
Figure 4-17 Memory DIMM topology (IBM Flex System p260 Compute Node): POWER7 processor 0 connects through four SMI buffers to DIMM 1 (P1-C1) through DIMM 8 (P1-C8), and POWER7 processor 1 connects through four SMI buffers to DIMM 9 (P1-C9) through DIMM 16 (P1-C16)
Table 4-6 shows the required placement of memory DIMMs, depending on the number of DIMMs installed. DIMMs are installed in pairs, in supported quantities of 2, 4, 6, 8, 10, 12, 14, or 16; with 16 DIMMs installed, all slots (DIMM 1 through DIMM 16) are populated.
For the IBM Flex System p460 Compute Node, Table 4-7 shows the required
placement of memory DIMMs, depending on the number of DIMMs installed.
Table 4-7 DIMM placement - IBM Flex System p460 Compute Node
The table covers DIMM 1 through DIMM 32, with eight DIMM slots on each of processors 0 through 3. DIMMs are installed in pairs, in supported quantities of 2 through 32; with 32 DIMMs installed, all slots are populated.
This situation allows an AIX V6.1 or later partition to do more work with the same
physical amount of memory, or a server to run more partitions and do more work
with the same physical amount of memory.
Clients have a great deal of control over Active Memory Expansion usage. Each individual AIX partition can turn Active Memory Expansion on or off. Control parameters set the amount of expansion wanted in each partition, and also help control the amount of processor capacity used by the Active Memory Expansion function. An initial program load (IPL) is required for the specific partition that is turning memory expansion on or off. After it is turned on, monitoring capabilities are available in standard AIX performance tools, such as lparstat, vmstat, topas, and svmon.
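As a sketch of how these tools expose Active Memory Expansion activity (the flags shown are typical of recent AIX levels; check your documentation):

```shell
# lparstat -c adds AME columns, such as the compressed pool size and the
# percentage of CPU spent on compression (5 samples, 2 seconds apart)
lparstat -c 2 5

# vmstat -c reports compressed-memory statistics with the usual counters
vmstat -c 2 5

# svmon gives a global view of true versus expanded memory use
svmon -G
```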
Figure 4-18 represents the percentage of processor used to compress memory
for two partitions with various profiles. The green curve corresponds to a partition
that has spare processing power capacity. The blue curve corresponds to a
partition constrained in processing power.
Figure 4-18 CPU utilization for memory expansion, contrasting two profiles: (1) plenty of spare CPU resource available, where expansion is very cost effective, and (2) constrained CPU resource that is already running at significant utilization
Both cases show a knee of the curve relationship for processor resources
required for memory expansion:
Busy processor cores do not have resources to spare for expansion.
The more memory expansion that is done, the more processor resources
are required.
The knee varies, depending on how compressible the memory contents are. This situation demonstrates the need for a case-by-case study to determine whether memory expansion can provide a positive return on investment (ROI). To help you perform this study, a planning tool is included with AIX V6.1 Technology Level 4 or later. You can use this planning tool to sample actual workloads and estimate both how expandable the partition memory is and how much processor resource is needed. The planning tool runs on any Power Systems model.
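The planning tool is invoked as amepat (the Active Memory Expansion Planning and Advisory Tool). The following invocation is a sketch; the duration and file name are assumptions for illustration:

```shell
# Sample the running workload for 60 minutes, then report how compressible
# the partition memory is and the estimated CPU cost at each expansion factor
amepat 60

# Alternatively, record monitoring data to a file and generate the
# report from it later
amepat -R /tmp/amepat.rec 60
amepat -P /tmp/amepat.rec
```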
Figure 4-19 Output from the AIX Active Memory Expansion planning tool
For more information about this topic, see the white paper Active Memory
Expansion: Overview and Usage Guide, available at the following website:
http://www.ibm.com/systems/power/hardware/whitepapers/am_exp.html
4.8 Storage
The Power Systems compute nodes have an onboard SAS controller that can manage up to two non-hot-pluggable internal drives. Both 2.5-inch hard disk
drives (HDDs) and 1.8-inch solid-state drives (SSDs) are supported. The drives
attach to the cover of the server, as shown in Figure 4-20. Even though the p460
is a full-wide server, it has the same storage options as the p260 and the p24L.
Figure 4-20 The IBM Flex System p260 Compute Node showing the hard disk drive
location on the top cover
As you see in Figure 4-20 on page 95, the local drives (HDD or SSD) are
mounted to the top cover of the system. When ordering your Power Systems
compute nodes, choose which cover is appropriate for your system (SSD, HDD,
or no drives).
2.5-inch HDDs
Feature code 7069 (no part number): Top cover with HDD connectors for the p260 and the p24L
Feature code 7066 (no part number): Top cover with HDD connectors for the p460 (full-wide)
1.8-inch SSDs
Feature code 7068 (no part number): Top cover with SSD connectors for the p260 and the p24L
Feature code 7065 (no part number): Top cover with SSD connectors for the p460 (full-wide)
No drives
Feature code 7067 (no part number): Top cover for no drives on the p260 and the p24L
Figure 4-22 Connection for drive interposer card mounted to the system cover
(connected to the system board through a flex cable)
The AIX Disk Array Manager is packaged with the Diagnostics utilities on the
Diagnostics CD. Run smit sasdam to configure the disk drives for use with the
SAS controller. The diagnostics CD can be downloaded in ISO file format from
the following website:
http://www14.software.ibm.com/webapp/set2/sas/f/diags/download/
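As an example of the flow (the sissasraidmgr invocation and the sissas0 controller name are illustrative assumptions; menu paths vary by level):

```shell
# Start the IBM SAS Disk Array Manager menus (create and delete arrays,
# show array status, format drives to 528-byte sectors)
smit sasdam

# List the configuration of a SAS RAID controller directly
sissasraidmgr -L -j3 -l sissas0
```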
For more information, see “Using the Disk Array Manager” in the Systems
Hardware Information Center at the following website:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/p7ebj/sasusingthesasdiskarraymanager.htm
Tip: Depending on your RAID configuration, you might need to create the
array before you install the operating system in the compute node. Before you
can create a RAID array, you must reformat the drives so that the sector size
of the drives changes from 512 bytes to 528 bytes.
If you later decide to remove the drives, delete the RAID array before you
remove the drives. If you decide to delete the RAID array and reuse the
drives, you might need to reformat the drives so that the sector size of the
drives changes from 528 bytes to 512 bytes.
We describe the reference codes associated with the physical adapter slots in
more detail in “Assigning physical I/O” on page 301.
Similarly, you must have an EN4093 10Gb Scalable Switch (Feature Code
#3593), EN2092 1Gb Ethernet Switch (Feature Code #3598) or EN4091 10Gb
Ethernet Pass-thru Switch (Feature Code #3700) installed in bay 1 of
the chassis.
Figure 4-23 The underside of the IBM Flex System EN2024 4-port 1Gb Ethernet Adapter, showing the PCIe connector, the midplane connector, and the guide block that ensures proper installation; adapters share a common size (96.7 mm x 84.8 mm)
Note the large connector, which plugs into one of the I/O adapter slots on the
system board. Also, notice that it has its own connection to the midplane of the
Enterprise Chassis. If you are familiar with IBM BladeCenter systems, recall that several of the BladeCenter expansion cards connect directly to the midplane (such as the CFFh adapters) and others do not (such as the CIOv and CFFv adapters).
4.9.2 PCI hubs
The I/O is controlled by two (IBM Flex System p260 Compute Node) or four (IBM Flex System p460 Compute Node) P7-IOC I/O controller hub chips. This configuration provides additional flexibility when assigning resources within the Virtual I/O Server (VIOS) to specific virtual machines (LPARs).
Table 4-9 Supported I/O adapters for Power Systems compute nodes (feature code and description)
For the EN2024 adapter, the firmware for this four-port adapter is provided by Emulex, while the AIX driver and AIX tool support are provided by IBM.
Table 4-10 lists the ordering part number and feature code.
The IBM Flex System EN4054 4-port 10Gb Ethernet Adapter has the following
features and specifications:
On-board flash memory: 16 MB for FC controller program storage
Uses standard Emulex SLI drivers
Interoperates with existing FC SAN infrastructures (switches, arrays, SRM
tools (including Emulex utilities), SAN practices, and so on)
Provides 10 Gb MAC features, such as MSI-X support, jumbo frame (8 KB) support, VLAN tagging (802.1Q, per-priority pause / priority flow control), and advanced packet filtering
No host operating system changes are required. NIC and HBA functionality (including device management, utilities, and so on) is transparent to the host operating system
Figure 4-25 shows the IBM Flex System EN4054 4-port 10Gb Ethernet Adapter.
Figure 4-25 The EN4054 4-port 10Gb Ethernet Adapter for IBM Flex System
Table 4-11 lists the ordering part number and feature code.
The IBM Flex System EN2024 4-port 1Gb Ethernet Adapter has the
following features:
Connection to 1000BASE-X environments using Ethernet switches
Compliance with US and international safety and emissions standards
Full-duplex (FDX) capability, enabling simultaneous transmission and
reception of data on the Ethernet local area network (LAN)
Preboot Execution Environment (PXE) support
Wake on LAN support
MSI and MSI-X capabilities
Receive Side Scaling (RSS) support
NVRAM: a programmable 4 Mb flash module
Host data transfer: PCIe Gen 2 (one lane)
Figure 4-26 shows the IBM Flex System EN2024 4-port 1Gb Ethernet Adapter.
Figure 4-26 The EN2024 4-port 1Gb Ethernet Adapter for IBM Flex System
Table 4-12 lists the ordering part number and feature code.
The IBM Flex System FC3172 2-port 8Gb FC Adapter has the following features:
Support for Fibre Channel protocol SCSI (FCP-SCSI) and Fibre Channel
Internet protocol (FCP-IP)
Support for point-to-point fabric connection (F-port fabric login)
The IBM Flex System FC3172 2-port 8Gb FC Adapter has the
following specifications:
Bandwidth: 8 Gb per second maximum at half-duplex and 16 Gb per second
maximum at full-duplex per port
Throughput: 3200 MBps (full-duplex)
Support for both FCP-SCSI and IP protocols
Support for point-to-point fabric connections: F-Port Fabric Login
Support for Fibre Channel Arbitrated Loop (FCAL) public loop profile: FL-Port Login
Support for Fibre Channel services class 2 and 3
Support for FCP SCSI initiator and target operation
Support for full-duplex operation
Copper interface AC coupled
Figure 4-27 shows the IBM Flex System FC3172 2-port 8Gb FC Adapter.
Figure 4-27 The FC3172 2-port 8Gb FC Adapter for IBM Flex System
4.9.8 IBM Flex System IB6132 2-port QDR InfiniBand Adapter
The IBM Flex System IB6132 2-port QDR InfiniBand Adapter from Mellanox
provides the highest performing and most flexible interconnect solution for
servers used in Enterprise Data Centers, High-Performance Computing, and
Embedded environments.
Table 4-13 lists the ordering part number and feature code.
The IBM Flex System IB6132 2-port QDR InfiniBand Adapter has the following
features and specifications:
ConnectX-2 based adapter
Virtual Protocol Interconnect (VPI)
InfiniBand Architecture Specification V1.2.1 compliant
IEEE Std. 802.3 compliant
PCI Express 2.0 (1.1 compatible) through an x8 edge connector, up to 5 GT/s
Processor offload of transport operations
CORE-Direct application offload
GPUDirect application offload
Unified Extensible Firmware Interface (UEFI)
Wake on LAN (WoL)
RDMA over Converged Ethernet (RoCE)
End-to-end QoS and congestion control
Hardware-based I/O virtualization
TCP/UDP/IP stateless offload
RoHS-6 compliant
Figure 4-28 The IB6132 2-port QDR InfiniBand Adapter for IBM Flex System
The IBM Flex System p460 Compute Node, even though it is a full-wide system,
has only one Flexible Support Processor.
4.10.2 Serial over LAN (SOL)
The Power Systems compute nodes do not have an on-board video chip and do
not support keyboard, video, and mouse (KVM) connections. Server console
access is obtained by a SOL connection only. SOL provides a means to manage
servers remotely by using a command-line interface (CLI) over a Telnet or
Secure Shell (SSH) connection. SOL is required to manage servers that do not
have KVM support or that are attached to the IBM Flex System Manager. SOL
provides console redirection for both System Management Services (SMS) and
the server operating system. The SOL feature redirects server serial-connection
data over a LAN without requiring special cabling by routing the data using the
Chassis Management Module network interface. The SOL connection enables
Power Systems compute nodes to be managed from any remote location with
network access to the Chassis Management Module.
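For example, a SOL session can be opened by logging in to the Chassis Management Module over SSH and starting a console to the node's bay. The exact CLI syntax varies by CMM firmware level, so treat the following as a sketch:

```shell
# Log in to the Chassis Management Module (address and user ID are examples)
ssh USERID@cmm-hostname

# From the CMM CLI, open a SOL console to the compute node in bay 2
console -T blade[2]

# Exit the SOL session with the configured exit key sequence
# (Esc followed by a left parenthesis on many firmware levels)
```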
The vital product data chip includes information such as machine type, model,
and serial number.
4.12 IBM EnergyScale
IBM EnergyScale technology provides functions that help you understand and dynamically optimize processor performance versus processor power and system workload, and that help control IBM Power Systems power and cooling usage.
Dynamic Power Saver Mode varies processor frequency and voltage, based
on the use of the POWER7 processors. This setting is configured from
BladeCenter Advanced Management Module or IBM Director Active Energy
Manager. Processor frequency and usage are inversely proportional for most
workloads, implying that, as the frequency of a processor increases, its use
decreases, given a constant workload. Dynamic Power Saver Mode takes
advantage of this relationship to detect opportunities to save power, based on
measured real-time system usage.
When a system is idle, the system firmware lowers the frequency and voltage
to Power Saver Mode values. When fully used, the maximum frequency
varies, depending on whether you favor power savings or system
performance. If energy savings are preferred and the system is fully used, the
system can reduce the maximum frequency to 95% of nominal values. If
performance is favored over energy consumption, the maximum frequency is
at least 100% of nominal.
Dynamic Power Saver Mode is mutually exclusive with Power Saver mode.
Only one of these modes can be enabled at one time.
Power capping
Power capping enforces a limit, specified by you, on power usage. Power
capping is not a power-saving mechanism. It enforces power caps by
throttling the processors in the system, degrading performance significantly.
The idea of a power cap is to set a limit that is not expected to be reached, but
that frees up margined power in the data center. The margined power is the
amount of extra power allocated to a server during installation in a data
center. It is based on those server environmental specifications that usually
are never reached because server specifications are always based on
maximum configurations and worst case scenarios. The energy cap is set
and enabled in BladeCenter Advanced Management Module and in IBM
Systems Director Active Energy Manager.
Soft power capping
Soft power capping extends the allowed energy capping range further,
beyond a region that can be guaranteed in all configurations and conditions.
When an energy management goal is to meet a particular consumption limit,
soft power capping is the mechanism to use.
Processor core nap
The IBM POWER7 processor uses a low-power mode called nap that stops
processor execution when there is no work to be done by that processor core.
The latency of exiting nap falls within a partition dispatch (context switch),
such that the IBM POWER Hypervisor™ uses it as a general purpose idle
state. When the operating system detects that a processor thread is idle, it
yields control of a hardware thread to the POWER Hypervisor. The POWER
Hypervisor immediately puts the thread into nap mode. Nap mode allows the
hardware to clock-off most of the circuits inside the processor core. Reducing
active energy consumption by turning off the clocks allows the temperature to
fall, which further reduces leakage (static) power of the circuits, causing a
cumulative effect. Unlicensed cores are kept in core nap mode until they are
licensed, and they return to core nap mode when unlicensed again.
Processor folding
Processor folding is a consolidation technique that dynamically adjusts, over
the short term, the number of processors available for dispatch to match the
number of processors demanded by the workload. As the workload
increases, the number of processors made available increases. As the
workload decreases, the number of processors made available decreases.
Processor folding increases energy savings during periods of low to moderate
workload, because unavailable processors remain in low-power idle
states longer.
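On AIX, processor folding behavior for shared-processor partitions can be examined and tuned with the schedo command; the tunable name below reflects recent AIX levels and should be verified for your release:

```shell
# Display the current processor folding tunable
schedo -o vpm_xvcpus

# Keep one extra virtual processor unfolded beyond the computed demand
schedo -o vpm_xvcpus=1

# Disable processor folding entirely
schedo -o vpm_xvcpus=-1
```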
EnergyScale for I/O
IBM POWER processor-based systems automatically power off pluggable,
Peripheral Component Interconnect (PCI) adapter slots that are empty or not
being used. System firmware automatically scans all pluggable PCI slots at
regular intervals, looking for the ones that meet the criteria of not being in use
and powering them off. This support is available for all POWER
processor-based servers and the expansion units they support.
Details about warranty options and our terms and conditions are at the
following website:
http://www.ibm.com/support/warranties/
4.15 Software support and remote technical support
IBM also offers technical assistance to help solve software-related challenges.
Our team assists with configuration, how-to questions, and setup of your servers.
Information about these options is at the following website:
http://ibm.com/services/us/en/it-services/tech-support-and-maintenance-services.html
Chapter 5. Planning
In this chapter, we describe the steps you should take before ordering and
installing Power Systems compute nodes as part of an IBM Flex System solution.
Memory
Your Power Systems compute node supports a wide range of memory configurations. The memory configuration depends on whether you have internal disks installed, as described in "Hard disk drives (HDDs) and solid-state drives (SSDs)" on page 118. Mixing both types of memory is not recommended. Active Memory Expansion (AME) is available on POWER7, as is Active Memory Sharing (AMS) when using PowerVM Enterprise Edition.
AMS is described in detail in several Redbooks publications, two of which are
IBM PowerVM Virtualization Introduction and Configuration, SG24-7940 and
IBM PowerVM Virtualization Managing and Monitoring, SG24-7590.
Processor
Several processor options are available for both the IBM Flex System p260
Compute Node and the IBM Flex System p460 Compute Node (described in
4.5.1, “Processor options for Power Systems compute nodes” on page 77).
Evaluate the processor quantity and speed options to determine what
processor configuration most closely matches your needs. IBM provides a
measurement (called rperf) that can be used to compare the relative
performance of POWER systems in absolute values. The charts can be found
at the following website:
http://www.ibm.com/systems/power/hardware/reports/system_perf.html
Optical media
The IBM Flex System Enterprise Chassis does not provide CD-ROM or DVD-ROM devices as the previous BladeCenter chassis versions did. If you require a local optical drive, use an external USB drive.
For details about the software available on IBM Power Systems servers, see the
IBM Power Systems Software™ website at:
http://www.ibm.com/systems/power/software/
Note: The p24L supports Virtual I/O Server (VIOS) and Linux only.
The p260 and p460 support the following operating systems and versions.
IBM regularly updates the Virtual I/O Server code. For information about the
latest update, see the Virtual I/O Server website at:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/
The Service Update Management Assistant can help you automate the task of
checking and downloading operating system files, and is part of the base
operating system. For more information about the suma command functionality,
go to the following web page:
http://www14.software.ibm.com/webapp/set2/sas/f/genunix/suma.html
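For example, suma can preview or download the latest fixes from the command line. The following is a sketch; task parameters such as the download target are assumptions:

```shell
# Preview (without downloading) the latest fixes for the installed level
suma -x -a Action=Preview -a RqType=Latest

# Download the latest fixes to a local directory
suma -x -a Action=Download -a RqType=Latest -a DLTarget=/usr/sys/inst.images

# List the current suma global configuration settings
suma -c
```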
AIX V6.1
The supported versions are:
AIX V6.1 with the 6100-07 Technology Level, with Service Pack 3 with
APAR IV14283
AIX V6.1 with the 6100-07 Technology Level, with Service Pack 4, or later (the planned availability is 29 June 2012)
AIX V6.1 with the 6100-06 Technology Level with Service Pack 8, or later (the
planned availability is 29 June 2012)
For information about AIX V6.1 maintenance and support, go to the Fix Central
website at:
http://www.ibm.com/eserver/support/fixes/fixcentral/main/pseries/aix
AIX V7.1
The supported versions are:
AIX V7.1 with the 7100-01 Technology Level with Service Pack 3 with
APAR IV14284
AIX V7.1 with the 7100-01 Technology Level with Service Pack 4, or later (the
planned availability is 29 June 2012)
AIX V7.1 with the 7100-00 Technology Level with Service Pack 6, or later (the
planned availability is 29 June 2012)
For information about AIX V7.1 maintenance and support, go to the Fix Central
website at:
http://www.ibm.com/eserver/support/fixes/fixcentral/main/pseries/aix
IBM i
The supported versions are:
IBM i 6.1 with i 6.1.1 machine code, or later
IBM i 7.1, or later
Virtual I/O Server is required to install IBM i in a Virtual Server on IBM Flex
System p260 Compute Node and IBM Flex System p460 Compute Node. All I/O
must be virtualized.
For a detailed guide about installing and operating IBM i with Power Based
compute nodes, go to the following website:
http://ibm.com/systems/resources/systems_power_hardware_blades_i_on_blade_readme.pdf
Linux operating system licenses are ordered separately from the hardware. You
can obtain Linux operating system licenses from IBM to be included with your
POWER7 processor technology-based servers, or from other Linux distributors.
Important: For systems ordered with the Linux operating system, IBM ships
the most current version available from the distributor. If you require another
version than the one shipped by IBM, you must obtain it by downloading it
from the Linux distributor's website. Information concerning access to a
distributor's website is on the product registration card delivered to you as part
of your Linux operating system order.
For information about the features and external devices supported by Linux, go
to the following website:
http://www.ibm.com/systems/p/os/linux/
For information about SUSE Linux Enterprise Server, go to the following website:
http://www.novell.com/products/server
For information about Red Hat Enterprise Linux Advanced Servers, go to the
following website:
http://www.redhat.com/rhel/features
Important: Be sure to update your system with the latest Linux on Power
service and productivity tools from the IBM website at:
http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html
When you install AIX V6.1 TL7 or AIX V7.1 TL1, you can virtualize through WPARs, as described in 8.3.1, "Installing AIX" on page 364. (Older AIX 5L V5.2 and V5.3 environments can run in WPARs within a Virtual Server running AIX V7.)
Also, Linux installations are supported on the Power Systems compute node.
Supported versions are listed in “Operating system support” on page 119.
If you plan to use advanced features such as Live Partition Mobility or Active
Memory Sharing, the Enterprise Edition is required. Information about these
features is at the following website:
http://ibm.com/systems/power/software/virtualization/editions/
Dual VIOS: If you want a dual VIOS environment, external disk access is
required, as the two internal disks are connected to the same SAS/SATA
controller. The two internal disks are used for the rootvg volume group on one
VIOS only.
Make sure that the external interface ports of the switches selected are
compatible with the physical cabling used or planned to be used in your data
center. Also, make sure that the features and functions required in the network
are supported by the proposed switch modules.
Detailed information about I/O module configuration can be found in IBM
PureFlex System and IBM Flex System Products & Technology, SG24-7984.
Table 5-3 lists the common selection considerations that might be useful when
selecting a switch module.
Requirements compared in Table 5-3 include Layer 3 IPv4 switching (forwarding, routing, and ACL filtering) and Layer 3 IPv6 switching (forwarding, routing, and ACL filtering), both supported by the modules shown.
All IBM Flex System switch modules support the 802.1Q protocol for
VLAN tagging.
To be sure that the deployed application supports logical interfaces, check the application documentation for possible restrictions that apply to NIC teaming configurations, especially in the case of a clustering solution implementation.
For more information about Ethernet switch modules, see IBM PureFlex System
and IBM Flex System Products & Technology, SG24-7984.
5.3 SAN connectivity
SAN connectivity in the Power Systems compute nodes is provided by the
expansion cards. The list of SAN Fibre Channel (FC) adapters currently
supported by the Power Systems compute nodes is listed in Table 5-4. For more
details about the supported expansion cards, see 4.9, “I/O adapters” on page 99.
Important: At the time of writing, the FC3052 2-port 8Gb FC Adapter and
FC5022 2-port 16Gb FC Adapter were not supported by the Power Systems
compute nodes.
The SAN and Fibre Channel I/O modules are installed in the IBM Flex System
chassis. This installation includes SAN switch modules that provide integrated
switching capabilities and pass-through modules that make internal compute
node ports available to the outside.
Use SAN switches whenever possible, because this configuration lets you implement complex configuration and zoning settings inside the chassis or integrate with your existing SAN configuration.
Ensure that the external interface ports of the switches or pass-through modules
selected are compatible with the physical cabling used or planned to be used in
your data center. Also, ensure that the features and functions required in the
SAN are supported by the proposed switch modules or pass-through modules.
In general, a typical LAN infrastructure consists of server NICs, client NICs, and network devices, such as Ethernet switches, and the cables that connect them. The potential failures in a network include port failures (both on switches and servers), cable failures, and network device failures.
– Virtual Link Aggregation Groups (VLAG)
– Virtual Router Redundancy Protocol (VRRP)
– Routing protocol (such as RIP or OSPF)
Figure 5-1 Two network integration topologies (Topology 1 and Topology 2), each showing compute node NICs 1 and 2 connected through chassis Switch 1 and Switch 2 to Enterprise Switch 1 and Enterprise Switch 2 and the rest of the network; Topology 1 includes a trunk connection
Topology 1 in Figure 5-1 has each switch module in the chassis directly
connected to one of the enterprise switches through aggregation links, using
external ports on the switch. The specific number of external ports used for link
aggregation depends on your redundancy requirements, performance
considerations, and real network environments. This topology is the simplest way
to integrate IBM Flex System into an existing network, or to build a new one.
Assume that the link between enterprise switch 2 and chassis switch 1 is
disabled by Spanning Tree Protocol to break a loop, so traffic is going through
the link between enterprise switch 1 and chassis switch 1. If there is a link failure,
Spanning Tree Protocol reconfigures the network and activates the previously
disabled link. The process of reconfiguration can take tens of seconds, and the service is unavailable during this time.
Whenever possible, plan to use trunking with VLAN tagging for interswitch
connections, which can help you achieve higher performance by increasing
interswitch bandwidth, and higher availability by providing redundancy for links in
the aggregation bundle.
STP modifications, such as Port Fast Forwarding or Uplink Fast, might help
improve STP convergence time and the performance of the network
infrastructure. Additionally, several instances of STP might run on the same
switch simultaneously, on a per-VLAN basis (that is, each VLAN has its own
copy of STP to load-balance traffic across uplinks more efficiently).
For example, assume that a switch has two uplinks in a redundant loop topology,
and several VLANs are implemented. If single STP is used, then one of these
uplinks is disabled and the other carries traffic from all VLANs. However, if two
STP instances are running, then one link is disabled for one set of VLANs while
carrying traffic from another set of VLANs, and vice versa. Both links are active,
thus enabling more efficient use of available bandwidth.
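The per-VLAN load-balancing idea can be sketched in a few lines of Python. This is an illustrative model only (the function name and the odd/even assignment rule are invented for the example); it is not switch configuration:

```python
# Sketch: distributing VLAN traffic across two uplinks with per-VLAN STP.
# Assumption for illustration: odd-numbered VLANs forward on uplink A,
# even-numbered VLANs on uplink B; each uplink blocks the other set.

def assign_vlans_to_uplinks(vlans):
    """Return a mapping of uplink name -> list of VLANs it forwards."""
    plan = {"uplink_A": [], "uplink_B": []}
    for vlan in vlans:
        # Each VLAN runs its own STP instance, so its forwarding uplink
        # can be chosen independently of the other VLANs.
        target = "uplink_A" if vlan % 2 else "uplink_B"
        plan[target].append(vlan)
    return plan

plan = assign_vlans_to_uplinks([10, 11, 20, 21])
print(plan)  # both uplinks carry traffic instead of one being fully blocked
```

With single STP, one uplink would be blocked for every VLAN; here each uplink forwards half the VLANs, so both links stay active.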
Layer 2 failover
Depending on the configuration, each compute node can have one IP address per Ethernet port, or one virtual NIC consisting of two or more physical interfaces with a single IP address. This configuration is known as NIC teaming. From an IBM Flex System perspective, NIC teaming is useful when you plan to implement high availability configurations with automatic failover if there are internal or external uplink failures.
Only two ports on a compute node can be used per virtual NIC for high availability configurations. One port is active, and the other is standby. The active port is connected to the switch in I/O bay 1, and the standby port is connected to the switch in I/O bay 2.
If you plan to use an Ethernet expansion card for high availability configurations,
then the same rules apply. Active and standby ports need to be connected to a
switch on separate bays.
If there is an internal port or link failure of the active NIC, the teaming driver
switches the port roles. The standby port becomes active and the active port
becomes standby. This action is done quickly, within a few seconds. After
restoring the failed link, the teaming driver can perform a failback or can do
nothing, depending on the configuration.
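The failover and failback behavior described above can be sketched as a small state machine. The class, method names, and the failback flag below are illustrative, not a real teaming driver API:

```python
# Sketch of the active/standby NIC teaming behavior described above.
# Port names and the failback option are assumptions for illustration.

class NicTeam:
    def __init__(self, failback=True):
        self.active, self.standby = "port1", "port2"
        self.failback = failback

    def link_down(self, port):
        # If the active port loses link, swap roles so traffic keeps flowing.
        if port == self.active:
            self.active, self.standby = self.standby, self.active

    def link_restored(self, port):
        # After the failed link recovers, optionally fail back to it.
        if self.failback and port == self.standby:
            self.active, self.standby = self.standby, self.active

team = NicTeam()
team.link_down("port1")      # active port fails -> port2 becomes active
print(team.active)           # port2
team.link_restored("port1")  # failback enabled -> port1 active again
print(team.active)           # port1
```

With `failback=False`, the restored port simply stays in the standby role, matching the "can do nothing" option mentioned above.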
Look at topology 1 in Figure 5-1 on page 129. Assume that NIC teaming is on, the compute node NIC port connected to switch 1 is active, and the other port is on standby. If something goes wrong with the internal link to switch 1, the teaming driver detects the NIC port failure and performs a failover. But what happens if an external connection is lost (for example, the connection from chassis switch 1 to enterprise switch 1)? The answer is that nothing happens, because the internal link is still up and the teaming driver does not detect any failure. So the network service becomes unavailable.
To address this issue, the Layer 2 Failover technique is used. Layer 2 Failover can disable all internal ports on the switch module if there is an upstream link failure. A disabled port means no link, so the NIC teaming driver performs a
failover. This special feature is supported on the IBM Flex System and
BladeCenter switch modules. Thus, if Layer 2 Failover is enabled and you lose
connectivity with Enterprise Switch 1, then the NIC Teaming driver performs a
failover and the service is available through Enterprise Switch 2 and chassis
switch 2.
Layer 2 Failover is used with active/standby NIC teaming. Before using NIC teaming, verify that it is supported by the operating system and applications.
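The Layer 2 Failover trigger condition reduces to a simple rule: if every monitored uplink on the switch module is down, take the internal ports down too, so the teaming driver on the node fails over. A minimal sketch (the function name is invented for illustration):

```python
# Sketch: Layer 2 Failover disables the internal ports when the switch
# module's upstream (uplink) ports all go down, forcing the NIC teaming
# driver on the compute node to fail over to the other I/O bay.

def internal_ports_enabled(uplink_states):
    """uplink_states: list of booleans, True = uplink up."""
    # If every configured uplink is down, take the internal ports down too.
    return any(uplink_states)

print(internal_ports_enabled([True, False]))   # True: at least one uplink up
print(internal_ports_enabled([False, False]))  # False: trigger failover
```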
Figure 5-2 VLAG peers in the aggregation layer, connected by an ISL, with access layer switches and servers below
A switch in the access layer might be connected to more than one switch in the
aggregation layer to provide network redundancy. Typically, the Spanning Tree
Protocol is used to prevent broadcast loops, blocking redundant uplink paths.
This setup has the unwanted consequence of reducing the available bandwidth
between the layers by as much as 50%. In addition, STP might be slow to
resolve topology changes that occur during a link failure, which can result in
considerable MAC address flooding.
Using Virtual Link Aggregation Groups (VLAGs), the redundant uplinks remain
active, using all the available bandwidth. Using the VLAG feature, the paired
VLAG peers appear to the downstream device as a single virtual entity for
establishing a multiport trunk. The VLAG-capable switches synchronize their
logical view of the access layer port structure and internally prevent implicit
loops. The VLAG topology also responds more quickly to link failure and does
not result in unnecessary MAC address flooding.
VLAGs are also useful in multi-layer environments for both uplink and downlink
redundancy to any regular LAG-capable device, as shown in Figure 5-3.
Figure 5-3 VLAGs in a multi-layer environment: VLAGs 1 through 6 formed between VLAG peer pairs (joined by ISLs), with an LACP-capable switch and LACP-capable servers downstream
In general, a typical SAN infrastructure consists of storage FCs, client FCs, and
SAN devices, such as SAN switches and the cables that connect them. The
potential failures in a SAN include port failures (both on the switches and in
storage), cable failures, and device failures.
Figure 5-4 Dual-FC adapter and dual-SAN switch redundancy connection, as applied to the IBM Flex System p460 Compute Node (paths through both FC switches to V7000 and DS3400 storage)
This configuration might be improved by adding multiple paths from each Fibre
Channel switch in the chassis to the SAN, which protects against a cable failure.
Another scenario for the p260 is one in which the redundancy in the configuration is provided by the Fibre Channel switches in the chassis. There is no hardware redundancy on the compute node, as it has only two expansion cards, with one used for Ethernet access and the other for Fibre Channel access. For this reason, if the Fibre Channel or Ethernet adapter fails on the compute node, redundancy is not maintained. Figure 5-5 shows this scenario.
Figure 5-5 Dual-SAN switch connection with the IBM Flex System p260 Compute Node (one Ethernet adapter and one FC adapter, with redundant paths through the chassis switches to V7000 storage)
The p260 supports only two expansion cards. Internal disks are attached to only one PCI bus, so only one VIOS can manage internal disks. Therefore, to have dual VIOS, you need two Fibre Channel adapters and one Ethernet adapter, which is one card more than the two slots available on the p260.
With IBM Flex System Manager, the creation of virtual servers and the type of
operating system environment that they support can occur before any operating
system installation. The only limitation from a dual VIOS perspective is the
availability of disk and network physical resources. Physical resource
assignment to a virtual server is made at the level of the expansion card slot or
controller slot (physical location code). Individual ports and internal disks cannot
be individually assigned. This type of assignment is not unique to Power
Systems compute nodes and is a common practice for all Power platforms.
A dual VIOS environment setup requires the creation of the two virtual servers,
both of which are set for a VIOS environment. After the virtual servers are
created with the appropriate environment setting and physical resources
assigned to support independent disk and network I/O, then the VIOS operating
systems can be installed.
Two Fibre Channel adapters (using FC3172 2-port 8Gb FC Adapters)
One IBM Flex System Enterprise Chassis, with at least one Ethernet switch or
pass-through node and one Fibre Channel switch or pass-through node.
As mentioned earlier, if only one Ethernet adapter is used, its four ports are assigned in pairs to each of the two VIOS virtual servers; if two Ethernet adapters are used, each Ethernet adapter on the p460 is assigned to one VIOS. Similarly, each FC adapter on the p460 is assigned to one VIOS.
Both VIOS servers in this example boot from SAN. The SAS controller and
internal drive can be owned only by one VIOS and, in this example, could not
be used.
This example for the p460, while not all-inclusive, provides the basics for a dual
VIOS environment. Memory requirements for additional virtual servers beyond
the base order amounts are not considered and need to be evaluated before
ordering either model.
The actual steps of creating a dual VIOS are not covered here; however, the
result of this type of configuration performed on a p460 is shown in Figure 5-6.
Figure 5-6 Dual VIOS configuration on an IBM Flex System p460 Compute Node
With the two Virtual I/O Servers installed, the normal methods of creating a Shared Ethernet Adapter (SEA) failover for virtual networking and redundant paths for the client virtual server disks (N_Port ID Virtualization (NPIV) and virtual SCSI (vSCSI)) can be configured.
Use the IBM Systems Energy Estimator to obtain a heat output estimate based
on a specific configuration. The Estimator is available at the following website:
http://www-912.ibm.com/see/EnergyEstimator
For information about planning your PDU and UPS configurations, see the IBM
Flex System Power Guide, available at the following address:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401
The chassis power system is designed for efficient use of data center power, and accepts 3-phase 60 A delta 200 VAC (North America) or 3-phase 32 A wye 380-415 VAC (international) feeds. The chassis can also be fed from single-phase 200-240 VAC supplies, if required.
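As a rough worked example, the apparent power available from each 3-phase feed type can be estimated with the standard formula P = √3 × V(line) × I. This is an illustrative calculation only; it ignores derating, power factor, and PDU losses:

```python
# Illustrative estimate of apparent power per 3-phase feed,
# using P = sqrt(3) * line voltage * current. Real planning must
# account for derating, power factor, and PDU specifications.
import math

def three_phase_va(line_voltage, current):
    return math.sqrt(3) * line_voltage * current

print(round(three_phase_va(200, 60)))   # ~20785 VA: 60 A delta, 200 VAC (NA)
print(round(three_phase_va(400, 32)))   # ~22170 VA: 32 A wye at nominal 400 VAC
```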
Power cabling: 32A at 380-415V 3-phase (international)
As shown in Figure 5-7, one 3-phase 32A wye PDU (WW) can provide power
feeds for two chassis. In this case, an appropriate 3-phase power cable is
selected for the Ultra-Dense Enterprise PDU+, which then splits the phases,
supplying one phase to each of the three PSUs within each chassis. One
3-phase 32A wye PDU can power two fully populated chassis within a rack. A
second PDU may be added for power redundancy from an alternative power
source, if the chassis is configured N+N.
Figure 5-7 shows a typical configuration with a 32A 3-phase wye supply at
380-415 VAC (often termed “WW” or “International”) N+N.
Figure 5-7 N+N power cabling at 32 A 380-415 V 3-phase: 46M4002 and 46M4003 1U 9 C19/3 C13 switched and monitored DPI PDUs, each splitting the three phases (L1, L2, L3) across the chassis power supplies
A maximum of six power supplies can be installed in the IBM Flex System
Enterprise Chassis. The power supplies are 80 PLUS Platinum certified and are
2500 W output, rated at 200 VAC, with oversubscription to 3538 W output at
200 VAC. The power supplies also contain two independently powered 40 mm
cooling fans.
80 PLUS is a performance specification for power supplies used within servers and computers. To meet the 80 PLUS standard, the power supply must have an efficiency of 80% or greater at 20%, 50%, and 100% of rated load, with a Power Factor (PF) of 0.9 or greater. The standard has several grades, such as Bronze, Silver, Gold, and Platinum. Further information about 80 PLUS is at the following website:
http://www.80PLUS.org
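The base 80 PLUS check is simple to express in code. This is a minimal sketch of the entry-level requirement only (the Platinum grade used by these supplies has stricter per-load thresholds), and the sample efficiency numbers are invented for illustration:

```python
# Sketch: checking measured power supply efficiencies against the basic
# 80 PLUS requirement (>= 80% at 20%, 50%, and 100% of rated load).
# Sample numbers are illustrative, not measurements of these supplies.

def meets_80plus(efficiency_by_load):
    """efficiency_by_load: dict mapping load percentage -> efficiency (0-1)."""
    return all(efficiency_by_load[load] >= 0.80 for load in (20, 50, 100))

sample = {20: 0.90, 50: 0.94, 100: 0.91}
print(meets_80plus(sample))  # True
```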
Figure 5-9 shows the location of the power supply bays at the rear of the
Enterprise Chassis.
Figure 5-9 Power supply bays 1 through 6 at the rear of the Enterprise Chassis
All power supply modules are combined into a single power domain in the
chassis, which distributes power to each compute node and I/O module through
the Enterprise Chassis midplane. The midplane is a highly reliable design with no
active components. Each power supply is designed to provide fault isolation.
Power monitoring of the AC and DC signals from the power supplies allows the
Chassis Management Module to accurately monitor these signals.
For detailed information about the power supply features of the chassis, see the
IBM Flex System Power Guide, available at the following website:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401
These settings can be accessed as shown in Figure 5-10. The Power Modules
and Management window is shown.
AEM: More granular power controls can be set using Active Energy Manager,
as described in 5.7.2, “Active Energy Manager” on page 151.
Figure 5-10 Accessing the Power Management options in the Chassis Management Module
Power redundancy settings
There are five power management redundancy settings available for selection:
AC Power Source Redundancy
Intended for dual AC power sources into the chassis. Maximum input power is
limited to the capacity of two power modules. This approach is the most
conservative one, and is best used when all four power modules are installed.
When the chassis is correctly wired with dual AC power sources, one AC
power source can fail without affecting compute node server operation. Some
compute nodes may not be allowed to power on if doing so would exceed the
policy power limit.
AC Power Source Redundancy with Compute Node Throttling Allowed
Similar to the AC Power Source Redundancy. This policy allows higher input
power, and capable compute nodes may be allowed to throttle down if one
AC power source fails.
Power Module Redundancy
Intended for a single AC power source in the chassis where each power
module is on its own dedicated circuit. Maximum input power is limited to one
less than the number of power modules when more than one power module is
present. One power module can fail without affecting compute node
operation. Multiple power module failures can cause the chassis to power off.
Some compute nodes may not be allowed to power on if doing so would
exceed the policy power limit.
Power Module Redundancy with Compute Nodes Throttling Allowed
Similar to Power Module Redundancy. This policy allows higher input power;
however, capable compute nodes may be allowed to throttle down if one
power module fails.
Basic Power Management
Maximum input power is higher than other policies and is limited only by the
nameplate power of all the power modules combined. This approach is the
least conservative one, because it does not provide any protection for an AC
power source or power module failure. If any single power supply fails, the
compute node or chassis operation might be affected.
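The maximum input power implied by each policy can be summarized as a simple formula per policy. The sketch below paraphrases the policy descriptions above, assuming n identical power modules rated at 2500 W each (the rated output stated earlier); the function and policy names are invented for illustration:

```python
# Sketch of the maximum input power implied by each redundancy policy,
# assuming n identical power modules. Formulas paraphrase the policy
# descriptions in the text and are illustrative only.

def max_input_power(policy, n_modules, module_watts=2500):
    if policy == "ac_source_redundancy":
        # Limited to the capacity of two power modules.
        return 2 * module_watts
    if policy == "power_module_redundancy":
        # Limited to one less than the number of modules present.
        return max(n_modules - 1, 1) * module_watts
    if policy == "basic":
        # Limited only by the combined nameplate power of all modules.
        return n_modules * module_watts
    raise ValueError(policy)

print(max_input_power("ac_source_redundancy", 4))    # 5000
print(max_input_power("power_module_redundancy", 4)) # 7500
print(max_input_power("basic", 6))                   # 15000
```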
The power capping options can be set as shown in Figure 5-12.
When adding a Power Systems compute node to an existing chassis, you need to know how many power supplies are required to support the Power Systems compute nodes in the IBM Flex System Enterprise Chassis. In addition, you must know the relationship between the number of Power Systems compute nodes and the number of power supplies in the chassis.
N+N redundancy
Table 5-6 lists the minimum number of power supplies required in a chassis to
support the designated number of compute nodes, assuming average half-wide
powers of 500 W, 600 W, and 700 W per compute node with a throttling policy
enabled. If there is a power fault, all compute nodes in the chassis must throttle the power to the average power indicated in the Fault column.
Table 5-6 Minimum number of N+N power supplies required to support half-wide compute nodes
(Columns: number of half-wide compute nodes; for each of the 500 W, 600 W, and 700 W ITE ratings, the number of power supplies and the fault throttle power.)
Table 5-7 shows similar information for full-wide compute nodes of 1000 W, 1200
W, and 1400 W with a throttling policy enabled. The number of supplies indicated
in the table is theoretical, and might not represent a practical configuration. For
example, although a specific number of supplies may be adequate to power the
indicated configuration under normal operation, it may require the compute
nodes to throttle the power to unrealistic or impossible levels during a fault to
keep the system running.
Table 5-7 Minimum number of N+N power supplies required to support full-wide compute nodes
(Columns: number of full-wide compute nodes; for each of the 1000 W, 1200 W, and 1400 W ITE ratings, the number of power supplies and the fault throttle power.)
N+1 redundancy
Table 5-8 lists the minimum number of power supplies required in a chassis to support the designated number of compute nodes, assuming average half-wide powers of 500 W, 600 W, and 700 W per compute node. If there is a loss of one power supply, all compute nodes in the chassis must throttle the power to the average power indicated in the Fault column.
Table 5-8 Minimum number of N+1 power supplies required to support half-wide compute nodes
(Columns: number of half-wide compute nodes; for each of the 500 W, 600 W, and 700 W ITE ratings, the number of power supplies and the fault throttle power.)
Table 5-9 shows similar information for full-wide compute nodes with average
powers of 1000 W, 1200 W, and 1400 W. The number of supplies indicated in the
table are theoretical, and might not represent a practical configuration. For
example, although a specific number of supplies might be adequate to power the
indicated configuration under normal operation, it may require the blades to
throttle the power to unrealistic or impossible levels during a fault to keep the
system running.
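The sizing question behind Tables 5-6 through 5-9 can be approximated with simple arithmetic. The sketch below assumes identical 2500 W supplies and ignores throttling headroom and derating, so its results may differ from the official tables; use the tables for real planning:

```python
# Rough sizing sketch for the minimum power supply count, assuming
# identical 2500 W supplies. Ignores throttling headroom and derating;
# consult the official tables for actual configurations.
import math

def min_supplies(node_watts, nodes, redundancy, supply_watts=2500):
    base = math.ceil(node_watts * nodes / supply_watts)  # supplies to carry load
    if redundancy == "N+N":
        return 2 * base       # duplicate the full set for a second AC source
    if redundancy == "N+1":
        return base + 1       # one spare supply
    raise ValueError(redundancy)

print(min_supplies(500, 10, "N+N"))  # 4: 5000 W load -> 2 supplies, doubled
print(min_supplies(700, 14, "N+1"))  # 5: 9800 W load -> 4 supplies, plus 1
```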
Table 5-9 Minimum number of N+1 power supplies required to support full-wide compute nodes
(Columns: number of full-wide compute nodes; for each of the 1000 W, 1200 W, and 1400 W ITE ratings, the number of power supplies and the fault throttle power.)
5.7 Cooling
The flow of air within the Enterprise Chassis follows a front to back cooling path;
cool air is drawn in at the front of the chassis and warm air is exhausted to
the rear.
There are two cooling zones for the nodes: a left zone and a right zone.
The cooling is scaled up as required, based upon which node bays are
populated. The number of cooling fans required for a given number of nodes is
described further in this section.
Air is drawn in both through the front node bays and the front airflow inlet
apertures, at the top and bottom of the chassis.
When a node is not inserted in a bay, an airflow damper closes in the midplane,
meaning that absolutely no air is drawn in through an unpopulated bay. When a
node is inserted into a bay, the damper is opened mechanically by insertion of
the node, allowing for cooling of the node in that bay.
Figure 5-13 Fan bay numbering with the base 80 mm fans installed, cooling up to four half-wide nodes
Six installed 80 mm fans support four more half-wide nodes within the chassis, to
a maximum of eight, as shown in Figure 5-14.
Figure 5-14 Six 80 mm fans installed, cooling up to eight half-wide nodes
To cool more than eight half-wide (or four full-wide) nodes, all the fans must be
installed, as shown in Figure 5-15.
Figure 5-15 All fans installed, cooling more than eight half-wide (or four full-wide) nodes
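The fan scaling rule described above can be sketched as a simple lookup. The base fan count of four and the total of ten 80 mm fan bays are assumptions for illustration; confirm the actual counts in the chassis documentation:

```python
# Sketch of the 80 mm fan scaling described in the text.
# Assumptions: four fans cool up to four half-wide nodes, six fans up
# to eight, and all bays (assumed ten) are needed beyond that.

def required_80mm_fans(half_wide_nodes):
    if half_wide_nodes <= 4:
        return 4   # assumed base fan count
    if half_wide_nodes <= 8:
        return 6   # six fans support up to eight half-wide nodes
    return 10      # all 80 mm fan bays populated (assumed total)

print(required_80mm_fans(3))   # 4
print(required_80mm_fans(8))   # 6
print(required_80mm_fans(12))  # 10
```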
Active Energy Manager is an IBM Flex System Manager or SDMC extension that
supports the following endpoints:
IBM Flex System
IBM BladeCenter
Power Systems
System x
IBM System z®
IBM System Storage® and non-IBM platforms1
In addition, Active Energy Manager can collect information from select facility
providers, including Liebert SiteScan from Emerson Network Power and
SynapSense.
1 IBM storage systems and non-IBM platforms can be monitored through IBM or non-IBM (Raritan and Eaton) Power Distribution Unit (PDU) support.
Monitoring and management functions apply to all IBM systems that are enabled
for IBM Systems Director Active Energy Manager.
Monitoring functions include power trending, thermal trending, IBM and
non-IBM PDU support, support for facility providers, energy thresholds, and
altitude input.
Management functions include power capping and power savings mode.
Active Energy Manager also provides a source of energy management data that
can be used by Tivoli enterprise solutions, such as IBM Tivoli Monitoring. For
more information about IBM Tivoli Monitoring, go to the following website:
http://www.ibm.com/software/tivoli/products/monitor/
To partition your Power Systems compute node, it must be attached to the IBM Flex System Manager. The process for connecting your Power Systems compute node to the IBM Flex System Manager is described in 7.1.3, “Preparing to use the IBM Flex System Manager for partitioning” on page 284.
The key element for planning your partitioning is knowing the hardware in your Power Systems compute node, as that hardware is the only limit on your virtual servers. Adding VIOS to the equation resolves many of those limitations.
Support for IVM: IVM is not supported on the Power Systems compute nodes
in IBM Flex System.
Virtual Server 2 consists of:
– 0.5 processor
– 10 GB
– SAN-attached disks through an IBM Flex System FC3172 2-port 8Gb FC
Adapter
– One port of the IBM Flex System EN2024 4-port 1Gb Ethernet Adapter for
networking
– Linux operating system
Virtual Server 3 consists of:
– One processor
– 14 GB
– SAN-attached disks through an IBM Flex System FC3172 2-port 8Gb FC
Adapter
– One port of the IBM Flex System EN2024 4-port 1Gb Ethernet Adapter for
networking
– AIX operating system
Important: Configurations shown in the following samples are not the only
configurations supported. You can use several combinations of expansion
cards and memory; the limitations are disk and network access.
VIOS can solve many of the hardware limitations (buses, cards, disk, and
memory) you find when creating virtual servers on your Power Systems compute
node. For more information, see Chapter 7, “Virtualization” on page 275.
The VIOS virtual servers should be configured for redundant access to storage
and the network.
Additional AIX, Linux, or IBM i client virtual servers can now be configured by
using resources from the VIO virtual servers, with the assurance that the loss of
a VIOS does not result in a client losing access to storage or the network.
You can manage your Enterprise Chassis more proficiently with the Chassis
Management Module and IBM Flex System Manager.
The Enterprise Chassis ships with secure settings by default, with two security
policy settings supported:
Secure: This setting is the default one. It ensures a secure chassis
infrastructure that supports:
– Strong password policies with automatic validation and verification checks
– Updated passwords that replace the default passwords after initial setup
– Secure communication protocols, such as SSH, SSL, and HTTPS
– Certificates to establish secure and trusted connections for applications
that run on the management processors
Legacy: This setting provides flexibility in chassis security. It provides:
– Weak password policies with minimal controls
– Manufacturing default passwords that do not have to be changed
– Unencrypted communication protocols, such as Telnet, SNMP v1, TCP
command mode, CIM-XML, FTP server, and TFTP server
Figure 6-2 shows a sample configuration of HTTPS access to the Chassis
Management Module.
Figure 6-3 FSM chassis manager table view that shows the Chassis Management Module bays (bay 1: present; bay 2: optional)
The Chassis Management Module can also be seen in the Chassis Management
Module GUI, as shown in Figure 6-4.
Figure 6-4 Chassis Management Module bays shown in the CMM GUI (bay 1: present; bay 2: optional)
The following section describes the usage models and features of the Chassis
Management Module.
For a hardware overview of the CMM, see IBM PureFlex System and IBM Flex
System Products & Technology, SG24-7984.
The CMM automatically detects installed modules in the Enterprise Chassis and
stores vital product data (VPD) on them.
The default information, including the MAC address and IPv6 link-local address, is available on the network access tag attached to all new CMMs, as shown in Figure 6-5.
Figure 6-5 Chassis Management Module network access tag
To perform the initial configuration, complete the following steps:
1. Open a browser and go to the IP address of the CMM, either the
DHCP-obtained address or the default IP settings. The Login window opens,
as shown in Figure 6-6.
3. Several options are shown that can be used to manage the Chassis Management Module configuration. For this first-time connection, click Initial Setup Wizard, as shown in Figure 6-8.
When the wizard starts, the first window shows the steps that are performed
on the left side of the window, and a basic description of the steps in the
main field.
Figure 6-9 shows the Welcome window of the setup wizard. This wizard is
similar to other IBM wizards. Navigation buttons for the wizard are in the lower
left of each window.
Figure 6-9 Chassis Management Module initial setup wizard Welcome window
4. Proceed through each step of the wizard by clicking Next, entering the
information as required.
5. Click Finish to complete the last step of the wizard.
For further details about using the Chassis Management Module, see 6.2.2,
“CMM functions” on page 168.
System Status tab
The System Status tab shows the System Status window, which is the default
window when you enter the CMM web interface (Figure 6-11). You can also
access this window by clicking System Status. This window shows a graphical
systems view of a selected chassis, active events, and general
systems information.
Multi-Chassis Monitor tab
In the Multi-Chassis Monitor tab, you can manage and monitor other IBM Flex
System chassis, as shown in Figure 6-12. Click the chassis name to show details
about that chassis. Click the link to start the CMM interface for
that chassis.
The following selections are available (with the numbers that match callouts
in Figure 6-12):
1. Discover new chassis: Discover other chassis in the network.
2. Chassis properties: The grid marked in point 2 of Figure 6-12 shows a quick
view of the other chassis discovered by the Chassis Management Module,
listing the name, health, management IP address, firmware version, and
firmware release day. Click the chassis name, and an Events Log dialog
box opens.
3. Manage other chassis: With this option, you can manage other chassis
directly from this grid. Click a chassis IP address, and another tab opens,
where you can manage the chassis.
Events tab
This tab (Figure 6-13) has two options, Event Log (shown in Figure 6-14), and
Event Recipients, which provide options to send an SNMP alert or send an email
using Simple Mail Transfer Protocol (SMTP).
The callouts shown in Figure 6-14 on page 172 are described as follows:
1. Event overview: This grid shows general information about the event listing,
including severity, source, sequence, date, and message.
2. Event detail: Click the More link to show detailed information about the
selected event.
3. Event actions menu: Several options are available to manage logs:
a. You can use the Export option to export your event log in various formats
(.csv, XML, or PDF).
b. Use the Delete Events option to delete all selected items, with the additional option of selecting audit events, system events, or both.
c. With the Settings option, you can generate a log event when the log is 75% full.
d. The Open Service Request option is enabled when you select one of the events from the grid; you are then prompted for a short description.
4. Event search and filters: The event grid can become large over time. With
Event Search, you can search by keyword and use several filters, as shown
in Figure 6-14 on page 172.
Chassis tab
Clicking Chassis from the menu shows a window where you can view or change
chassis-level data (Figure 6-17).
I/O Modules tab
The I/O Modules window is similar to the Compute Nodes window. A grid opens
and shows the I/O modules. Clicking a module name opens other panes with the
properties of that module (Figure 6-19).
The Power Modules and Management window has the following features:
The Policies tab shows the power polices that are currently enabled. If you
click Change in Figure 6-21 on page 178, you can modify the current policy in
the window that opens (Figure 6-22).
The Input Power and Allocation tab shows charts and details of energy use
on the chassis. Figure 6-24 shows an example of one of
these charts.
Component IP configuration
This menu item lists all the components and IP configuration (if available) of
the chassis.
Reports
This menu item shows reports that list all MAC addresses or unique IDs used by
components in the chassis.
Mgmt Module Management tab
This tab, shown in Figure 6-26, has options for performing user management tasks, firmware upgrades, security management, network management, and so on.
Firmware menu
You can use this menu to perform firmware upgrades and view the current firmware state.
Security
You can use this menu to configure security policies and set up a certificate
authority (CA), enable HTTPS or SSH access, and configure an LDAP for logins.
Figure 6-28 shows the Security Policies tab of this window.
Configuration
You can use this menu to back up and restore your Chassis Management
Module configuration. Use the Initial Setup Wizard to walk you through these
setup steps.
Properties
You can use this window to set up your Chassis Management Module name, time
and date, and standby Chassis Management Module management details.
Figure 6-30 shows an example.
Before you begin, you need the IP address of the Chassis Management Module.
You can access the CMM using SSH or a browser. The browser method is
described here.
To access the node through the CMM, complete the following steps:
1. Open a browser and point it to the following URL (where system_name is the
host name or IP address of the Chassis Management Module):
https://system_name
The window in Figure 6-32 opens.
2. Log in with your user ID and password. The System Status window of the
Chassis Management Module opens, as shown in Figure 6-33, with the
Chassis tab active. If not, click System Status from the menu bar at the top of
the window.
3. Select the Power Systems compute node image of the chassis. Figure 6-33
shows the node in bay 3 selected. The Actions menu to the right of the
graphics is useful when working with the node.
Figure 6-34 Launch console on Power Systems compute node from Chassis Management Module
You interact with the node as it boots. You can enter the SMS menu to prepare
to install an operating system on the node, or allow it to boot to an already
installed operating system, from which you can log in to the console.
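As a quick pre-check for the browser method in step 1, you can confirm that the CMM answers on the HTTPS port before opening a session. A minimal sketch, assuming only standard TCP connectivity (system_name is the placeholder host name from the text; no CMM-specific API is assumed):

```python
import socket

def cmm_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the given port on the CMM succeeds.

    Port 443 is checked because the CMM web interface is reached over HTTPS;
    the CMM CLI is reached over SSH (port 22) with the same credentials.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# system_name is the placeholder from step 1: the host name or IP address
# of the Chassis Management Module.
if cmm_reachable("system_name"):
    print("CMM answers on https://system_name")
else:
    print("CMM not reachable; verify the host name or IP address")
```

If the check fails, verify the management network cabling and the CMM IP settings before troubleshooting the browser session itself.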
6.3 Management network
In an IBM Flex System Enterprise Chassis, you can configure separate
management and data networks.
The management network is a private and secure Gigabit Ethernet network used
to complete management-related functions throughout the chassis, including
management tasks related to the compute nodes, switches, and the
chassis itself.
The management network is shown in Figure 6-36 (it is the blue line). It connects
the CMM to the compute nodes, the switches in the I/O bays, and the FSM. The
FSM connection to the management network is through a special Broadcom
5718-based management network adapter (Eth0). The management networks in
multiple chassis can be connected together through the external ports of the
CMMs in each chassis via a GbE Top-of-Rack switch.
Figure 6-36 Separate management and data networks: the CMM connects to the
IMMs of the System x compute nodes, the FSP of the Power Systems compute
nodes, and the FSM Eth0 (the special GbE management network adapter); Eth1 on
the FSM (2-port 10 GbE controller with Virtual Fabric Connector) carries data
traffic, and a top-of-rack switch links the CMM port, the CMMs in other
Enterprise Chassis, and the management workstation.
One of the key functions that the data network supports is discovery of operating
systems on the various network endpoints. Discovery of operating systems by
the FSM is required to support software updates on an endpoint, such as a
compute node. You can use the FSM Checking and Updating Compute Nodes
wizard to discover operating systems as part of the initial setup.
The following list describes the high-level features and functions of the IBM Flex
System Manager:
Supports a comprehensive, pre-integrated system that is configured to
optimize performance and efficiency.
Automated processes triggered by events simplify management and reduce
manual administrative tasks.
Centralized management reduces the skills and the number of steps it takes
to manage and deploy a system.
Enables comprehensive management and control of energy utilization
and costs.
Automates responses for a reduced need for manual tasks (custom actions /
filters, configure, edit, relocate, and automation plans).
Full integration with server views, including virtual server views enables
efficient management of resources.
The management node comes standard without any entitlement licenses, so you
must purchase a license to enable the required FSM functionality.
As described in Chapter 2, “IBM PureFlex System” on page 15, there are two
versions of IBM Flex System Manager: base and advanced.
The IBM Flex System Manager base feature set offers the following functionality:
Supports up to four managed chassis
Supports up to 5,000 managed elements
Auto-discovery of managed elements
Overall health status
Monitoring and availability
Hardware management
Security management
Administration
Network management (Network Control)
Storage management (Storage Control)
Virtual machine lifecycle management (VMControl Express)
The IBM Flex System Manager advanced feature set offers all capabilities of the
base feature set plus:
Image management (VMControl Standard)
Pool management (VMControl Enterprise)
Figure 6-38 shows the internal layout and major components of
the FSM.
Figure 6-38 Exploded view of the IBM Flex System Manager node showing major
components: cover, heat sink, microprocessor, microprocessor heat sink filler,
SSD and HDD backplane, I/O expansion adapter, ETE adapter, hot-swap storage
cage, SSD interposer, SSD drives, SSD mounting insert, air baffles, DIMMs,
hot-swap storage drive, and storage drive fillers
The FSM comes preconfigured with the components described in Table 6-1.
Table 6-1 Features of the IBM Flex System Manager node (8731)
Feature   Description
Memory    8x 4 GB (1x 4 GB, 1Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
Figure 6-39 Internal view showing the major components of IBM Flex System Manager
Front controls
The FSM has similar controls and LEDs as the IBM Flex System x240 Compute
Node. Figure 6-40 shows the front of an FSM with the locations of the controls
and LEDs.
Figure 6-40 Front of the IBM Flex System Manager node (callouts: solid-state
drive LEDs, power button/LED, and identify LED)
Storage
The FSM ships with two IBM 200 GB SATA 1.8-inch MLC SSDs and one IBM 1 TB
7.2K 6 Gbps NL SATA 2.5-inch SFF hot-swap HDD. The two 200 GB SSDs are
configured as a RAID 1 pair, providing roughly 200 GB of usable space. The
1 TB SATA drive is not part of a RAID group.
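The usable capacity follows directly from the RAID level: a RAID 1 mirror yields the capacity of its smallest member, and the standalone drive contributes its full size. A small illustrative sketch of that arithmetic (drive sizes from the paragraph above; this is not an FSM utility):

```python
def raid1_usable_gb(mirror_sizes_gb):
    """A RAID 1 mirror stores one full copy per drive, so usable space
    is limited by the smallest member of the pair."""
    return min(mirror_sizes_gb)

def total_usable_gb(mirror_sizes_gb, standalone_sizes_gb):
    """Mirrored pair plus any non-RAID drives at full capacity."""
    return raid1_usable_gb(mirror_sizes_gb) + sum(standalone_sizes_gb)

# FSM shipping configuration: 2x 200 GB SSDs mirrored, 1x 1000 GB HDD standalone
print(raid1_usable_gb([200, 200]))          # 200
print(total_usable_gb([200, 200], [1000]))  # 1200
```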
All other nodes supported by the Enterprise Chassis have a connection only into
the management network through the management controller (IMMv2 for System
x nodes; FSP for POWER nodes), which is not accessible through the
operating system.
Figure 6-41 The Console Breakout Cable connecting to the IBM Flex System
Manager (and x240): breakout cable connector, serial connector, 2-port USB,
and video connector
6.4.2 Software features
The main features of IBM Flex System Manager management software are:
Monitoring and problem determination
– A real-time multichassis view of hardware components with overlays for
additional information.
– Automatic detection of issues in your environment through event setup
that triggers alerts and actions.
– Identification of changes that might impact availability.
– Server resource utilization by a virtual machine or across a rack
of systems.
Hardware management
– Automated discovery of physical and virtual servers and interconnections,
applications, and supported third-party networking.
– Inventory of hardware components.
– Chassis and hardware component views.
– Hardware properties.
– Component names/hardware identification numbers.
– Firmware levels.
– Utilization rates.
Network management
– Management of network switches from various vendors.
– Discovery, inventory, and status monitoring of switches.
– Graphical network topology views.
– Support for KVM, pHyp, VMware virtual switches, and physical switches.
– VLAN configuration of switches.
– Integration with server management.
– Per-virtual machine network usage and performance statistics provided
to VMControl.
– Logical views of servers and network devices grouped by subnet
and VLAN.
Storage management
– Discovery of physical and virtual storage devices.
– Support for virtual images on local storage across multiple chassis.
– Group storage systems together using storage system pools to increase
resource utilization and automation.
– Manage storage system pools by adding storage, editing the storage
system pool policy, and monitoring the health of the storage resources.
Additional features
– A resource-oriented chassis map provides an instant graphical view of
chassis resources, including nodes and I/O modules.
• A fly-over provides an instant view of an individual server's (node)
status and inventory
• A chassis map provides an inventory view of chassis components, a
view of active statuses that require administrative attention, and a
compliance view of server (node) firmware.
• Actions can be taken on nodes, such as working with server-related
resources, showing and installing updates, submitting service
requests, and launching the remote access tools.
– Remote console.
• Administrators can open video sessions and mount media, such as DVDs
with software updates, to their servers from their local workstation.
• Remote Keyboard, Video, and Mouse (KVM) connections.
• Remote Virtual Media connections (mount CD, DVD, ISO, and
USB media).
• Power operations against servers (Power On/Off/Restart).
– Hardware detection and inventory creation.
– Firmware compliance and updates.
– Automatic detection of hardware failures.
• Provides alerts.
• Takes corrective action.
• Notifies IBM of problems to escalate problem determination.
– Health status (such as processor utilization) on all hardware devices from
a single chassis view.
– Administrative capabilities, such as setting up users within profile groups,
assigning security levels, and security governance.
This section describes how to use the startup wizards, the chassis
management selection, and basic POWER-based compute node
management functions.
Important: At the time of this writing, IBM Flex System Manager is required
for any configuration that contains a Power Systems compute node. This
section assumes that IBM Flex System Manager is preconfigured to manage the
initial chassis; in that case, the steps in this section are not required
unless IBM Flex System Manager is being reinstalled.
To monitor the FSM startup process, connect a console using one of these
methods before powering up the FSM node. The steps that follow use the IMMv2
remote console method.
To initiate an IMMv2 remote console session, complete the following steps:
1. Start a browser session, as shown in Figure 6-42, to the IP address of the
FSM IMMv2.
3. In the Remote Control window, click Start remote control in single-user
mode, as shown in Figure 6-44. This action starts a Java applet on the local
desktop that is used as a console session to the FSM.
Figure 6-45 shows the Java console window opened to the FSM appliance
before power is applied.
Figure 6-46 Powering on the FSM from the remote console session
As the FSM powers up and boots, the process can be monitored, but no input
is accepted until the License Agreement window, shown in
Figure 6-47, opens.
5. Click I agree to continue, and the startup wizard Welcome window opens, as
shown in Figure 6-48.
Click Next.
7. Create a user ID and password for accessing the GUI and CLI. User ID and
password maintenance, including creating additional user IDs, is available in
IBM Flex System Manager after the startup wizard completes. Figure 6-50
shows the creation of user ID USERID and entering a password.
The second LAN adapter represents one of the integrated Ethernet ports or
LAN on motherboard (LOM). Traffic from this adapter flows through the
Ethernet switch in the first I/O switch bay of the chassis, and is used as a
separate data connection to the FSM. The radio button for the first adapter is
preselected (Figure 6-52).
After completing the previous step, the wizard cycles back to the Initial LAN
Adapter window and preselects the next adapter in the list, as shown in
Figure 6-54.
Important: It is expected that the host name of the FSM is available on the
domain name server.
13.You can enable the use of DNS services and add the addresses of one or
more DNS servers and a domain suffix search order.
Enter the information, as shown in Figure 6-56, and click Next.
14.The final step of the setup wizard is shown in Figure 6-57. This window
shows a summary of all configured options.
To change a selection, click Back. If no changes are needed, click Finish.
Figure 6-60 FSM startup
15.With startup completed, the local browser on the FSM also starts. A series
of untrusted connection security challenges opens.
16.Click Confirm Security Exception, as shown in Figure 6-64.
17.With the security exceptions cleared, the Login window of the IBM Flex
System Manager GUI opens.
A Getting Started window opens and reminds you that initial setup tasks must
be completed (Figure 6-66).
The startup wizard and initial login are complete. The FSM is ready for further
configuration and use. Our example uses a console from the remote console
function of the IMMv2. At this time, a secure browser session can be started to
the FSM.
2. From the list of managers, click Update Manager to open the window shown
in Figure 6-68.
The test attempts to make a connection to a target IBM server. During the
test, a progress indicator opens, as shown in Figure 6-70.
After the test succeeds, the Update Manager can obtain update packages
directly from IBM.
If a direct Internet connection is not allowed for the FSM, complete the steps
described in “Importing update files” to import the update files into Update
Manager.
The scp command is used to copy the update files from a local workstation to the
FSM. The update files on the local workstation are obtained from IBM Fix
Central. From an ssh login, you have access only to the /home/userid directory.
Additional subdirectories can be created and files copied and removed from
these subdirectories, but running cd to the subdirectory is a restricted operation.
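The copy step can be scripted from the workstation. A hedged sketch that only builds the scp command line (the host name fsm.example.com, the file name, and the updates subdirectory are hypothetical; the /home/userid anchor comes from the restriction described above):

```python
import shlex

def build_scp_command(local_file: str, fsm_host: str, user: str = "userid",
                      subdir: str = "updates") -> list[str]:
    """Build an scp command that copies one update file from the local
    workstation into a subdirectory of /home/<user> on the FSM.

    Only /home/<user> is accessible from an ssh login on the FSM, so the
    remote path is always anchored there. 'updates' is a hypothetical
    subdirectory name; create it first over ssh if it does not exist.
    """
    remote = f"{user}@{fsm_host}:/home/{user}/{subdir}/"
    return ["scp", local_file, remote]

cmd = build_scp_command("ibm_fw_update.tgz", "fsm.example.com")
print(shlex.join(cmd))
# scp ibm_fw_update.tgz userid@fsm.example.com:/home/userid/updates/
```

The list form can be passed directly to subprocess.run(cmd, check=True), which avoids shell quoting issues with unusual file names.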
To import the update files using the GUI, complete the following steps:
1. Beginning at the Update Manager window, click Acquire updates, as shown
in Figure 6-72.
3. Enter the path for the updates that were manually copied to the IBM Flex
System Manager, as shown in Figure 6-74.
4. Click OK, and the IBM Flex System Manager job scheduler opens, as shown
in Figure 6-75.
After the initial setup of the FSM finishes, FSM discovers any available chassis.
You can then decide which chassis is managed by the current FSM. To
accomplish this task, complete the following steps:
1. Click the Home tab.
2. Click the Initial Setup tab to open the Initial Setup window.
3. Click IBM Flex System Manager Domain - Select Chassis to be Managed
(Figure 6-77).
The Manage Chassis window, shown in Figure 6-79, lists the selected
chassis. A drop-down menu lists the available IBM Flex System Manager
systems.
6. Ensure that the chassis and IBM Flex System Manager selections
are correct.
7. Click Manage. This action updates the Message column from Waiting to
Finalizing, then Managed, as shown in Figure 6-80 and Figure 6-81 on
page 234.
8. After the successful completion of the manage chassis process, click Show
all chassis, as shown in Figure 6-82.
The resulting window is the original IBM Flex System Manager Management
Domain window, with the target chassis as the managing IBM Flex System
Manager (Figure 6-83).
With the Enterprise Chassis now managed by the IBM Flex System Manager, the
typical management functions on a Power Systems compute node can
be performed.
Most operations in the IBM Flex System Manager use the Home page as the
starting point. To access Manage Power Systems Resources, complete the
following steps:
1. Click Chassis Manager (Figure 6-84).
A new tab opens that shows a list of managed chassis (Figure 6-85).
2. Click the name of the wanted chassis in the chassis name column (in this
case, modular01).
A window with a graphical view of the chassis opens (Figure 6-86).
3. Click the General Actions drop-down menu and click Manage Power
Systems Resources.
A new tab is created along the top edge of the GUI, and the Manage Power
Systems Resources window opens.
To request access, complete the following steps:
1. Right-click the wanted server object, as shown in Figure 6-88, and click
Request Access.
Figure 6-91 Completed access request
3. With the access request complete, click Close to exit the window and return
to the Manage Power Systems Resources window, as shown in Figure 6-92.
Many of the columns now contain information obtained from this limited
communication with the Flexible Service Processor.
2. Click Inventory/View and Collect Inventory to start the collection.
In Figure 6-94, notice that, to the right of the Collect Inventory button, a time
stamp of the last collection is displayed. In this case, inventory has never
been collected for this node.
Clicking Display Properties opens the window shown in Figure 6-97. The job
properties window has several tabs that can be used to review additional job
details. The General tab shown indicates that the inventory collection job
completed without errors.
The Active and Scheduled Jobs tab and the View and Collect Inventory tabs near
the top of the window can be closed.
Now that access and inventory collection are complete, you can use
IBM Flex System Manager to manage the node.
4. The Terminal Console tab opens and shows a message and an OK button.
Click OK to return to the Resource Explorer tab (or the tab you started the
console from) (Figure 6-99).
If Serial Over LAN (SOL) is not disabled, you receive the error shown in
Figure 6-101. To learn the process to disable SOL, see 6.6.3, “Disabling
Serial Over LAN (SOL)” on page 249.
Figure 6-101 Console open failure on virtual server ID 1 with SOL enabled
2. Log in using a valid user and password. If this is the first time you are logging
in to the Chassis Management Module, the System Status window of the
Chassis Management Module opens the Chassis tab. If this is not the first
time you are logging in, you are returned to the place where you were when
you logged off.
3. Click Chassis Management → Compute Nodes from the menu bar in the
CMM interface.
The Compute Nodes window opens and shows all the compute nodes in the
chassis. In our chassis, we had two compute nodes: a Power Systems
compute node and the IBM Flex System Manager.
Disabling SOL
To disable SOL on the chassis, complete the following steps, which are also
shown in Figure 6-103:
1. Click the Settings tab.
2. Click the Serial Over LAN tab.
3. Clear the Serial Over LAN check box.
4. Click OK.
Figure 6-103 Disable SOL for all compute nodes from the Chassis Management Module
This window provides access to the functions listed in the following sections.
Compute Nodes - Check and Upgrade Firmware
After your compute node is discovered, there are several actions that you can
take, as shown in Figure 6-107.
Collect inventory: After you discover your system and request access, you
can collect and review the systems inventory. The systems inventory shows
you information about hardware and operating systems for the systems you
select. There are several filter and export options available, as shown in
Figure 6-109.
Check for Updates: If your FSM is connected to the Internet, you can update
your firmware and operating system directly from the Internet. If the FSM is
not connected to the Internet, you can download the firmware and operating
system manually to another system, and then use that system to upgrade
your system firmware and operating system.
6.7.2 Additional setup tab
In this window, you have access to settings such as the IBM Electronic Service
Agent™ (ESA) setup, LDAP setup, user setup and more, as shown in
Figure 6-110.
Deploy Agents
When you use this setting, IBM Flex System Manager deploys monitor agents to
monitor several items of your compute nodes. You can deploy agents to
discovered systems with full access (discovery and collection can be started
from this point, before the agent installation).
Manage System Storage
As part of the new total management approach, storage management is
integrated into the FSM, as shown in Figure 6-111. After you discover your
storage appliance and request access to it through the FSM, you can start
managing it.
6.7.3 Plug-ins tab
The Plug-ins tab has options for managing the FSM, managing virtual servers,
checking status, managing discovery, and more, as shown in Figure 6-113.
Figure 6-113 shows only a portion of the available entries.
Several of the plug-ins require licensing and are included on a trial basis.
You can also create shortcuts for functions you frequently use for IBM Flex
System Manager, chassis management, managing Power System resources,
the IBM Flex System Manager management domain, event log, backup and
restore, and high availability settings (Figure 6-114).
General information
Discovery Manager
You can use Discovery Manager to discover and connect to the systems at your
site. The Discovery Manager window shows an overview of all discovered
systems, which ones you have access to, and which ones you have collected
inventory from. It also has options to explore all discovered resources by
category (Figure 6-115).
Status Manager
This window shows a tactical overview with a pie chart of all resources and
systems managed by IBM Flex System Manager, dividing the chart into critical,
warning, informational, and OK statuses. As with the other plug-ins, it has quick
access menus for frequently used functions, for example, health summary, view
problems, monitors, and event logs.
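The pie chart divides managed resources into the four buckets named above. A minimal sketch of that bucketing, assuming each resource reports one of those status strings (the data structure is illustrative, not the FSM's API):

```python
from collections import Counter

# Status Manager's four buckets, per the description above.
STATUS_BUCKETS = {"critical", "warning", "informational", "ok"}

def status_summary(resources):
    """Tally resources into the four Status Manager buckets, rejecting
    any status string outside the known set."""
    counts = Counter(r["status"] for r in resources)
    unknown = set(counts) - STATUS_BUCKETS
    if unknown:
        raise ValueError(f"unexpected statuses: {unknown}")
    return counts

sample = [{"status": "ok"}, {"status": "ok"},
          {"status": "warning"}, {"status": "critical"}]
print(dict(sorted(status_summary(sample).items())))
# {'critical': 1, 'ok': 2, 'warning': 1}
```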
Update Manager
One of the main features of the IBM Flex System Manager is the ability to
perform system upgrades and software upgrades on all systems and
components that are managed by the IBM Flex System Manager. The Update
Manager window is shown in Figure 6-116.
Figure 6-116 Update Manager window
Automation Manager
This feature shows an overview of active and scheduled jobs for your IBM Flex
System Manager, including information about completed, failed, and
upcoming jobs.
Remote Access
You can use this feature to access managed systems using CLI, web access,
remote access, or remote control. Access depends on the type of system, for
example, I/O modules might have web and CLI access, but not remote control.
Compute nodes might have CLI, remote control, and web access.
Storage management
For more information about this feature, see “Manage System Storage” on
page 261.
Power Systems Management
You can use this feature to assume the role of the Hardware Management
Consoles and Systems Director Management Consoles to manage the Power
Systems servers in your data center. From here you can create partitions,
manage virtual servers, set up dual VIOS in an IBM Flex System environment,
and access features, such as live partition mobility. The Power System
management overview is shown in Figure 6-117.
By selecting a Power server (in this example, the IBM Flex System p460
Compute Node), you have access to all virtual servers that run on that system
(Figure 6-119).
System z Management
You can use this feature to manage the System z systems in your data center. It
is similar to the Power Systems Management feature.
System x Management
You can use this feature to manage the System x servers in your data center. It
is similar to the Power Systems Management feature.
6.7.4 Administrator tab
From the Administration tab, you can access all IBM Flex System Manager
configuration tasks, such as shutdown, restart, power off, firmware upgrades,
network setup, user setup, and backup and restore. See Figure 6-121.
Chapter 7. Virtualization
If you create virtual servers (also known as logical partitions (LPARs)) on your
Power Systems compute node, you can consolidate your workload to deliver
cost savings and improve infrastructure responsiveness. As you look for ways to
maximize the return on your IT infrastructure investments, consolidating
workloads and increasing server use becomes an attractive proposition.
IBM Power Systems, combined with PowerVM technology, are designed to help
you consolidate and simplify your IT environment. The following list details
several key capabilities:
Improve server use by consolidating diverse sets of applications.
Share processor, memory, and I/O resources to reduce the total cost of
ownership (TCO).
Improve business responsiveness and operational speed by dynamically
reallocating resources to applications as needed, to better anticipate
changing business needs.
Simplify IT infrastructure management by making workloads independent of
hardware resources, so that you can make business-driven policies to deliver
resources based on time, cost, and service-level requirements.
Move running workloads between servers to maximize availability and avoid
planned downtime.
7.1.1 Features
The latest version of PowerVM contains the following features:
Support for the following maximum numbers of virtual servers (logical
partitions, or LPARs):
– p260: Up to 160 virtual servers
– p460: Up to 320 virtual servers
– p24L: Up to 120 virtual servers
Role Based Access Control (RBAC)
RBAC brings an added level of security and flexibility in the administration of
the Virtual I/O Server (VIOS). With RBAC, you can create a set of
authorizations for the user management commands. You can assign these
authorizations to a role named UserManagement, and this role can be given
to any other user. So one user with the role, UserManagement, can manage
the users on the system, but does not have any further access.
With RBAC, the VIOS can split management functions that presently can be
done only by the padmin user, providing better security by giving only the
necessary access to users, and easy management and auditing of system
functions.
Suspend/Resume
Using Suspend/Resume, you can suspend partitions for extended periods
(greater than 5 - 10 seconds), saving partition state (memory, NVRAM, and
VSP state) to persistent storage. Suspending a partition frees the server
resources that the partition was using; resuming restores the partition
state to server resources and restarts the partition and its applications,
either on the same server or on another server.
The requirements for Suspend/Resume dictate that all resources must be
virtualized before suspending a partition. If the partition is resumed on
another server, then the shared external I/O (disk and local area network
(LAN)) needs to remain identical. Suspend/Resume works with AIX and Linux
workloads when managed by the Hardware Management Console (HMC).
Shared storage pools
You can use VIOS 2.2 to create storage pools that can be accessed by VIOS
partitions that are deployed across multiple Power Systems servers.
Therefore, an assigned allocation of storage capacity can be efficiently
managed and shared.
The December 2011 Service Pack enhances capabilities by enabling four
systems to participate in a Shared Storage Pool configuration. This
configuration can improve efficiency, agility, scalability, flexibility, and
availability. Specifically, the Service Pack enables:
– Storage Mobility: A function that allows data to be moved to new storage
devices within Shared Storage Pools, while the virtual servers remain
active and available.
– VM Storage Snapshots/Rollback: A new function that allows multiple
point-in-time snapshots of individual virtual server storage. These
point-in-time copies can be used to quickly roll back a virtual server to a
particular snapshot image. This functionality can be used to capture a VM
image for cloning purposes or before applying maintenance.
Thin provisioning
VIOS 2.2 supports highly efficient storage provisioning, where virtualized
workloads in VMs can have storage resources from a shared storage pool
dynamically added or released, as required.
VIOS grouping
Multiple VIOS 2.2 partitions can use a common shared storage pool to more
efficiently use limited storage resources and simplify the management and
integration of storage subsystems.
Network node balancing for redundant Shared Ethernet Adapters (SEAs)
(with the December 2011 Service Pack).
This feature is useful when multiple VLANs are being supported in a dual
VIOS environment. The implementation is based on a more granular
treatment of trunking, where there are different trunks defined for the SEAs on
each VIOS. Each trunk serves different VLANs, and each VIOS can be the
primary for a different trunk. This balancing is achieved with just one SEA
definition on each VIOS.
The user interface for the POWER Hypervisor on POWER-based blades is
traditionally based on the Integrated Virtualization Manager. With the PS700
family of blades, a second method of systems management is available: the
Systems Director Management Console. A new user interface is introduced with
the introduction of IBM Flex System Manager. This chapter focuses on using the
IBM Flex System Manager for most configuration tasks performed on the Power
Systems compute nodes.
POWER Hypervisor technology is integrated with all IBM POWER servers,
including the Power Systems compute nodes. The hypervisor orchestrates and
manages system virtualization, including creating logical partitions and
dynamically moving resources across multiple operating environments. POWER
Hypervisor is a basic component of the system firmware that is layered between
the hardware and the operating system.
Maximum memory                    Memory region size
Less than 4 GB                    16 MB
Greater than 4 GB, up to 8 GB     32 MB
Greater than 8 GB, up to 16 GB    64 MB
The POWER Hypervisor provides the following types of virtual I/O adapters:
Virtual SCSI
Virtual Ethernet
Virtual Fibre Channel
Virtual (TTY) console
Virtual I/O adapters are defined by system administrators during logical partition
definition. Configuration information for the adapters is presented to the partition
operating system.
Virtual SCSI
The POWER Hypervisor provides a virtual SCSI mechanism for virtualization of
storage devices. Virtual SCSI allows secure communications between a logical
partition and the I/O Server (VIOS). The storage virtualization is accomplished by
pairing two adapters: a virtual SCSI server adapter on the VIOS, and a virtual
SCSI client adapter on IBM i, Linux, or AIX partitions. The combination of Virtual
SCSI and VIOS provides the opportunity to share physical disk adapters in a
flexible and reliable manner.
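On the VIOS side, the server half of such a pairing is typically backed by a physical or logical volume; a minimal sketch, where the disk and adapter names are illustrative:

```shell
# Map physical volume hdisk2 to the virtual SCSI server adapter vhost0;
# the client partition then sees it as an ordinary SCSI disk.
mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi0

# Confirm the mapping between the server adapter and its backing device.
lsmap -vadapter vhost0
```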
Virtual Ethernet
The POWER Hypervisor provides an IEEE 802.1Q, VLAN-style virtual Ethernet
switch that allows partitions on the same server to use fast and secure
communication without any need for physical connection.
Virtual Ethernet support starts with AIX 5L V5.3, or the appropriate level of Linux
supporting virtual Ethernet devices. The virtual Ethernet is part of the base
system configuration.
Virtual Ethernet has the following major features:
Virtual Ethernet adapters can be used for both IPv4 and IPv6 communication
and can transmit packets up to 65,408 bytes in size. Therefore, the maximum
transmission unit (MTU) for the corresponding interface can be up to 65,394
(65,408 minus 14 for the header) in the non-VLAN case, and up to 65,390
(65,408 minus 14, minus 4) if VLAN tagging is used.
The POWER Hypervisor presents itself to partitions as a virtual
802.1Q-compliant switch. The maximum number of VLANs is 4096. Virtual
Ethernet adapters can be configured as either untagged or tagged (following
the IEEE 802.1Q VLAN standard).
An AIX partition supports 256 virtual Ethernet adapters for each logical
partition. Aside from a default port VLAN ID, the number of additional VLAN
ID values that can be assigned per virtual Ethernet adapter is 20, which
implies that each virtual Ethernet adapter can be used to access 21
virtual networks.
Each operating system partition detects the virtual local area network (VLAN)
switch as an Ethernet adapter without the physical link properties and
asynchronous data transmit operations.
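The MTU arithmetic above is easy to verify:

```shell
# Maximum virtual Ethernet frame is 65,408 bytes; the Ethernet header
# consumes 14 bytes, and a VLAN tag consumes another 4.
echo "non-VLAN MTU: $((65408 - 14))"
echo "VLAN MTU:     $((65408 - 14 - 4))"
```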
Any virtual Ethernet can also have connectivity outside of the server if a Layer 2
bridge to a physical Ethernet adapter is configured in a VIOS partition. The
device configured in this fashion is the SEA.
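On a VIOS, an SEA of this kind is commonly created with mkvdev; a minimal sketch, where ent0 is the physical adapter and ent4 the bridging virtual adapter (adapter numbers and the PVID vary by configuration):

```shell
# Bridge the virtual Ethernet adapter ent4 to the physical adapter ent0.
# Untagged frames default to the VLAN of ent4 (PVID 1 in this example).
mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1
```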
You can configure only virtual Fibre Channel adapters on client logical partitions
that run the following operating systems:
AIX V6.1 Technology Level 2, or later
AIX 5L V5.3 Technology Level 9, or later
IBM i V6.1.1, V7.1, or later
SUSE Linux Enterprise Server 11, or later
RHEL 5.5, 6, or later
Figure 7-1 shows the connections between the client partition virtual Fibre
Channel adapters and external storage.
Figure 7-1 Connectivity between virtual Fibre Channel adapters and external SAN devices
For Power Systems compute nodes, the operating system console can be
accessed from IBM Flex System Manager.
7.1.3 Preparing to use the IBM Flex System Manager for partitioning
FSM is used to create virtual servers on Power Systems compute nodes. This
function is one of many provided by FSM.
If you have experience using the Integrated Virtualization Manager, HMC, or the
Systems Director Management Console to create logical partitions or virtual
servers on any POWER7 system, the process is similar.
Table 7-2 Virtual server name, processor, and memory planning information
Virtual server name    Processor/UnCap/Weight    Memory
node1
  vios1                1/Y/200                   2 GB
  vios2                1/Y/200                   2 GB
  lpar1                3/Y/100                   4 GB
  lpar2                .5/N/-                    1 GB
  lpar3                .5/N/-                    1 GB
node2
  vios3                1/Y/200                   2 GB
  vios4                1/Y/200                   2 GB
  lpar1                3/Y/100                   4 GB
  lpar2                2/Y/50                    2 GB
  lpar3                2/Y/50                    2 GB
  lpar4                1.5/N/-                   1 GB
  lpar5                1.5/N/-                   1 GB
  lpar6                1.5/N/-                   1 GB
Physical adapters
For the VIOS partitions, planning for physical adapter allocation is important,
because the VIOS provides virtualized access through the physical adapters to
network or disk resources. If availability is a concern for the virtualized
environment, use redundant physical adapters in the VIOS. For network
adapters, you most likely use Etherchannel. For storage adapters, a multipathing
package (for example, an MPIO-PCM or EMC PowerPath) is installed and
configured in the VIOS after the operating system is installed. To further enhance
availability in a virtualized configuration, implement two VIOS servers, both
capable of providing the same network and storage access to the virtual servers
on the Power Systems compute node. The ideal availability configuration
involves redundant physical adapters in each of the two VIOS servers. Because
of hardware requirements in a dual VIOS configuration, a p460 might be the
better choice.
Create a similar document that shows the physical adapter assignments to each
VIOS. With only two or four adapters to be assigned to the virtual servers, the
document is fairly simple.
For virtual storage access, either virtual SCSI or NPIV can be used. Virtual SCSI
adapters are configured in a client-server relationship, with the client adapter in
the client virtual server configured to refer to the server adapter configured in the
VIOS. The server adapter in the VIOS can be configured to refer to one client
adapter or allow any client to connect. NPIV configuration differs, in that the
VIOS serves as a pass-through module for a virtual Fibre Channel adapter in the
client virtual server. The SAN administrator assigns LUNs to the virtual Fibre
Channel adapters in the virtual servers, just as they would for a real Fibre
Channel adapter. The WWPNs are generated when the virtual Fibre Channel
adapter is defined for the client. This configuration can be provided to the SAN
administrator to ensure the LUNs are correctly mapped in the SAN.
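On the VIOS, the NPIV pass-through mapping is made with vfcmap; a sketch using illustrative adapter names:

```shell
# Tie the virtual Fibre Channel server adapter vfchost0 to the physical
# NPIV-capable port fcs0.
vfcmap -vadapter vfchost0 -fcp fcs0

# Display the NPIV mappings, including the generated client WWPNs that
# the SAN administrator needs for zoning and LUN masking.
lsmap -npiv -vadapter vfchost0
```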
Documenting the relationships between the VIOS and the client virtual servers
leads to correctly defined virtual adapters when you create the virtual servers
in FSM.
For more information about planning and configuring a highly available virtual
environment, see IBM System p Advanced POWER Virtualization (PowerVM)
Best Practices, REDP-4194.
7.2.1 Using the CLI
Many integrators and system administrators make extensive and efficient use of
the CLI, rather than use a graphical interface for their virtual server creation and
administration tasks. Tasks can be scripted, and often the tasks are completed
faster using the command line.
Scripts: In many cases, existing scripts that were written for use on a
Systems Director Management Console can run unchanged on FSM.
Similarly, scripts written to run on an HMC might run if smcli is prepended
to each command in the script.
To ensure that the correct I/O devices are specified in the command, understand
and document the intended I/O adapters using the information described in
“Assigning physical I/O” on page 301. An example of modifying a virtual server
definition using the CLI is in 7.3, “Modifying the VIOS definition” on page 304.
To create a VIO Server using a single command, run the following command:
smcli mksyscfg -r lpar -m p4601 -i
"name=vios1,profile_name=vios1_default,lpar_env=vioserver,lpar_id=1,
msp=1,min_mem=1024,desired_mem=2048,max_mem=4096,proc_mode=shared,
min_proc_units=1.0,desired_proc_units=1.0,max_proc_units=2.0,
min_procs=2,desired_procs=4,max_procs=8,sharing_mode=uncap,
uncap_weight=200,max_virtual_slots=100,
virtual_eth_adapters=\"11/0/1//1/0,12/0/99///0\",
virtual_scsi_adapters=13/server/10//13/0,boot_mode=norm,
io_slots=\"21010201//1,21010210//1,21010220//1\""
VIOS command: This command creates a VIOS server that matches the one
created and modified in 7.2.2, “Using the IBM Flex System Manager” on
page 288, which shows the usage of the graphical interface.
To verify that the VIO Server was created, run smcli lssyscfg, scanning the
results for the name of your virtual server:
sysadmin@sys1234567: ~> smcli lssyscfg -r lpar -m p4601 -F name
7989-SLES
7989-AIX
7989-RHEL6
7989-VIOS
vios1
To verify the content of the profile created as a result, run smcli lssyscfg with
different parameters:
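The profile-listing example itself is not reproduced here; one plausible form, assuming the HMC-style --filter syntax is accepted through smcli, is:

```shell
# List the attributes of the vios1 profile on managed system p4601.
smcli lssyscfg -r prof -m p4601 --filter "lpar_names=vios1"
```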
Recognition of failure
There are many reasons that your CLI command might fail. A syntax error is the
most likely, producing something like the following output. Use the information in
the message to correct the problem.
We access the FSM remotely using a browser. Complete the following steps:
1. Open a browser and point the browser to the following URL (where
system_name is the host name or IP address of the FSM node):
https://system_name:8422
Port number: The port you use might be different from the port we use in
our examples.
2. Enter a valid FSM user ID and password, and click Log in. The Welcome
window opens.
4. Click the Plug-ins tab to display the list of installed plug-ins. The list of
installed plug-ins opens, as shown in Figure 7-4.
Figure 7-4 IBM Flex System Manager Plug-ins tab - highlighting the Power Systems
Management plug-in
2. Select the compute node.
If more hosts are managed by this Flex System Manager, select the one on
which the VIOS virtual server is created.
3. Click Actions → System Configuration → Create Virtual Server to start
the wizard (Figure 7-6).
Figure 7-8 Specify the memory information for the VIOS virtual server
1. Change the value to reflect the desired amount of memory in gigabytes.
Decimal fractions can be specified to assign memory in megabyte
increments. This value is the amount of memory that the hypervisor attempts
to assign when the VIOS is activated. We assign the VIOS 2 GB of memory.
Figure 7-9 Setting the processor characteristics for the VIOS virtual server
Virtual Ethernet
In this task, the process is repeated for each virtual adapter to be defined on the
VIOS, but the characteristics differ with each adapter type. The order in which
the adapters are created does not matter.
If you performed the steps in “Memory and processor settings” on page 294, you
should see the window shown in Figure 7-10.
Figure 7-10 Create the bridging virtual Ethernet adapter in the VIOS
Complete the following steps:
1. Define the bridging virtual Ethernet adapter. Click Create Adapter, which
opens the window where you create the bridging virtual Ethernet adapter, as
shown in Figure 7-11.
Figure 7-12 Create control channel virtual Ethernet adapter for SEA failover
4. Click Add to add more virtual Ethernet adapters, and a new virtual Ethernet
adapter window opens.
5. Create an additional virtual Ethernet adapter to use as the control channel for
shared Ethernet adapter failover:
a. Make the adapter ID 12 and the VLAN 99, leaving all other fields as they
are, to create the control channel virtual Ethernet adapter.
b. Click OK to return to the virtual Ethernet adapter main window.
Virtual storage
Here we show an example of creating a virtual SCSI adapter for the VIOS virtual
server. When creating a virtual Fibre Channel adapter, the same windows shown
in “Virtual Ethernet” on page 296 are shown. However, change the Adapter type
field to Fibre Channel.
2. Complete the fields in Figure 7-14 on page 300 as follows:
– Specify 13 as the Adapter ID.
– To create a virtual SCSI relationship between this VIOS and a client virtual
server, specify SCSI as the Adapter type. Either choose an existing
virtual server and supply an ID in the Connecting adapter ID field, or
enter a new ID and connecting adapter ID for a virtual server that is not
defined.
Figure 7-14 on page 300 shows the window for creating a virtual SCSI
adapter between this VIOS and a client virtual server with an ID of 10 and
a connection adapter ID of 13.
Note: The number of virtual adapters allowed on the virtual server can be
set in this window. Set it to one more than the highest ID number that you
plan to assign. If you do not set it correctly, it automatically increases, if
necessary, when assigning ID numbers to virtual adapters that exceed the
current setting. This value cannot be changed dynamically after a virtual
server is activated.
3. Click OK to save the settings for this virtual storage adapter, and return to the
main virtual storage adapter window.
4. When all virtual storage adapters are defined, click Next in that window to
save the settings and proceed to the physical adapters window (Figure 7-17
on page 303).
Ports: Keep in mind that the four ports on the EN4054 4-port 10Gb Ethernet
Adapter are on two buses, so you can assign two ports to one partition
independently of the other two ports. The location code has a suffix of L1 or
L2 to distinguish between the two pairs of ports.
The expansion card location codes for the p460 are:
Slot   Location code
1      Un-P1-C34
2      Un-P1-C35
3      Un-P1-C36
4      Un-P1-C37
Figure 7-16 shows the expansion card location codes for the p260.
Slot   Location code
1      Un-P1-C18
2      Un-P1-C19
The storage controller, if disks were ordered, has a location code of P1-T2 on
both models. The USB controller has a location code of P1-T1 on
both models.
For our VIOS, we assign all four ports on an Ethernet expansion card and the
storage controller.
7.3.1 Using the IBM Flex System Manager
To change the values using the web interface, complete the following steps:
1. Select the newly created VIOS and click Actions → System
Configuration → Manage Profiles, as shown in Figure 7-18.
A window opens and shows all of the profiles for the selected virtual server.
2. Select the profile to edit and click Actions → Edit.
Note the values that were set by the wizard:
– The desired virtual processor count is 10 (as specified when creating the
virtual server). This count translates to a desired processing unit setting
of 1.0.
– The maximum virtual processor count is 20. The maximum count is always
the desired count plus 10. The maximum processing units setting is also
set to 20.
– The minimum virtual processors setting is set to 1, with the processing
units set to 1.
– The sharing mode is set to uncapped with a weight of 128. Verify that this
setting is acceptable for the virtual server.
Use the same process shown in 7.2, “Creating the VIOS virtual server” on
page 286, but with some differences. The differences between creating a VIOS
and an AIX or Linux virtual server are:
The Environment option in the initial window is set to AIX/Linux.
No physical I/O adapters need to be defined if the virtual server uses only
virtualized resources. In this case, a VIOS must be defined to provide
virtualized access to network and storage.
The virtual server might use all physical resources, running as a full
system partition.
The virtual server can be defined as suspend capable.
For more details about installing IBM i in a virtual server, see IBM i on a POWER
Blade Read-me First, found at:
http://www.ibm.com/systems/i/advantages/v6r1/blades/pdf/i_on_blade_read
me.pdf
Creating the virtual server for an IBM i installation is similar to the process for
creating a VIOS. Complete the following steps:
1. Set the Environment option to IBM i, as shown in Figure 7-20.
2. Click Next to go to the Memory settings. The window shown in Figure 7-21
opens.
4. Choose a quantity of processors for the virtual server and click Next to create
the virtual Ethernet adapters. The window shown in Figure 7-23 opens.
With the VIOS already defined, the FSM defines a virtual Ethernet on the
same VLAN as the SEA on the VIOS. We keep that definition, as shown in
Figure 7-23.
Important: These steps are critical, because the IBM i virtual server must
be defined to use only virtual resources through a VIOS. At the least, a
virtual Ethernet and a virtual SCSI adapter must be defined in the IBM i
virtual server.
6. Indicate that you do not want automatic virtual storage definition (configure
the adapters manually), and click Next to proceed to the main Virtual
Storage window.
Because no virtual storage adapters exist, the Create Adapter option is
displayed, as shown in Figure 7-25. If virtual storage adapters are already
created, they are shown.
9. Click OK to create this virtual SCSI adapter and return to the main Virtual
Storage adapter window, as shown in Figure 7-27.
Figure 7-27 IBM i virtual server settings for virtual SCSI adapter
10.This adapter is the only virtual SCSI adapter we create, so click Next to
proceed to the physical adapter settings, as shown in Figure 7-28.
To use a virtual optical drive from the VIOS for the IBM i operating system
installation, the installation media ISO files must be copied to the VIOS,
and the virtual optical devices must be created.
11.Do not select physical adapters for IBM i virtual servers, as shown in
Figure 7-28 on page 313. Click Next in this window to proceed to the Load
Source and Console settings, as shown in Figure 7-29.
Figure 7-29 IBM i virtual server load source and console settings
12.Choose the virtual SCSI as the Load Source. Click Next to proceed to
the Summary.
13.Review the settings on the Summary page, and click Finish to complete
the definition.
7.6 Preparing for a native operating system installation
If you need the entire capacity of the Power Systems compute node, an
operating system can be installed natively on the node. The configuration is
similar to the setup for a partitioned node, but all of the resources are assigned to
a single virtual server.
The operating system can then be installed to that single virtual server, using the
methods described in Chapter 8, “Operating system installation” on page 317.
8
In this chapter, we describe methods for updating the Power Systems compute
nodes.
Firmware updates can provide fixes to previous versions and can enable new
functions. Compute node firmware typically has a prerequisite CMM firmware
level. It is best to have a program in place for reviewing the current firmware
levels of the chassis components and compute nodes to ensure the
best availability.
Firmware updates are available from the IBM Fix Central web page:
http://www.ibm.com/support/fixcentral/
Figure 8-1 shows the update firmware menu in IBM Flex System Manager.
Firmware updates done using IBM Flex System Manager can be
non-destructive or concurrent with respect to server operations, so that a
server reboot is not required. Only updates within a release can be, but are
not guaranteed to be, concurrent.
Figure 8-1 The IBM Flex System Manager compute nodes check and update
firmware
Firmware updates can take time to load. To expedite the initial setup process,
you can install your operating system while you wait for firmware updates.
– Install the firmware by running update_flash (on AIX):
cd /tmp/fwupdate
/usr/lpp/diagnostics/bin/update_flash -f 01EA3xx_yyy_zzz
– Install the firmware by running update_flash (on Linux):
cd /tmp/fwupdate
/usr/sbin/update_flash -f 01EA3xx_yyy_zzz
– Install the firmware by running ldfware (on VIOS):
cd /tmp/fwupdate
ldfware -file 01EA3xx_yyy_zzz
8. Verify that the update installed correctly, as described in 8.1.4, “Verifying the
system firmware levels” on page 325.
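As a quick check (a sketch; the output format varies by platform and firmware level), the installed firmware images can be displayed from the operating system:

```shell
# On AIX, display the permanent and temporary system firmware images.
lsmcode -c

# On VIOS, the equivalent restricted-shell command is:
lsfware
```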
2. Accept the copyright notice, and then choose the task selection menu entry
shown in Figure 8-3.
FUNCTION SELECTION
1 Diagnostic Routines
This selection will test the machine hardware. Wrap plugs and
other advanced functions will not be used.
2 Advanced Diagnostics Routines
This selection will test the machine hardware. Wrap plugs and
other advanced functions will be used.
3 Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.)
This selection will list the tasks supported by these procedures.
Once a task is selected, a resource menu may be presented showing
all resources supported by the task.
4 Resource Selection
This selection will list the resources in the system that are supported
by these procedures. Once a resource is selected, a task menu will
be presented showing all tasks that can be run on the resource(s).
99 Exit Diagnostics
[MORE...7]
Delete Resource from Resource List
Display Configuration and Resource List
Display Firmware Device Node Information
Display Hardware Error Report
Display Hardware Vital Product Data
Display Multipath I/O (MPIO) Device Configuration
Display Resource Attributes
Display Service Hints
Display or Change Bootlist
Hot Plug Task
Microcode Tasks
Process Supplemental Media
[MORE...1]
4. Select Download Latest Available Microcode, as shown in Figure 8-5.
5. Insert the CD-ROM with the microcode image, or select the virtual optical
device that points to the microcode image. If the system is booted from a NIM
server, the microcode must be in /usr/lib/microcode of the Shared Product
Object Tree (SPOT) from which the client is booted.
4. Select the system object sys0 and press F7 to commit, as shown
in Figure 8-7.
All Resources
This selection will select all the resources currently displayed.
+ sys0 System Object
Figure 8-9 shows the window where you select the type of update to search
for, download, and apply. For this procedure, we use Power System Firmware.
2. Click Add to add your selection to the list of selected update types.
4. Review and confirm the list of updates that will be installed on the selected
systems, as shown in Figure 8-11. After you confirm this list, the update
begins and is concurrent (the system does not require a restart to activate the
new firmware).
5. If necessary, review the installation log, as shown in Figure 8-12, to determine
the status of the installation.
For more information about the NIM installation, see NIM from A to Z in
AIX 5L, SG24-7296.
3. In the next window, respond to the prompt for a machine name and the type
of network connectivity you are using. The system populates the remaining
fields and displays the screen shown in Figure 8-13.
Define a Machine
[MORE...1]
5. Assign the installation resources to the machine. For this example, we are
doing an RTE installation, so we use spot and lpp_source for the installation.
Run the following command:
smit nim_mac_res
6. Select Allocate Network Install Resources, as shown in Figure 8-14. A list of
available machines opens.
Mo+--------------------------------------------------------------------------+
| Target Name |
| |
| Move cursor to desired item and press Enter. |
| |
| CURSO groups mac_group |
| master machines master |
| STUDENT1 machines standalone |
| STUDENT2 machines standalone |
| STUDENT3 machines standalone |
| STUDENT4 machines standalone |
| STUDENT5 machines standalone |
| STUDENT6 machines standalone |
| tws01 machines standalone |
| 7989nimtest machines standalone |
| 7989AIXtest machines standalone |
| bolsilludo machines standalone |
| tricolor machines standalone |
| decano machines standalone |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
F1| /=Find n=Find Next |
F9+--------------------------------------------------------------------------+
9. Confirm your resource selections by running smit nim_mac_res and selecting
List Allocated Network Install Resources, as shown in Figure 8-16.
+--------------------------------------------------------------------------+
| Available Network Install Resources |
| |
| Move cursor to desired item and press F7. |
| ONE OR MORE items can be selected. |
| Press Enter AFTER making all selections. |
| |
| > LPP_AIX61_TL04_SP01_REL0944_BOS lpp_source |
| > SPOT_AIX61_TL04_SP01_REL0944 spot |
| AIX61_LAST_TL lpp_source |
| |
| F1=Help F2=Refresh F3=Cancel |
| F7=Select F8=Image F10=Exit |
F1| Enter=Do /=Find n=Find Next |
F9+--------------------------------------------------------------------------+
+--------------------------------------------------------------------------+
| Operation to Perform |
| |
| Move cursor to desired item and press Enter. Use arrow keys to scroll. |
| |
| [TOP] |
| diag = enable a machine to boot a diagnostic image |
| cust = perform software customization |
| bos_inst = perform a BOS installation |
| maint = perform software maintenance |
| reset = reset an object's NIM state |
| fix_query = perform queries on installed fixes |
| check = check the status of a NIM object |
| reboot = reboot specified machines |
| maint_boot = enable a machine to boot in maintenance mode |
| showlog = display a log in the NIM environment |
| lppchk = verify installed filesets |
| restvg = perform a restvg operation |
| [MORE...6] |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
| /=Find n=Find Next |
+--------------------------------------------------------------------------+
14.Confirm your machine selection and option selection in the next window, and
select additional options to further customize your installation, as shown in
Figure 8-18.
[Entry Fields]
Target Name 7989AIXtest
Source for BOS Runtime Files rte +
installp Flags [-agX]
Fileset Names []
Remain NIM client after install? yes +
Initiate Boot Operation on Client? yes +
Set Boot List if Boot not Initiated on Client? no +
Force Unattended Installation Enablement? no +
ACCEPT new license agreements? [yes] +
The selection of options on the NIM machine is complete. Next, continue the
installation from the Systems Management Services (SMS) menu on the
POWER7-based compute node.
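Alternatively, the resource allocation and BOS installation can be driven from the NIM master command line; a sketch using the resource and machine names from this example:

```shell
# Kick off an RTE (install-from-lpp_source) BOS installation on the
# 7989AIXtest client, accepting license agreements non-interactively.
nim -o bos_inst -a source=rte \
    -a spot=SPOT_AIX61_TL04_SP01_REL0944 \
    -a lpp_source=LPP_AIX61_TL04_SP01_REL0944_BOS \
    -a accept_licenses=yes \
    7989AIXtest
```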
The console displays the IBM boot splash screen (rows of repeating "IBM")
while the compute node starts.
16.Select option 1 (SMS Menu) to open the SMS Main Menu, as shown in
Figure 8-20.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
-------------------------------------------------------------------------------
Navigation Keys:
17.Select option 2 (Setup Remote IPL (Initial Program Load)) from the SMS
main menu.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
NIC Adapters
Device Location Code Hardware
Address
1. Interpartition Logical LAN U7895.42X.1058008-V5-C4-T1 42dbfe361604
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
19.Select the IP protocol version (either ipv4 or ipv6), as shown in Figure 8-22.
For our example, we select ipv4.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Internet Protocol Version.
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
20. Select option 1 (BOOTP) as the network service to use for the installation, as shown in Figure 8-23.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Network Service.
1. BOOTP
2. ISCSI
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
21. Set up your IP address and the IP address of the NIM server for the installation. To do so, select option 1 (IP Parameters), as shown in Figure 8-24.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Network Parameters
Interpartition Logical LAN: U7895.42X.1058008-V5-C4-T1
1. IP Parameters
2. Adapter Configuration
3. Ping Test
4. Advanced Setup: BOOTP
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:11
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
IP Parameters
Interpartition Logical LAN: U7895.42X.1058008-V5-C4-T1
1. Client IP Address [9.27.20.216]
2. Server IP Address [9.42.241.191]
3. Gateway IP Address [9.27.20.1]
4. Subnet Mask [255.255.252.0]
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
23. Press M to return to the SMS main menu (see Figure 8-20 on page 341).
24. Select option 5 (Select Boot Options) to display the Multiboot screen, as shown in Figure 8-26, and select option 1 (Select Install/Boot Device).
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Multiboot
1. Select Install/Boot Device
2. Configure Boot Device Order
3. Multiboot Startup <OFF>
4. SAN Zoning Support
5. Management Module Boot List Synchronization
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Device Type
1. Diskette
2. Tape
3. CD/DVD
4. IDE
5. Hard Drive
6. Network
7. List all Devices
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Device
Device Current Device
Number Position Name
1. 3 Interpartition Logical LAN
( loc=U7895.42X.1058008-V5-C4-T1 )
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
28. On the Select Task screen, select option 2 (Normal Mode Boot), as shown in Figure 8-29.
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Task
1. Information
2. Normal Mode Boot
3. Service Mode Boot
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
chosen-network-type = ethernet,auto,none,auto
server IP = 9.42.241.191
client IP = 9.27.20.216
gateway IP = 9.27.20.1
device = /vdevice/l-lan@30000004
MAC address = 42 db fe 36 16 4
loc-code = U7895.42X.1058008-V5-C4-T1
Note: IBM i installation can be performed from optical media. The IBM i
process is different from what is described here for AIX and Linux. For more
information, see Section 5 of Getting Started with IBM i on an IBM Flex
System compute node, available at:
http://www.ibm.com/developerworks/
To perform an optical media installation, you need an external USB drive
(provided with neither the chassis nor the Power Systems compute node)
attached to your Power Systems compute node.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
-------------------------------------------------------------------------------
Navigation Keys:
4. Select option 5 (Select Boot Options) to display the multiboot options. The
window shown in Figure 8-33 opens.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Multiboot
1. Select Install/Boot Device
2. Configure Boot Device Order
3. Multiboot Startup <OFF>
4. SAN Zoning Support
5. Management Module Boot List Synchronization
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
5. Select option 1 (Select Install/Boot Device). The window shown in
Figure 8-34 opens.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Device Type
1. Diskette
2. Tape
3. CD/DVD
4. IDE
5. Hard Drive
6. Network
7. List all Devices
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Media Type
1. SCSI
2. SSA
3. SAN
4. SAS
5. SATA
6. USB
7. IDE
8. ISA
9. List All Devices
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
7. Select option 6 (USB) as the media type. The window shown in Figure 8-36 opens
and shows the list of available USB optical drives. In our example, a virtual
optical drive is shown as item 1. What you see depends on the drives you
have connected.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Media Adapter
1. U7895.42X.1058008-V6-C2-T1 /vdevice/v-scsi@30000002
2. List all devices
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Task
1. Information
2. Normal Mode Boot
3. Service Mode Boot
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
9. When you select your optical drive, you have three options. Select option 2
(Normal Mode Boot), then select option 1 (Yes) on the next screen. The boot
process for your CD displays, and you can continue with the installation
process shown in “Installation procedures” on page 364.
First, you must set up three standard Linux services on the installation server:
– tftpd
– dhcpd (used only to allow netboot using bootpd to a specific MAC address)
– NFS server
SUSE Linux Enterprise Server 11
The following steps pertain to SLES 11:
1. Obtain the distribution ISO file, and copy it to a work directory of the
installation server. We configure a Network File System (NFS) server (this
server can be the installation server itself or another server) and mount this
shared directory from the target virtual server to unload the software.
2. On the installation server, install the tftp and dhcpd server packages (we
use dhcpd only to run bootp for a specific MAC address).
3. Copy the netboot image and the yaboot executable from the sles11/suseboot
directory of the DVD into the tftpboot directory (the default for SUSE Linux
Enterprise Server 11 is /tftpboot):
– The netboot image is named inst64.
– The yaboot executable is named yaboot.ibm.
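The copy in step 3 can be sketched as follows. The DVD mount point and the stand-in for /tftpboot are hypothetical throwaway directories (with placeholder files), so that the commands can run anywhere; on a real installation server the source is the mounted SLES 11 DVD and the target is /tftpboot.

```shell
# Sketch of step 3: stage inst64 and yaboot.ibm into the tftp directory.
dvd=/tmp/sles11dvd/suseboot   # hypothetical DVD mount point
tftp=/tmp/tftpboot            # stand-in for /tftpboot
mkdir -p "$dvd" "$tftp"
touch "$dvd/inst64" "$dvd/yaboot.ibm"        # placeholders for the real files
cp "$dvd/inst64" "$dvd/yaboot.ibm" "$tftp/"  # the actual copy from step 3
ls "$tftp" | sort
```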
4. Boot the target virtual server and access SMS (see Figure 8-38) to retrieve
the MAC address of the Ethernet interface to use for
the installation.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
-------------------------------------------------------------------------------
Navigation Keys:
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
NIC Adapters
Device Location Code Hardware
Address
1. Interpartition Logical LAN U8406.71Y.06ACE4A-V4-C4-T1 XXXXXXXXXXXX
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:
5. On the installation server, configure the dhcpd.conf file and, assuming that
it is also the NFS server, the /etc/exports file. The dhcpd.conf file is shown
in Figure 8-40; replace XX.XX.XX.XX.XX.XX and the network parameters with
your own MAC and IP addresses.
always-reply-rfc1048 true;
allow bootp;
deny unknown-clients;
not authoritative;
default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
Figure 8-40 The dhcpd.conf file for SUSE Linux Enterprise Server 11
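Figure 8-40 shows only the global options. For BOOTP to answer the target virtual server, dhcpd also needs a subnet declaration and a host stanza that binds the MAC address to a fixed IP address and to the yaboot file. The following sketch writes such a file to a scratch path; the subnet, host name, and client address are illustrative assumptions (the server address 10.1.2.51 matches the NFS install URL used later), and XX:XX:XX:XX:XX:XX stands for the MAC address read from the SMS NIC Adapters screen.

```shell
# Sketch of a complete dhcpd.conf based on Figure 8-40; the host stanza
# values are assumptions -- substitute your own MAC and IP addresses.
cat > /tmp/dhcpd.conf <<'EOF'
always-reply-rfc1048 true;
allow bootp;
deny unknown-clients;
not authoritative;
default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;

subnet 10.1.2.0 netmask 255.255.255.0 {
  host sles11node {
    hardware ethernet XX:XX:XX:XX:XX:XX;  # MAC of the target virtual server
    fixed-address 10.1.2.52;              # IP handed out during netboot
    next-server 10.1.2.51;                # TFTP/installation server
    filename "yaboot.ibm";                # yaboot copied to /tftpboot in step 3
  }
}
EOF
grep -c 'allow bootp' /tmp/dhcpd.conf   # prints 1
```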
default=sles11
timeout=100
image[64bit]=inst64.sles11
label=sles11
append="quiet usevnc=1 vncpassword=passw0rd
install=nfs://10.1.2.51/temp/sles11"
7. Figure 8-42 shows an example of the /etc/exports file with the exported
directory that contains the image of the SUSE Linux Enterprise
Server 11 DVD.
/dati1/sles11/ *(rw,insecure,no_root_squash)
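The export options in Figure 8-42 matter for netboot: rw allows read-write mounts, insecure accepts client source ports above 1023 (which some firmware and installer NFS clients use), and no_root_squash keeps the client's root user from being mapped to nobody. A quick sketch that recreates the entry in a scratch file and checks it:

```shell
# Recreate the /etc/exports entry from Figure 8-42 in a scratch file.
# /dati1/sles11/ holds the unpacked SLES 11 DVD image.
echo '/dati1/sles11/ *(rw,insecure,no_root_squash)' > /tmp/exports
# rw             - read-write access
# insecure       - accept client source ports above 1023
# no_root_squash - do not map client root to nobody
grep -c 'no_root_squash' /tmp/exports   # prints 1
```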
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
-------------------------------------------------------------------------------
Navigation Keys:
10. Select option 5 (Select Boot Options). The window shown in Figure 8-44 opens.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Multiboot
1. Select Install/Boot Device
2. Configure Boot Device Order
3. Multiboot Startup <OFF>
4. SAN Zoning Support
5. Management Module Boot List Synchronization
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:1
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Device Type
1. Diskette
2. Tape
3. CD/DVD
4. IDE
5. Hard Drive
6. Network
7. List all Devices
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:6
12. Select option 6 (Network) as the boot device. The window shown in Figure 8-46 opens.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Network Service.
1. BOOTP
2. ISCSI
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:1
For a description of the installation, see 8.3.3, “Installing SUSE Linux Enterprise
Server” on page 381.
Tip: The yaboot executable is named simply yaboot. We can rename it, for
example, to yaboot.rh61, to avoid conflicts in the tftpboot directory.
4. The netboot image is larger than 65,500 512-byte blocks and cannot be
used because of a limitation of tftpd. We must boot the vmlinuz kernel and use
the ramdisk image instead. Copy the two files from the ppc/ppc64 directory of
the DVD to the tftpboot directory of the installation server.
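The 65,500-block limit in step 4 comes from tftpd's 16-bit block numbering, and works out to roughly 32 MB, which is why a full netboot image does not fit but the vmlinuz kernel does:

```shell
# The tftpd limit from step 4: block numbers are 16 bits, so files beyond
# 65,500 blocks of 512 bytes each cannot be transferred.
limit=$((65500 * 512))
echo "$limit bytes"                 # prints 33536000 bytes
echo "$((limit / 1024 / 1024)) MB"  # prints 31 MB (integer division)
```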
5. On the installation server, create a directory named tftpboot/etc, and create
a file named 00-XX-XX-XX-XX-XX-XX, replacing the XX pairs with the octets of the
target virtual server's MAC address, as shown in Figure 8-48.
default=rh61
timeout=100
image=vmlinuz
initrd=ramdisk.image.gz
label=rh61
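The file name in step 5 is derived mechanically from the MAC address: keep the leading 00- and insert a dash after every two hex digits. As a sketch, using the MAC address 42dbfe361604 from the earlier SMS example:

```shell
# Build the step-5 config file name from a MAC address.
mac="42dbfe361604"   # example MAC from the SMS NIC Adapters screen
fname="00-$(printf '%s' "$mac" | sed 's/../&-/g; s/-$//')"
echo "$fname"        # prints 00-42-db-fe-36-16-04
```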
allow bootp;
deny unknown-clients;
not authoritative;
default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
Figure 8-49 The dhcpd.conf file for Red Hat Enterprise Linux 6.1
Ensure that the IP address is not cloned in this process. If you are using NIM to
restore the mksysb, the IP address given to the client during the network boot
overrides the IP address on the interface used by NIM.
It is also important to determine whether all device drivers that are needed to
support the hardware on the target system are in the mksysb. You can accomplish
this task by installing the necessary device drivers in the image before
creating the mksysb or, when using NIM to restore the mksysb, by specifying an
lpp_source that contains the needed drivers.
You can also use the ALT_DISK_INSTALL method, but this method works only if
you have SAN disks attached or removable disks that can be attached to the new
server. You can use the ALT_DISK_INSTALL method to create a full copy of your
system rootvg, and then remove that disk from the server and assign it to
another server. When the new server starts from that disk, it runs a clone of
the original system.
To install AIX using the NIM lpp_source method, complete the following steps:
1. The first part of the process, setting up the environment for installation, is
covered in 8.2.1, “NIM installation” on page 332, and we follow up after exiting
to the normal boot part of the process.
2. After you exit to normal boot, a screen opens that shows the network
parameters for BOOTP, as shown in Figure 8-24 on page 343.
3. Next, a screen opens that shows the AIX kernel loading. You are prompted to
select the installation language (English, by default), as shown in Figure 8-50.
88 Help ?
Type the number of your choice and press Enter. Choice is indicated by
>>>.
88 Help ?
99 Previous Menu
You can install the OS using option 1 or 2:
– Option 1 (Start Install Now with Default Settings) begins the installation
using the default options.
– Option 2 (Change/Show Installation Settings and Install) displays several
options, as shown in Figure 8-52.
1 System Settings:
Method of Installation.............New and Complete Overwrite
Disk Where You Want to Install.....hdisk0

88 Help ?
99 Previous Menu

WARNING: Base Operating System Installation will destroy
or impair recovery of ALL data on the destination disk hdisk0.

>>> Choice [0]:
In this screen, the following settings are available. After you change and
confirm your selections, type 0 and press Enter to begin the installation. The
settings are:
– Option 1 (System Settings) refers to the installation method and
destination disk. Supported methods for AIX installation are:
• New and Complete Overwrite: Use this method when you are installing
a new system or reinstalling one that needs to be erased.
Security Models
1. Trusted AIX............................................. no
88 Help ?
99 Previous Menu
– Option 4 (More Options (Software Install options)): You can use this option
to choose whether to install graphics software, such as X Window System,
to select the file system type (jfs or jfs2), and to enable system backups at
any time, as shown in Figure 8-54 on page 369.
Install Options
88 Help ?
99 Previous Menu
5. After you complete your options selection, you are prompted to confirm your
choices, as shown in Figure 8-55.
Disks: hdisk0
Cultural Convention: en_US
Language: en_US
Keyboard: en_US
JFS2 File Systems Created: yes
Graphics Software: yes
System Management Client Software: yes
Enable System Backups to install any system: yes
Selected Edition: express
We install the virtual servers by using virtual optical media with the ISO
image of the RHEL distribution as the boot device. Figure 8-56 shows the
Virtual Optical Media window in IBM Flex System Manager.
To install RHEL, complete the following steps:
1. After the virtual media is set up, boot the server and enter SMS. The screen
shown in Figure 8-57 opens.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
-------------------------------------------------------------------------------
Navigation Keys:
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Multiboot
1. Select Install/Boot Device
2. Configure Boot Device Order
3. Multiboot Startup <OFF>
4. SAN Zoning Support
5. Management Module Boot List Synchronization
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:1
3. Select option 1 (Select Install/Boot Device). The window shown in
Figure 8-59 opens.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Device Type
1. Diskette
2. Tape
3. CD/DVD
4. IDE
5. Hard Drive
6. Network
7. List all Devices
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:3
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Media Type
1. SCSI
2. SSA
3. SAN
4. SAS
5. SATA
6. USB
7. IDE
8. ISA
9. List All Devices
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:1
5. For the virtual optical media, select option 1 (SCSI). The window shown in
Figure 8-61 opens.
Version AF740_051
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Device
Device Current Device
Number Position Name
1. - SCSI CD-ROM
( loc=U7895.42X.1058008-V2-C2-T1-L8200000000000000 )
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:1
6. Select the drive that you want to boot from. In Figure 8-61, there is only
one drive to select: the virtual optical media linked to the Red Hat Enterprise
Linux DVD ISO image.
The system now boots from the ISO image. Figure 8-62 shows the boot of the
virtual media and the VNC parameters.
Figure 8-64 shows the VNC graphical console start.
Running anaconda 13.21.117, the Red Hat Enterprise Linux system installer - please wait.
21:08:52 Starting VNC...
21:08:53 The VNC server is now running.
21:08:53
7. Connect to the IP address listed in Figure 8-64 with a VNC client to perform
the installation. You see the graphical RHEL installer welcome window.
8. Select a preferred language for the installation process.
9. Select the keyboard language.
10. Select the storage devices to use for the installation, as shown in Figure 8-65.
For virtual disks, hdisks, or SAN disks, select Basic Storage Devices.
12. Select a disk layout, as shown in Figure 8-67. You can choose from a number
of predefined layouts or create a custom layout (for example, you can create a
software mirror between two disks). You can also manage older RHEL
installations if they are detected.
As the system boots, the operating system loads, as shown in Figure 8-70.
Starting cups: [ OK ]
Mounting other filesystems: [ OK ]
Starting HAL daemon: [ OK ]
Starting iprinit: [ OK ]
Starting iprupdate: [ OK ]
Retrigger failed udev events[ OK ]
Adding udev persistent rules[ OK ]
Starting iprdump: [ OK ]
Loading autofs4: [ OK ]
Starting automount: [ OK ]
Generating SSH1 RSA host key: [ OK ]
Generating SSH2 RSA host key: [ OK ]
Generating SSH2 DSA host key: [ OK ]
Starting sshd: [ OK ]
Starting postfix: [ OK ]
Starting abrt daemon: [ OK ]
Starting crond: [ OK ]
Starting atd: [ OK ]
Starting rhsmcertd 240[ OK ]
ite-bt-061.stglabs.ibm.com login:
The basic installation is complete. You might choose to install additional RPMs
from the IBM Service and Productivity Tools website found at:
http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html
2. Select New installation and click Next. The Installation Settings window
opens (Figure 8-72).
3. Either accept the default values or click Change to change the values for:
– Keyboard layout
– Partitioning
– Software
– Language
The final phase of the basic installation process is shown in Figure 8-74.
At the end of the installation, the system reboots and the VNC connection
is lost.
The partition firmware splash screen is displayed while the system restarts:
STARTING SOFTWARE
PLEASE WAIT...
Elapsed time since release of system processors: 202 mins 30 secs
5. The installation and configuration continue with a prompt for the root
password. Enter the root password.
6. Other installation screens open. Enter values as needed for your
environment. After the installation is complete, you see the window shown in
Figure 8-76.
sles11-e4kc login:
The basic SLES installation is complete. You may choose to install additional
RPMs from the IBM Service and Productivity Tools website at:
http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html
Abbreviations and acronyms
AAS Advanced Administrative System
AC alternating current
ACL access control list
DPM distributed power management
DRTM Dynamic Root of Trust Measurement
ROM read-only memory
RPM Red Hat Package Manager
SWMA Software Maintenance Agreement
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this book.
IBM Redbooks
The following publications from IBM Redbooks provide additional information
about IBM Flex System. They are available at:
http://www.redbooks.ibm.com/portals/puresystems
IBM PureFlex System and IBM Flex System Products & Technology,
SG24-7984
IBM Flex System Networking in an Enterprise Data Center, REDP-4834
Switches:
IBM Flex System EN2092 1Gb Ethernet Scalable Switch, TIPS0861
IBM Flex System EN4091 10Gb Ethernet Pass-thru Module, TIPS0865
IBM Flex System Fabric EN4093 10Gb Scalable Switch, TIPS0864
IBM Flex System FC3171 8Gb SAN Switch and Pass-thru, TIPS0866
IBM Flex System FC5022 16Gb SAN Scalable Switch and FC5022 24-port
16Gb ESB SAN Scalable Switch, TIPS0870
IBM Flex System IB6131 InfiniBand Switch, TIPS0871
Adapters:
IBM Flex System CN4054 10Gb Virtual Fabric Adapter and EN4054 4-port
10Gb Ethernet Adapter, TIPS0868
IBM Flex System EN2024 4-port 1Gb Ethernet Adapter, TIPS0845
IBM Flex System EN4132 2-port 10Gb Ethernet Adapter, TIPS0873
You can search for, view, download, or order these documents and other
Redbooks, Redpapers, Web Docs, drafts, and additional materials at the
following website:
ibm.com/redbooks
IBM education
The following are IBM educational offerings for IBM Flex System. Note that some
course numbers and titles might have changed slightly after publication.
For more information about these education offerings, and many other IBM
System x educational offerings, visit the global IBM Training website located at:
http://www.ibm.com/training
Help from IBM
IBM Support and downloads
ibm.com/support
Index

Flex System Manager (continued)
   hardware 193
   importing update files 226
   initial setup 202
   Initial Setup 252
   inventory collection 242
   Java 205
   local storage 197
   Manage Power Systems Resources window 236
   management network 210
   management network adapter 198
   motherboard 196
   network adapter 198
   NTP setup 208
   open a console 245
   overview 7, 9, 54, 192
   partitioning 284
   planar 196
   plugins tab 263
   power control 206
   Power Systems 236
   Power Systems Management 292
   remote access 268
   remote control 205
   setup 202
   SOL disable 249
   solid-state drives 197
   specifications 195
   Status Manager 267
   storage 197
   system board 196
   Update Manager 221, 267
   user accounts 209
   VIOS virtual server 288
   VMControl 271
   wizard 202
foundations 3
front panel 69
FSM
   See Flex System Manager
FSP 108

H
hard disk storage 95
humidity 152

I
I/O expansion slots 99
I/O modules
   bays 52
   Chassis Management Module interface 177
   introduction 51
   overview 12
IB6131 InfiniBand Switch 53
IB6132 2-port QDR InfiniBand Adapter 107
IBM i 388
   supported versions 121
   virtual server 308
InfiniBand adapters 107
installation
   AIX 364
   IBM i 388
   NIM 332
   Red Hat Enterprise Linux 370
   SUSE Linux Enterprise Server 381
integrated systems 1
Intel TXT 158
intelligent threads 83
internal management network 191
Internet connection 222
IOC controller hub 101

L
labeling 71
LDAP 186
license key management 187
light path diagnostic panel 70
Linux
   Red Hat Enterprise Linux install 370
   supported versions 122
   SUSE Linux Enterprise Server install 381
   TFTP installation method 355
   virtual server 308
logical partitions 275

M
management 54, 157–274
management network 191
memory 87
   Active Memory Expansion 92
   DIMM socket locations 89
   feature codes 88
   maximums 87
   memory channels 83
   mixed DIMMs 91
   planning 119
memory (continued)
   rules 89

N
N+1 redundancy 147
N+N redundancy 145
native installation 315
network planning 118, 124
network redundancy 129
network topology 52
networking 8
   teaming 131
NIC teaming 131
NIM installation 332
NPIV 281

O
operating environment 152
operating systems 119, 317, 364–388
   AIX install 364
   AIX support 120
   cloning 364
   DVD install 348
   IBM i 388
   IBM i support 121
   installing 332
   Linux support 122
   native install 315
   NIM installation 332
   optical media 348
   Red Hat Enterprise Linux 370
   SUSE Linux Enterprise Server 381
   TFTP method 354
   VIOS support 120
optical drive 119
OS install 348
overview
   p24L 68
   p260 64
   p460 66

P
p24L
   architecture 74, 79
   block diagram 74
   chassis support 73
   cover 97
   deconfiguring 77
   Ethernet adapters 102
   expansion slots 99
   features 68
   Fibre Channel adapters 105
   front panel 69
   I/O slots 99
   InfiniBand adapters 107
   labeling 71
   light path diagnostic panel 70
   local storage 96
   memory 87
      installation sequence 90
   memory channels 83
   operating systems 119
   overview 64, 68
   PCIe expansion 99
   power button 69
   power requirements 145
   PowerLinux 68
   processor 76
   storage 96
   supported adapters 101
   USB port 69
p260
   architecture 74, 79
   block diagram 74
   board layout 66
   chassis support 73
   cover 97
   deconfiguring 77
   Ethernet adapters 102
   expansion 99
   features 65
   Fibre Channel adapters 105
   front panel 69
   I/O slots 99
   InfiniBand adapters 107
   labeling 71
   light path diagnostic panel 70
   local storage 96
   memory 87
      installation sequence 90
   memory channels 83
   operating systems 119
   overview 64
   PCIe expansion 99
   power button 69
   power requirements 145
   processor 76
p260 (continued)
   storage 96
   supported adapters 101
   USB port 69
p460
   architecture 75, 79
   block diagram 75
   board layout 67
   chassis support 73
   cover 97
   deconfiguring 77
   dual VIOS 136
   Ethernet adapters 102
   expansion slots 99
   features 66
   Fibre Channel adapters 105
   front panel 69
   I/O slots 99
   InfiniBand adapters 107
   labeling 71
   light path diagnostic panel 70
   local storage 96
   memory 87
      installation sequence 90
   memory channels 83
   operating systems 119
   overview 64, 66
   PCIe expansion 99
   power button 69
   power requirements 146
   processor 76
   storage 96
   supported adapters 101
   USB port 69
P7-IOC controller 101
partitioning
   Flex System Manager 284
   POWER Hypervisor 278
   PowerVM 276
   preparing 284
   VIOS 286
   virtual storage 300
PDU planning 139
planning 117–156
   memory 119
   network 118, 124
   operating systems 119
   PDUs 139
   power 138
   power policies 142
   redundancy 128
   security 158
   software 119
   storage 118
   UPS units 138
   virtualization 152
policies
   Chassis Management Module interface 179
   security 142, 159
power
   capping 144
   Chassis Management Module interface 178
   planning 138
   policies 142
   power supplies 57, 140
   requirements 145
POWER Hypervisor 279
POWER7 processor 76
PowerLinux 68
   See also p24L Compute Node
PowerVM 276–286
   features 276
   POWER Hypervisor 278
processor 76
   architecture 79
   cores 81
   memory channels 83
   overview 80
processors
   cache 85
   deconfiguring 77
   energy management 87
   feature codes 77
   intelligent threads 83
   SMT 81
PureApplication System 4
PureFlex System 2, 15–43
   Enterprise 35
   Express 17
   Standard 26
PureSystems 2

R
Red Hat Enterprise Linux
   installation 370
Redbooks website 394
   Contact us xv
redundancy 128
power policies 143
remote access 188
remote presence 9

S
SAN connectivity 127
SAS storage 95
security 158
   Chassis Management Module interface 185
   policies 159
Serial over LAN 109
serial port cable 198
services 115
single sign-on 158
slots 99
SmartCloud Entry 44
SMS mode 340
SMT 81
SMTP 186
SOL
   disabling 249
solid-state drives 96
sound level 49
Spanning Tree Protocol 130
specifications
   Enterprise Chassis 48
Standard, PureFlex System 3
standard, PureFlex System 26
storage 95
   overview 8
   planning 118
SUSE Linux Enterprise Server
   installation 381
switches 53
systems management 54, 108

T
teaming 131
technical support 115
temperature 152
TFTP installation 354
time-to-value 9
topology 52
   Chassis Management Module interface 182
Trusted Platform Module 158

U
Update Manager 221
update_flash command 319
UPS planning 138
USB cable 198
USB port 69
user accounts
   Chassis Management Module 184

V
video cable 198
VideoStopped_OutOfRange 205
VIOS
   CLI 287
   creating a virtual server 286
   dual VIOS 135
   modifying 304
   supported versions 120
   virtual servers 155
VIOS virtual server 292
virtual Ethernet 280
virtual Fibre Channel 281
virtual SCSI 280
virtual servers 275
virtualization 275–316
   planning 152
   PowerVM 276–286
VLAGs 132
VLANs 126, 280
VMControl 271
VPD card 110

W
W1500 6
warranty 114
wizards
   Chassis Management Module 167
   Flex System Manager 9, 202
IBM Flex System p260 and p460 Planning and Implementation Guide
Back cover ®