Dino Quintero
Faraz Ahmad
Stephen Dominguez
David Pontes
Cesar Rodriguez
Redbooks
International Technical Support Organization
September 2019
SG24-8082-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page ix.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Chapter 5. PowerSC Trusted Network Connect and Patch Management v1.2.0.0 . . 155
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
5.2 Component architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5.3 Simplifying management of security and compliance by using TNC. . . . . . . . . . . . . . 160
5.4 Deployment considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5.4.1 Disk and memory requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5.4.2 Requirements to install software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5.4.3 Host installation matrix for TNC components . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.4.4 Syslog configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.5 Installing TNCPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.5.1 Networking requirements for TNCPM internet connections . . . . . . . . . . . . . . . . 164
5.5.2 Configuring the TNCPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.5.3 Configuring the Trusted Network Connect Server . . . . . . . . . . . . . . . . . . . . . . . 174
5.5.4 Configuring the Trusted Network Connect Client . . . . . . . . . . . . . . . . . . . . . . . . 181
5.5.5 Configuring Trusted Network Connect Server email. . . . . . . . . . . . . . . . . . . . . . 182
5.6 Working with Trusted Network Connect and Patch Management. . . . . . . . . . . . . . . . 182
5.6.1 Verifying the Trusted Network Connect Client . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.6.2 Viewing the Trusted Network Connect Server logs. . . . . . . . . . . . . . . . . . . . . . . 189
5.6.3 Viewing the verification results of the TNCCs . . . . . . . . . . . . . . . . . . . . . . . . . 189
5.6.4 Updating the Trusted Network Connect Client . . . . . . . . . . . . . . . . . . . . . . . . . . 189
5.6.5 Updating and verifying by using PowerSC GUI 1.2.0.0 . . . . . . . . . . . . . . . . . . . 192
5.6.6 New TNC functions provided in PowerSC GUI 1.2.0.1 . . . . . . . . . . . . . . . . . . . . 195
5.6.7 Update logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.7 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.7.1 Check syslog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.7.2 Verify your configuration files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.7.3 Update operation fails while AIX Trusted Execution is enabled . . . . . . . . . . . . . 196
5.7.4 Refreshing the daemons to correct anomalies . . . . . . . . . . . . . . . . . . . . . . . . . . 197
5.7.5 Enabling TNCS verbose logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
5.7.6 More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
6.5.3 Accessing virtual log data on the Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . . 220
6.5.4 Configuring shared storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
6.5.5 Demonstrating multipath failover. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
6.5.6 Configuring AIX auditing to use a virtual log . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
6.5.7 Configuring syslog to use a virtual log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
6.5.8 Backing up Trusted Logging data on the Virtual I/O Server . . . . . . . . . . . . . . . . 234
6.5.9 Deleting virtual logs and virtual log target devices . . . . . . . . . . . . . . . . . . . . . . . 245
6.6 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
6.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, FileNet®, IBM®, IBM Watson®, POWER®, POWER Hypervisor™, Power Systems™, POWER7®, PowerHA®, PowerSC™, PowerVM®, Redbooks®, Redbooks (logo)®, Watson™
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redbooks® publication provides a security and compliance solution that is
optimized for virtualized environments on IBM Power Systems™ servers, running IBM
PowerVM® and IBM AIX®. Security control and compliance are some of the key components
that are needed to defend the virtualized data center and cloud infrastructure against ever
evolving new threats. The IBM business-driven approach to enterprise security that is used
with solutions, such as IBM PowerSC™, makes IBM the premier security vendor in the market
today.
The book explores, tests, and documents scenarios using IBM PowerSC that leverage the IBM Power Systems server architecture and software solutions from IBM to help defend the virtualized data center and cloud infrastructure against ever evolving new threats.
This publication helps IT and Security managers, architects, and consultants to strengthen
their security and compliance posture in a virtualized environment running IBM PowerVM.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, Austin Center.
Dino Quintero is an IT Management Consultant Project Leader and an IBM Level 3 Senior
Certified IT Specialist with IBM Redbooks in Poughkeepsie, New York. Dino shares his
technical computing passion and expertise by leading teams developing technical content in
the areas of enterprise continuous availability, enterprise systems management,
high-performance computing, cloud computing, artificial intelligence (including machine and
deep learning), and cognitive solutions. He also is a Certified Open Group Distinguished IT
Specialist. Dino holds a Master of Computing Information Systems degree and a Bachelor of
Science degree in Computer Science from Marist.
Faraz Ahmad is an IBM Power Systems solution architect working in IBM Lab Services,
India. Faraz has over 15 years of experience in various areas of IT, including software
development, solution designing, and IT consulting. He specializes in cyber security and in
his current role, he designs security solutions for IBM customers. He is also a geography lead
and mentors security consultants in the Central and Eastern Europe, Middle East, and Africa
regions. His other areas of expertise include IBM PowerHA®, AIX, Linux, networking, and virtualization. He is the author of multiple patents and is recognized as an Invention Plateau holder.
He has a degree in Computer Science from Birla Institute of Technology, Ranchi, India.
Stephen Dominguez is the worldwide AIX security lead for IBM Systems Lab Services. He
has worked for IBM for over 20 years. He has been delivering AIX Security consulting
services for 10 years. He spent his initial 10 years in the IBM UNIX Product Test organization, testing the HMC and AIX Security components. He is a Java-certified programmer.
David Pontes has been working for more than 20 years at IBM, where he has worked in several areas, including support, management, and transformation projects. David has worked for the past seven years in security for Power Systems, and part of that time in the role of consultant for the IBM Lab Services Cloud Team.
Thanks to the following people for their contributions to this project:
Wade Wallace
IBM Redbooks, Austin Center
Xiohan Qin
IBM USA
Petra Buehrer
IBM Germany
Tim Hill
Rocket Software
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
This chapter also describes the business requirements for an IT security and Compliance
Management solution.
As the pace of globalization continues and new technologies emerge, traditional boundaries
between organizations continue to disappear. The ideal response involves planning and
assessment to identify risks across key business areas, including people, processes, data,
and technology throughout the entire organization. It is important to take a holistic approach
that can facilitate a business and compliance-driven security strategy that can act as an
effective defense for the organization's data and systems.
Organizations must build business processes, policies, and services that are secure by
design, meaning that security is intrinsic to their business processes, product development,
and daily operations. Security must be factored into the initial design, not bolted on afterward.
Additionally, companies must adopt a policy of compliance by design to ensure that all
services and IT systems that support the operations are aligned with the applicable regulatory requirements.
This approach enables an organization to securely and safely adopt new forms of technology,
such as cloud computing and mobile device management, and business models, such as
telecommuting and outsourcing, which can be more safely used for cost benefit, innovation,
and shorter time to market.
IT factors also represent technical considerations that affect the trustworthiness of the IT
systems and likely the IT environment as a whole. The combination of business and IT
factors represents the foundation for security management.
Security events and incidents might impact the correct and reliable operation of these
business processes. It might also affect the underlying IT infrastructure or upstream and
downstream business processes. The consequences of a defective service (incorrect or
varying results over time) might be significant to the consumer of the service, and therefore to
the provider of the service.
In addition to affecting the correct and reliable operation of the business processes, security
capabilities themselves must adhere to metrics around correct and reliable operation. Logs
and reports must be correct and reliable, security features must keep the number of false
positives and false negatives to a minimum, and security software and appliances must strive
for defect-free operation with a mean time-to-failure that meets an organization’s
requirements.
Service-level agreements
This factor applies to circumstances where security threats and threat agents can impact the
availability of IT systems and therefore the organization’s ability to conduct business or
provide services. Service-level agreements (SLAs) define acceptable conditions of operation
in an organization or service. SLAs might vary from IT system to IT system or from application
to application.
The availability of IT systems, data, and processes is a condition that is commonly referenced in SLAs. SLAs are also commonly associated with contractual penalties. Therefore,
companies must concentrate efforts to ensure the optimal availability of their IT systems.
IT system value
From a business perspective, the IT system value directly relates to the value of the business
transactions that it supports. These IT system values might be tangible or intangible. For an
e-retailer, these IT system values are tangible assets.
For a financial services company, the asset might be the client information or other data that
is used in transactions of the system. Another important consideration is that intangible
assets are sometimes not clearly identified, and these assets are normally the most valuable
assets of the company.
Contractual obligation
Security and risk management for an IT system are likely to be proportional to the consequences that can occur when the business encounters contractual liability from a security incident.
For example, a security incident that affects the availability of a service can prevent an
organization from complying with some contractual obligations, which is typically tied to some
penalties and sanctions.
Some examples of indirect loss include civil or criminal process, fines, penalties, loss of
credibility, and damage to the corporate image.
The investment in security mechanisms that enhance, standardize, and facilitate the security
and compliance of IT systems must be proportional to the consequences if any of these risks materialize.
Critical infrastructure
Critical infrastructure is associated with the systems that support a process or service. Typically,
any impact to those systems affects the capacity to provide services to all or most of the
users. Examples include telecommunications, electrical power grids, transportation systems,
and computer networks.
The loss of the IT systems that support the critical infrastructure likely has a ripple effect,
which causes secondary losses and increases the impact to the organization. The
identification of the systems that support the critical infrastructure is key during the Risk
Analysis process because of the effect of the related incidents.
Internal threats and threat agents
Security-related incidents on corporate IT systems can be caused by internal threats and threat agents that are found within the physical and logical boundaries of the organization or enterprise. These threats and threat agents are normally related to people.
An example of an internal threat agent is a person who uses their ability or influence to
access an IT system to carry out a malicious activity.
External threats include single points of failure that are outside the organization's boundaries, such as the power grid or the external internet connection.
These threats and threat agents likely affect any of the three main security components:
Confidentiality, Integrity, and Availability.
IT Service Management
The incorrect management or operation of IT systems likely affects the systems' availability, which introduces several risks to the business that, if materialized, can result in financial losses.
The organization must ensure that service delivery commitments are achieved to reduce the
risk of not meeting an SLA, which can lead to a penalty.
For example, an attacker can target a company with a distributed denial of service (DDoS) attack that is aimed at preventing the company from achieving an SLA by affecting the availability of a system.
A simplified management of the IT systems often improves the two pillars of IT Service
Management: Service Support (management of the Incident, Problem, and Change processes) and
Service Delivery (SLAs, Disaster Recovery, and Availability Management).
IT environment complexity
The complexity of the IT environment increases or decreases the risk to the IT systems and
their data. The IT environment reflects the infrastructure on which the IT systems are placed,
including systems, networks, the policies and procedures that are associated, and others.
For example, most IT environments must be aligned with multiple regulations across all of their systems, and the effort that is required to achieve the necessary level of compliance grows with the complexity of the environment.
IT vulnerabilities
In this section, we describe several important IT vulnerabilities.
Configuration
Misconfigured IT systems can introduce many vulnerabilities. Therefore, companies must assign the required resources to ensure the optimal configuration of their systems.
Additionally, the use of a system to standardize and simplify the configuration of IT systems
typically reduces the associated risks and, at the same time, improves the use of the resources that are needed to support it.
Flaws
A flaw in an IT system must be considered at least a high-probability risk that can be used by an attacker to damage the confidentiality, integrity, and availability of corporate data.
The biggest danger of those flaws is when the manufacturer is unaware of the flaw and the
attacker can use it freely until a patch or work-around is designed, developed, and applied.
Those vulnerabilities are known as zero-day vulnerabilities and are one of the biggest threats to IT systems.
After the patch is available, companies must ensure that it is deployed and applied as soon as
possible to reduce the risk of an attack. Therefore, organizations must have a reliable,
standard, and robust system to distribute and deploy those patches securely.
Exploits
An exploit is a known script or piece of code that takes advantage of a vulnerability to attack a set of IT systems.
Sometimes, exploits take advantage of some features or functions within a system to escalate
permissions, perform unauthorized actions, disrupt the functions of the system, and many
other attacks.
Next, we introduce the IBM Security Framework that was created to help organizations with
their security challenges.
The main goal of the IBM Security Framework is to create a bridge to address the
communication gap between the business and technical perspectives of security to enable
simplification of thought and process.
1.3 IBM Security Framework
Today, business initiatives often are guided by the principles of Governance, Risk, and
Compliance (GRC). However, these terms are broad and often have different meanings to
different stakeholders in an organization.
Each C-level executive (CxO) often attempts to mitigate risks for their division's
domain; therefore, they have different priorities and points of view when it comes to risk
management, including the following examples:
The Chief Risk Officer (CRO) looks at the organization’s overall risk profile and where the
organization is most vulnerable to an unexpected loss.
The Chief Financial Officer (CFO) must ensure that the necessary controls are in place to
have accurate financial statements.
The Chief Information Security Officer (CISO) ensures that the IT infrastructure and
systems support the overall business and the organization. The CISO must minimize the
risk of the IT environment and IT systems. Additionally, the CISO must assess and
communicate the effect of those risks to the overall organization from a GRC perspective.
IBM created the IBM Security Framework (see Figure 1-1 on page 8) to help ensure that
every IT security aspect can be properly addressed when a holistic approach is used with
business-driven security.
The capabilities that are described by the IBM Security Framework are based on Security
Intelligence and Threat Research capabilities.
Additionally, the solutions that are provided within the security domains and more layers can
be delivered through software, hardware (appliances), or as Managed Services or Cloud
offerings.
The IBM Security Framework is a layered model that is composed of the following domains:
Infrastructure
IT systems must be secured to be aligned with all required regulations to achieve the level
of compliance that is required by the organization. Therefore, a robust Security system is
required to support the organization’s IT systems to stay ahead of emerging threats that
can adversely affect system components, the people, and business processes that they
support.
Organizations are increasingly using virtualization technology to support their goals of
delivering services in less time, with greater agility, and at lower cost. However, those
environments must preemptively and proactively monitor the operation of the IT systems
infrastructure while looking for threats and vulnerabilities to avoid or reduce the probability
of security breaches. This domain covers securing the IT system infrastructure against all
emerging threats (see Figure 1-2).
Important: IBM PowerSC is a robust, integrated, and simplified standard solution that
supports organizations to increase their security and simplify compliance for the
organization.
Figure 1-2 Summary and other aspects to be addressed within the infrastructure domain
People
As shown in Figure 1-4, this domain covers aspects about how to ensure that the correct
people have access to the correct assets at the correct time, which is known as identity
management and access control.
Applications
One of the most effective ways to avoid a breach is by ensuring that the applications your
users are accessing are designed and implemented securely (see Figure 1-6 on page 13).
Therefore, organizations must proactively protect their business-critical applications from
external and internal threats throughout their entire lifecycle, from design to development,
test, and production.
Figure 1-6 Application and Process domain
Note: IBM provides the full breadth and depth of solutions and services that enable
organizations to take this business-driven, secure-by-design approach to security that
aligns with the IBM Security Framework.
Next, these domains and layers are broken down into greater detail to work toward a common
set of core security capabilities that are needed to help your organization securely achieve its
business goals. These core security capabilities are called the IBM Security Blueprint.
The IBM Security Blueprint was created after researching many client-related scenarios,
which focused on how to build IT solutions. The intention of the blueprint is to support and
assist in designing and deploying security solutions in your organization.
Building a specific solution requires a specific architecture, design, and implementation. The
IBM Security Blueprint can help you evaluate those aspects, but does not replace them.
Using the IBM Security Blueprint in this way can provide a solid approach to consider the
security capabilities in an architecture or solution.
IBM chose to use a high-level service-oriented perspective for the blueprint, which is based
on the IBM service-oriented architecture (SOA) approach.
Services use and refine other services (for example, policy and access control components
affect almost every other infrastructure component).
The left portion of Figure 1-8 represents the IBM Security Framework, which describes and
defines the security domains from a business perspective.
The middle portion in Figure 1-8 represents the IBM Security Blueprint, which describes the
IT security management and IT security infrastructure capabilities that are needed in an
organization.
The capabilities that are described in the IBM Security Blueprint are presented in product and
vendor-neutral terms.
The right portion of Figure 1-8 represents the solution architecture views, which describe
specific deployment guidance that is related to a specific IT environment and the current
maturity of the organization in the security domains. The solution architecture views also
provide details about specific products, solutions, and their interactions.
Figure 1-9 on page 16 shows the components and subcomponents of the IBM Security
Blueprint that must be examined for every solution in the Infrastructure security domain. In
addition to the Foundational Security Management Services, you can use the IBM Security
Blueprint to determine the Security Services and Infrastructure components by reviewing the
component catalogs for these Foundational Security Management Services. Each of these
components can then be assessed by determining whether each infrastructure component is
required to make a Foundational Security Management service functional so that it can
address the issues or provide a value that is associated with the particular business security
domain (in this case, Infrastructure).
For more information about the IBM Security Framework, IBM Security Blueprint, and all
related security components and domains (including infrastructure), see Using the IBM
Security Framework and IBM Security Blueprint to Realize Business-Driven Security,
SG24-8100.
The management of virtual machines in the enterprise domain is a complex task that likely requires different configurations and settings. Also, more configuration can be required on each machine to ensure alignment with a set of regulations. Therefore, the efficient gathering of audit reports from that plurality (and variety) of IT systems often becomes a complex task.
Enforcing the configuration for a particular virtual machine requires a flexible monitoring
mechanism that constantly evaluates the state of these settings. This mechanism must
identify current patches and security configuration levels, along with hardware configuration,
and compare them against defined policies. It can then produce a high-level picture of the
infrastructure through reports and graphics that identifies gaps that might exist in the security
and compliance of this IT infrastructure.
Compliance versus control: If you are audited (or if you audited someone else), you
probably know that a difference exists between being in compliance and being in control.
Consider the following points:
When you are in compliance, all of your systems and processes are operated and
delivered according to the security policies and standards (and you have evidence for
compliance).
When you are in control, you know what is in compliance and what is not, you know
why, and you create an action plan (and you have evidence for control).
Now, which is more important? Being in control is more important because you can be in
compliance by accident. Furthermore, if you are compliant but not in control, chances are
high that you cannot stay compliant for long.
If you are in control, you are compliant eventually, or at least you have it on record why you
are not compliant.
In addition, if you are not compliant and not in control, gaining control must be your primary
goal, which is why regulations increasingly shift from compliance to control objectives.
Addressing the security needs of virtual machines must be a holistic approach that starts with
gaining visibility into these virtual machines within the infrastructure. The saying that “you
cannot manage what you cannot see” can be translated in this context as “you cannot secure
and control what you cannot see”. To properly remediate vulnerabilities and enforce
configurations, you must first know which virtual machines are at risk and which regulations,
policies, and restrictions are applied on each of them.
Many failed audits result from poor security management of server vulnerabilities because of
configuration drift, or the inability to rapidly deploy (and confirm) the application of patches
and updates, security settings, and virtual machine misconfiguration.
A unified solution that incorporates the capability to detect and identify audit gaps at a specific
time can help organizations move to a unified management approach, which enhances
visibility, management, and control. Also, this process enables the link between the
establishment of security strategy and policy, execution of that strategy, real-time reporting,
and security and compliance reporting.
Virtual machine security and Compliance Management are vital to IT security management.
The ideal response involves a level of planning and assessment to identify risks across key
business areas, including people, processes, data, and technology throughout the entire
business. It is important to plan a holistic approach that can facilitate a business-driven
security blueprint and strategy that can act as an effective shield of defense for the entire
organization.
We think that organizations must build services that are secure by design, meaning that
security is intrinsic to their business processes, product development, and daily operations. It
means that security is factored in from the initial design and not added afterward. This methodology allows an organization to securely and safely adopt new forms of technology (innovation) that run on new virtual machines in cloud computing environments.
Then, we described how the IBM Security Framework and IBM Security Blueprint can help
avoid misalignment of priorities between IT and the business strategy. These tools aim to
ensure that every necessary IT security domain is properly addressed when you take a
holistic approach to business-driven security.
Finally, per the scope of this book, we highlighted the importance of security and Compliance
Management in virtualized environments.
The IBM PowerSC GUI provides a centralized management console for visualization of
endpoints and their status; applying, undoing, or checking compliance levels; grouping
systems for the application of compliance level actions; and viewing and customizing
compliance configuration profiles. The IBM PowerSC GUI also provides extensive profile
editing and reporting capabilities.
The GUI also includes File Integrity Monitoring (FIM). FIM includes Real Time
Compliance (RTC) and Trusted Execution (TE) for AIX and Audit for Linux endpoints. The IBM
PowerSC GUI can be used to configure RTC, TE, Auditd, and view real-time events. For more
information, see Chapter 4, “Real-Time File Integrity Monitoring” on page 111.
The IBM PowerSC GUI consists of the following main parts, as shown in Figure 2-1:
UI Server: AIX or Linux LPAR for the IBM PowerSC GUI server
UI Endpoint Agent: AIX or Linux endpoints that are managed by the IBM PowerSC GUI server
Browser: User interaction
IBM PowerSC GUI communication adheres to the following security standards:
The HTTPS communication between the IBM PowerSC server and the IBM PowerSC GUI agents on each of the AIX or Linux endpoints is bidirectional and uses industry-standard technology (such as SSL certificates), and other application-specific technology.
All communication between the IBM PowerSC GUI agents and the IBM PowerSC GUI
server is encrypted by using protocols and cipher suites that are consistent with the
security requirements of the protected systems.
IBM PowerSC GUI uses TLS 1.2 protocol level to interact with all the IBM PowerSC GUI
agents and with all the IBM PowerSC GUI users.
The IBM PowerSC GUI server access supports LDAP or local accounts and allows
management of access and endpoint-control authority by using AIX and Linux group
membership.
After the first contact is established between the IBM PowerSC GUI server and the IBM
PowerSC GUI agents, a one-time agent-server security handshake is performed.
2.2.1 AIX
In our scenario, we used an AIX 7.2 LPAR and the IBM PowerSC Standard Edition ISO file to
perform the installation.
Note: The installation media can be downloaded from this web page.
3. After mounting the ISO file, go to the installp/ppc directory, as shown in Figure 2-3.
4. You can use the command line or SMIT to install the IBM PowerSC filesets. In our example, we used SMIT for the installation. You can run the smitty installp fast path to install the filesets,
as shown in Figure 2-4. For the IBM PowerSC GUI Server, we need the following filesets:
powerscStd.license
powerscStd.uiserver
Figure 2-5 shows the input pane for the location of the installation filesets.
Figure 2-6 shows the IBM PowerSC GUI server installation filesets.
Figure 2-8 shows the installation completion message pane without errors.
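If you prefer the command line over SMIT, an installp invocation similar to the following sketch can be used (we assume that the ISO file is mounted at /software; adjust the path for your environment):
# cd /software/installp/ppc
# installp -acgXY -d . powerscStd.license powerscStd.uiserver
In this sketch, the -Y flag accepts the license agreement and -g automatically installs any prerequisite filesets.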
5. After completing the installation through SMIT, go to the command line and run the
lslpp -L command to verify the installation:
# lslpp -L powerscStd.uiServer.rte
Fileset Level State Type Description (Uninstaller)
----------------------------------------------------------------------------
powerscStd.uiServer.rte 1.2.0.0 C F PowerSC User Interface Server
The installation adds an entry in the /etc/inittab file to start the IBM PowerSC GUI server at system startup. Use the lsitab command to verify the entry:
# lsitab pscuiserver
pscuiserver:2:wait:/usr/bin/startsrc -s pscuiserver > /dev/console 2>&1
IBM PowerSC GUI Server installation automatically installs the pscuiserver service. This
installation can be verified by running the lssrc command:
# lssrc -s pscuiserver
Subsystem Group PID Status
pscuiserver 9503060 active
The IBM PowerSC GUI server listens on TCP port 443 for all communication from the
PowerSC GUI agent, or from any web browser. The IBM PowerSC GUI agent that is running
on each endpoint listens on TCP port 11125 for all communication from the IBM PowerSC
GUI server.
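As a quick check that these ports are in use, a netstat filter similar to this sketch can be run on the server (port 443) and on an endpoint (port 11125):
# netstat -an | grep LISTEN | egrep '443|11125'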
Note: We are not showing how to install Red Hat Enterprise Linux on IBM Power System in
this section. This process can be done by using the manuals that are available at this web
page.
You can find the RHEL version by viewing the /etc/redhat-release file as shown in the
following example:
#cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
By using a procedure similar to the one for AIX, create a directory that is named /software, transfer the ISO file to the server, and mount it to have access to the rpm files, as shown in Figure 2-9.
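A minimal sketch of those preparation steps follows (the ISO file name, the source location, and the /mnt mount point are hypothetical; substitute your own values):
# mkdir /software
# scp user@fileserver:/path/to/powerscStd_media.iso /software/
# mount -o loop /software/powerscStd_media.iso /mnt
# ls /mnt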
After mounting the ISO file, find a directory that is named rhel, which includes three scripts that you can use for RHEL in later steps, as shown in Figure 2-10.
Figure 2-10 Scripts under the rhel directory to be used for the GUI installation
To install the IBM PowerSC GUI server, run the script as shown in Figure 2-11.
We encountered a Java virtual machine (JVM) related error. To correct this error, you must
check whether JAVA was installed, and that the PATH variable is updated with the path where
JAVA is installed, as shown in Figure 2-12. (A few other prerequisites might exist for
installation.)
Figure 2-12 Checking JAVA is installed and the variable PATH is valid
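A sketch of that check follows (the Java installation path is an assumption; use the directory where your Java package was installed):
# java -version
# export PATH=$PATH:/usr/lib/jvm/jre/bin
# echo $PATH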
Note: It is recommended that your media repository is up to date to avoid any problem with
rpm dependencies.
After resolving any dependencies, run the script again to install the GUI, as shown in Figure 2-13.
Accept the license agreement. After a few minutes, you see output that is similar to the output
that is shown in Figure 2-13 on page 28. You can run the systemctl command to verify that
the powersc-uiserver service is running, as shown in Figure 2-14.
Figure 2-14 Check the status of the IBM PowerSC GUI server
Run the netstat command to verify that the ports are opened, as shown in Figure 2-15.
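For reference, those two checks can look similar to this sketch (on RHEL, netstat is provided by the net-tools package):
# systemctl status powersc-uiserver
# netstat -tlnp | egrep '443|11125'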
Note: This section does not show you how to install SLES. This process can be done by
using your procedures or following the instructions that are described in the manual that is
available at this web page.
After mounting the ISO file, you find a directory that is named sles, which includes three
scripts that you can use for installation, as shown in Figure 2-16.
To install the IBM PowerSC GUI server, run the script as shown in Figure 2-18.
To correct this error, you must check that JAVA was installed, and that the PATH variable is
updated with the path where JAVA is installed, as shown in Figure 2-20 on page 31. (A few
prerequisites might exist for the installation.)
Figure 2-20 Checking that JAVA was installed and its path
Note: It is recommended that you have your media repository up to date to avoid any
problem with rpm dependencies.
After resolving any dependencies, run the script again to install the GUI, as shown in Figure 2-21.
Accept the license agreement. After a few minutes, you see output that is similar to the
output that is shown in Figure 2-21.
Run the netstat command to verify that the ports are opened, as shown in Figure 2-23.
More security control is provided by using UNIX groups. Any LDAP users or local users who are defined by the AIX or Linux operating system must be members of the security group to manage endpoints in the PowerSC GUI. The administrator must set or change group membership by using the pscuiserverctl command.
After you are logged in, you might still be restricted to view-only mode. You can use the user
authority function to perform actions against endpoints that are controlled by UNIX group
membership. To perform any actions, you must be a member of a UNIX group that has
permission to manage the endpoint.
By default, any user who is a member of the security group can manage every endpoint that
is visible in the IBM PowerSC GUI. The IBM PowerSC administrator can restrict user access
to the individual endpoint level by using the pscuiserverctl command with the setgroup
parameter. For example, pscuiserverctl setgroup <group name> <comma- or
space-separated list of host names>.
Administrative access: This access is required to perform administrative functions by using
the IBM PowerSC GUI. You can assign administrative access to a group or multiple groups
by setting the administratorGroupList by using the pscuiserverctl command. For more
information, see 2.3.3, “PowerSC GUI login” on page 37.
Note: To get administrative access, you must assign the group to be part of
logonGroupList and administratorGroupList so that the group members can log in and
perform administrative tasks.
Following recommended security practices, we created separate users and groups in our scenario to demonstrate how to configure GUI access for login-only and for administrative purposes.
Normally, you can run the pscuiserverctl command as a root user to set the access control.
Create a user and group and set the password. We demonstrate how we use it for AIX and
Linux systems in 2.3.2, “Manage users and groups” on page 34.
Run the following script to specify the AIX groups in which a user must be a member to run
commands on specific endpoints. You must provide fully qualified host names of the
endpoints. The groups that you specify are written to the
/etc/security/powersc/uiServer/groups.txt file:
pscuiserverctl setGroups <group name> <comma separated list of host names>
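For example, to allow members of the pscadm group to manage two endpoints, a call similar to the following sketch can be used (the host names are hypothetical):
# pscuiserverctl setGroups pscadm aixlpar1.example.com,aixlpar2.example.com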
By default, any user who is a member of the security group can manage every endpoint that
is visible in the IBM PowerSC GUI.
In the IBM PowerSC GUI, you must pay attention to and consider the following relationship
between endpoints and groups:
One UNIX group can be associated with many endpoints
One endpoint can be associated with many UNIX groups
When a user is logged in to the IBM PowerSC GUI, group associations are used to determine
whether a user is allowed to run commands to specific endpoints, or whether the user is
allowed only to view endpoint status. Consider the following points:
If the user must run commands against a specific endpoint by using the IBM PowerSC
GUI, the user must be associated with one of the groups that is associated with the
endpoint.
The group membership for users that are logging in to the IBM PowerSC GUI is compared
with the set of groups that are associated with each endpoint. If the user’s group
membership matches groups that are associated with each endpoint, the user can run
commands, such as Apply profiles, Undo, and Check against that endpoint.
If the user’s group membership does not match any groups that are associated with each
endpoint, the user can view only the status for that endpoint.
AIX
On AIX, you can use the mkgroup command to create a group and the mkuser command to
create a user. As shown in Figure 2-24, we create a group psclog and add a user psclogin to
this group. Then, we run the pscuiserverctl command to set the logonGroupList to the
psclog group. This process allows all users that are part of the psclog group (in our case, the
psclogin user) to log in to the IBM PowerSC GUI server.
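The commands behind that figure are similar to the following sketch (the passwd command prompts for the new password interactively):
# mkgroup psclog
# mkuser pgrp=psclog psclogin
# passwd psclogin
# pscuiserverctl set logonGroupList psclog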
Next, we create a group pscadm and add user pscadmin to this group, as shown in Figure 2-25.
Figure 2-25 Creating a group and adding the user to the group
Then, we run the pscuiserverctl command to set the logonGroupList and the
administratorGroupList to include the pscadm group. This process allows all users that are
part of the pscadm group (in our case, the pscadmin user) to log in to the IBM PowerSC GUI
server.
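A sketch of those commands follows (we assume that a comma-separated list is accepted when more than one group is specified):
# mkgroup pscadm
# mkuser pgrp=pscadm pscadmin
# passwd pscadmin
# pscuiserverctl set logonGroupList psclog,pscadm
# pscuiserverctl set administratorGroupList pscadm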
Linux
On Linux, you can use the groupadd command to create a group, the useradd command to
create a user, and the usermod command to modify the user’s group.
As shown in Example 2-2, we create a group psclog and add a user psclogin to this group.
Then, we run the pscuiserverctl command to set the logonGroupList to the psclog group.
This process allows all users that are part of the psclog group (in our case, the psclogin
user) to log in to the IBM PowerSC GUI server.
Example 2-2 Creating a group and adding the user to the group
Linux:
[root@p52n76 ~]# useradd pscadmin
[root@p52n76 ~]#
[root@p52n76 ~]# passwd pscadmin
Changing password for user pscadmin.
New password:
BAD PASSWORD: The password is shorter than 7 characters
Retype new password:
passwd: all authentication tokens updated successfully.
[root@p52n76 ~]#
[root@p52n76 ~]#
[root@p52n76 ~]# groupadd pscadm
[root@p52n76 ~]#
[root@p52n76 ~]# usermod -G pscadm pscadmin
[root@p52n76 ~]#
[root@p52n76 ~]#
[root@p52n76 ~]# groups pscadmin
pscadmin : pscadmin pscadm
[root@p52n76 ~]#
[root@p52n76 ~]#
[root@p52n76 ~]# pscuiserverctl set administratorGroupList pscadmin
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
administratorGroupList=pscadmin
[root@p52n76 ~]# pscuiserverctl set administratorGroupList
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
pscadmin
[root@p52n76 ~]# pscuiserverctl set administratorGroupList pscadm
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
administratorGroupList=pscadm
[root@p52n76 ~]#
(…)
By default, IBM PowerSC GUI uses port 443 for SSL-based communication, which can be
modified by using the pscuiserverctl command. If you use any other port, you must specify
the port number in the URL:
https://<hostname of PowerSC GUI server>:<port>
The IBM PowerSC GUI login page opens, as shown in Figure 2-26 on page 38. We log in by
using the psclogin user account.
Because the psclogin user account does not have administrative access, you see many of
the GUI options grayed out or unavailable, as shown in Figure 2-27.
Next, log in by using the pscadmin account, as shown in Figure 2-28.
The pscadmin account features all administrative access rights, as shown in Figure 2-29.
Next, we describe how you can install and configure UIAgent on the endpoints so that IBM
PowerSC GUI server can manage them.
The IBM PowerSC GUI agent filesets are provided with the IBM PowerSC Standard Edition. They are not part of the base AIX operating system.
3. Install UIAgent and license filesets by using smitty, as shown in Figure 2-31:
smitty installp
The installation process automatically attempts to start the UIAgent process and creates an
entry in /etc/inittab file:
# lsitab pscuiagent
pscuiagent:2:wait:/usr/bin/startsrc -s pscuiagent > /dev/console 2>&1
If the IBM PowerSC GUI server is not reachable or the server keystore is not present on the
endpoint, the UIAgent start process fails and automatically stops.
For more information about generating the keystore, see 2.5.1, “Generate keystore” on
page 43.
The following prerequisites must be met before IBM PowerSC GUI agent is installed:
Install the IBM PowerSC pscxpert before installing the GUI agent. For more information
about installing pscxpert, see step 3 next.
The UIAgent requires the redhat-lsb-core package, which can be installed as shown in the sketch after this list.
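A minimal sketch for installing that prerequisite from a configured yum repository:
# yum install -y redhat-lsb-core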
2.5 Endpoint administration
This section describes endpoint administration management.
First, the truststore security certificate must be distributed to the endpoints. You or the system
administrators must deploy the truststore security certificate on all endpoints.
Normally, a truststore file is created during the installation of the UI Server and must be copied to all endpoints that run the UI agent. The name of the truststore file is
endpointTruststore.jks. The file is placed in the /etc/security/powersc/uiServer/
directory on the UI Server.
The following steps show how you must place the endpointTruststore.jks file on each
endpoint for the IBM PowerSC GUI agent on that endpoint to make contact with the IBM
PowerSC GUI server, and to start the process that results in the creation of the keystore on
the endpoint.
It is possible to distribute the truststore file by using one of the following methods:
Manually copy the endpointTruststore.jks file to each endpoint.
If IBM PowerVC (or another virtualization manager) is used in your environment, the
endpointTruststore.jks file can be put onto the IBM PowerVC image. When the IBM
PowerVC image is deployed to an endpoint, the IBM PowerSC GUI agent and the
truststore file are included. For more information, see 2.5.3, “IBM PowerVC integration” on
page 48.
Complete the following steps by using the IBM PowerSC GUI server:
1. Go to the /etc/security/powersc/uiServer directory, as shown in Figure 2-34.
2. Copy the endpointTruststore file by using the scp command to the /etc/security/powersc/uiAgent directory on the endpoint agent LPAR. Then, start the agent at the endpoint (a command sketch is provided after these steps). The first time an endpoint starts running, the IBM PowerSC GUI agent uses the truststore file to determine where the IBM PowerSC GUI server is running. Then, the IBM PowerSC GUI agent sends a message to the IBM PowerSC GUI server with a request to join the list of available monitored endpoints.
3. After starting the agent, check the log file, as shown in Figure 2-37.
4. Run the pscuiserverctl setgroup command to add the UNIX groups that can manage
the endpoints, as shown in Figure 2-38.
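A sketch of step 2 follows (the endpoint host name is hypothetical; on AIX, the agent is controlled by the System Resource Controller):
# scp /etc/security/powersc/uiServer/endpointTruststore.jks aixlpar1:/etc/security/powersc/uiAgent/
# ssh aixlpar1 startsrc -s pscuiagent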
It is important to know that after you start the agent, a keystore request is sent to the GUI Server. The GUI Server administrator must generate and confirm the keystore for the endpoint. Complete the following steps:
1. Go to IBM PowerSC GUI server, click the Languages and Settings icon in the menu bar
of the main page.
2. At the Configuration tab, click Manage the Endpoints, check the Keystore Requests, check the connection of the endpoint, and delete any requests that are not valid.
It is possible to verify the communication between discovered endpoints and the IBM
PowerSC GUI server (see Figure 2-39).
Important: For each endpoint, you must verify that a keystore request is valid. If it is valid,
you can generate a keystore for the endpoint.
Confirm that the GUI is used and check that “yes” is displayed in the keystore generated field,
as shown in Figure 2-42.
Figure 2-43 PowerSC shows the key expiration timestamp dates for the endpoints
Go to the Languages and Settings icon in the menu bar of the main page. Then, click the
Endpoint Admin tab.
You can check that the date that the security certificate expires is displayed in the Key Expiration Timestamp column, as shown in Figure 2-44. After the expiration date passes, IBM PowerSC automatically generates a request for a new security certificate.
In this view, you also can confirm and verify whether the generated keystore request is valid.
If it is not valid, you can generate a keystore for the endpoint.
Note: Before the expiration date is reached, you can preemptively delete the current
certificate for the endpoint and restart the IBM PowerSC GUI agent on that endpoint.
By deleting the certificate for the endpoint, when IBM PowerSC GUI agent is restarted, it
appears that the endpoint is new, and as such, IBM PowerSC automatically generates a
request for a new security certificate.
By using the Endpoint Administrator Keystore Requests page, you can verify that the
generated keystore request is valid. If it is valid, you can generate a keystore for the
endpoint.
For more information about each option, see the Administering endpoint and server
communication page of IBM Knowledge Center.
The following keystores are required and are created by one or more of the shell scripts that
are run during the installation process or by the IBM PowerSC administrator:
endpointKeystore.jks
endpointTruststore.jks
serverKeystore.jks
For more information about the shell scripts that are provided by PowerSC to generate the
certificates, see IBM Knowledge Center.
You also can purchase your own certificate and use the
import_well_known_certificate_uiServer.sh script. Run this script only if you are providing your
own well-known certificate.
If you have a certificate .pem file from a well-known certificate authority, you can run this script
to create the endpoint truststore, import that certificate, and create the GUI server truststore
and the GUI server keystore.
To integrate IBM PowerVC, you must complete some steps on the IBM PowerVC and on the
IBM PowerSC. Therefore, you might ask for help from the IBM PowerVC administrator.
First, you must copy the truststore file that is created and must be used by all endpoints. The
name of the file is endpointTruststore.jks and it is in the
/etc/security/powersc/uiServer/ directory.
The endpointTruststore.jks file must be placed on each endpoint so that the IBM PowerSC
GUI agent on that endpoint can contact the IBM PowerSC GUI server and start the process
that results in the creation of the keystore on the endpoint.
As an example for our scenario, we copied the endpoint truststore file
/etc/security/powersc/uiServer/endpointTruststore.jks to the IBM PowerVC image.
Then, we deployed the IBM PowerVC image as a new endpoint.
For more information about the OS_AUTH_URL variable in the /opt/ibm/powervc/powervcrc
file, contact your IBM PowerVC administrator. In our environment, the value was similar to
the following example:
https://powervc2.pbm.ihost.com:5000/v3/auth
Then, go to the IBM PowerSC server and run the following command:
p52n75:/opt/powersc/uiServer/bin # ./pscuiserverctl set powervcKeystoneUrl https://powervc2.pbm.ihost.com:5000/v3/auth/
powervcKeystoneUrl=https://powervc2.pbm.ihost.com:5000/v3/auth/
p52n75:/opt/powersc/uiServer/bin #
IBM PowerVC GUI
Using the IBM PowerVC GUI with administrator or deployment permissions, as shown in
Figure 2-45, we created a gold image that contains the IBM PowerSC agent files and the
endpointTruststore.jks file. (This process is not described in this book.)
In our scenario, we created an AIX 7.2 image called redbook-psc-client. After selecting this
image, click the Deploy option to start the process through the IBM PowerVC GUI.
After starting the process, you can see in the Virtual Machines tab that the new machines
appear in the “Building” state, as shown in Figure 2-47.
This process normally takes 8 - 10 minutes to complete. You can check that the new machine
now shows as Active, as shown in Figure 2-48.
By selecting the machine, the Verify, Generate Keystore, Delete, and Refresh Table options
become available. Click Verify to confirm that these machines were created by IBM
PowerVC, as shown in Figure 2-50.
When you use this option, the IBM PowerVC Administrator credentials must be entered, as
shown in Figure 2-51.
Figure 2-51 IBM PowerSC GUI entering the IBM PowerVC credentials
After confirming that the machines were created by IBM PowerVC, the keystore to send to
the GUI server must be generated, as shown in Figure 2-52.
Figure 2-52 IBM PowerSC GUI generating the Keystore to send to UI server
After you confirm that the machine was created by IBM PowerVC and the keystore is
generated, the Keystore Generated field changes to Yes, as shown in
Figure 2-54.
Figure 2-54 IBM PowerSC pane view showing the Keystore generated
Also, the status of the new endpoint features the updated timestamp, as shown in
Figure 2-55.
This section describes how to create and manage groups by using IBM PowerSC GUI.
To create a custom group, click the Plus sign and enter the name of the Group. In our
example, we created two groups (AIX and Linux) to manage the AIX and Linux endpoints.
Note: The name of the group must be unique and can be up to 128 characters long.
After it is saved, you see that the group was created, as shown in Figure 2-58.
Also, you can create as many groups as needed to fulfill your requirements. To demonstrate
our scenario, we created a few more groups: Development, Production, PCIv3, and GDPR.
To remove an endpoint from the group, select the system from the GroupName list and
click the left arrow (see Figure 2-62).
After making the changes, click the Save group changes option to save your changes, as
shown in Figure 2-63.
Go to the Group Editor from the Security or Compliance tabs. Select the group that you want
to delete. Click Delete Group. The group is deleted and removed from the list of groups in
the Groups tab.
To perform a task, select the endpoints and click Apply, Simulate, Undo, or Check,
as shown in Figure 2-65.
For more information about the Compliance tab, see Chapter 3, “Compliance automation” on
page 75.
The fourth column shows the operating system name. For AIX endpoints, the FIM events are
shown under the RTC and the TE columns. For Linux endpoints, the FIM events are shown
under the Auditd column.
You can use this page to view FIM alerts or configure RTC, Auditd, and TE for the endpoints.
For more information about configuring FIM, see Chapter 4, “Real-Time File Integrity
Monitoring” on page 111.
Next, we describe the options that are available in the Reports tab.
Compliance Overview
The Compliance Overview section can be configured to receive reports that are related to
high-level overview compliance, as shown in Figure 2-67.
To configure this report, click Compliance Overview and select the group or endpoints for
which you want to configure the report, as shown in Figure 2-68.
Figure 2-69 shows the report for the Production group.
This tab shows the email configuration pane. The following options are available:
Send me e-mails: Select this option to receive the reports by email.
Addresses [comma separated]: Specify the email addresses where you want to receive the
reports. Separate multiple addresses with commas.
Subject: Specify the subject for the email.
Send every day at: Specify the hours and minutes when you want to receive the email.
After completing this information, click Save. Daily emails are then configured, as shown in
Figure 2-71 on page 64.
Compliance detail
You can configure the Compliance Detail reporting to get low-level details about compliance
failures. As with the Compliance Overview, you can specify the group and configure
automated email options for this group, as shown in Figure 2-73.
Figure 2-74 shows the compliance report for the GDPR group.
Combined Compliance and FIM
You can combine compliance and FIM reporting in one location by using this section. As with
the Compliance Overview, you can specify the group and configure automated email options
for this group, as shown in Figure 2-77.
Timeline Report
The Timeline Report section provides an excellent way to report compliance and FIM events
in a monthly, daily, or hourly view. Click Timeline Report and select the endpoint for which
you want to see the report. The total number of events is shown at the top of Figure 2-78.
Click Month to view the timeline report by month, as shown in Figure 2-80.
Click Day to view the timeline report by day, as shown in Figure 2-81.
Click Hour to view the timeline report by hour, as shown in Figure 2-82.
You also can select the Change endpoint option or Configure and email immediately
option, as shown in Figure 2-84.
Figure 2-85 shows how to change the endpoint to generate a report.
Figure 2-87 Select one of the events in the list to report more details
You can unhide an event if it was previously hidden from the report, as shown in Figure 2-88.
In the File Integrity Detail menu option, selecting an event shows more information about the
specific event, as shown in Figure 2-89 on page 73.
Figure 2-89 Timeline report selecting an event
By using the drop-down menu, you can select any of the built-in profiles or custom profiles.
You can view the security rules along with the description. The same page provides an option
to create a profile by using the Create New Profile option.
IBM PowerSC provides a way to automate the compliance process by providing built-in
security profiles that are based on various security standards, such as PCI-DSS, HIPAA,
GDPR, and DOD.
This chapter describes IBM PowerSC compliance features and contains the following
sections:
3.1, “IBM PowerSC compliance automation overview” on page 76
3.2, “Installation” on page 77
3.3, “Profiles” on page 79
3.4, “Applying a profile” on page 80
3.5, “Checking compliance” on page 85
3.6, “UNDO” on page 91
3.7, “Custom profile” on page 92
3.8, “Importing custom profiles not created with IBM PowerSC” on page 103
3.9, “Applying the PCIv3 profile to an AIX LPAR” on page 106
Solution
Security compliance automation provides pre-built profiles to support industry standards,
including the following examples:
Payment Card Industry Data Security Standard (PCI) v3
Health Insurance Portability and Accountability Act Privacy and Security Rules (HIPAA)
North American Electric Reliability Corporation compliance (NERC)
Department of Defense Security Technical Implementation Guide for UNIX (DOD STIG)
Control Objectives for Information and related Technology (COBIT)
General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679)
PSCxpert (enhanced version of AIXpert) is the underlying mechanism to apply policy settings
and check for compliance.
For more information about IBM PowerSC, see IBM Knowledge Center.
3.2 Installation
To use the security and compliance automation feature, you must install the
powerscStd.ice fileset.
This fileset must be installed on AIX and Linux systems that require the security and
compliance automation feature.
Figure 3-1 Checking the installation media for PowerSC Standard Edition
The installation can be done by using the command line, but this example uses SMIT, as
shown in Figure 3-2.
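If you prefer the command line instead of SMIT, an installp command of the following form
can be used; the installation media path /dev/cd0 is illustrative:
installp -aXY -d /dev/cd0 powerscStd.ice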
After completing the installation, confirm it by using the lslpp command, as shown in
Figure 3-5.
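Figure 3-5 is not reproduced here; a command similar to the following confirms that the
fileset is installed and shows its level:
lslpp -l powerscStd.ice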
3.3 Profiles
After installing the filesets for security and compliance automation, you can find the security
profiles in the /etc/security/aixpert/custom directory on AIX systems and in the
/etc/security/pscxpert/custom directory on Linux systems.
For example, Figure 3-6 shows an AIX system being checked for the installed filesets.
GDPR requires several levels of compliance. The levels of compliance that can be interpreted
as requirements for server and network configuration are addressed with the IBM PowerSC
product. For more information, see IBM Knowledge Center.
For this IBM Redbooks publication, we tested two scenarios by using the GDPR and PCIv3
and applying them to AIX and Linux systems.
Remember that applying security profiles can change the default settings of your operating
system. Ensure that the rules do not prevent you from accessing your system after
implementation; even the root user can be locked out when the settings are applied.
Each of the IBM PowerSC built-in profiles includes rules that must be applied to an endpoint
to meet security requirements. You can create a custom profile when you need to apply only a
subset or a different combination of these rules or customize compliance levels.
Figure 3-7 on page 81 shows the steps for applying the profile for PCIv3, which can be found
in the directory /etc/security/aixpert/custom in the AIX environment.
Figure 3-7 Applying a profile using the CLI
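Figure 3-7 is not reproduced here. Applying a built-in profile from the CLI typically takes the
following form (this sketch assumes that pscxpert accepts a profile file with the -f flag, as
aixpert does, and the profile file name PCIv3.xml is illustrative):
pscxpert -f /etc/security/aixpert/custom/PCIv3.xml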
Figure 3-8 shows the final execution results after the profile is applied.
Applying a profile can take a few minutes to complete, and you can watch all of the rules that
are being processed. At the end, you can see how many rules were processed (in our case,
105), how many rules passed, how many rules failed, and how many rules include
prerequisite failures. These results also can be checked in the
/etc/security/aixpert/check_report.txt file.
If you have prerequisite failures or failed rules, we recommend reviewing them and checking
whether each prerequisite applies to your environment. The system might have missing
installation prerequisites or other issues that require attention from the administrator.
By using the command pscxpert -t, you can confirm that the latest profile is applied, as
shown in Figure 3-9.
The levels and profiles that can be applied to all of the endpoints are listed. Profiles that
cannot be applied to all of the selected endpoints are displayed in gray. A profile that is
shown in gray is not copied to the endpoint. If required, you can copy the profile to the
endpoint from the Profile Editor tab.
For our example, we selected the GDPR group and the Linux_GDPRv1 profile, as shown in
Figure 3-10.
Also, select the endpoints with Red Hat or SUSE and click OK, as shown in Figure 3-11.
You can check that the apply process has started; after a few minutes, you can see the
results, as shown in Figure 3-12.
In this scenario, if one or more rules cannot be applied after applying the profile, the rules are
considered failed. More information is available about how many rules were checked for both
servers, and the specific rules that failed.
Failed rules can have many causes; for example, prerequisite failures or rules that do not
apply to your environment. Therefore, we recommend running a simulate process and
verifying and analyzing the results before applying any profile. This process is described in
more detail in the Simulate option section.
Also, you can adjust the rules that are applied by creating a custom profile or by editing a
custom profile (see Figure 3-14).
It is important to understand what each rule does and what changes it makes to the
environment. It is possible that some rules cannot be applied because of the characteristics
of the server.
In most environments and situations, it is recommended that the administrators review and
edit the compliance files to remove problem rules. After compatibility checks are completed,
the compliance rule files can be considered stable and deployed onto production servers.
3.5 Checking compliance
In this section, we describe how to set up and check for compliance.
The checking process can be run to confirm that the last applied compliance level or profile
is still in effect.
Figure 3-18 shows the PowerSC compliance for GDPR after the profiles are applied.
During the check process, the endpoint is checked to see whether the rules that are in the
compliance level or profile can be applied. The endpoints are not updated. If any rules cannot
be applied, it is considered that they fail when they are applied. If one or more rules fail, the
endpoint is shown with a red bar and the text “Failed” is displayed in the #Failed Rules
column. In our example, we can see that some failed rules exist, which must be analyzed.
From the #Failed Rules list for each endpoint that is marked with red, you can view the
message that indicates why the rule failed. You can adjust the rules that are applied by
creating a custom profile.
If you want to check a compliance level or profile that was not applied to one or more
endpoints in a selected group, you can repeat the steps. When you open the Last Checked
Type drop list, select one of the following options:
All available levels: Displays a list of all the available levels that you can check against an
endpoint.
All available profiles: Displays a list of all the available profiles that you can check against
an endpoint.
Then, select the level or profile that you want to check against an endpoint.
You can perform the simulate process by using the command line as well. At the prompt of
your endpoint, run the following command:
#pscxpert -c -P /etc/security/aixpert/custom/GDPRv1.xml -p
In our example, we use the GDPRv1.xml to simulate against the endpoint, as shown in
Figure 3-19 and Figure 3-20 on page 88.
In our example, if this profile is applied to this endpoint, you can see that only 43 rules passed
out of 107, as shown in Figure 3-21.
In the same directory, you can review the check_report.txt file and see which rules failed
and the reason for each failure, as shown in Figure 3-22.
Also, you can use the following command to generate a report in CSV format:
# pscxpert -c -r -P /etc/security/aixpert/custom/GDPRv1.xml -p
or
# pscxpert -c -R -P /etc/security/aixpert/custom/GDPRv1.xml -p
By using the command line process, you can review the results to analyze which rules failed
and if any adjustment is necessary.
Note: Always test it first. It is highly advised to implement the component in a test
environment. Changes that are made with the Security and Compliance Automation
component affect the systems, and applications might not work as designed.
3.6 UNDO
In some cases, it is necessary to undo a compliance level or profile that was applied. This
process is done by selecting one or more endpoints through the GUI or the command line.
When the command line is used (see Figure 3-27), you run the pscxpert -u -p command.
The UNDO process works recursively, and the UNDO rules are built dynamically.
Tip: The pscxpert UNDO command must run twice to return AIX to the default settings if
you are using more than one profile.
At the Compliance tab, click Groups and select the endpoint that you want to undo. Click
UNDO, as shown in Figure 3-28.
After the process finishes, you see that no information about the compliance level is shown on
the endpoint.
Go to the Profile Editor and select the PCIv3 profile, as shown in Figure 3-30.
Click Create a New Profile, enter the new profile name (in our example,
PCIv3_Custom_RedBook) and the type (PCIv3_Custom), as shown in Figure 3-31. Then, click
Confirm to continue.
You can then select the rules individually and use the arrows to move them into your custom
profile, or select all of them. Then, you can edit your policies and save them, as shown in
Figure 3-31 and Figure 3-32.
Warning: You should change rule arguments only if you are familiar with the script.
Specifying incorrect values prevents the rule from working properly. Carefully examine the
script before making any changes.
If a rule is not applicable to the specific system environment, most compliance organizations
permit documented exceptions. Also, you can change the content value for specific rules; in
this scenario, we change the value for PCIv3_maxage.
3. Select the rule that you want to change, as shown in Figure 3-34 and Figure 3-35.
After completing all of the changes in the custom profile, the profile must be copied to all of
the existing groups and machines to which you need to apply this profile. In this case, the AIX
machine belongs to more than one group.
6. Select the groups to copy the profile to and click OK, as shown in Figure 3-37.
7. In the Compliance tab, select the profile and click OK, as shown in Figure 3-38.
After reviewing the process, you can see that all rules passed, as shown in Figure 3-41.
Also, a custom profile can be created to use rules from more than one profile. In this example,
another custom profile is created that uses the GDPR and SOX-COBIT profiles.
Complete the following steps:
1. In the Profile Editor, select the GDPR profile, as shown in Figure 3-42.
2. Click Create New Profile and add the name and type. A new profile is created that is
named GDPRv1_SOX-COBIT_Custom, as shown in Figure 3-43.
4. You can now add rules from the SOX-COBIT profile. When you are finished, click Save to
save the profile, as shown in Figure 3-45.
5. Copy the new profile to the groups and machines, as shown in Figure 3-46 on page 101.
6. In the Compliance page, select the group (our example uses the production group). Select
the AIX machine and the custom profile GDPRv1_SOX-COBIT_Custom, and apply it, as shown
in Figure 3-47.
After the profile is applied, you can see that some rules failed for this particular profile, as
shown in Figure 3-49. You can use the results of the apply process to review the
prerequisites and refine the custom profile before applying it again. You also can run the
Simulate process against the machine to retest the profile until it completes successfully.
Note: Use a test or sandbox machine to test the profiles before going into production.
7. Run the process again and check the report pane. You see the process completed
successfully and all rules passed, as shown in Figure 3-50.
Normally, all profiles are stored in the /etc/security/aixpert/custom directory. If you have a
profile or want to create a profile by using the command line, complete the following steps to
import it into the IBM PowerSC GUI:
1. In the /etc/security/aixpert/custom directory, select the profile that you want to use and
copy it to a new profile. In our example, we used PCIv3_Custom_Redbook.xml, as shown in
Example 3-1.
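Example 3-1 is not reproduced here; the copy in step 1 is essentially a cp command of the
following form (the target name PCIv3_Custom_Production.xml is taken from Example 3-2):
cd /etc/security/aixpert/custom
cp PCIv3_Custom_Redbook.xml PCIv3_Custom_Production.xml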
Example 3-2 Running the script to create an IBM PowerSC version of the profile
p52n75:/opt/powersc/uiServer/bin # ./convertProfileToBean.sh
/etc/security/aixpert/custom/PCIv3_Custom_Production.xml
/opt/powersc/uiServer/bin/uiserver
OUTPLAY=
HOME=/
USER=root
JBOSS_HOME=
TEMP=
TMP=
_STARTED=1
LIBPATH=/opt/powersc/uiServer/bin/jre/bin/default:/opt/powersc/uiServer/bin/jre/li
b/ppc64:/opt/powersc/uiServer/bin/jre/lib/ppc64/default:/opt/powersc/uiServer/bin/
jre/bin:
running class com.rocketsoft.nm.vertical.powersc.ProfileToBeanConverter..
p52n75:/etc/security/aixpert/custom # mv PCIv3_Custom_Production.xml.xml
/opt/powersc/uiServer/knowledge/site/powerscui/aixpertProfiles/
p52n75:/opt/powersc/uiServer/knowledge/site/powerscui/aixpertProfiles # ls -al
total 752
drwxr-xr-x 2 root security 4096 Sep 18 13:21 .
drwxr-xr-x 33 root security 4096 May 30 08:19 ..
-rw-r--r-- 1 root system 7063 Sep 17 17:34
GDPRv1_SOX-COBIT_Custom.xml.jxo
-rw-r--r-- 1 root system 23756 Sep 17 17:34
GDPRv1_SOX-COBIT_Custom.xml.xml
-rw-r--r-- 1 root system 146050 Sep 18 13:14
PCIv3_Custom_Production.xml.xml
-rw-r--r-- 1 root system 45714 Sep 18 10:56
PCIv3_Custom_Redbook.xml.jxo
-rw-r--r-- 1 root system 146270 Sep 18 10:56
PCIv3_Custom_Redbook.xml.xml
p52n75:/opt/powersc/uiServer/knowledge/site/powerscui/aixpertProfiles #
p52n75:/opt/powersc/uiServer/knowledge/site/powerscui/aixpertProfiles #
p52n75:/opt/powersc/uiServer/knowledge/site/powerscui/aixpertProfiles # stopsrc -s
pscuiserver
0513-044 The pscuiserver Subsystem was requested to stop.
p52n75:/opt/powersc/uiServer/knowledge/site/powerscui/aixpertProfiles #
p52n75:/opt/powersc/uiServer/knowledge/site/powerscui/aixpertProfiles # startsrc
-s pscuiserver
0513-059 The pscuiserver Subsystem has been started. Subsystem PID is 17367480.
p52n75:/opt/powersc/uiServer/knowledge/site/powerscui/aixpertProfiles #
Figure 3-51 shows how to import and use the custom profile created.
3. Click OK to start applying the new profile, as shown in Figure 3-54.
After a few minutes, you can see the completion results, as shown in Figure 3-55.
Depending on the machine, this process can take approximately 5 minutes to complete.
You can check the status of the process by using the command line.
The simulation process starts, as shown in Figure 3-58.
For all simulated and processed rules, you can analyze, review, and adjust what must be
changed so that all of the rules in the profile can be applied successfully.
This chapter introduces the PowerSC File Integrity Monitoring (FIM) component, which
consists of real-time monitoring for IBM AIX and Linux systems to ensure that these systems
are configured correctly and are consistently in a compliant state.
PowerSC RTC works with the PowerSC Compliance Automation feature to provide an
automatic compliance check. Whenever a monitored file is changed, RTC runs the pscxpert
command to capture and notify you if any compliance violations occurred. For more information
about the PowerSC compliance automation feature, see Chapter 3, “Compliance automation”
on page 75.
Note: At the core of the AIX Event Infrastructure is a pseudo-filesystem, the Autonomic Health Advisor File System
(AHAFS), which is implemented as a kernel extension. AHAFS mainly acts as a mediator that takes the requests for
event registration, monitoring, and unregistering from the processes that are interested in monitoring for events. It
forwards the requests to the corresponding event producers (code responsible for triggering the occurrence of an
event) in the kernel space, processes the callback functions when the event occurs, and notifies the registered users
or processes with useful information.
4.1.1 Detailed implementation
RTC can act as a stand-alone component for monitoring the configured files in the predefined
file list and notifying the administrators, in real time, of any changes to these files. RTC can
also work with the Compliance Automation to ensure that all policies are adhered to and that
no unauthorized changes go unnoticed by the administrators.
It is possible to configure both monitoring functions for the same file. You also can change the
type of monitoring of a file at any time. RTC can also monitor directories to notify if any new
files or directories are created or deleted inside the monitored directory.
RTC uses the AIX AHAFS feature for the detection of any potential configuration change,
which reduces monitoring overhead to a bare minimum. No scheduled monitoring jobs are
run because configuration changes are captured in real time by this technology.
When installed, the software provides a default list of files to monitor for changes, and this list
can be modified by adding or removing files and directories as needed. RTC is configured
by specifying the alert notification email addresses. This feature can also be extended to
support various options through a configuration file.
If the PowerSC GUI agent is configured on the LPAR where RTC is deployed, the agent
automatically sends RTC alerts to the PowerSC GUI server. This feature helps to monitor
RTC events centrally on PowerSC GUI server.
After finishing the initial installation and configuration, the RTC daemon rtcd uses the
monitoring capabilities of the AHAFS and validates if any of the files that are contained in the
predefined list were changed. If a violation occurred, an alert is sent to a configured list of
emails and the monitoring continues.
To ensure that you meet all the software requirements, run the commands that are shown in
Example 4-1.
Example 4-1 shows that you deployed an IBM AIX 7.2 with Technology Level 1 and bos.ahafs
version 7.2.1.0.
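Example 4-1 is not reproduced here; commands similar to the following can be used to
confirm the AIX Technology Level and the bos.ahafs fileset level:
oslevel -s
lslpp -l bos.ahafs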
For more information about IBM AIX Technology Level or to download the necessary fixes,
see this IBM Fix Central web page.
4.1.3 Installation
The RTC component is provided in the PowerSC Standard Edition. It is not part of the base
AIX operating system. PowerSC RTC is installed directly on the host and does not require any
configuration on PowerVM.
You can also list the contents of the file set as shown in Example 4-2.
Path: /etc/objrepos
powerscStd.rtc.rte 1.2.0.0
/etc/security/pscexpert/bin/rtc_lku
/etc/security/rtc/rtcd.conf
/etc/security/rtc/rtcd_policy.conf
Four main objects are installed with PowerSC RTC, as listed in Table 4-1.
To avoid mistakes, use the top-level menu smitty to configure the software. The fast path to
run this configuration is smitty RTC, as shown in Figure 4-5.
Enter information, such as email address and email subject, as shown in Figure 4-6.
You can verify the status of rtcd daemon by running the lssrc command, as shown in
Figure 4-8.
The rtcd daemon is a subsystem under the AIX System Resource Controller (SRC);
therefore, you must use the following SRC commands to manage the rtcd subsystem:
To check the subsystem use:
lssrc -s rtcd
To stop the subsystem use:
stopsrc -s rtcd
To start the subsystem use:
startsrc -s rtcd
You can also configure RTC by using the command line with the mkrtc command. The syntax
of the mkrtc command is shown in Example 4-3.
Example 4-3 Displaying different options to configure RTC with mkrtc command
mkrtc -e <email,email,...> [-a <alertStyle>] [-d <debug>] [-i <infoLevel>] [-s
<emailSubject>] [-c <minchecktime>]
where:
<email>: Email address where alerts are sent to
<alertStyle>: Takes 1 of 3 values: once, event, and always. Default is once
<debug>: Takes 1 of 2 values: on or off. Default is off
<infolevel>: Takes 1 of 3 values: 1, 2, and 3. Default is 1
<emailSubject>: The text for the Subject line of the email alert
<minchecktime>: Minimum interval time that rtcd uses for compliance checks.
Default is 30 min. 0 indicates never.
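For example, based on the syntax that is shown in Example 4-3, a command similar to the
following configures RTC to send an alert for every event (the email address and subject
text are illustrative only):
mkrtc -e security-admin@example.com -a event -s "RTC alert" -c 30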
The RTC feature can also be managed from the PowerSC GUI server. For this process, the
PowerSC GUI agent must be configured on this LPAR. For more information about how to
configure PowerSC GUI agent, see 2.4, “Installing the UIAgent” on page 40.
After you start RTC on the LPAR, it appears as running (shown as a tick mark) in the
PowerSC GUI Security window, as shown in Figure 4-9.
rtcd.conf file
The rtcd.conf file defines the RTC configuration details. Various fields in this file can be
configured to define the RTC configuration. The options to configure the RTC subsystem
fields are listed in Table 4-2.
infolevel: Specifies the information level of file modifications. Valid values are 1, 2, and 3.
debug: Specifies whether to turn on debugging. Valid values are on and off.
snmptrap_enable: Specifies whether to enable or disable SNMP traps. Valid values are yes and no.
complianceCheck: Specifies whether rtcd calls pscxpert to do compliance checks. Valid values are on and off.
You can manually edit these options in the configuration file or use the PowerSC GUI.
To configure RTC by using the PowerSC GUI, go to the Security page of the PowerSC GUI
server, select the LPAR, and click Configure RTC, as shown in Figure 4-10.
Figure 4-11 shows all of the fields that are available to edit.
The PowerSC GUI server automatically creates a backup of the rtcd.conf file whenever it is
modified. This backup provides rollback options if you want to return to any of the previous
configuration settings. The PowerSC GUI server also provides a Copy RTC configuration
option that can be used if you want to copy the same RTC configuration to other LPARs.
rtcd_policy.conf
The rtcd_policy.conf file is responsible for storing the list of files and directories to be
monitored and specifying the type of monitoring. After installing RTC, this file includes all of
the predefined configurations, which can be altered by the administrator as needed.
The available options to define the type of monitoring are shown in Table 4-3.
modDir This attribute is for directories. It monitors whether a new file or directory is
created or deleted inside the monitored directory.
If you want to add or remove any file from the monitoring, you can do so directly by editing the
/etc/security/rtc/rtcd_policy.conf file, by using the chsec command, or by using the
PowerSC GUI.
Removing files from monitoring using chsec
To remove /tmp/myfile from the monitoring list, use the following command:
chsec -f /etc/security/rtc/rtcd_policy.conf -s /tmp/myfile -a eventtype=
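Conversely, a file can be added to the monitoring list with the same command by setting an
event type. The following line is a sketch only; the eventtype value modFile is an assumption,
so verify the valid values for your release in the rtcd_policy.conf documentation:
chsec -f /etc/security/rtc/rtcd_policy.conf -s /tmp/myfile -a eventtype=modFile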
You can also use the PowerSC GUI server to manage the rtcd_policy.conf file. Go to
PowerSC GUI server Security page, select the LPAR, and click Edit RTC File List, as shown
in Figure 4-13.
The action lists all the files present in the AIX system. The list shows two options for each file:
Content and Attribute.
If an option is selected, the file is being monitored for that attribute. You can clear the attribute
if you want to remove this file from monitoring.
Note: To monitor a directory, you must browse to the directory and find the file name that is
shown as a dot (.), which indicates the current directory. You must select the option for the
dot (.) entry to enable monitoring of the directory.
4. Browse to the /confidential.txt file and select the Content and Attribute options (see
Figure 4-15).
An RTC alert is generated in the PowerSC GUI server, as shown in Figure 4-17 on page 126.
If you click the alert message, it displays more information, as shown in Figure 4-18.
A new alert is generated in the PowerSC GUI server, as shown in Figure 4-19.
Figure 4-19 Another alert generated after the access was changed
Figure 4-20 shows the details of the alert about the access modification on the file.
After enabling the SNMP trap capability as shown in Example 4-4 on page 127, the rtcd
daemon starts sending SNMP traps for each detected violation to the SNMP server by using
the snmptrap command.
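Example 4-4 is not reproduced here. Conceptually, enabling the SNMP trap capability
amounts to setting the snmptrap_enable field (see Table 4-2) in the
/etc/security/rtc/rtcd.conf file and restarting the daemon; the exact field syntax in the
configuration file might differ on your system:
snmptrap_enable = yes
stopsrc -s rtcd
startsrc -s rtcd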
This feature helps to monitor the system for integrity violations and enforce runtime policies
to disable the execution of binaries, shell scripts, libraries, and kernel extensions if they are
tampered with. It can help you protect against malware attacks by allowing only whitelisted
commands to run.
TE refers to a collection of features that are used to verify the integrity of the system’s trusted
computing base, which in the context of TE is called the Trusted Signature Database (TSD).
In addition, TE implements advanced security policies, which together can be used to
enhance the trust level of the complete system.
The usual way for a malicious user to harm the system is to access the system and then
install trojan horses or rootkits, or tamper with security-critical files so that the system
becomes vulnerable and exploitable.
The central idea behind the set of features under TE is to prevent such activities or in a
worst-case scenario, identify if any such thing occurs to the system. Using the functionality
provided by TE, the system administrator can decide on the actual set of executables that are
allowed to run or the set of kernel extensions that are allowed to be loaded.
TE can also be used to audit the security state of the system and identify files that changed,
which increases the trusted level of the system and makes it more difficult for the malicious
user to harm the system. The set of features under TE can be grouped into the following
categories:
Managing the TSD
Auditing the integrity of the TSD (system integrity check)
Configuring security policies (runtime integrity check)
TE Path, Trusted Library Path, Trusted Shell, and Secure Attention Key
PowerSC GUI can be used to centrally manage the TE feature of multiple AIX endpoints. For
this feature, the PowerSC GUI agent must be configured on the AIX endpoint. For more
information about setting up the PowerSC GUI, see 2.2, “Installing IBM PowerSC GUI server”
on page 21.
For more information about the AIX TE feature, see IBM Knowledge Center.
owner Owner of the file. This value is computed by the trustchk command when the file is added to the TSD
group Group of the file. This value is computed by the trustchk command when the file is added to the TSD
mode Comma-separated list of values. This value is computed by the trustchk command. The permissible
values are SUID, SGID, SVTX, and TCB. The file permissions must be the last value and can be
specified as an octal value. For example, for a file that is set uid and has permission bits as rwxr-xr-x,
the value for the mode is SUID,755
type Type of the file. This value is computed by the trustchk command. The possible values are FILE,
DIRECTORY, MPX_DEV, CHAR_DEV, BLK_DEV, and FIFO.
hardlinks List of hardlinks to the file. Because this value cannot be computed by the trustchk command, it must
be supplied by the user at the same time when a file is added to the database.
symlinks List of symlinks to the file. Because this value cannot be computed by the trustchk command, it must
be supplied by the user when a file is added to the database.
size Defines the size of the file. This value is computed by the trustchk command. A value of VOLATILE
means that the file is changed frequently.
cert_tag This value is computed by the trustchk command when the file is added to the TSD. The field maps
the digital signature of the file with the associated certificate that can be used to verify the file’s
signature. (At the time of this writing, the certificate’s ID is also its file name in
/etc/security/certificates, but this might change in future releases.)
signature The digital signature of the file. VOLATILE means that the file is changed frequently. This field is
computed by the trustchk command.
hash_value Cryptographic hash of the file. This value is computed by the trustchk command. VOLATILE means
that the file is changed frequently.
minslabel Defines the minimum Sensitivity Label for the object (when running Trusted AIX).
maxlabel Defines the maximum Sensitivity Label for the object (when running Trusted AIX). This attribute is not
applicable to regular files and FIFO.
intlabel Defines the integrity label for the object (when running Trusted AIX).
innateprivs Defines the innate privileges for the file (used in RBAC).
inheritprivs Defines the inherit privileges for the file (used in RBAC).
authprivs Defines the privileges that are assigned to the user if they are authorized (used in RBAC).
secflags Defines the file security flags that are associated with the object (used in RBAC). The FSF_TLIB flag
also is available. It marks the object as part of the Trusted Library.
t_accessauth Defines the extra Trusted AIX-specific access authorizations.
t_innateprivs Defines the extra Trusted AIX-specific innate privileges for the file.
t_authprivs Defines the extra Trusted AIX-specific privileges that are assigned to the user if they are authorized.
t_secflags Defines the extra Trusted AIX-specific file security flags that are associated with the object.
Certificates
TE uses certificates to verify the integrity of a file. The certificates are located in the
/etc/security/certificates directory. The default entries in the TSD database are signed by an
IBM private key, and certificates are provided to verify their integrity. To add new files to the
TSD, administrators can provide their own private key and certificate pair. OpenSSL can be
used to generate the keys and sign the file-related hashes.
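As an illustration only (the key length, validity period, and file names are examples and not
the exact values that PowerSC expects), OpenSSL commands of the following form generate
a private key and a matching self-signed certificate:
openssl genrsa -out tsd_private.pem 2048
openssl req -new -x509 -key tsd_private.pem -out tsd_cert.pem -days 365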
trustchk command
The trustchk command is the main command that can be used to manage various
functions of TE. The command provides the following functions:
– Verifies system integrity
– Sets runtime policies
– Manages the TSD database
Before loading the binary, the component responsible for loading the file (system loader)
starts the TE subsystem, which calculates the hash value by using the SHA-256 algorithm.
This runtime calculated hash value is matched with the value that is stored in the TSD. The
binary can be opened and run only if the values match (see Figure 4-22).
The trustchk command can be used to display or configure runtime policies. Table 4-5 lists
various runtime policies.
CHKEXEC Checks the integrity of trusted executables before loading them in memory for execution.
CHKSHLIB Checks the integrity of trusted shared libraries before loading them in memory for execution.
CHKSCRIPT Checks the integrity of trusted shell scripts before loading them in memory.
STOP_UNTRUSTD Stops loading of files that are not trusted; that is, only files belonging to the TSD are loaded.
This policy works with any of the CHK* policies. For example, if CHKEXEC=ON and
STOP_UNTRUSTD=ON, any executable binary that does not belong to the TSD is blocked
from execution.
STOP_ON_CHKFAIL Stops loading of trusted files that fail the integrity check. This policy also works in combination
with CHK* policies. For example, if CHKSHLIB=ON and STOP_ON_CHKFAIL=ON, any
shared library that does not belong to the TSD is blocked from being loaded into memory for
use.
TEP Sets the value of the Trusted Execution Path, and enables or disables it. The TEP consists of
a list of colon-separated absolute paths, such as /usr/bin:/usr/sbin. When this policy is
enabled, only the files that belong to these directory paths can be run. Any executable program
that requests to be loaded and does not belong to the TEP is blocked.
You can run the trustchk -p command to display the current TE policies, as shown in
Figure 4-23.
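For example, the following commands display the current policies and then enable integrity
checking of executables; the second command is illustrative, so choose the policies that fit
your environment:
trustchk -p
trustchk -p TE=ON CHKEXEC=ON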
4.2.3 Trusted Execution integration with PowerSC GUI
The PowerSC GUI server can centrally manage the AIX TE feature. For this management,
you must configure the PowerSC GUI agent on the AIX LPAR.
The PowerSC GUI Security page displays the TE status for the LPAR, as shown in
Figure 4-24.
Figure 4-24 shows an X where TE is not enabled, a tick mark where TE is enabled, and a
dash (-) where the TE status cannot be detected.
You can select the LPAR and go to Configure TE option to enable or disable TE runtime
policies, as shown in Figure 4-25.
After you enable TE, the PowerSC GUI indicates that TE is enabled, as shown in Figure 4-27.
The PowerSC GUI shows a TE alert, which indicates that TE is switched on. If you click the
alert, more information is displayed, as shown in Figure 4-28.
If you run the trustchk command on the LPAR, you can see that the policies you changed
from the PowerSC GUI server are enabled on the LPAR, as shown in Example 4-7.
After the scan is complete, you can view the result by clicking the TE: Concern alert, as
shown in Figure 4-31.
Figure 4-32 shows information from the TE Concern alert.
The same result can be viewed by running the trustchk -n ALL command on the LPAR, as
shown in Figure 4-33.
As you can see, PowerSC GUI makes it easier to manage TE on multiple endpoints through a
single console.
Note: To get TE runtime alerts, you must check that syslog is enabled in the LPAR. If
syslog is disabled, you must enable it and restart the PowerSC GUI agent.
TE blocks the execution of a command if an integrity mismatch occurs in a trusted file; for
example, chfs.
The /usr/bin/chfs command is a trusted file. Without any tampering, you can run the
command as shown in Figure 4-35.
After modifying the permissions of the chfs file, attempt to run it again. This time, TE detects
the modification and blocks the execution, as shown in Example 4-9.
The TE alert can be seen in the PowerSC GUI server, as shown in Figure 4-36.
The details of an alert can be viewed by clicking the alert in Figure 4-36. The details are
shown in Figure 4-37 and Figure 4-38 on page 142.
You can fix the problem by running the trustchk -t <filename> command.
PowerSC GUI simplifies this process because the key management is done by the PowerSC
GUI server. If you add a new file to TSD by using the PowerSC GUI, you are not required to
run openssl commands to create keys. This process is internally managed by the PowerSC
GUI server.
To add a new file to TSD, go to PowerSC GUI Security page and select the LPAR. Click the
Edit TE File List option, as shown in Figure 4-39.
Figure 4-39 Using the PowerSC GUI to add a new file to TSD
Browse to select the file. In this case, add the /home/config.ksh file to TSD, as shown in
Figure 4-40 on page 143.
Figure 4-40 TE File List Configuration pane
You can verify the new entry by running the trustchk -q command on the LPAR, as shown in
Figure 4-41.
Figure 4-41 Running the trustchk command to query TSD for the /home/config.ksh file
Note: At the time of this writing, the PowerSC GUI does not automatically load the new
information from TSD into the kernel. You must restart TE for this load to take effect. This
process can be done by running the trustchk -p TE=ON command on the endpoint.
However, if runtime policies are enforced without proper planning, legitimate commands
might be blocked by TE, which can affect the usability and availability of the system. For
example, if you did not add the application or middleware commands to TSD and enable the
runtime policy to block untrusted commands, the application commands do not run.
To prevent such a scenario, you can initially enable TE in log only mode and review your
system for few days. After you are satisfied with the configuration by reviewing the log, you
can enable TE in enforcement mode.
Complete the following steps to enable TE without causing disruption to the usability of the
system:
1. Enable TE in log only mode. In this mode, TE logs only the integrity mismatch errors, and
does not block the execution. For this process, you must enable the following TE flags:
TE=ON
CHKEXEC=ON
CHKSHLIB=ON
CHKSCRIPT=ON
CHKKERNEXT=ON
STOP_UNTRUSTD=OFF
STOP_ON_CHKFAIL=OFF
2. Review TE logs directly on the LPAR or by using the PowerSC GUI.
3. After reviewing the alerts, identify the commands that are legitimate and can be added to
TE database.
4. Add the commands to TE database. You can use command line option or use PowerSC
GUI.
5. Keep monitoring to ensure no new TE alerts occur on the system. After you confirm that
no new alerts are present, enable TE in enforcement mode. In this mode, TE blocks the
execution. For this process you must enable the following TE flags:
TE=ON
CHKEXEC=ON
CHKSHLIB=ON
CHKSCRIPT=ON
CHKKERNEXT=ON
STOP_UNTRUSTD=ON
STOP_ON_CHKFAIL=ON
6. Continue monitoring the alerts.
Note: As with any other security feature, we strongly recommend enabling TE in a test
environment first. After you test the settings thoroughly, enable TE in the production
environment.
4.2.8 Updating an application that is integrated with TE
If you want to update an application that is integrated with TE, you must update the TSD
database after the application update is completed. This process is required; otherwise, the
TSD entries for the application files do not match with the files on the system.
2. Select the Turn off option to disable TE, as shown in Figure 4-43 on page 146.
Tip: The PowerSC GUI provides an option to automatically turn on TE after a specific
period. You can choose this period from the available options. This feature is useful so
that you do not forget to turn on TE after the installation is finished.
5. Enable TE by using the command line or from the PowerSC GUI, as shown in Figure 4-44.
PowerSC uses the Linux auditd feature to track changes to monitored files.
4.3.1 Prerequisites
The following prerequisites must be met to enable FIM on Linux endpoints (see also the note
after this list):
Enable auditing on Linux endpoints by starting the auditd subsystem by using the
systemctl command.
To enable auditing from the command line, use the following command:
systemctl start auditd
To check the status of audit, use the following command:
systemctl status auditd
You can verify the installation by running the rpm command, as shown in Example 4-10.
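In addition to starting auditd as described in the first prerequisite, you also can enable the
service persistently so that auditing remains active after a reboot:
systemctl enable auditd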
4.3.2 Configuration
To configure FIM on a Linux endpoint, you must configure the PowerSC GUI agent on the
Linux LPAR. For more information, see 2.4, “Installing the UIAgent” on page 40.
The auditd subsystem appears as enabled in the PowerSC GUI, as indicated by a tick mark
(see Figure 4-46).
In the LPAR, you can verify that the file is being monitored by auditd, as shown in
Example 4-11.
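Example 4-11 is not reproduced here. On the Linux endpoint, a command like the following
lists the active audit rules so that you can confirm the file watch is in place:
auditctl -l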
The file modification action generates an alert in the PowerSC GUI, as shown in Figure 4-51.
4.4 FIM reporting with PowerSC GUI
PowerSC GUI provides an excellent reporting feature for viewing FIM events centrally.
Timeline Report: The timeline report provides FIM alerts by month, day, or hour, as shown
in Figure 4-56.
For more information about PowerSC reporting features, see 2.7.4, “Reports tab” on page 62.
Numerous security defenses are needed to properly secure any type of computer
environment. Some security defenses, such as employee security education, are not even
considered technical security controls.
One of the most fundamental and important cyber defenses is vulnerability management
because when vulnerabilities are published, attackers are informed of these vulnerabilities.
Attackers can use this new information to enhance their ability to leverage vulnerabilities
against the organizations they target.
Fortunately, PowerSC provides Trusted Network Connect (TNC) and Patch Management as a
key solution to use when implementing your vulnerability management cyber defenses.
TNC and Patch Management can query AIX or VIOS hosts to determine whether they are
properly patched. This process of querying a host to determine whether a host meets
compliance requirements is referred to as verification. TNC provides automated and manual
methods for specifying the criteria that determines whether a host is classified as compliant or
non-compliant to the patch policy.
TNC and Patch Management can retrieve relevant AIX and VIOS updates from IBM Fix
Central and, for open source packages in installp and RPM formats, from other sites (for
more information, see this web page). This process allows TNC to automatically download
updates and provide email messaging to notify you when new updates are downloaded and
available for administrator-initiated deployment. Because TNC is primarily a command
line-based solution, it can facilitate automation of security patch monitoring and deployment.
5.2 Component architecture
TNC is a set of network security specifications that are recommended by Trusted Computing
Group (TCG). TCG is a consortium of multiple organizations, with IBM as one of its key
promoters.
The goal of TCG is to develop, define, and promote open standards for trusted computing and
security technologies. TCG specifications include hardware building blocks and software
interfaces across multiple platforms, peripherals, and devices. These specifications enable
secure computing environments with the primary goal of helping users to protect their
informational assets from compromise because of software attacks.
A subgroup of TCG, TNC defined this solution architecture, which can help administrators
enforce policies to effectively control access to the network infrastructure. The endpoints that
are requesting access are assessed based on the integrity measurements of critical
components that can affect their operational environment. The integrity verification can be
strengthened by using the capabilities of the Trusted Platform Module (TPM) and the Trusted
Software Stack (TSS). The TNC architecture is built on well-established security standards,
such as 802.1X, EAP, RADIUS, IPSec, and TLS/SSL.
Figure 5-1 also shows TNC and Patch Management serving as the patch repository. The TNC
and Patch Management daemon, tncpmd2, uses the AIX Service Update Management
Assistant (SUMA) and cURL to download AIX Technology Service Packs, interim fixes, and
open source packages from the different sites shown in Figure 5-1.
This retrieval of software can be done by using a direct connection to the internet or an HTTP
or HTTPS proxy. The TNCS is the PowerSC component that issues verify and update
operations against TNC Clients.
The verification of the TNCC occurs through a network connection that is established
between the TNCS daemon (tncsd) and the TNCC daemon (tnccd). When the TNCS issues
an update operation against a TNC Client, the operation is communicated to the TNC and
Patch Management by using a network connection that is established between the TNCS
daemon and the TNCPM daemon (tncpmd2). When the TNC and Patch Management
daemon receives the update operation request from the TNCS, it instructs the Network
Installation Manager (NIM) server on the local host to perform the TNC update by way of NIM.
Important: Both of these operations can also be issued against a TNC ipgroup, which is a
TNC-defined group of hosts.
2. The TNCPM takes the TNCS request and submits it for execution by using the underlying
NIM server that is installed on the local host of the TNCPM server.
3. A NIM operation performs the update against the targeted TNCC by using the NIM client
on the local host of the TNCC.
4. At the conclusion of the NIM operation, the results are returned to the TNCS.
5. The TNCS reports the status of the update operation at the conclusion of the update
operation. This result can be viewed from the command-line interface. The details of the
results also are stored on the local TNCS.
The TNC framework enables administrators to monitor the integrity of the systems in the
network. The TNC implementation on AIX is integrated with the AIX patch distribution
infrastructure to build a complete patch management solution.
TNC specifications must satisfy the requirements of AIX and IBM POWER® family system
architecture. The components of TNC provide a complete patch management solution on the
AIX operating system.
This configuration enables administrators to efficiently manage the software configuration on
AIX deployments. It also provides tools to verify the patch levels of the systems and generate
a report on the clients that are not compliant. Also, patch management simplifies the process
of downloading and installing the patches.
Simple reverification or update for new security updates.
When the TNCPM downloads a new security update, the default policy can be used to
quickly verify your TNC clients against the new ifix. The new ifix also can be immediately
deployed to one or more TNC clients.
For more information, see 5.6.1, “Verifying the Trusted Network Connect Client” on
page 182.
Correlation of patches with endpoint service pack level.
An AIX ifix typically corresponds to one specific AIX service pack level. Without a patch
management solution, you must maintain the mappings of several different ifixes for the
different service pack levels of your environment. TNC eliminates this complexity by
maintaining this ifix mapping for you. For example, if you verify a TNC client and TNC
reports an ifix is missing, you can be sure that the reported ifix corresponds to the specific
service pack level of the TNC client that was verified.
Patch recommendations that are based on the filesets that are installed on the TNCC.
When a TNC Client is verified, the filesets that are installed on the system are inventoried.
TNC uses this inventory to provide ifix installation recommendations that are based on the
actual filesets that are installed on the specific endpoint. If ifixes correspond to a specific
service pack level but are not needed because the corresponding filesets are not installed,
TNC identifies this detail and reports this distinction without identifying it as a missing ifix.
Therefore, you see only compliance failures for vulnerabilities that correspond to the
filesets that are installed on your endpoints.
Extensive installation support, including open source packages in rpm and installp format.
PowerSC v1.1.6.0 introduced the ability to download open source packages. This new
functionality allows you to define open package groups so you can use the standard verify
and update operations with open source packages. With this new extension of open
source packages, TNC provides a comprehensive software patching solution that meets
any type of software update requirements for AIX systems.
Lightweight component architecture that provides excellent performance.
PowerSC TNC is a solution specifically for AIX environments. Because it specializes in
AIX endpoints, it was implemented with a small code base that provides fast performance
for update and verification operations. This small code base also integrates multi-threaded
support for all verify operations, so the performance of these operations against groups of
systems is as efficient as possible.
Provides restart details about update operations.
Most security ifixes do not require a restart after being applied, but some do. TNC
eliminates the question of whether an update requires a restart by displaying a restart
required field for updates that require a restart after being applied.
Flexible command line functions that facilitate automation.
PowerSC TNC implemented its functionality by using standard UNIX command line
conventions. This feature is conducive for integration with solutions that provide
automation, such as in-house scripting or third-party automation solutions.
Automatic updating of patch repository that includes updating ifixes with superseding
versions.
When a vulnerability is first identified, an initial security ifix is published to IBM Fix Central.
However, sometimes newer versions for the same vulnerability are later published. When
this issue occurs, the old version of the ifix must be replaced with the newer superseding
version.
PowerSC TNC addresses this requirement by not only downloading the newer version of
the ifix to the TNCPM, but also by having the TNCS verify operation check if an ifix is
superseded by another ifix that is based on an ifix number and related fileset information.
If the TNCS does detect that an ifix is superseded by a newer version, the TNCS uses the
superseding version for the verify operation.
Minimum requirements
The following table lists general minimum recommendations. Requirements can vary
depending on the number of AIX technology levels and service packs that you must
support and the number of TNC clients.
Important: The resource sizes that are described in this section are dedicated to TNC.
This sizing does not consider non-TNC elements that are running on the same host on
which the TNC component is running.
Component   Disk space   Memory
TNCPM       25 GB        5 GB
TNCS        2 GB         1 GB
TNCC        1 GB         500 MB
Table 5-3 lists the requirements matrix for TNCPM.
Table 5-4 lists the fileset installation matrix for TNC components.
openpts.verifier No No No No
powerscStd.ice No No No No
powerscStd.rtc No No No No
powerscStd.svm No No No Yes
powerscStd.tnc_pm Yes No No No
powerscStd.uiAgent No No Optional No
powerscStd.uiServer No No No No
powerscStd.vlog No No No No
ca-certificates Yes No No No
curl Yes No No No
libgcc Yes No No No
Important: If you want to add the ability for the PowerSC GUI to perform verification or
update operations against a TNC Client, install the TNCC as a GUI-managed endpoint by
using the powerscStd.uiAgent fileset and corresponding configuration procedure.
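A minimal sketch of installing that fileset with installp follows; /tmp/powersc is a placeholder for the directory that holds your PowerSC installation images, and the corresponding GUI agent configuration procedure is documented separately:
# installp -acgXd /tmp/powersc powerscStd.uiAgent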
TNCS installed on AIX host No N/A Yes
Note: For a security best practice, it is recommended that the TNCS and TNCPM be
installed on separate hosts. The rationale for this suggestion is that if an attacker accesses
the system that contains the TNCPM or TNCS, they can access only one PowerSC
component, not both. However, the matrix in Table 5-5 shows that the TNCS and the
TNCPM properly function when they are installed on the same host.
For more information about how to restrict SUMA to use only https for all communication, see
5.5.2, “Configuring the TNCPM” on page 165.
Ifix download requirements.
Table 5-7 lists the server information for ifix downloads.
Table 5-7 Server information for ifix software download
Server Protocol/port
www3.software.ibm.com https/443
Important: Contact your network administrator for help or more information about
specifying the correct configuration for your proxy.
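If your TNCPM must reach the internet through a proxy, SUMA can be pointed at it through its configuration settings. The following is a minimal sketch that assumes your SUMA level supports the HTTP_PROXY and HTTPS_PROXY settings; proxy.example.com:3128 is a placeholder for your proxy host and port:
# suma -c -a HTTP_PROXY=http://proxy.example.com:3128
# suma -c -a HTTPS_PROXY=http://proxy.example.com:3128
Run suma -c with no arguments to list the resulting configuration.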
3. Complete the following steps to configure SUMA:
a. Set Download Protocol to HTTPS.
The default configuration of SUMA specifies HTTP for the download protocol. HTTPS
must be used for the download protocol. Run the following command on the TNCPM:
# suma -c -a DOWNLOAD_PROTOCOL=HTTPS
The configuration is listed, as shown in Example 5-2 on page 166.
b. Verify that SUMA can receive data from IBM Fix Central and IBM Electronic Customer
Care (ECC), as shown in the following working example (condensed using “… etc …”):
# suma -x -a Action=Preview -a RqType=Latest
****************************************
Performing preview download.
****************************************
Partition id was unassigned; will attempt to assign it.
Partition id assigned value 12
Download SUCCEEDED: /usr/sys/inst.images/installp/ppc/7200-01-04-1806.bff
Download SUCCEEDED: /usr/sys/inst.images/installp/ppc/U877996.bff
Download SUCCEEDED: /usr/sys/inst.images/installp/ppc/U877995.bff
… etc …
Download SUCCEEDED: /usr/sys/inst.images/installp/ppc/U872710.bff
Download SUCCEEDED: /usr/sys/inst.images/installp/ppc/U872706.bff
Download SUCCEEDED: /usr/sys/inst.images/installp/ppc/U872704.bff
Total bytes of updates downloaded: 2011392512
Summary:
271 downloaded
0 failed
0 skipped
root@lbstnc1>
c. Verify that SUMA can download data from IBM Fix Central and IBM ECC.
This second test is a deeper verification that your TNCPM can download updates from
the internet, as shown in the following working example:
root@lbstnc1> suma -x -a Action=Download -a RqType=PTF -a RqName=U813941 \
-a FilterML=6100-01 -a DLTarget=/tmp
Partition id was unassigned; will attempt to assign it.
Partition id assigned value 12
Download SUCCEEDED: /tmp/installp/ppc/U813941.bff
Total bytes of updates downloaded: 1331200
Summary:
1 downloaded
0 failed
0 skipped
root@lbstnc1>
d. Verify curl can interact with IBM Fix Central:
# curl --silent --cacert /etc/security/certificates/tnc/IBM_IFIX_cert.pem
--list-only https://www3.software.ibm.com/aix/efixes/security/
e. Initialize TNCPM.
This step is the preliminary step in configuring TNCPM. The TNCPM downloads the
latest service pack and the latest ifixes relative to the latest service pack in question.
The command uses the following syntax:
pmconf init -i <download interval> -l <TL List> -A [ -P <download path>] [
-x <ifix interval>] [ -K <ifix key>]
The following example is an actual working example (condensed using “… etc …”):
root@lbstnc1> pmconf init -i d1:h8:m0 -l 7200-01 -A -x 60
New ifix interval check set to 60
accept_all_licenses for TNC Clients update set to yes
New Service Pack interval check set to d1:h8:m0
Initializing 7200-01
Downloading Metadata for 7200-01
Platform Extension: information for proxy SAS not found in repository
Partition id was unassigned; will attempt to assign it.
Partition id assigned value 12
Storing auth proxy creds for SAS
successfully stored auth proxy creds for SAS
Storing auth proxy creds for PROFILE_URIS_LENGTH
successfully stored auth proxy creds for PROFILE_URIS_LENGTH
Storing auth proxy creds for PROFILE_URI_0
successfully stored auth proxy creds for PROFILE_URI_0
Download SUCCEEDED:
/var/tnc/tncpm/fix_repositories/7200-01/metadata/installp/ppc/7200-01-04-180
6.dd.xml
Download SUCCEEDED:
/var/tnc/tncpm/fix_repositories/7200-01/metadata/installp/ppc/7200-01-04-180
6.install.tips.html
Download SUCCEEDED:
/var/tnc/tncpm/fix_repositories/7200-01/metadata/installp/ppc/U875433.pd.sdd
Download SUCCEEDED:
/var/tnc/tncpm/fix_repositories/7200-01/metadata/installp/ppc/U875433.dd.xml
Download SUCCEEDED:
/var/tnc/tncpm/fix_repositories/7200-01/metadata/installp/ppc/aix_7200-01-01
.special.note.txt
Total bytes of updates downloaded: 544178
Summary:
17 downloaded
0 failed
0 skipped
Latest Service Pack is 7200-01-04
Downloading Service Pack 7200-01-04
Partition id was unassigned; will attempt to assign it.
Partition id assigned value 12
New ifix interval check set to 60
accept_all_licenses for TNC Clients update set to yes
New Service Pack interval check set to d1:h8:m0
Initializing 7200-01
Downloading Metadata for 7200-01
Platform Extension: information for proxy SAS not found in repository
Partition id was unassigned; will attempt to assign it.
Partition id assigned value 12
Storing auth proxy creds for SAS
successfully stored auth proxy creds for SAS
Storing auth proxy creds for PROFILE_URIS_LENGTH
successfully stored auth proxy creds for PROFILE_URIS_LENGTH
Storing auth proxy creds for PROFILE_URI_0
successfully stored auth proxy creds for PROFILE_URI_0
Download SUCCEEDED:
/var/tnc/tncpm/fix_repositories/7200-01/metadata/installp/ppc/7200-01-04-180
6.dd.xml
Download SUCCEEDED:
/var/tnc/tncpm/fix_repositories/7200-01/metadata/installp/ppc/7200-01-04-180
6.install.tips.html
Download SUCCEEDED:
/var/tnc/tncpm/fix_repositories/7200-01/metadata/installp/ppc/7200-01-04-180
6.pd.sdd
Download SUCCEEDED:
/var/tnc/tncpm/fix_repositories/7200-01/metadata/installp/ppc/U875433.pd.sdd
Download SUCCEEDED:
/var/tnc/tncpm/fix_repositories/7200-01/metadata/installp/ppc/U875433.dd.xml
Download SUCCEEDED:
/var/tnc/tncpm/fix_repositories/7200-01/metadata/installp/ppc/aix_7200-01-01
.special.note.txt
Total bytes of updates downloaded: 544178
Summary:
17 downloaded
0 failed
0 skipped
Latest Service Pack is 7200-01-04
Downloading Service Pack 7200-01-04
Partition id was unassigned; will attempt to assign it.
Partition id assigned value 12
Download SUCCEEDED:
/var/tnc/tncpm/fix_repositories/7200-01/SPs/7200-01-04/installp/ppc/7200-01-
04-1806.bff
NOTE: … removing actual output to condense text….
Download SUCCEEDED:
/var/tnc/tncpm/fix_repositories/7200-01/SPs/7200-01-04/installp/ppc/U872704.
bff
Total bytes of updates downloaded: 2011392512
Summary:
271 downloaded
0 failed
0 skipped
Preparing to copy install images (this will take several minutes)...
Now checking for missing install images...
Checking for ifixes...
Checking website for advisories...
Scanning advisories for consistency...
Searching advisories for applicable ifixes (this will take several
minutes)...
Updating the cache for Product: [aixbase]
Downloading ifixes for Product: [aixbase]
Updating the cache for Product: [bellmail]
… note: removing some actual text to condense output …
Updating the cache for Product: [wpar]
Downloading ifixes for Product: [wpar]
83 new ifixes downloaded.
Applying ifix(es) (this will take several minutes)...
Processing [aixbase_advisory]
Processing [bellmail_advisory]
Processing [bellmail_advisory2]
Processing [nettcp_advisory]
Processing [nettcp_advisory2]
NEW IFIX
New Ifix registered with TNCPM for 7200-01-04 102m_ifix.180105.epkg.Z
NEW IFIX
New Ifix registered with TNCPM for 7200-01-04 fips_102m.180105.epkg.Z
Processing [openssl_advisory26]
NEW IFIX
New Ifix registered with TNCPM for 7200-01-04 102ma_ifix.180410.epkg.Z
NEW IFIX
New Ifix registered with TNCPM for 7200-01-04 fips_102ma.180410.epkg.Z
Processing [pconsole_advisory]
Processing [pconsole_advisory2]
Processing [powerha_advisory]
Processing [rc4_advisory]
Processing [rmsock_advisory2]
NEW IFIX
New Ifix registered with TNCPM for 7200-01-04 IJ06907s1a.180607.epkg.Z
Processing [sendmail1_advisory]
Processing [tcpdump_advisory3]
Processing [tftp_advisory]
Processing [variant4_advisory]
NEW IFIX
New Ifix registered with TNCPM for 7200-01-04 IJ05820m4a.180423.epkg.Z
Processing [wpar_advisory]
root@lbstnc1>
f. Initialize after a failed initialization.
Suppose that during your initialization, some type of failure caused the initialization to
fail. This failure can be because of various factors, such as network, disk, or power
failure. If a failure occurs, retry the initialization. TNC automatically deletes files that
were downloaded in the failed initialization.
g. Add lower-level service packs.
After you initialize the TNCPM, it is a good practice (but not required) to download all
service packs that are related to the technology levels you initialized against. For
example, suppose you initialized the TNCPM against 7200-01 and the initialization
downloaded 7200-01-04 as the latest service pack. You add service packs 1 - 3 for
7200-01.
Note: Check that you have sufficient disk space throughout the download process;
otherwise, your installation can fail.
h. Configure and start the TNCPM daemon.
The pmconf mktncpm command specifies the port on which the TNCPM listens and the address and port of the TNCS. The following output is produced:
root@lbstnc1> pmconf mktncpm pmport=20000 tncserver=10.3.126.34:10000
Starting component :TNCPM
tncpmd daemon started successfully pid 12255622
root@lbstnc1>
i. Query open source sites for packages.
The TNCPM queries several sites for open source packages. Use this function to
determine which packages you might download to your TNCPM.
The command uses the following syntax:
pmconf get -L -o <package> -V <version | all> -T <installp | rpm>
The following output is produced:
root@lbstnc1> pmconf get -o lsof -T rpm -L -V all
Attempting to obtain list for [lsof] from
[https://www3.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/lsof]
No matches found
tncpm_openpack_download:process_sitelist_entry: Error [1] attempting to
locate package(s) from this site
Attempting to obtain list for [lsof] from
[http://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/lsof]
No matches found
tncpm_openpack_download:process_sitelist_entry: Error [1] attempting to
locate package(s) from this site
Attempting to obtain list for [lsof] from
[ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/lsof]
No matches found
tncpm_openpack_download:process_sitelist_entry: Error [1] attempting to
locate package(s) from this site
Attempting to obtain list for [lsof] from [ftp://www.oss4aix.org/RPMS/lsof]
The following packages were found:
lsof-4.85-1.aix6.1.ppc.rpm
lsof-4.85-1.aix7.1.ppc.rpm
lsof-4.86-1.aix6.1.ppc.rpm
lsof-4.86-1.aix7.1.ppc.rpm
lsof-4.87-1.aix6.1.ppc.rpm
lsof-4.87-1.aix7.1.ppc.rpm
lsof-4.88-1.aix6.1.ppc.rpm
lsof-4.88-1.aix7.1.ppc.rpm
lsof-4.88-1.aix7.2.ppc.rpm
lsof-4.89-1.aix6.1.ppc.rpm
lsof-4.89-1.aix7.1.ppc.rpm
lsof-4.89-1.aix7.2.ppc.rpm
root@lbstnc1>
j. Download open source packages.
This function allows you to download a particular open source package.
The command uses the following syntax:
pmconf get -o <package> -V <version> -T <installp | rpm> -D <download
directory>
The following example output is produced:
root@lbstnc1> pmconf get -o lsof -V 4.89-1.aix7.2 -T rpm -D /lsof
Attempting to download [lsof] from
[https://www3.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/lsof]
No matches found
tncpm_openpack_download:download_package: Unable to execute the download!
Status=[1]
tncpm_openpack_download:process_sitelist_entry: Error [1] attempting to
locate package(s) from this site
Attempting to download [lsof] from
[http://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/lsof]
No matches found
tncpm_openpack_download:download_package: Unable to execute the download!
Status=[1]
tncpm_openpack_download:process_sitelist_entry: Error [1] attempting to
locate package(s) from this site
Attempting to download [lsof] from
[ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/lsof]
No matches found
tncpm_openpack_download:download_package: Unable to execute the download!
Status=[1]
tncpm_openpack_download:process_sitelist_entry: Error [1] attempting to
locate package(s) from this site
Attempting to download [lsof] from [ftp://www.oss4aix.org/RPMS/lsof]
Download Complete. File(s) saved to /lsof/lsof_4.89-1.aix7.2
root@lbstnc1>
k. Register an open source package to the TNCPM.
This operation updates the TNCPM with an open source package that it downloaded.
The command uses the following syntax:
pmconf add -o <package name> -V <version> -T [installp|rpm] -D <User defined path>
Before you register the open package with the TNCS, verify the package count by using
psconf pull. In the following output, you can see that our TNCS does not include
any registered open source packages:
root@lbstnc1> psconf pull
debug1: [psconf] 11141504:00000001 TNCS_pull(): Command /usr/sbin/tnc
--pull=10.3.126.34:20000
Running transaction
Transaction Summary
Pulled 4 SPs 71 Apars 27 Advisories 239 Ifixes 563 Filesets
0 Packages
Total transaction size: 590821 byte(s)
Total transaction time: 0.01 sec(s)
Transaction succeeded
root@lbstnc1>
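The registration command itself is shown here only as a sketch that reuses the lsof package that was downloaded earlier to the /lsof directory; adjust the package version and path to match your own download:
# pmconf add -o lsof -V 4.89-1.aix7.2 -T rpm -D /lsof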
Now, restart the TNCPM and perform a psconf pull operation to see if a new open
source package is registered to the TNCS. In the following output, you can see that “1
Packages” is now being reported:
root@lbstnc1> pmconf stop
tncpmd daemon stopped successfully
root@lbstnc1> pmconf start
Starting component :TNCPM,SERVER
tnccsd daemon started successfully pid 9830702
root@lbstnc1> psconf pull
debug1: [psconf] 11141522:00000001 TNCS_pull(): Command /usr/sbin/tnc
--pull=10.3.126.34:20000
Running transaction
Transaction Summary
Pulled 4 SPs 71 Apars 27 Advisories 239 Ifixes 563 Filesets
1 Packages
Total transaction size: 591181 byte(s)
Total transaction time: 0.01 sec(s)
Transaction succeeded
root@lbstnc1>
l. Enable TNCPM to use non-security ifixes.
TNC automatically downloads security fixes. You can add the HIPER, PE, and
Enhancement APAR types to the default configuration.
Even after this step is performed, you must add these types of ifixes as stand-alone fixes
because TNC 1.2.0.0 supports the automatic download of security ifixes only.
The command uses the following syntax:
pmconf modify -t <APAR type list>
The following output is produced:
# pmconf modify -t HIPER,PE,Enhancement
# pmconf stop
# pmconf start
Recommendations for securing the TNCPM
To provide the best security for your TNCPM implementation, complete the following
steps:
a. Instead of using a direct connection to the internet, connect through an http proxy.
A secure proxy implementation uses a whitelisting approach for authorizing which
connections are allowed. For a secure whitelisting proxy configuration, you need the
proxy to authorize the https ports of the servers, as described in 5.5.1, “Networking
requirements for TNCPM internet connections” on page 164.
b. Ensure SUMA is using only HTTPS. For more information, see 5.5.2, “Configuring the
TNCPM” on page 165.
c. Configure the /etc/tncpm_openpack_sitelist.conf file so that servers are contacted by
using only the sftp, scp, or https protocols (see the hypothetical sketch that follows).
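The following is an illustration only, not a verified file format; review the installed /etc/tncpm_openpack_sitelist.conf on your system before you edit it. The intent is to keep entries that use https (or sftp or scp) and to remove or comment out plain http and ftp sites, for example:
https://www3.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc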
5.5.3 Configuring the Trusted Network Connect Server
Complete the following steps to configure the TNCS:
1. Use the following command to configure the TNCS. This command configures the TNCS
daemon, tncsd:
psconf mkserver [ tncport=<port> ] pmserver=<host:port> [tsserver=<host>]
[recheck_interval=<time_in_minutes> | d (days): h (hours) : m (minutes) ]
[dbpath = <user-defined directory> ][default_policy=<yes | no > ]
[clientData_interval=<time_in_minutes> | d (days) : h (hours) : m (minutes) ] [
clientDataPath=<Full_path >]
The following output is produced:
# psconf mkserver tncport=10000 pmserver=10.3.126.45:20000 recheck_interval=60
2. Start and stop the TNCS.
When a configuration change is applied to the TNCS, you often must restart the server by
using the following command:
psconf { start | stop | restart } server
The following output is produced:
root@lbstnc1> psconf stop server
tnccsd daemon stopped successfully
root@lbstnc1> psconf start server
root@lbstnc1>
3. Create a TNC group, which is a set of partitions, by using the following command:
psconf add { -G <ipgroupname> ip=[1]<host1, host2...> | {-A<apargrp>
[aparlist=[1]apar1, apar2...|
{-V <ifixgrp> [ifixlist=[+|-]ifix1,ifix2...]
The following output is produced:
# psconf add -G tncgrp ip=lbsaix1,lbstds3,lbsaix2
4. List the members of a TNC group by using the following command:
psconf list { -S | -G < ipgroupname | ALL > | -F < FSPolicyname | ALL >| -P <
policyname | ALL > | -r < buildinfo | ALL > | -I -i < ip | ALL >| -A < apargrp
| ALL > | -V <ifixgrp> | -O <openpkggrp|ALL>} [-c] [-q]
The following output is produced:
root@lbstnc1> psconf list -G tncgrp
#ipgroupname ip policyname EMAILID
EMAILTYPE
tncgrp lbsaix1.aus.stglabs.ibm.com tncpol ....
....
tncgrp lbstds3.aus.stglabs.ibm.com tncpol ....
....
root@lbstnc1>
5. Create a fileset policy.
A fileset policy is a set of filesets that meet security policies. A fileset policy is defined by
an AIX service pack level and can include apar groups, ifix groups, and open package
groups.
Important: The fileset policy cannot be used with a VIOS TNC Client. Only the default
policy can be used when verifying a VIOS TNC Client.
7200 1 2
7200 1 3
7200 1 4
Apar SP:
#Release TL SP
7200 1 1
7200 1 2
7200 1 3
7200 1 4
root@lbstnc1>
8. List the ifixes and apars that are registered to the TNCS for a service pack.
The TNCPM is the component that downloads all updates. For TNCS to issue an update
operation to a client, it must refer to an update that was registered to the TNCS by the
TNCPM.
The command uses the following syntax:
psconf list { -S | -G < ipgroupname | ALL > | -F < FSPolicyname | ALL > | -P <
policyname | ALL > | -r < buildinfo | ALL > | -I -i < ip | ALL > | -A <
apargrp | ALL > | -V <ifixgrp> | -O <openpkggrp|ALL>} [-c] [-q]
The following output is produced, which is a working example (condensed using “… etc
…”):
root@lbstnc1> psconf list -r 7200-01-04
#ifix Release TL SP cve cvss
fileset vrmf
102j_ifix.170207.epkg.Z 7200 1 4 .... ....
openssl.base 20.13.102.1000
102j_ifix.170207.epkg.Z 7200 1 4 .... ....
openssl.base 1.0.2.1000
102m_ifix.180105.epkg.Z 7200 1 4 .... ....
openssl.base 20.13.102.1300
102m_ifix.180105.epkg.Z 7200 1 4 .... ....
openssl.base 1.0.2.1300
102ma_ifix.180410.epkg.Z 7200 1 4 .... ....
openssl.base 20.13.102.1300
102ma_ifix.180410.epkg.Z 7200 1 4 .... ....
openssl.base 1.0.2.1300
517_ifix.170113.epkg.Z 7200 1 4 .... ....
openssl.base 20.13.101.500
517_ifix.170113.epkg.Z 7200 1 4 .... ....
openssl.base 1.0.1.517
… etc …
#aparname Release TL SP Apar_type Fileset
abstract
IJ02828 7200 1 4 Security
bos.cluster.rte 7.2.1.3, A potential security issue exists
IJ03035 7200 1 4 Security
bos.pmapi.pmsvcs 7.2.1.3,bos.mp64 7.2.1.5, A POTENTIAL SECURITY ISSUE EXISTS
IV96310 7200 1 4 Security
bos.net.tcp.ntpd 7.2.1.2, A potential security issue exists
IV97898 7200 1 4 Security bos.acct
7.2.1.2, A potential security issue exists
IV97901 7200 1 4 Security bos.acct
7.2.1.2, A potential security issue exists
IV97958 7200 1 4 Security
bos.rte.archive 7.2.1.2, A potential security issue exists
IV98298 7200 1 4 Security bos.rte.lvm
7.2.1.3, A potential security issue exists
IV98830 7200 1 4 Security
bos.net.tcp.bind_utils 7.2.1.3, A potential security issue exists
IV99499 7200 1 4 Security
bos.net.tcp.client_core 7.2.1.3, A potential security issue exists
IV99552 7200 1 4 Security bos.rte.lvm
7.2.1.3, A potential security issue exists
#
9. List an ifix by name.
A detailed listing of a security advisory can be obtained by referencing an ifix name with
the following command:
psconf report -v ALL -o TEXT | grep -p <name of ifix>
The following output is produced:
# psconf report -v ALL -o TEXT | grep -p IV96310m2a.170519.epkg.Z
AIX Advisory: ntp_advisory9.asc
Abstract: ....
Reboot: ....
Workaround: ....
CVE: CVE-2017-6464
CVSS: 4.2 4.2
CVE: CVE-2017-6462
CVSS: 1.6 1.6
CVE: CVE-2017-6458
CVSS: 4.2 4.2
CVE: CVE-2017-6451
CVSS: 1.8
Ifix: IV96306m9a.170519.epkg.Z
Release: 537214528-537214704-537214816
APAR: IV96306
APAR Release: 6100-09-10
Fileset: bos.net.tcp.client
VRMF: 6.1.9.201
Ifix: IV96310m2a.170519.epkg.Z
Release: 7200-01-01
APAR: IV96310
APAR Release: 7200-01-04
Fileset: bos.net.tcp.ntpd
VRMF: 7.2.1.1
Fileset: bos.net.tcp.ntp
VRMF: 7.2.1.0
Ifix: IV96312m5a.170518.epkg.Z
Release: 7200-01-01
APAR: IV96312
APAR Release: 7200-01-03
Fileset: ntp.rte
VRMF: 7.1.0.9
#
10.List all security advisories that are known to the TNCS.
Perform this action if you want a detailed listing of all the security advisories that are
registered to the TNCS.
The command uses the following syntax:
psconf report -v <CVEid | ALL> -o <TEXT | CSV>
The following output is produced, which is a working example (condensed using “… etc
…”):
root@lbstnc1> psconf report -v ALL -o TEXT
Report Date: Tue Sep 18 20:43:14 2018
Version: 1.0
Advisories:
AIX Advisory: variant4_advisory.asc
Abstract: ....
Reboot: ....
Workaround: ....
Ifix: IJ05820m2a.180430.epkg.Z
Release: 7200-01-02
APAR: IJ05820
APAR Release: 7200-01-05
Fileset: bos.mp64
VRMF: 7.2.1.5
Ifix: IJ05820m3a.180430.epkg.Z
Release: 7200-01-03
APAR: IJ05820
APAR Release: 7200-01-05
Ifix: IJ05820m4a.180423.epkg.Z
Release: 7200-01-04
APAR: IJ05820
APAR Release: 7200-01-05
Ifix: IJ05824m9a.180501.epkg.Z
Release: 537213360-537213536-537213712
APAR: IJ05824
APAR Release: 6100-09-12
Ifix: IJ05824m9b.180502.epkg.Z
Release: 537213360-537213536-537213712
APAR: IJ05824
APAR Release: 6100-09-12
Ifix: IJ05824mAa.180501.epkg.Z
Release: 537213360-537213536-537213712
APAR: IJ05824
APAR Release: 6100-09-12
Ifix: IJ05824sBa.180426.epkg.Z
Release: 537213360-537213536-537213712
APAR: IJ05824
APAR Release: 6100-09-12
… etc …
#
11.List a security advisory by CVEID if you want a detailed listing of a particular security
advisory.
The command uses the following syntax:
psconf report -v <CVEid | ALL> -o <TEXT | CSV>
The following output is produced:
root@lbstnc1> psconf report -v CVE-2018-0739 -o TEXT
Report Date: Sun Jul 22 19:35:16 2018
Version: 1.0
Advisories:
AIX Advisory: openssl_advisory26.asc
Abstract: ....
Reboot: TNCC
Workaround: ....
CVE: CVE-2018-0739
CVSS: 5.3
Ifix: 102ma_ifix.180410.epkg.Z
Release: 537210400-537210576-537210752
APAR: N/A
APAR Release: N/A
Fileset: openssl.base
VRMF: 20.13.102.1300
Ifix: fips_102ma.180410.epkg.Z
Release: 537232160-537232336-537232512
APAR: N/A
APAR Release: N/A
root@lbstnc1>
Sync TNCS with TNCPM
When using the psconf command on the TNCS, you cannot reference the name of a
service pack, apar, ifix, or open source package unless it was first downloaded to the
TNCPM and then registered to the TNCS.
To register with the TNCS everything that was downloaded on the TNCPM, you must
perform the psconf pull operation.
The following output indicates that 4 AIX service packs, 20 AIX apars, 24 advisories, 200
ifixes, 493 filesets, and no open source packages were downloaded to the TNCPM and
these updates are registered to the TNCS:
root@lbstnc1> psconf pull
debug1: [psconf] 16515360:00000001 TNCS_pull(): Command /usr/sbin/tnc
--pull=10.3.126.34:20000
Running transaction
Transaction Summary
Pulled 4 SPs 20 Apars 24 Advisories 200 Ifixes 493 Filesets
0 Packages
Total transaction size: 530381 byte(s)
Total transaction time: 0.01 sec(s)
Transaction succeeded
root@lbstnc1>
12.Create an ifix group.
When you want your systems to deploy a set of ifixes, you must create an ifix group and
then map that group to a fileset policy. The fileset policy is mapped to a TNC policy.
The command uses the following syntax:
psconf add { -G <ipgroupname> ip=[1]<host1, host2...> |
{-A<apargrp>[aparlist=[1]apar1, apar2... |
{-V <ifixgrp> [ifixlist=[+|- ]ifix1,ifix2...]}
The following output is produced:
root@lbstnc1> psconf add -V ifixgrp1 ifixlist=IV96310m2a.170519.epkg.Z
root@lbstnc1>
13.Create a fileset policy with an ifix group.
An ifix group does not take effect until you map it to a fileset policy, which must be
mapped to a TNC policy.
The command uses the following syntax:
psconf add -F <FSPolicyname> -r <buildinfo> [apargrp= [1]<apargrp1, apargrp2..
>] [ifixgrp=[+|-]<ifixgrp1,ifixgrp2...>]
The following output is produced:
root@lbstnc1> psconf add -F fspol -r 7200-01-02 ifixgrp=ifixgrp1
14.Create an apar group.
When you want your systems to deploy a set of apars, you must create an apar group and
then map that group to a fileset policy. The fileset policy is then mapped to a TNC policy.
The command uses the following syntax:
psconf add { -G <ipgroupname> ip=[1]<host1, host2...> |
{-A<apargrp>[aparlist=[1]apar1, apar2... |
{-V <ifixgrp> [ifixlist=[+|- ]ifix1,ifix2...]}
The following output is produced:
root@lbstnc1> psconf add -A apargrp1 aparlist=IV60303
root@lbstnc1>
15.Create a fileset policy with an apar group.
An apar group does not take effect until you map it to a fileset policy, which must be
mapped to a TNC policy.
The command uses the following syntax:
psconf add -F <FSPolicyname> -r <buildinfo> [apargrp= [1]<apargrp1, apargrp2..
>] [ifixgrp=[+|-]<ifixgrp1,ifixgrp2...>]
The following output is produced:
root@lbstnc1> psconf add -F fspol2 -r 7100-03-03 apargrp=apargrp1
16.Create an open package group.
When you want your systems to deploy a set of open source packages, you must create
an open package group and then map that to a fileset policy. The fileset policy is then
mapped to a TNC policy.
The command uses the following syntax:
psconf add -O <openpkggrp> <openpkgname:version>
The following output is produced:
root@lbstnc1> psconf add -O opengrp1 lsof:4.89-1.aix7.2
Successfully added the attribute OpenPackage Group
Successfully added the attribute OpenPackage
root@lbstnc1>
17.Create a fileset policy with an open package group.
You can map an open package group to a fileset policy. After it is attached to a fileset
policy, a verification operation that is run by using the fileset policy verifies that the open
package is installed on the TNCC.
As shown in Figure 5-2 on page 181, the fileset policy is shown without an open package
group, the open package groups are listed, the open package group is added to the fileset
policy, and then, the fileset policy is listed showing the added open group.
The command uses the following syntax:
psconf add -O <openpkggrp> fspolicy=<fspolicy name>
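For example, using the open package group (opengrp1) and fileset policy (fspol) names that were created earlier in this section, a sketch of the command is:
# psconf add -O opengrp1 fspolicy=fspol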
Figure 5-2 shows the command listing the fileset policies.
root@lbsaix1> psconf status
component = CLIENT
tncport = 10000
tncserver = lbstnc1
trustmode = false
tnccd daemon pid 10158392
root@lbsaix1>
Default Policy
The term default policy refers to the set of ifixes and apars for any service pack that was
downloaded by the TNCPM and registered to the TNCS. When a verification operation occurs
against a TNC Client and no matching fileset policy exists, the default policy is used to
determine compliance.
Important: The VIOS TNC Client can use only the default policy for verification operations.
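The report output that follows was produced by a verification operation that is run from the TNCS. A minimal sketch of such a command, assuming the client IP address that appears in the report, is:
# psconf verify -i 10.3.126.48
As noted earlier in this chapter, an ipgroup can also be targeted instead of a single client.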
Running transaction 10.3.126.48
#(ifixhigher.rpt)
#ifixes with fileset that is at a higher level then maximum
#Ifix_Level:Ifix_Name:Fileset_Name:Ifix_Version:Client_Version
7.2.1.4:517_ifix.170113.epkg.Z:openssl.base:1.0.1.517:1.0.2.800
7.2.1.4:IV83169m9a.160401.epkg.Z:openssl.base:1.0.2.500:1.0.2.800
7.2.1.4:IV83169m9a.160401.epkg.Z:openssl.base:1.0.1.515:1.0.2.800
7.2.1.4:IV83169m9a.160401.epkg.Z:openssl.base:0.9.8.2506:1.0.2.800
7.2.1.4:IV83169m9b.160401.epkg.Z:openssl.base:1.0.2.500:1.0.2.800
7.2.1.4:IV83169m9b.160401.epkg.Z:openssl.base:1.0.1.515:1.0.2.800
7.2.1.4:IV83169m9b.160401.epkg.Z:openssl.base:0.9.8.2506:1.0.2.800
7.2.1.4:IV83169m9c.160401.epkg.Z:openssl.base:1.0.2.500:1.0.2.800
7.2.1.4:IV83169m9c.160401.epkg.Z:openssl.base:1.0.1.515:1.0.2.800
7.2.1.4:IV83169m9c.160401.epkg.Z:openssl.base:0.9.8.2506:1.0.2.800
7.2.1.4:IV83169s9d.160401.epkg.Z:openssl.base:1.0.2.500:1.0.2.800
7.2.1.4:IV83169s9d.160401.epkg.Z:openssl.base:1.0.1.515:1.0.2.800
7.2.1.4:IV83169s9d.160401.epkg.Z:openssl.base:0.9.8.2506:1.0.2.800
#(ifixfilesetnotinst.rpt)
#ifix with filesets that are not installed on the other system
#Ifix_Level:Ifix_Name:Fileset_Name:Fileset_Version
7.2.1.4:IV83983s5a.160602.epkg.Z:ntp.rte:7.1.0.5
7.2.1.4:IV87279s7a.160901.epkg.Z:ntp.rte:7.1.0.7
7.2.1.4:IV92126m3a.170106.epkg.Z:ntp.rte:7.1.0.7
7.2.1.4:fips_ifix.170113.epkg.Z:openssl.base:1.0.1.517:1.0.2.800
#(missingifixes.rpt)
#ifixes that are not installed on the other system
#Ifix_Level:Ifix_Name:Apar_Name
7.2.1.4:102j_ifix.170207.epkg.Z: N/A
7.2.1.4:102m_ifix.180105.epkg.Z: N/A
7.2.1.4:102ma_ifix.180410.epkg.Z: N/A
7.2.1.4:102oa_ifix.180906.epkg.Z: N/A
7.2.1.4:6202_ifix.160830.epkg.Z: N/A
7.2.1.4:6203_ifix.170124.epkg.Z: N/A
7.2.1.4:IJ05820m4a.180423.epkg.Z:IJ05820
7.2.1.4:IJ06400s9a.180514.epkg.Z:IJ06400
7.2.1.4:IJ06655m2a.180527.epkg.Z:IJ06655
7.2.1.4:IJ06907s1a.180607.epkg.Z:IJ06907
7.2.1.4:IJ07501m4a.180716.epkg.Z:IJ07501
7.2.1.4:fips_102j.170207.epkg.Z: N/A
7.2.1.4:fips_102m.180105.epkg.Z: N/A
7.2.1.4:fips_102ma.180410.epkg.Z: N/A
7.2.1.4:fips_102oa.180910.epkg.Z: N/A
#(missingapars.rpt)
#Missing Apars that are not on the other system
#Apar_Level:Apar_Name:Fileset_Name
7.2.1.4:IJ01423:devices.pciex.df1060e214103404.com 7.2.1.3,
7.2.1.4:IJ01426:devices.common.IBM.xhci.rte 7.2.1.2,devices.common.IBM.usb.rte
7.2.1.1,
7.2.1.4:IV96360:devices.pciex.df1060e214103404.com 7.2.1.3,
# The following shows that only lbsaix1 is mapped to a TNC policy, tncpol.
# This TNC policy contains the fileset policy, fspol.
#(clientstatus.rpt)
#status info of the other system
#Client_IP:Client_Level:Policy_Level:Client_Apars:Apars:Ifixes:Packages:Status
10.3.126.48:7.2.1.2:7.2.1.2:1058:0:0:0:COMPLIANT
# Even though lbstds3 is not mapped to a TNC Policy it finds that there is a
# fileset policy, fspol,
# that corresponds to the service pack of lbstds3, so it verifies the client
# using this fileset policy
# policy:
OSLevel 7200-01-02 exact match FS Policy <fspol>
Running policy checks for 7.2.1.2:7.2.1.2:fspol
Running verification based on policy fspol
#(clientstatus.rpt)
#status info of the other system
#Client_IP:Client_Level:Policy_Level:Client_Apars:Apars:Ifixes:Packages:Status
10.3.126.46:7.2.1.2:7.2.1.2:1058:0:0:0:COMPLIANT
root@lbstnc1>
#(instifixes.rpt)
#ifixes that are installed on the other system
#Ifix_Level:Ifix_Name:Apar_Name
7.2.1.2:IV96310m2a.170519.epkg.Z:IV96310
7.2.1.2:IV96310m2a.170519.epkg.Z:IV96310
#(ifixhigher.rpt)
#ifixes with fileset that is at a higher level then maximum
#Ifix_Level:Ifix_Name:Fileset_Name:Ifix_Version:Client_Version
7.2.1.2:517_ifix.170113.epkg.Z:openssl.base:1.0.1.517:1.0.2.800
7.2.1.2:IV83169m9a.160401.epkg.Z:openssl.base:1.0.2.500:1.0.2.800
7.2.1.2:IV83169m9a.160401.epkg.Z:openssl.base:1.0.1.515:1.0.2.800
7.2.1.2:IV83169m9a.160401.epkg.Z:openssl.base:0.9.8.2506:1.0.2.800
7.2.1.2:IV83169m9b.160401.epkg.Z:openssl.base:1.0.2.500:1.0.2.800
7.2.1.2:IV83169m9b.160401.epkg.Z:openssl.base:1.0.1.515:1.0.2.800
7.2.1.2:IV83169m9b.160401.epkg.Z:openssl.base:0.9.8.2506:1.0.2.800
7.2.1.2:IV83169m9c.160401.epkg.Z:openssl.base:1.0.2.500:1.0.2.800
7.2.1.2:IV83169m9c.160401.epkg.Z:openssl.base:1.0.1.515:1.0.2.800
7.2.1.2:IV83169m9c.160401.epkg.Z:openssl.base:0.9.8.2506:1.0.2.800
7.2.1.2:IV83169s9d.160401.epkg.Z:openssl.base:1.0.2.500:1.0.2.800
7.2.1.2:IV83169s9d.160401.epkg.Z:openssl.base:1.0.1.515:1.0.2.800
7.2.1.2:IV83169s9d.160401.epkg.Z:openssl.base:0.9.8.2506:1.0.2.800
#(ifixfilesetnotinst.rpt)
#ifix with filesets that are not installed on the other system
#Ifix_Level:Ifix_Name:Fileset_Name:Fileset_Version
7.2.1.2:IV83983s5a.160602.epkg.Z:ntp.rte:7.1.0.5
7.2.1.2:IV87279s7a.160901.epkg.Z:ntp.rte:7.1.0.7
7.2.1.2:IV92126m3a.170106.epkg.Z:ntp.rte:7.1.0.7
7.2.1.2:IV96312m5a.170518.epkg.Z:ntp.rte:7.1.0.9
7.2.1.2:fips_ifix.170113.epkg.Z:openssl.base:1.0.1.517:1.0.2.800
#(missingifixes.rpt)
#ifixes that are not installed on the other system
#Ifix_Level:Ifix_Name:Apar_Name
7.2.1.2:102j_ifix.170207.epkg.Z: N/A
7.2.1.2:102m_ifix.180105.epkg.Z: N/A
7.2.1.2:102ma_ifix.180410.epkg.Z: N/A
7.2.1.2:6202_ifix.160830.epkg.Z: N/A
7.2.1.2:6203_ifix.170124.epkg.Z: N/A
7.2.1.2:IJ02828s2a.171221.epkg.Z:IJ02828
7.2.1.2:IJ02919s1a.180108.epkg.Z:IJ02919
7.2.1.2:IJ03035m2a.180118.epkg.Z:IJ03035
7.2.1.2:IJ05820m2a.180430.epkg.Z:IJ05820
7.2.1.2:IJ06907s1a.180607.epkg.Z:IJ06907
7.2.1.2:IV94723m3a.171009.epkg.Z: N/A
7.2.1.2:IV94723s2a.170414.epkg.Z:IV94723
7.2.1.2:IV97811s2a.170712.epkg.Z:IV97811
7.2.1.2:IV97898s2a.171201.epkg.Z:IV97898
7.2.1.2:IV97901s2a.171201.epkg.Z:IV97901
7.2.1.2:IV97958s0b.171205.epkg.Z:IV97958
7.2.1.2:IV98830m1a.170809.epkg.Z:IV98830
7.2.1.2:IV99499m3a.171115.epkg.Z:IV99499
7.2.1.2:IV99552m3a.171031.epkg.Z:IV99552
7.2.1.2:fips_102j.170207.epkg.Z: N/A
7.2.1.2:fips_102m.180105.epkg.Z: N/A
7.2.1.2:fips_102ma.180410.epkg.Z: N/A
#(clientstatus.rpt)
#status info of the other system
#Client_IP:Client_Level:Policy_Level:Client_Apars:Apars:Ifixes:Packages:Status
10.3.126.48:7.2.1.2:7.2.1.2:1058:0:22:0:NON-COMPLIANT
#(instifixes.rpt)
#ifixes that are installed on the other system
#Ifix_Level:Ifix_Name:Apar_Name
7.2.1.2:IV96310m2a.170519.epkg.Z:IV96310
7.2.1.2:IV96310m2a.170519.epkg.Z:IV96310
#(ifixhigher.rpt)
#ifixes with fileset that is at a higher level then maximum
#Ifix_Level:Ifix_Name:Fileset_Name:Ifix_Version:Client_Version
7.2.1.2:517_ifix.170113.epkg.Z:openssl.base:1.0.1.517:1.0.2.800
7.2.1.2:IV83169m9a.160401.epkg.Z:openssl.base:1.0.2.500:1.0.2.800
7.2.1.2:IV83169m9a.160401.epkg.Z:openssl.base:1.0.1.515:1.0.2.800
7.2.1.2:IV83169m9a.160401.epkg.Z:openssl.base:0.9.8.2506:1.0.2.800
7.2.1.2:IV83169m9b.160401.epkg.Z:openssl.base:1.0.2.500:1.0.2.800
7.2.1.2:IV83169m9b.160401.epkg.Z:openssl.base:1.0.1.515:1.0.2.800
7.2.1.2:IV83169m9b.160401.epkg.Z:openssl.base:0.9.8.2506:1.0.2.800
7.2.1.2:IV83169m9c.160401.epkg.Z:openssl.base:1.0.2.500:1.0.2.800
7.2.1.2:IV83169m9c.160401.epkg.Z:openssl.base:1.0.1.515:1.0.2.800
7.2.1.2:IV83169m9c.160401.epkg.Z:openssl.base:0.9.8.2506:1.0.2.800
7.2.1.2:IV83169s9d.160401.epkg.Z:openssl.base:1.0.2.500:1.0.2.800
7.2.1.2:IV83169s9d.160401.epkg.Z:openssl.base:1.0.1.515:1.0.2.800
7.2.1.2:IV83169s9d.160401.epkg.Z:openssl.base:0.9.8.2506:1.0.2.800
#(ifixfilesetnotinst.rpt)
#ifix with filesets that are not installed on the other system
#Ifix_Level:Ifix_Name:Fileset_Name:Fileset_Version
7.2.1.2:IV83983s5a.160602.epkg.Z:ntp.rte:7.1.0.5
7.2.1.2:IV87279s7a.160901.epkg.Z:ntp.rte:7.1.0.7
7.2.1.2:IV92126m3a.170106.epkg.Z:ntp.rte:7.1.0.7
7.2.1.2:IV96312m5a.170518.epkg.Z:ntp.rte:7.1.0.9
7.2.1.2:fips_ifix.170113.epkg.Z:openssl.base:1.0.1.517:1.0.2.800
#(missingifixes.rpt)
#ifixes that are not installed on the other system
#Ifix_Level:Ifix_Name:Apar_Name
7.2.1.2:102j_ifix.170207.epkg.Z: N/A
7.2.1.2:102m_ifix.180105.epkg.Z: N/A
7.2.1.2:102ma_ifix.180410.epkg.Z: N/A
7.2.1.2:6202_ifix.160830.epkg.Z: N/A
7.2.1.2:6203_ifix.170124.epkg.Z: N/A
7.2.1.2:IJ02828s2a.171221.epkg.Z:IJ02828
7.2.1.2:IJ02919s1a.180108.epkg.Z:IJ02919
7.2.1.2:IJ03035m2a.180118.epkg.Z:IJ03035
7.2.1.2:IJ05820m2a.180430.epkg.Z:IJ05820
7.2.1.2:IJ06907s1a.180607.epkg.Z:IJ06907
7.2.1.2:IV94723m3a.171009.epkg.Z: N/A
7.2.1.2:IV94723s2a.170414.epkg.Z:IV94723
7.2.1.2:IV97811s2a.170712.epkg.Z:IV97811
7.2.1.2:IV97898s2a.171201.epkg.Z:IV97898
7.2.1.2:IV97901s2a.171201.epkg.Z:IV97901
7.2.1.2:IV97958s0b.171205.epkg.Z:IV97958
7.2.1.2:IV98830m1a.170809.epkg.Z:IV98830
7.2.1.2:IV99499m3a.171115.epkg.Z:IV99499
7.2.1.2:IV99552m3a.171031.epkg.Z:IV99552
7.2.1.2:fips_102j.170207.epkg.Z: N/A
7.2.1.2:fips_102m.180105.epkg.Z: N/A
7.2.1.2:fips_102ma.180410.epkg.Z: N/A
#(clientstatus.rpt)
#status info of the other system
#Client_IP:Client_Level:Policy_Level:Client_Apars:Apars:Ifixes:Packages:Status
10.3.126.46:7.2.1.2:7.2.1.2:1058:0:22:0:NON-COMPLIANT
root@lbstnc1>
Ifix preview update operation
You can preview an update of a single client or an ipgroup by specifying the -p flag with the
following command:
psconf update [-p] {-i<host>| -G <ipgroup, ipgroup2,....>[-r <buildinfo> | -a
<apar1, apar2,apargrp1,apargrp2,...> |
[-u] -v<ifix1, ifix2,ifixgrp1,ifixgrp2,...> | -O <openpkggrp1, openkggrp2,...>}
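For example, a preview of deploying a single ifix to one client might look like the following sketch, which reuses an ifix name and client address that appear earlier in this chapter:
# psconf update -p -i 10.3.126.48 -v 102ma_ifix.180410.epkg.Z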
Removing an ifix
You can remove an ifix from a single client or an ipgroup by specifying the -u flag with the
following command:
psconf update [-p] {-i<host>| -G <ipgroup, ipgroup2,....>[-r <buildinfo> | -a
<apar1, apar2,apargrp1,apargrp2,...> |
[-u] -v<ifix1, ifix2,ifixgrp1,ifixgrp2,...> | -O <openpkggrp1, openkggrp2,...>}
10.3.126.46:7.2.1.2:IV97811s2a.170712.epkg.Z:INSTALL-REQUEST
10.3.126.46:7.2.1.2:IV97811s2a.170712.epkg.Z:INSTALL-SUCCESS
10.3.126.46:7.2.1.2:IV96310m2a.170519.epkg.Z:INSTALL-REQUEST
10.3.126.46:7.2.1.2:IV96310m2a.170519.epkg.Z:INSTALL-SUCCESS
Transaction succeded
root@lbstnc1>
Updating apar
You can deploy an apar to a TNCC by using the following command:
psconf update [-p] {-i<host>| -G <ipgroup, ipgroup2,....>[-r <buildinfo> | -a
<apar1, apar2,apargrp1,apargrp2,...> |
[-u] -v<ifix1, ifix2,ifixgrp1,ifixgrp2,...> | -O <openpkggrp1, openkggrp2,...>}
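For example, deploying a single apar to one TNC Client might look like the following sketch, which reuses an apar number and client address that appear earlier in this chapter:
# psconf update -i 10.3.126.46 -a IV97811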
To verify that the installation was successful, go to the updated log directory for this TNC
Client and view the resulting installation log that is created after the update is completed. This
log provides the NIM installation information that resulted from the update operation.
After you verify with the installation log that the operation completed successfully, restart the
system, per standard AIX best practice after a service pack update, and reverify the TNC Client
after the restart, as shown in the following example:
root@lbstnc1> psconf update -i 10.3.126.48 -r 7200-01-03
Running transaction
Running policy check
10.3.126.48:7.2.1.2:7.2.1.2:NON-COMPLIANT
Running update of system 10.3.126.48 level: 7200-01-02 to release: 7200-01-03
10.3.126.48:7.2.1.2:7.2.1.3:INSTALL-REQUEST
10.3.126.48:7.2.1.2:7.2.1.3:INSTALL-FAILURE
Nothing to update
Transaction succeded
root@lbstnc1>
After rebooting the system and verifying the TNCC again, the client shows as properly
updated to the 7200-01-03 service pack level:
root@lbstnc1> psconf list -s ALL -i ALL
#ip Release TL SP status time
trustlevel
10.3.126.46 7200 1 2 COMPLIANT 2018-09-15
11:25:54 ....
10.3.126.48 7200 1 3 COMPLIANT 2018-09-15
18:06:43 ....
root@lbstnc1>
Figure 5-3 Using the PowerSC GUI
2. After the Up-to-date subproduct is activated, the window that is shown in Figure 5-4
opens.
3. Complete verify and update operations on the TNC Client by clicking the three dots that
are to the far right of the TNC client name (see Figure 5-5).
Figure 5-5 Selecting to perform operations on the TNC Client by using the PowerSC GUI
4. When an update operation occurs by using the PowerSC GUI, the fileset policy that
corresponds to your TNC client is used. For more predictable results, define a fileset policy
and specify the precise set of security updates with which you want the client to be
updated.
5. Issue the Update operation by using the PowerSC GUI.
6. After the update operation finishes, issue a Verify operation by using the GUI to get the
up-to-date status of the TNC Client.
Figure 5-6 shows the TNC Client is not reported as compliant because one of the patches
applied required a restart of the system.
Figure 5-6 PowerSC GUI status details pane shows a required action
7. If your TNC client is successfully updated, that status is reflected in the command line
interface and the PowerSC GUI (see Figure 5-7).
Figure 5-7 PowerSC GUI status details of system showing status compliant
The second new option updates the TNCC to a higher service pack level or different
technology level, as shown in Figure 5-9.
Figure 5-9 PowerSC GUI v1.2.0.1 - updating the TNCC
5.7 Troubleshooting
This section provides a few troubleshooting techniques.
IP:10.3.126.48 Name:lbsaix1 Fileset:all lpp_resource:tncpm_7200-01-03_lpp
Chapter 6. Trusted Logging
The Virtual I/O Server administrator can provision any number of write-only virtual log devices
for a client LPAR, which can be used for any purpose. The content of each of these logs is
stored in a directory in the Virtual I/O Server’s file system.
This chapter describes Trusted Logging, and covers the purpose and architecture of the
component. It also includes detailed, hands-on examples of installation, configuration, and
management tasks.
This section describes the architecture of Trusted Logging. We introduce the important
concepts that are required to develop an intuitive understanding of what Trusted Logging can
do and how it works.
The virtual SCSI channels of communication pass through the PowerVM hypervisor. The
channels are reliable (messages cannot get lost) and secure (the traffic is not visible to any
LPAR other than the one participating in the specific client LPAR-to-Virtual I/O Server
relationship).
The SCSI protocol does not treat the two endpoints of a connection in the same way. In SCSI
terminology, the client LPAR is an Initiator and the Virtual I/O Server is a Target. The SCSI
protocol allows only Initiators to read data from and write data to the Target; the Target cannot make
any request of the Initiator. Therefore, it is impossible for the Virtual I/O Server to insert or
extract data from the client LPARs; instead, they must explicitly transmit that data to the
Virtual I/O Server.
Virtual SCSI connections are provisioned by creating virtual SCSI client adapters on the
client LPARs and virtual SCSI server adapters on the Virtual I/O Servers by using the
Hardware Management Console (HMC). Trusted Logging can use virtual SCSI adapters that
are in place for the provision of virtual disk, tape, or optical media resources to a client LPAR.
For more information about the configuration of virtual SCSI adapters, see IBM PowerVM
Virtualization Managing and Monitoring, SG24-7590.
Figure 6-1 shows a simple virtual SCSI configuration, in which two client LPARs include
virtual SCSI client adapters (named vscsi0 on each) that are connected to virtual SCSI server
adapters on a single Virtual I/O Server (named vhost0 and vhost1).
Figure 6-1 Virtual SCSI with two client LPARs and a single Virtual I/O Server
The Virtual I/O Server is presenting two virtual SCSI disks (vtscsi0 and vtscsi1), which are
backed by physical disks (hdisk0 and hdisk1) from a Fibre Channel (FC) adapter (fscsi0) that
is attached to a storage area network (SAN). One of the disks is presented to each client
LPAR, where the disks appear as devices named hdisk0.
A more complex but common deployment model is to use two Virtual I/O Servers, which
provides multiple paths (“multipath”) to the same physical resource. This model allows Virtual
I/O Servers to be individually upgraded without loss of service to the client LPARs. Figure 6-2
shows a simple multipath configuration that provides the same access from the two
SAN-backed disks to client LPARs, but now with redundant data paths.
Note: For Trusted Logging to support multipath configurations, shared storage pools must
be deployed on the Virtual I/O Servers. For more information about deploying Trusted
Logging with shared storage pools, see 6.5.4, “Configuring shared storage pools” on
page 222.
Shared storage pools provide a means by which many Virtual I/O Servers (a cluster) can
coordinate concurrent access to individual files on SAN storage. Shared storage pools are
the foundation of Trusted Logging’s multipathing support. The combination of Trusted Logging
with shared storage pools has advantages beyond multipath. The Virtual I/O Servers in a
shared storage pool cluster do not need to all be on a single physical system. Therefore, the
following benefits are possible:
All log data that is collected by the Virtual I/O Servers in a shared storage pool can be
accessed by any Virtual I/O Server in the shared storage pool. Therefore, the number of
management touchpoints that are needed to back up and analyze log data across a large
hardware estate is reduced.
PowerVM Live Partition Mobility can be performed from one physical system to another
with no impact to the log data. The log data seamlessly continues to be written to the
same log file on the shared storage. PowerVM Live Partition Mobility can be performed
between Virtual I/O Servers that do not use shared storage pools. However, a new log file
is created on the destination Virtual I/O Server. The old log file on the original Virtual I/O
Server remains because no shared file system is in place between the Virtual I/O Servers.
As with all other virtual SCSI devices, the following procedure is used for creating a virtual log
device and presenting it to a client LPAR:
1. Create a virtual log on the Virtual I/O Server.
2. Attach the virtual log to a virtual SCSI server adapter on the Virtual I/O Server by creating
a virtual log target device.
3. Detect new virtual SCSI devices on the client LPAR.
4. Verify that the client LPAR detected the virtual log device on its corresponding virtual SCSI
client adapter (a brief client-side sketch of these two steps follows this list).
5. Configure operating system services to use the newly detected virtual log device.
For more information about this process, see 6.5, “Working with Trusted Logging” on
page 219.
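A minimal sketch of steps 3 and 4 on the client LPAR follows; it assumes that the new device surfaces with a name such as vlog0 (the actual name is derived from the log name, as described later in this chapter):
# cfgmgr
# lsdev -l vlog0
The cfgmgr command scans for newly provisioned virtual devices, and lsdev confirms that the virtual log device is in the Available state.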
A virtual log is an entity that is created and managed on the Virtual I/O Server. It is important
to understand the difference between the virtual log and the virtual log target device. The
virtual log target device also exists on the Virtual I/O Server and is the means by which virtual logs
are exposed to client LPARs. Consider the following points:
The virtual log target device connects a specific virtual log to a specific virtual SCSI
server adapter. This relationship is analogous to how a virtual optical target device
connects an optical media image to a virtual SCSI server adapter for use by a client LPAR.
The virtual log represents the log file on the Virtual I/O Server, together with some
configuration properties, and it is not a device. It is possible to create a virtual log without
creating an associated virtual log target device, although the virtual log is not accessible
by a client LPAR. Unattached virtual logs can be attached to virtual SCSI server adapters
at a later point by creating a virtual log target device.
Virtual logs: A virtual log can be connected to at most one virtual log target device at
any time, so virtual logs cannot be concurrently shared between several client LPARs.
Because virtual logs are not devices, they cannot be uniquely identified by device names.
Instead, a virtual log is assigned a random Universally Unique Identifier (UUID) when it is
created. The UUID is a 32-character hexadecimal number, as shown in the following
example:
00000000000000005b3f6b7cfcec4c67
This UUID is unique within the Virtual I/O Server, or within the Virtual I/O Server cluster if
shared storage pools are used. When a virtual log is moved to another Virtual I/O Server with
Live Partition Mobility, the UUID moves with it.
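Because the UUID is the stable identifier for a virtual log, it can be used to look up the virtual log on any Virtual I/O Server that can access it. A minimal sketch of such a lookup with the lsvlog command follows; the UUID and property values here are illustrative only:
$ lsvlog -uuid 00000000000000005b3f6b7cfcec4c67
Client Name  Log Name  UUID                              VTD
LPAR2.01     syslog    00000000000000005b3f6b7cfcec4c67  vhost0/vtlog0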
Figure 6-3 shows the relationship between virtual logs, virtual log target devices, virtual log
devices, virtual SCSI client adapters, and virtual SCSI server adapters. It shows a virtual log
(00000000000000005b3f6b7cfcec4c67) that is presented to client LPAR A as vlog0 by
attaching it as a device vtlog0 to the virtual SCSI server adapter vhost0. The virtual SCSI
server adapter vhost0 is connected to virtual SCSI client adapter vscsi0 on the client LPAR.
Because managing and tracking the purpose of virtual logs by using only the UUID can be
laborious, virtual logs also possess two properties to make management easier: the client
name and the log name.
Both of these properties are modifiable by the Virtual I/O Server administrator. They can be
inspected (but not modified) by the client LPAR to which a virtual log is connected. Although
the Virtual I/O Server administrator can modify these properties to contain any value that they
like, the convention is that the properties are used in the following way:
Client name Indicates the name of the client LPAR to which this log is to be attached.
Typically, all virtual logs that are intended for a particular client LPAR are
assigned the same client name. If a virtual log is created and attached to
a virtual SCSI server adapter in a single operation, the Virtual I/O Server
attempts to obtain the host name of the associated client LPAR. The
Virtual I/O Server uses that name as the client name if it is not specified
on the command line. The client name can be up to 96 characters in
length.
Log name Indicates the purpose of a virtual log. This property can be assigned any
value by the Virtual I/O Server administrator and must be provided when
a new virtual log is created. For example, you can create two virtual logs
named audit (for the collection of audit data) and syslog (for the
collection of syslog data) for a certain client LPAR. The log name can be
up to 12 characters in length.
On the client LPAR, these properties can be inspected by using the lsattr -El command.
The log name is also used to name the device in the client LPAR’s /dev file system, as
described in 6.3.2, “Virtual log devices” on page 212.
Understanding the purpose of the UUID, client name, and log name is sufficient to start
creating virtual logs. A virtual log can be created, and then attached to a virtual SCSI server
adapter by using two separate invocations of the mkvlog command on the Virtual I/O Server,
as shown in Example 6-1 on page 204.
Example 6-1 Virtual I/O Server commands to create a virtual log (manual client name)
$ mkvlog -client LPAR2.01 -name syslog
Virtual log 0000000000000000f8546e995c208cbe created
$ mkvlog -uuid 0000000000000000f8546e995c208cbe -vadapter vhost0
vtlog0 Available
Alternatively, the virtual log and virtual log target device can be created in a single operation,
with the client name automatically assigned, as shown in Example 6-2. The example uses the
mkvlog command to create the virtual log, and then the lsvlog command to display the virtual
log properties, which confirms the automatic assignment of the client name.
Example 6-2 Virtual I/O Server commands to create virtual log (automatic client name)
$ mkvlog -name syslog -vadapter vhost1
Virtual log 0000000000000000a11af0a9ac388216 created
vtlog1 Available
$ lsvlog -dev vtlog1
Client Name Log Name UUID VTD
LPAR2.01 syslog 0000000000000000a11af0a9ac388216 vhost1/vtlog1
Important: Automatic client name assignment requires that the client LPAR is active and
its operating system is fully operational. If these conditions are not met, the command
might fail with the following message:
mkvlog Error:
Client LPAR is not accessible for VSCSI adapter vhost1. Use -client
option to specify a client name for the new Virtual Log.
Within the virtual log repository root directory, every virtual log exists in a
clientname/logname/ subdirectory. Example 6-3 shows the resulting directory structure when
three virtual logs are created with three separate invocations of the mkvlog command. (Two
virtual logs are created for a client LPAR named s1 that is attached to vhost0. One virtual log
is created for another client LPAR named s2 that is attached to vhost1.) A find command is
run to locate all the directories in the virtual log repository root.
Example 6-3 Directory structure of the local virtual log repository with three virtual logs
$ mkvlog -vadapter vhost0 -name syslog
Virtual log 0000000000000000b224bb0dfb1030bf created
vtlog0 Available
$ mkvlog -vadapter vhost0 -name audit
Virtual log 00000000000000004e42f98eed1c6a02 created
vtlog1 Available
$ mkvlog -vadapter vhost1 -name syslog
Virtual log 0000000000000000fe72d7b80c0394a9 created
vtlog2 Available
$ find /var/vio/vlogs -type d
/var/vio/vlogs
/var/vio/vlogs/config
find: 0652-081 cannot change directory to </var/vio/vlogs/config>:
: The file access permissions do not allow the specified action.
/var/vio/vlogs/s1
/var/vio/vlogs/s1/audit
/var/vio/vlogs/s1/syslog
/var/vio/vlogs/s2
/var/vio/vlogs/s2/syslog
The following observations are from the output of the find command that is shown in
Example 6-3 on page 204:
A config subdirectory is not accessible from the Virtual I/O Server command line, and it
contains no client LPAR-generated data.
The virtual logs automatically detected the s1 and s2 client LPAR names by querying the
vhost0 and vhost1 virtual SCSI adapters.
Each virtual log has a corresponding clientname/logname/ subdirectory.
Within each of the leaf subdirectories, log files are stored. The following types of data are
generated by Trusted Logging:
Log data is a byte-for-byte copy of the logs, as written by the client LPAR.
State data consists of informational messages that concern the operation of Trusted
Logging. Some of these messages are generated by the client LPAR, and some of these
messages are generated by the Virtual I/O Server. For more information about the
contents of these files, see 6.3.3, “Messages that are written to the state files” on
page 213.
To reduce the possibility of the log files causing the Virtual I/O Server file system to fill up,
both of these log types store their data in a series of rotating log files. The number of files and
the maximum size of each file are configurable by the Virtual I/O Server administrator.
As an example, consider a virtual log that is configured to use five 2 MB files for log data.
Data is initially written to the first file until the next write causes the file size to exceed 2 MB. At
this point, a second file is created, and data is written to that new file, and so on. When the
maximum file count is reached, the next write causes the first file to be truncated to zero
bytes, and writes continue to the newly emptied file.
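As a minimal sketch of how such a configuration might be requested when the virtual log is created (the adapter and log names here are illustrative; the -lf and -lfs options are described in 6.5.2, “Creating a virtual log on a single Virtual I/O Server” on page 220), the log data for this virtual log is then bounded by 5 x 2 MB = 10 MB:
$ mkvlog -vadapter vhost0 -name syslog -lf 5 -lfs 2M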
To configure these settings, the following virtual log properties can be specified when a virtual
log is created by using the mkvlog command, or modified after the virtual log is created with
the chvlog command:
Log files The number of files to use for client-generated log data.
Log file size The maximum size of each client-generated log data file.
State files The number of files to use for virtual log state data.
State file size The maximum size of each virtual log state data file.
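As a minimal sketch of modifying an existing virtual log, assuming that the chvlog command accepts the same -lf, -lfs, -sf, and -sfs options as the mkvlog command and that the virtual log is attached as the vtlog0 device, the rotation settings might be changed as follows:
$ chvlog -dev vtlog0 -lf 10 -lfs 5M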
Virtual log repository properties are modified by using the chvlrepo command.
Changing the repository root or state: If virtual logs exist, you cannot perform the
following tasks:
Change the repository root directory.
Change the state of the repository from enabled to disabled.
Therefore, it is important that the Virtual I/O Server administrator selects the repository
root directory early in the deployment process. For more information, see 6.2,
“Deployment considerations” on page 208.
6.1.6 Shared storage pools
Shared storage pools are the means by which Trusted Logging allows a cluster of Virtual I/O
Servers to concurrently access log file data that is generated by virtual log devices.
The original use of shared storage pools was to allow a single SAN-backed disk to be split
into several smaller volumes and presented to client LPARs as virtual disks with advanced
functions, such as thin provisioning, seamless Live Partition Mobility, and cluster-wide
management.
However, for Trusted Logging, the important function that is provided by shared storage pools
is the provision of a cluster-wide file system. This cluster-wide file system is stored on the
SAN and accessible from every Virtual I/O Server in the cluster. By storing virtual log data in
the shared storage pool file system, multiple paths to the same log file can be provided, which
facilitates multipath support for virtual logs and seamless Live Partition Mobility.
The construction of a shared storage pool cluster requires that all member Virtual I/O Servers
can access the same set of Fibre Channel SAN disks and communicate with each other by
way of TCP/IP.
Figure 6-4 shows the architecture of a Trusted Logging deployment that uses shared storage
pools. In this example, two Virtual I/O Servers form a shared storage pool by way of Fibre
Channel and Ethernet connectivity, presenting a common shared file system to both Virtual
I/O Servers. Virtual logs that are created within this shared file system are visible to both
Virtual I/O Servers and therefore, can be presented to a client LPAR in a multipath
arrangement.
Figure 6-4 Trusted Logging that uses shared storage pools for multipath virtual logs
For more information about the use of Trusted Logging with shared storage pools, see the
following sections:
6.5.2, “Creating a virtual log on a single Virtual I/O Server” on page 220
“Creating a single-path virtual log in a shared storage pool” on page 224
“Creating a multipath virtual log using shared storage pools” on page 225
In both cases, the virtual log configurations cannot be preserved; the existing virtual logs must
be deleted and new virtual logs created.
The following virtual log properties can be modified while the virtual logs are attached to
virtual SCSI server adapters and performing I/O, without any noticeable effect on client
LPARs. However, because these properties define the directory into which the logs are
written, the log and state files effectively become “split”: the old data is preserved in the old
location, and new log and state messages are written to the new location. You can modify the
following virtual log properties without affecting the client LPARs:
Client name
Log name
The following maximum number and maximum size properties can also be modified while the
virtual logs are attached to virtual SCSI server adapters and performing I/O, without any
noticeable effect on client LPARs:
Log files
State files
Next, we guide you through the major decisions that are required when you plan your Trusted
Logging deployment.
Therefore, the decision to deploy a dedicated Virtual I/O Server for Trusted Logging is
essentially a tradeoff: you choose between the convenience of a single management point
and network-based backup options, and the security of restricted passwords and no required
network connectivity.
A full discussion of viosecure is outside the scope of this chapter. For more information, see
the following sections of the Virtual I/O Server documentation:
Configuring Virtual I/O Server firewall settings:
https://ibm.co/2Z0UzUu
Configuring Virtual I/O Server system security hardening:
https://ibm.co/318nA2q
Note: Do not use viosecure when shared storage pools are active. The use of a firewall
can disrupt the operation of a shared storage pool cluster.
After virtual logs are created, they cannot be migrated easily from the local virtual log
repository to a shared storage pool, or vice versa. Therefore, it is important to decide early in
your deployment planning whether you intend to use shared storage pools.
However, a shared storage pool deployment of Trusted Logging also has the following
requirements and behaviors; therefore, it might not be suitable for your environment:
The time to complete a write to a virtual log device on the client LPAR is approximately
doubled if the virtual log is in a shared storage pool, when compared with a local virtual log
that is stored on the same disk infrastructure. For more information, see 6.3.6,
“Performance” on page 215.
The shared storage pool data must be on SAN-backed disks, which must be accessible
from all Virtual I/O Servers in the cluster.
If your infrastructure is capable, and performance is acceptable, the use of shared storage
pools might make sense because of the additional function that it provides. However, in
deployments where the performance of log writes is paramount, or the Virtual I/O Servers
must be segregated from the network, shared storage pools might not be the best choice.
When you decide where to place the local virtual log repository, the following useful
possibilities are available:
Keep the local virtual log repository in /var/vio/vlogs
Move the local virtual log repository to a dedicated file system
If you use virtual logs in shared storage pools, the decision of where to place the log data is
not yours. The logs are stored on the SAN-backed shared storage pool disk that is specified
when the Virtual I/O Server cluster is created. However, you still need to read the remainder
of this section. The factors that are discussed can help to define how your SAN disks are
configured.
Consider the following key factors when you decide where your virtual log data must be
stored:
Disk space
Although an individual virtual log can be bound by the disk space that it can use (see
6.1.5, “Virtual log repositories” on page 206), an overall cap does not exist on the total
amount of space that can be used by the virtual log repository as a whole. The repository
can contain hundreds or thousands of virtual logs.
Therefore, it is important to ensure that the virtual log repository is not responsible for
filling up the /var file system, which can result in the loss of availability of other Virtual I/O
Server functions. The easiest way to avoid this issue is to create a dedicated file system
for virtual logs, which ensures that if that file system fills up, other services are not
affected.
Performance
For the best performance, virtual logs must be written to dedicated physical disks so that
other I/O operations do not affect write latency. The best performance might be achieved
by using a SAN-backed disk for the virtual log repository. The RAID level (the way that
data is striped and mirrored across an array of physical disks) also affects the
performance that can be obtained.
Resilience
You might have availability requirements for your virtual log data beyond the requirements
for the rest of your Virtual I/O Server file system. For example, you might require your
virtual log data to use a more fault-tolerant RAID level, or to be synchronously or
asynchronously mirrored to another site by using the built-in capabilities of your SAN
controller.
These capabilities are typically provided at a per-disk granularity. Therefore, implementing
a specific policy for virtual log data likely requires that a separate disk with the appropriate
qualities be presented to the Virtual I/O Server, and that a separate file system be deployed
within it.
Except for small test environments, it is highly likely that the default location for your virtual log
data in /var/vio/vlogs is not suitable because of some or all of the factors that are described
in this section. The deployment planning process must consider the quality of service
(performance and resilience) that is required of these disks, and the total amount of required
space.
This section describes some of the useful implementation details of these subcomponents.
By default, these devices are named vtlogn for local virtual logs, and vtlogsn for shared
storage pool virtual logs, where n is a number that is unique to the specific device, starting
with 0. They are child devices of a virtual SCSI server adapter.
Example 6-4 shows the expected output from the lsmap command for a virtual SCSI server
adapter with one virtual log, one file-backed disk, and one optical media device attached.
Example 6-4 Using lsmap to view virtual log target devices on the Virtual I/O Server
$ lsmap -vadapter vhost0
SVSA Physloc Client Partition ID
--------------- ----------------------------------- -------------------
vhost0 U8205.E6C.06A22ER-V1-C13 0x00000003
VTD vtlog0
Status Available
LUN 0x8300000000000000
Backing device vlog:000000000000000075f175982f4d10d9
Physloc
Mirrored N/A
VTD vtopt0
Status Available
LUN 0x8200000000000000
Backing device cd0
Physloc U78AA.001.WZSHN02-P2-D9
Mirrored N/A
The Backing device field of the virtual log target device (vtlog0 in Example 6-4 on page 211)
corresponds to the UUID of the virtual log.
Each virtual log target device runs as its own kernel process, so its resource usage can be
monitored with standard commands, such as ps, topas, and nmon.
By default, these devices are named vlogn, where n is a number unique to the particular
device, starting with vlog0. They are child devices of a virtual SCSI client adapter.
Example 6-5 shows the expected lsdev output on a client LPAR with two virtual logs present.
Example 6-5 Using the lsdev command to view virtual log devices on the client LPAR
$ lsdev -t vlog
vlog0 Available Virtual Log
vlog1 Available Virtual Log
Each device appears in /dev as two equivalent files, with identical major and minor numbers.
One of the files matches the device name. The other file incorporates the name of the
log, as specified when the virtual log is created on the Virtual I/O Server. Example 6-6 shows
the expected representation of two virtual logs that are named syslog and audit in the client
LPAR’s /dev file system.
Example 6-6 Contents of /dev with two virtual logs named “syslog” and “audit”
$ ls -l /dev/vl*
crw------- 1 root system 37, 0 Sep 13 05:32 /dev/vlsyslog0
crw------- 1 root system 37, 1 Sep 13 05:32 /dev/vlaudit1
crw------- 1 root system 37, 0 Sep 13 05:32 /dev/vlog0
crw------- 1 root system 37, 1 Sep 13 05:32 /dev/vlog1
Detailed properties of the virtual log can be inspected from within the client LPAR by using the
lsattr -El command, as shown in Example 6-7.
Example 6-7 Detailed virtual log information by using the lsattr command on the client LPAR
$ lsattr -El vlog0
PCM Path Control Module False
UUID 0000000000000000b174283d936cc140 Unique id for virtual log device False
client_name s1 Client Name False
device_name vlsyslog0 Device Name False
log_name syslog Log Name False
max_log_size 2097152 Maximum Size of Log Data File False
max_state_size 2097152 Maximum Size of Log State File False
pvid none Physical Volume Identifier False
It is common to want to see a simple mapping of the virtual log device name to the virtual
log’s log name, which is specified on the Virtual I/O Server. Combine the lsdev, lsattr, and
xargs commands to create a one-line command. This command produces a summary of the
log name that is associated with each virtual log device, as shown in Example 6-8.
Example 6-8 Displaying the mapping from device name to log name
$ lsdev -Fname -tvlog | xargs -L1 lsattr -Ea log_name -F"name value" -l
vlog0 audit
vlog1 syslog
To write to these virtual log devices, open the appropriate file in /dev and perform a write to it.
Virtual log devices can be used in place of log files for many applications, including syslog,
which is described in 6.5.7, “Configuring syslog to use a virtual log” on page 232.
Important: Only one process on the client LPAR can have a specific virtual log device
open at any time. If a second process attempts to open the device, an error is returned,
which can manifest on a command line as the following message:
$ echo "Test" > /dev/vlog0
The requested resource is busy.
ksh: /dev/vlog0: 0403-005 Cannot create the specified file.
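When no other process holds the device open, a simple write from the shell succeeds. As a minimal sketch, assuming that no application currently has /dev/vlog1 open, a message can be appended as follows, after which it appears in the corresponding log file on the Virtual I/O Server:
$ echo "Test message from the client LPAR" > /dev/vlog1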
Each write to the virtual log device is atomically written to the appropriate set of log files on
the Virtual I/O Server; no single write is ever truncated or split across multiple log files. If the
write cannot be performed on the Virtual I/O Server because of insufficient disk space or the
deletion of the corresponding virtual log target device, the write on the client LPAR fails. It
returns the ENOSPC error code if the Virtual I/O Server disk fills up, or the EIO error code for
all other errors.
The first field is the POSIX timestamp at the point that the state messages are written,
according to the Virtual I/O Server. The second field is the hostname of the Virtual I/O Server
that generated the message, which is useful when you analyze behavior in multipath and live
migration-capable environments.
The following messages are shown in their abstract form, followed by an example and a
description of what the message represents:
virtual log target device initialized
[1336172435] [vios1] vtlog0 initialized
Example 6-9 State messages for a virtual log with three log files
[1347535902] [vios1] vtlog0 using /vlogs/s1/logA//s1_logA.000
[1347535910] [vios1] vtlog0 using /vlogs/s1/logA//s1_logA.001
[1347535918] [vios1] vtlog0 using /vlogs/s1/logA//s1_logA.002
[1347535925] [vios1] vtlog0 using /vlogs/s1/logA//s1_logA.000
[1347535933] [vios1] vtlog0 using /vlogs/s1/logA//s1_logA.001
[1347535940] [vios1] vtlog0 using /vlogs/s1/logA//s1_logA.002
Closed by client process pid (name)
[1336172555] [vios1] Closed by client process 7012592 (ksh)
This message is emitted when the process on the client LPAR that previously opened a
virtual log device closes it.
Client switch from path index(devno=major,minor;lua=lua) to path
index(devno=major,minor;lua=lua)
When multiple paths are configured for a virtual log (see “Creating a multipath virtual log
using shared storage pools” on page 225), and the virtual log device driver experiences an
I/O error when writing to the current path, it searches for an alternative path by attempting to
send a message of this form down each of the other paths. After one of these messages is
successfully transferred, that path is used for all future messages that originate from the
virtual log device.
The client LPARs identify multiple routes to the same virtual log by looking for virtual logs with
the same UUID and presenting those different paths as a single virtual log device.
Virtual logs use a multipathing algorithm that is much simpler than the multipathing algorithms
in use for virtual disks. All writes to the virtual log device are sent down the first detected path.
If an I/O error occurs when the write is performed, the virtual log device attempts to send the
same message down the other paths until it finds a path that does not return an error. It then
continues to use that new path for all log traffic.
Although the lspath command shows the virtual SCSI client adapters that provide the multiple
routes to a certain virtual log, paths with failed I/O are not explicitly marked offline. A health
check interval does not need to be configured to ensure that paths with failed I/O are tried
again.
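On the client LPAR, the paths that are available to a virtual log device can be listed by using the lspath command. A minimal sketch of the expected output for a two-path configuration follows (the device and adapter names here are illustrative); both paths are reported as Available even after an I/O failure on one of them:
$ lspath -l vlog0
Available vlog0 vscsi0
Available vlog0 vscsi1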
For an example of the multipath failover capability in practice, see 6.5.5, “Demonstrating
multipath failover” on page 227.
Trusted Logging does not support exporting virtual log devices to WPARs on the client LPAR.
6.3.6 Performance
The performance of Trusted Logging is highly dependent on your precise deployment
environment. However, it is possible to make some general observations about the relative
performance of Trusted Logging in various configurations.
This process guarantees that the messages are received and written on the Virtual I/O Server
in the correct order. Any failed write (for example, because of a lack of disk space on the
Virtual I/O Server) can return an error message to the application.
Messages that are written into different virtual logs are not ordered with respect to each other.
Messages can be written to several different virtual logs concurrently.
Because of the implementation of the Virtual SCSI infrastructure, the write time observed for
small (for example, single byte) writes is almost identical to the write time for 4 KB writes.
Beyond 4 KB, writes take longer because more data is transferred to the Virtual I/O Server
and written to disk.
The processing of virtual log messages uses Virtual I/O Server CPU resources and I/O
operations on the disk that contains the virtual log. Use the nmon command, which is
accessible from the oem_setup_env shell, to monitor these properties of the Virtual I/O Server
for any bottlenecks that might affect Trusted Logging performance.
The nmon command collects and displays metrics from throughout the system on a
single page. For Trusted Logging, review the following metrics:
CPU utilization
Disk utilization
Disk adapter utilization
When started with the NMON=twDa environment variable set, nmon displays this information
automatically. Figure 6-5 shows a nmon display that is captured from a Virtual I/O Server with
six virtual logs active.
Figure 6-5 The nmon display when started by using the NMON=twda nmon command
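A minimal sketch of starting nmon with these panels preselected follows; it assumes access to the root shell by way of the oem_setup_env command:
$ oem_setup_env
# NMON=twDa nmon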
If a specific disk is busy and a likely bottleneck to virtual log performance results, the following
options are available that might improve performance:
Move to a SAN-backed volume if your Virtual I/O Server uses internal disks for its file
systems.
Move the virtual log repository to a dedicated file system on a dedicated disk. This
procedure is described in 6.5.1, “Changing the local virtual log repository file system” on
page 219. However, this procedure works on empty virtual log repositories only. You must
remove all your virtual logs and re-create them after the virtual log repository is moved.
For this reason, the location of your virtual log repository is a key deployment
consideration, as discussed in 6.2.4, “Where to store local virtual logs” on page 210.
If the performance bottleneck appears to be the CPU, CPUs can be added to the Virtual I/O
Server. However, remember that the work of each virtual log target device is serial in nature,
and each virtual log therefore cannot use more than a single thread of execution.
It is expected that assigning more processors to the Virtual I/O Server realizes the most
benefit in environments in which many virtual logs are deployed. This environment provides
the most opportunity for parallel work that can use those extra processors.
Therefore, no action is required to enable Virtual I/O Servers for Trusted Logging. Installation
of the client LPAR virtual log device is described in 6.4.1, “Installing the Client LPAR
component” on page 218. For more information about verifying the version of your Virtual I/O
Servers, see 6.4.2, “Verifying the version of the Virtual I/O Server” on page 219.
Example 6-10 shows the required commands to install the Trusted Logging virtual log device
driver with the PowerSC Standard installation media that is mounted in /cdrom. The first
invocation of installp displays the license agreement; the second invocation accepts the
license agreement and performs the installation. It is also possible to install Trusted Logging
by using the smitty installp menu-based interface by selecting the powerscStd.vlog
package.
Example 6-10 Installation of Trusted Logging device drivers on the client LPAR
> installp -aEgd /cdrom powerscStd.vlog.rte
> installp -agXYd /cdrom powerscStd.vlog.rte
Example 6-11 shows how the lslpp command can be used to verify that Trusted Logging is
correctly installed on the client LPAR.
Path: /etc/objrepos
powerscStd.vlog.rte 1.1.2.0
powerscStd.vlog
6.4.2 Verifying the version of the Virtual I/O Server
The Virtual I/O Server component of Trusted Logging is available and installed by default on
Virtual I/O Server versions 2.2.1.0 and later. The Virtual I/O Server version can be verified by
using the ioslevel command on the Virtual I/O Server command line, as shown in
Example 6-12.
Example 6-12 Verifying Trusted Logging capability on the Virtual I/O Server
$ ioslevel
2.2.1.4
Important: Changing the virtual log repository file system requires that virtual logs are not
present in the virtual log repository.
To change the virtual log repository file system, complete the following steps:
1. In the setup environment (by using the oem_setup_env command), create the file system
by using the crfs command. Ensure that it is configured to be remounted on startup. Use
the mount command to ensure that the file system is available for use. Use the chmod
command to make the directory group-writable. Example 6-13 shows the required
commands to create a 2 GB file system that is mounted as /vlogs.
Example 6-13 Creation of a file system for the local virtual log repository
$ oem_setup_env
# crfs -g rootvg -m /vlogs -v jfs2 -A yes -p rw -a size=2G
File system created successfully.
2096884 kilobytes total disk space.
New File System size is 4194304
# mount /vlogs
# chmod g+rwX /vlogs
# exit
2. Change the virtual log repository root directory to the new file system by using the chvlrepo
command, as shown in Example 6-14.
Example 6-14 Changing the path to the local virtual log repository
$ chvlrepo -root /vlogs
Updated repository.
Virtual logs now store their log data in the /vlogs file system tree, as described in 6.1.4,
“Virtual log directory and file structure” on page 204.
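The change can be confirmed by using the lsvlrepo command, which reports the repository root directory. A minimal sketch of the expected output after the repository root is moved to /vlogs follows:
$ lsvlrepo
Storage Pool     State    Path
                 enabled  /vlogs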
The simplest way to create a virtual log is to specify only the log name and the Virtual SCSI
server adapter to which it connects. This task produces a new virtual log, which obtains its
client name by querying the client LPAR. It inherits the default log and state file sizes and
numbers that are specified in the local virtual log repository.
A new virtual log target device is created, which connects the new virtual log to the specified
Virtual SCSI server adapter and to a client LPAR. Example 6-15 shows the use of mkvlog with
the -vadapter option to specify Virtual SCSI server adapter vhost0 and the -name option to
specify syslog as the log name.
It is also possible to override the local virtual log repository default properties by using more
command-line arguments. Example 6-16 shows that invocations of mkvlog can include the
following information:
The -lf option to specify the number of log files
The -lfs option to specify the size of each of those log files
The -sf option to specify the number of state files
The -sfs option to specify the size of each of those state files
Example 6-16 Creating a virtual log using mkvlog and overriding default properties
$ mkvlog -vadapter vhost1 -name audit -lf 10 -lfs 10M -sf 4 -sfs 200K
Virtual log 00000000000000005661f7f13dea7100 created
vtlog1 Available
Example 6-17 shows the lsvlog command to display information for the virtual log that is
associated with virtual log target device vtlog0. The Log Directory field shows the full path to
the files of the virtual log.
Example 6-17 lsvlog -detail that is used to locate the Log Directory
$ lsvlog -detail -dev vtlog0
Client Name: s2
The contents of the directory can be viewed by using the ls command, as shown in
Example 6-18, where ls -l is used to display detailed information for each file. In this
example, a single log file that is named s2_syslog.000 and a single state file that is named
s2_syslog.state.000 are included.
Example 6-18 Viewing files associated with a virtual log using the ls -l command
$ ls -l /var/vio/vlogs/s2/syslog/
total 16
-rw-r----- 1 root staff 4 Sep 19 03:07 s2_syslog.000
-rw-r----- 1 root staff 378 Sep 19 03:07 s2_syslog.state.000
Because the log data is a byte-for-byte copy of what the client LPAR writes, standard tools
such as the tail command can be used on the Virtual I/O Server to view the most recent
entries, as shown in Example 6-19.
Example 6-19 Using the tail command to view the most recent entries in a virtual log
$ tail -n4 /var/vio/vlogs/s2/syslog/s2_syslog.000
Sep 21 08:05:40 s2 auth|security:info sshd[9371764]: Failed password
for root from 172.16.254.6 port 64104 ssh2
Sep 21 08:05:40 s2 auth|security:info syslog: ssh: failed login attempt
for root from 172.16.254.6
Sep 21 08:05:41 s2 auth|security:info sshd[9371764]: Failed password
for root from 172.16.254.6 port 64104 ssh2
Sep 21 08:05:41 s2 auth|security:info syslog: ssh: failed login attempt
for root from 172.16.254.6
Important: You must use the -r option with the auditpr command.
By default, the auditpr command converts user and group IDs in the audit records into
user and group names. However, it uses the user and group files on the Virtual I/O Server
to perform this mapping. This mapping is invalid because the records are generated on a
client LPAR with potentially different users and groups. The -r command option
suppresses this conversion.
Example 6-20 shows the use of the auditpr -r command to examine the contents of a virtual
log that is generated by the AIX auditing subsystem on a client LPAR.
Example 6-20 Viewing AIX audit records from a client LPAR on the Virtual I/O Server
$ auditpr -r -i /var/vio/vlogs/s2/audit/s2_audit.000
event login status time command wpar name
--------------- -------- ----------- ------------------------ ------- ---------
FS_Chdir 0 OK Thu Sep 20 05:46:17 2012 ps Global
FS_Chdir 0 OK Thu Sep 20 05:47:09 2012 bash Global
FS_Mkdir 0 FAIL Thu Sep 20 05:47:13 2012 java Global
FILE_Unlink 0 OK Thu Sep 20 05:47:13 2012 java Global
FILE_Rename 0 OK Thu Sep 20 05:47:13 2012 java Global
FS_Rmdir 0 OK Thu Sep 20 05:47:13 2012 java Global
FS_Rmdir 0 OK Thu Sep 20 05:47:13 2012 java Global
FS_Rmdir 0 OK Thu Sep 20 05:47:13 2012 java Global
FILE_Unlink 0 OK Thu Sep 20 05:47:15 2012 rm Global
FS_Chdir 0 OK Thu Sep 20 05:47:17 2012 ps Global
For more information about configuring shared storage pools, see section 2.6 in IBM
PowerVM Virtualization Managing and Monitoring, SG24-7590.
For convenience, the remainder of this section summarizes the steps that must be taken to
configure a shared storage pool. Shared storage pools provide a shared file system that is
accessible from a cluster of up to four Virtual I/O Servers. They use shared disks on a SAN to
share data. They communicate by way of a TCP/IP network to coordinate file system
operations and ensure that all Virtual I/O Servers see a consistent representation of the file
system.
Shared storage pools have the following prerequisites:
At least two SAN-backed disks are required. One disk serves as the repository disk, which
stores metadata regarding the state of the cluster. At least one other disk stores the user
data (in our case, virtual logs). Both of these disks must be at least 10 GB.
All Virtual I/O Servers in the cluster must communicate by way of TCP/IP. Host name
resolution (Domain Name System [DNS] or host file-based) must be set up between
members of the cluster.
Figure 6-6 shows how a cluster of two Virtual I/O Servers can be formed. Also shown is the
hardware configuration and device names.
Figure 6-6 A hardware configuration that is suitable for shared storage pools
A shared storage pool between these two Virtual I/O Servers can be configured as shown in
Example 6-21. Both Virtual I/O Servers must first have their Fibre Channel (FC)
adapters configured correctly by using two invocations of the chdev command. Then,
concurrent access to the disks must be enabled by setting the reserve policy attribute of each
disk, also by using the chdev command.
After the devices are configured, the Virtual I/O Server cluster can be created by running the
cluster -create command on either of the Virtual I/O Servers. Then, run the cluster
-addnode command on the same Virtual I/O Server to add the second Virtual I/O Server to the
cluster.
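A minimal sketch of this command sequence follows. It assumes fscsi0 as the FC protocol device, hdisk1 as the cluster repository disk, hdisk2 as the shared storage pool disk, and the cluster and pool names that are used elsewhere in this chapter; the exact device names and attribute values depend on your SAN environment:
(on both Virtual I/O Servers)
$ chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail -perm
$ chdev -dev hdisk1 -attr reserve_policy=no_reserve
$ chdev -dev hdisk2 -attr reserve_policy=no_reserve
(on p6-570vio1 only)
$ cluster -create -clustername vlog_cluster -repopvs hdisk1 -spname vlog_ssp -sppvs hdisk2 -hostname p6-570vio1
$ cluster -addnode -clustername vlog_cluster -hostname p6-570vio2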
After these commands are run, a shared storage pool cluster is formed. Verify the cluster with
the cluster -status command, which shows the members of the cluster, and with the
lsvlrepo command, which shows a second virtual log repository within the newly created
shared storage pool, as shown in Example 6-22.
Example 6-22 Verifying cluster creation with the cluster -status and lsvlrepo commands
$ cluster -status -clustername vlog_cluster
Cluster Name State
vlog_cluster OK
$ lsvlrepo
Storage Pool State Path
enabled /var/vio/vlogs
vlog_ssp enabled /var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/
To create a virtual log in a shared storage pool, the mkvlog command is used with the -sp
option to specify the shared storage pool into which to place the virtual log. Example 6-23
shows the mkvlog command that is used to create a virtual log that is named syslog in the
vlog_ssp shared storage pool and connect that virtual log to the vhost0 virtual SCSI server
adapter at the same time.
Example 6-23 Creating a virtual log in a shared storage pool and examining its properties
$ mkvlog -sp vlog_ssp -name syslog -vadapter vhost0
Virtual log 99b977dec96860fba65ab60766a56c11 created
vtlogs0 Available
$ lsvlog -detail -u 99b977dec96860fba65ab60766a56c11
Client Name: s1
Maximum State File Size: 1048576
The lsvlog -detail command can be used to verify that the virtual log was created. This
command can also be used to identify the file system path from which the virtual log’s state
and log files can be accessed.
After the new virtual log device is created, the client LPAR administrator detects the new
virtual log device by running the cfgmgr command.
Example 6-24 shows the detection of the new virtual log on the client LPAR by using the
lsdev command to list the virtual log device and the lsattr command to display its
properties.
Example 6-24 Detection of a shared storage pool virtual log on the client LPAR
$ cfgmgr
$ lsdev -t vlog
vlog0 Available Virtual Log
$ lsattr -El vlog0
PCM Path Control Module False
UUID 99b977dec96860fba65ab60766a56c11 Unique id for virtual log device False
client_name s1 Client Name False
device_name vlsyslog Device Name False
log_name syslog Log Name False
max_log_size 2097152 Maximum Size of Log Data File False
max_state_size 2097152 Maximum Size of Log State File False
pvid none Physical Volume Identifier False
In the following example, two Virtual I/O Servers (p6-570vio1 and p6-570vio2) exist in a
shared storage pool cluster. Both Virtual I/O Servers have a virtual SCSI server adapter
(vhost0 on both Virtual I/O Servers) that provides virtual SCSI connectivity to a client LPAR
(that has two virtual SCSI client adapters, vscsi0 and vscsi1) named client1. The objective is
to create a virtual log in the shared storage pool and present it to the client LPAR on both
paths.
Figure 6-7 on page 226 shows the topology of the system and the paths to the virtual log to
be created. The virtual log target devices on both Virtual I/O Servers are called vtlogs0.
To create a virtual log in a shared storage pool, the mkvlog command is used with the -sp
option to specify the shared storage pool into which to place the virtual log.
Because virtual logs in shared storage pools are visible to all Virtual I/O Servers in the shared
storage pool cluster, use the following process to establish a multipath configuration:
1. Create the virtual log on p6-570vio1 and attach it to vhost0.
2. On p6-570vio2, attach the virtual log to vhost0.
3. On the client LPAR, detect new devices and verify that paths to the virtual log device are
detected on vscsi0 and vscsi1.
Example 6-25 shows the mkvlog command that is used to create a virtual log that is named
syslog in the vlog_ssp shared storage pool, which connects that virtual log to the vhost0 virtual
SCSI server adapter at the same time.
Because the virtual log is created in a shared storage pool, it is immediately accessible from
the second Virtual I/O Server, p6-570vio2. Example 6-26 shows how the lsvlog command
can be used to confirm that the virtual log is accessible.
Example 6-26 Confirming the virtual log and establishing a second path from p6-570vio2
# lsvlog
Client Name Log Name UUID VTD
client1 syslog 99b977dec96860fba65ab60766a56c11
$ mkvlog -u 99b977dec96860fba65ab60766a56c11 -vadapter vhost0
vtlogs0 Available
The empty VTD field shows that the virtual log is not yet connected on this Virtual I/O Server.
The field shows only the virtual log target devices on the local Virtual I/O Server, not virtual log
target devices that are present elsewhere in the cluster. The mkvlog command can then be used to connect this virtual log to
the vhost0 virtual SCSI server adapter, which establishes a second path.
After the virtual log device is created, the client LPAR administrator detects the new virtual log
device by running the cfgmgr command. Example 6-27 shows the detection of the new virtual
log on the client LPAR. Example 6-27 also shows the use of the lsdev command to list the
virtual log device and the lspath command to display the virtual SCSI client adapters that
provide a path to the virtual log.
Example 6-27 Detection of a multipath shared storage pool virtual log on the client LPAR
$ cfgmgr
$ lsdev -t vlog
vlog0 Available Virtual Log
$ lspath -l vlog0
Available vlog0 vscsi1
Available vlog0 vscsi0
The multipath virtual log is now established, and either of the Virtual I/O Servers can now be
deactivated without affecting the availability of the virtual log to the client LPAR. For more
information about an example of how multipath failover can be simulated and how the virtual
log tracks the change of the path, see 6.5.5, “Demonstrating multipath failover” on page 227.
The demonstration is performed by examining the state file of the virtual log, which shows
changes in path activity. We then use the rmvlog command to disable the current path, which
shows failover to the second path. The first path is then reactivated by using the mkvlog
command, and the second path is disabled, which shows failover back to the first path. During
this demonstration, it is assumed that the client LPAR is generating a continuous stream of
log messages to its virtual log device.
Example 6-28 Locating and examining a shared storage pool virtual log’s state file
$ lsvlog -detail -uuid 99b977dec96860fba65ab60766a56c11
Client Name: client1
$ ls -l
/var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/client1/syslog/
total 83768
-rw-r----- 1 root staff 5704 Sep 17 05:02 s1_logA.state.000
$ cat
/var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/client1/syslog/s1_logA.state.
000
[1347874203] [p6-570vio1] vtlogs0 using
/var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/client1/syslog//
client1_syslog.state.000
[1347874203] [p6-570vio1] vtlogs0 initialised
[1347874243] [p6-570vio2] vtlogs0 using
/var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/client1/syslog//
client1_syslog.state.000
[1347874243] [p6-570vio2] vtlogs0 initialised
[1347874605] [p6-570vio1] Client process 15597740 (syslogd) using path
0 (devno=17,0;lua=0x8200000000000000)
[1347874605] [p6-570vio1] vtlogs0 using
/var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/client1/syslog//
client1_syslog.000
The contents of the state file show that the virtual log device is created and that both Virtual
I/O Servers have a device that uses it. The client LPAR has the device open with the syslogd
process. The client LPAR uses the path by way of the device with (major,minor) number 17,0,
which corresponds to vscsi0. This path selection is confirmed by the Virtual I/O Server
p6-570vio1, which emits a message to indicate that it is writing to the log data file.
Example 6-29 shows the rmvlog and mkvlog commands that are used on p6-570vio1 to
disable and re-enable the virtual log target device. The extra state file messages are then
examined. They show that p6-570vio2 opened the log file and the client LPAR switched to the
new path.
Example 6-29 Removing the active path and confirming failover to an alternative path
$ rmvlog -dev vtlogs0
vtlogs0 Defined
$ mkvlog -dev vtlogs0
vtlogs0 Available
$ cat
/var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/client1/syslog/s1_logA.state.
000
(Lines from last example removed)
[1347875100] [p6-570vio1] vtlogs0 shutting down
[1347875099] [p6-570vio2] vtlogs0 using
/var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/client1/syslog//
client1_syslog.000
[1347875099] [p6-570vio2] Client switched from path 0
(devno=17,0;lua=0x8200000000000000) to path 1
(devno=17,1;lua=0x8100000000000000)
[1347875101] [p6-570vio1] vtlogs0 using
/var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/client1/syslog//
client1_syslog.state.000
[1347875101] [p6-570vio1] vtlogs0 initialised
As shown, p6-570vio2 opened the log file when the client switched to using device 17,1,
which corresponds to vscsi1. The reinitialization of the vtlogs0 device on p6-570vio1 is
recorded, but the active path remains on p6-570vio2.
Example 6-30 Removing the active path and confirming the failover to the original path
$ rmvlog -dev vtlogs0
vtlogs0 Defined
$ cat
/var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/client1/syslog/s1_logA.state.
000
(Lines from last example removed)
[1347877578] [p6-570vio2] vtlogs0 shutting down
[1347877578] [p6-570vio1] vtlogs0 using
/var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/client1/syslog//
client1_syslog.000
[1347877579] [p6-570vio1] Client switched from path 1
(devno=17,1;lua=0x8100000000000000) to path 0
(devno=17,0;lua=0x8200000000000000)
As shown, p6-570vio1 opened the log file when the client switched back to using device 17,0,
which corresponds to vscsi0. The client LPAR did not experience any disruption to the virtual
log device during these failover operations.
For more information about AIX auditing and its capabilities, see Auditing and Accounting on
AIX, SG24-6020.
Without Trusted Logging, these audit records are stored in a text or binary form on the client
LPAR that generated them. Trusted Logging also allows the binary versions of these audit
records to be transferred by way of a virtual log device to the Virtual I/O Server. On the Virtual
I/O Server, the audit records cannot be modified or removed by a malicious user on the client
LPAR. To use Trusted Logging with AIX auditing, the following steps must be performed:
1. Create a virtual log on the Virtual I/O Server and attach it to the required Virtual SCSI
server adapter.
2. Detect the new virtual log device on the client LPAR.
3. Configure AIX auditing to use the virtual log device.
For this AIX auditing procedure, Example 6-31 shows the creation of a local virtual log that is
named audit, which is attached to the Virtual SCSI server adapter vhost0. In this example,
the new virtual log target device that is created is called vtlog2.
Example 6-31 Creation of a simple virtual log for use by the AIX auditing subsystem
$ mkvlog -vadapter vhost0 -name audit
Virtual log 000000000000000060cc6b83263a1143 created
vtlog2 Available
New devices are detected by running the cfgmgr command, after which the virtual log is
available for use.
As described in 6.3.2, “Virtual log devices” on page 212, the lsdev and lsattr commands
can be used to identify the log names that are associated with each device. This identification
is important. We want to ensure that the AIX auditing subsystem writes its logs to the log that
we created and not to some other virtual log that exists.
Example 6-32 shows how new virtual logs can be detected by using the cfgmgr command.
New virtual logs can be displayed with a combination of the lsdev, xargs, and lsattr
commands. In this example, the virtual log that we created on the Virtual I/O Server that is
named audit is detected as the vlog0 device on the client LPAR. It is accessible as
/dev/vlog0.
Example 6-32 Detecting and identifying the new audit virtual log on the client LPAR
$ cfgmgr
$ lsdev -Fname -tvlog | xargs -L1 lsattr -Ea log_name -F"name value" -l
vlog0 audit
vlog1 syslog
For this example, the important section in the configuration file is the bin: section, which
controls the placement of binary audit records. A typical bin: section might look like the bin:
section that is shown in Example 6-33.
To enable Trusted Logging on the AIX auditing subsystem, a virtual_log line must be added
to the /etc/security/audit/config file. This added line is shown in Example 6-34, and it
must refer to whichever virtual log device is identified as the device to use for audit records.
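As a minimal sketch, assuming that the audit virtual log was detected on the client LPAR as /dev/vlog0 (as in Example 6-32) and that the existing bin: stanza entries are left at their current values, the modified stanza might look like the following lines:
bin:
        trail = /audit/trail
        bin1 = /audit/bin1
        bin2 = /audit/bin2
        binsize = 10240
        cmds = /etc/security/audit/bincmds
        virtual_log = /dev/vlog0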
After this line is added to the /etc/security/audit/config file, the auditing subsystem can
be restarted by using the commands that are shown in Example 6-35.
Example 6-35 Commands required to shut down and restart the AIX auditing subsystem
$ audit shutdown
auditing reset
$ audit start
This section describes the display of audit records. Example 6-36 shows displaying the
contents of the binary audit log on the Virtual I/O Server by using the auditpr -r command.
Example 6-36 Using the auditpr -r command on the Virtual I/O Server to display the contents of a binary audit log
$ auditpr -r -i /var/vio/vlogs/s2/audit/s2_audit.000
event login status time command wpar name
--------------- -------- ----------- ------------------------ ------- ---------
FS_Chdir 0 OK Thu Sep 20 05:46:17 2012 ps Global
FS_Chdir 0 OK Thu Sep 20 05:47:09 2012 bash Global
FS_Mkdir 0 FAIL Thu Sep 20 05:47:13 2012 java Global
FILE_Unlink 0 OK Thu Sep 20 05:47:13 2012 java Global
FILE_Rename 0 OK Thu Sep 20 05:47:13 2012 java Global
FS_Rmdir 0 OK Thu Sep 20 05:47:13 2012 java Global
FS_Rmdir 0 OK Thu Sep 20 05:47:13 2012 java Global
FS_Rmdir 0 OK Thu Sep 20 05:47:13 2012 java Global
FILE_Unlink 0 OK Thu Sep 20 05:47:15 2012 rm Global
FS_Chdir 0 OK Thu Sep 20 05:47:17 2012 ps Global
The syslog facility works by providing a special file that is named /dev/log, to which
applications and services write messages. The syslogd daemon reads the messages that are
written to this file and processes them according to the rules that are specified in the
/etc/syslog.conf file.
You can edit /etc/syslog.conf to match log messages based on the following information:
Facility (the application or service that generated the message, such as mail or auth)
Priority Level (alert, warn, info, debug, and so on)
For more information about the structure of the /etc/syslog.conf file, see the entry for
syslog.conf at this IBM Knowledge Center web page.
A virtual log presents itself on the client LPAR as a file in /dev. To use virtual logs as a
destination for syslog messages, configure /etc/syslog.conf to write the messages that you
want to the required virtual log.
To use Trusted Logging with syslog, the following steps must be performed:
1. Create a virtual log on the Virtual I/O Server and attach it to the required Virtual SCSI
server adapter.
2. Detect the new virtual log device on the client LPAR.
3. Configure syslog to use the virtual log device.
Creating the virtual log
The virtual log used for syslog can be a local virtual log or it can be in a shared storage pool.
For more information about complex configurations of virtual logs, see the following sections:
6.5.2, “Creating a virtual log on a single Virtual I/O Server” on page 220
“Creating a single-path virtual log in a shared storage pool” on page 224
“Creating a multipath virtual log using shared storage pools” on page 225
For this syslog procedure, Example 6-37 shows the creation of a local virtual log that is
named syslog and attached to the Virtual SCSI server adapter vhost0. In this example, the
new virtual log target device that is created is called vtlog3.
Example 6-37 Creation of a simple virtual log for use by syslog
$ mkvlog -vadapter vhost0 -name syslog
Virtual log 000000000000000045a04622edfc10ad created
vtlog3 Available
New devices are detected by running the cfgmgr command, after which the virtual log is
available for use.
As described in 6.3.2, “Virtual log devices” on page 212, the lsdev and lsattr commands
can be used to identify the log names that are associated with each device. This identification
is important. We want to ensure that the syslog subsystem writes its logs to the log that we
created, and not some other virtual log that exists.
Example 6-38 shows how new virtual logs can be detected by running the cfgmgr command.
The example shows how new virtual logs can be displayed with a combination of the lsdev,
xargs, and lsattr commands. In this example, the virtual log that we created on the Virtual
I/O Server that is named syslog is detected as the vlog1 device on the client LPAR. It is
accessible as /dev/vlog1.
Example 6-38 Detecting and identifying the new syslog virtual log on the client LPAR
$ cfgmgr
$ lsdev -Fname -tvlog | xargs -L1 lsattr -Ea log_name -F"name value" -l
vlog0 audit
vlog1 syslog
To direct authentication (auth.info) messages to the new virtual log device, add a line such
as the one shown in Example 6-39 to the /etc/syslog.conf file.
Example 6-39 The syslog.conf line to direct authentication messages to vlog1 virtual log
auth.info /dev/vlog1
After /etc/syslog.conf is updated, the syslogd daemon must be refreshed so that it rereads
the configuration file, as shown in Example 6-40.
Example 6-40 Reload of the syslogd service to reread the configuration file
$ refresh -s syslogd
0513-095 The request for subsystem refresh was completed successfully.
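To confirm the end-to-end path without waiting for a real login event, a test message can be written through the auth facility. A minimal sketch follows, assuming that the standard AIX logger command is available on the client LPAR:
$ logger -p auth.info "Trusted Logging syslog test message"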
Attempted and successful logins now generate informational messages that are transmitted
to the Virtual I/O Server by way of the virtual log, vlog1. On the Virtual I/O Server, the log file
can be inspected to view these messages. Example 6-41 shows log contents that are typical
of auth.info messages.
Example 6-41 Viewing the contents of syslog from within the Virtual I/O Server
$ cat /var/vio/vlogs/s2/syslog/s2_syslog.000
Sep 21 08:05:40 s2 auth|security:info sshd[9371764]: Failed password for root from
172.16.254.6 port 64104 ssh2
Sep 21 08:05:40 s2 auth|security:info syslog: ssh: failed login attempt for root from
172.16.254.6
Sep 21 08:06:18 s2 auth|security:info sshd[9371772]: Failed password for testuser
from 172.16.254.6 port 64106 ssh2
Sep 21 08:06:18 s2 auth|security:info syslog: ssh: failed login attempt for testuser
from 172.16.254.6
Sep 21 08:06:22 s2 auth|security:info sshd[9371772]: Accepted password for testuser
from 172.16.254.6 port 64106 ssh2
To understand why certain backup operations are necessary, it helps to know where Trusted
Logging stores the various aspects of its configuration data.
Example 6-42 Using lsvlrepo to show paths to virtual log repository directories
$ lsvlrepo -field sp,path
,/var/vio/vlogs
vlog_ssp,/var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/
In this example, the first line represents the local virtual log repository (because no shared
storage pool is indicated). The second line represents the vlog_ssp shared storage pool of
which the Virtual I/O Server on which this command is run is a member.
Virtual log repositories
The virtual log repositories specify default values for the log and state file number and
maximum sizes. They also specify the path to the virtual log repository root directory, into
which log and state data files are written.
For local virtual logs, these properties are stored in the vlogrepo0 pseudo-device, which does
not represent a physical device in the system. It contains the configuration options that are
required by a virtual log repository. Because it is a device, its configuration can be backed up
and restored with the viosbr -backup command.
For virtual log repositories in shared storage pools, the configuration of the associated virtual
log repository is stored in a database that is part of the shared storage pool subsystem. This
database is backed up by using the viosbr -backup -clustername command.
The config subdirectory of the local virtual log repository root directory
(/var/vio/vlogs/config by default) stores the configuration of the local virtual logs and must
be backed up if that configuration is to be retained. By default, this directory is not readable
by the padmin user. It must be made readable from within the oem_setup_env setup environment
before this data can be backed up by the padmin user. Use a command sequence, such as the one
that is shown in Example 6-43.
Example 6-43 Making the virtual log configuration accessible by the padmin user
$ oem_setup_env
# chmod a+rwX /var/vio/vlogs/config
# exit
For virtual logs that are in shared storage pools, the configuration of the virtual logs is stored
in a database that is part of the shared storage pool subsystem. This database is backed up
by using the viosbr -backup -clustername command.
The backup command, combined with an invocation of the find command to locate all the files in
a specific virtual log repository, produces a one-line command to back up the virtual log
repository, as shown in Example 6-44.
Example 6-44 Use of find and backup to store a copy of virtual log data
$ find /var/vio/vlogs -print | backup -i -v -q
find: 0652-081 cannot change directory to </var/vio/vlogs/config>:
: The file access permissions do not allow the specified action.
Backing up to /dev/rfd0.
In Example 6-44 on page 235, the find command sends a list of the files to the backup
command, which writes the files to the default removable media device rfd0. The config
directory is inaccessible to the padmin user by default and therefore is not backed up.
File system permissions: The padmin user cannot access the virtual log repository’s
config directory by default. You must set the permissions from the oem_setup_env setup
environment for the user that performs backups before this command sequence
functions correctly.
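If you use only local virtual logs, no cluster backup is needed. As a minimal sketch (assuming
that the local repository root is the default /var/vio/vlogs directory and that the config
directory was already made readable as shown in Example 6-43), back up the Virtual I/O Server
device configuration with the viosbr command and then write that backup file and the config
subdirectory to the backup media:
$ viosbr -backup -file /tmp/viosback
$ find /tmp/viosback.tar.gz /var/vio/vlogs/config | backup -ivq
This is the same combination of commands that is used in the local-only scenario in
Example 6-48, only with the default repository path.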
If you use only shared storage pool virtual logs, use the viosbr -backup -clustername
command to back up the virtual log repository, virtual log, and virtual log target device
configuration. This operation is simple, as shown in Example 6-46.
Example 6-46 Complete backup of shared storage pool virtual log repository data
$ viosbr -backup -clustername vlog_cluster -file /tmp/viosback
Backup of node p6-570vio1 successful
Backup of this node (p6-570vio2) successful
$ echo /tmp/viosback.vlog_cluster.tar.gz | backup -ivq
Backing up to /dev/rfd0.
Cluster 51200 bytes (100 blocks).
Volume 1 on /dev/rfd0
a 336083 /tmp/viosback.vlog_cluster.tar.gz
The total size is 336083 bytes.
Backup finished on Sun Sep 23 19:13:06 EDT 2012; there are 700 blocks on 1
volumes.
A single invocation of the viosbr command captures the configuration of all Virtual I/O
Servers in the specified cluster (vlog_cluster in this example) and writes it to a file, which
in this case is called /tmp/viosback.vlog_cluster.tar.gz. The name of this file can be
sent to the backup command, which writes its contents to the default removable media
device.
If you use both local virtual logs and shared storage pool virtual logs, use the viosbr
-backup -clustername command to back up the virtual log repositories, shared storage
pool virtual logs, and all virtual log target device configurations. Separately back up the
config subdirectory of the local virtual log repository root directory. Example 6-47 shows
this operation when the local virtual log repository is in the /var/vio/vlogs directory.
File system permissions: The padmin user does not have access to the virtual log
repository’s config directory by default. You must set the permissions appropriately
from the oem_setup_env setup environment for the user that performs backups before
this command sequence functions correctly.
Example 6-47 Backup of shared storage pool and local virtual log repository data
$ viosbr -backup -clustername vlog_cluster -file /tmp/viosback
Backup of node p6-570vio1 successful
Backup of this node (p6-570vio2) successful
$ find /tmp/viosback.vlog_cluster.tar.gz /var/vio/vlogs/config |
backup -ivq
Backing up to /dev/rfd0.
Cluster 51200 bytes (100 blocks).
Volume 1 on /dev/rfd0
a 364477 /tmp/viosback.vlog_cluster.tar.gz
a 0 /var/vio/vlogs/config
a 312 /var/vio/vlogs/config/0000000000000000b9e896af7e5b8377.vlog
The total size is 364789 bytes.
Backup finished on Sun Sep 23 19:23:09 EDT 2012; there are 800 blocks on 1
volumes.
It first starts the viosbr command to capture the configuration of all Virtual I/O Servers in
the specified cluster (vlog_cluster in this example) into a backup file. This backup file is
named /tmp/viosback.vlog_cluster.tar.gz.
Next, we show an example restore procedure for the same three backup configurations that
we described (local virtual logs only, shared storage pool virtual logs only, and local and
shared storage pool virtual logs):
If you use only local virtual logs, first restore the local virtual log repository config
subdirectory and backup file by using the restore command. Then, use the viosbr
-restore command to restore the virtual log repository and the virtual log target device
configurations from that backup file.
Example 6-48 shows a full backup, modify, and restore procedure.
Example 6-48 Viewing, backing up, deleting, and restoring local virtual logs
$ lsvlrepo -local -detail
Local Virtual Log Repository:
Repository State: enabled
Path: /vlog6
Maximum Log Files: 2
Maximum Log File Size: 1048576
Maximum State Files: 2
Maximum State File Size: 1048576
$ lsvlog
Client Name Log Name UUID VTD
s1 syslog 000000000000000015a339a71349e5a6 vhost0/vtlog2
s1 audit 0000000000000000d92d8e7cc4d99b1e vhost0/vtlog1
s2 syslog 0000000000000000b9e896af7e5b8377 vhost1/vtlog0
$ viosbr -backup -file /tmp/viosback
Backup of this node (p6-570vio2) successful
$ find /tmp/viosback.tar.gz /vlog6/config | backup -ivq
Backing up to /dev/rfd0.
Cluster 51200 bytes (100 blocks).
Volume 1 on /dev/rfd0
a 4322 /tmp/viosback.tar.gz
a 0 /vlog6/config
a 312 /vlog6/config/000000000000000015a339a71349e5a6.vlog
a 312 /vlog6/config/0000000000000000b9e896af7e5b8377.vlog
a 312 /vlog6/config/0000000000000000d92d8e7cc4d99b1e.vlog
The total size is 5258 bytes.
Backup finished on Mon Sep 24 04:36:45 EDT 2012; there are 100 blocks on 1 volumes.
$ rmvlog -dbdata -u 000000000000000015a339a71349e5a6
vtlog2 deleted
Virtual log 000000000000000015a339a71349e5a6 deleted.
Log files deleted.
$ rmvlog -dbdata -u 0000000000000000d92d8e7cc4d99b1e
vtlog1 deleted
Virtual log 0000000000000000d92d8e7cc4d99b1e deleted.
Log files deleted.
$ rmvlog -dbdata -u 0000000000000000b9e896af7e5b8377
vtlog0 deleted
Virtual log 0000000000000000b9e896af7e5b8377 deleted.
Log files deleted.
$ lsvlog
$ restore -vq
New volume on /dev/rfd0:
Cluster size is 51200 bytes (100 blocks).
The volume number is 1.
The backup date is: Mon Sep 24 04:36:45 EDT 2012
Files are backed up by name.
The user is root.
x 4322 /tmp/viosback.tar.gz
x 0 /vlog6/config
x 312 /vlog6/config/000000000000000015a339a71349e5a6.vlog
x 312 /vlog6/config/0000000000000000b9e896af7e5b8377.vlog
x 312 /vlog6/config/0000000000000000d92d8e7cc4d99b1e.vlog
The total size is 5258 bytes.
The number of restored files is 5.
$ viosbr -restore -file /tmp/viosback.tar.gz
vtlog2 Available
vtlog1 Available
vtlog0 Available
Backedup Devices that are unable to restore/change
==================================================
Here, for reference, the current virtual log configuration is displayed with lsvlrepo and
lsvlog. The Virtual I/O Server configuration is backed up by using viosbr -backup.
The backup command is used to write both the configuration backup and the files in the
local virtual log’s config subdirectory to removable media. The local virtual logs are then
removed by using the rmvlog command. The lsvlog command is used to show that all
virtual logs are removed.
The restoration procedure uses the restore command to restore the virtual log repository
config subdirectory and backup files. The viosbr command is used to restore the virtual
log target devices from the restored backup. The lsvlog command is then used to show
that all virtual log devices are restored.
If you use only shared storage pool virtual logs, use the viosbr -restore -clustername
command to restore the virtual log repository, virtual log, and virtual log target device
configuration.
The restoration procedure uses the restore command to restore the backup file from
removable media. Then, complete the following process to restore data from the cluster
backup file:
a. Remove all other nodes from the cluster by using the cluster -rmnode command
because the next step can be performed on a single-node cluster only.
b. Restore the shared storage pool database by using the viosbr -recoverdb command.
– The MTM and Partition Number of the Virtual I/O Server whose configuration is to be
restored must be known. View this information by using the cluster -status command
to show the MTM and Partition Number of the nodes in the cluster. Example 6-50
shows how these values are displayed. These values are passed to the viosbr
-restore -subfile command during the restore procedure.
Example 6-50 Using cluster -status to identify MTM and Partition Number
$ cluster -status -clustername vlog_cluster
Cluster Name State
vlog_cluster OK
Example 6-51 shows a full backup, modify, and restore procedure.
Example 6-51 View, back up, delete, and restore shared storage pool virtual logs
$ lsvlrepo
Storage Pool State Path
enabled /vlog6
vlog_ssp enabled /var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/
$ lsvlog
Client Name Log Name UUID VTD
s2 syslog 99b977dec96860fb09268ff7eba91a00 vhost1/vtlogs3
s1 syslog 99b977dec96860fb8e7938a15a4a09a2 vhost0/vtlogs1
s1 audit 99b977dec96860fbdc3cbfd48a76c44d vhost0/vtlogs2
$ viosbr -backup -clustername vlog_cluster -file /tmp/viosback
Backup of node p6-570vio1 successful
Backup of this node (p6-570vio2) successful
$ echo /tmp/viosback.vlog_cluster.tar.gz | backup -ivq
Backing up to /dev/rfd0.
Cluster 51200 bytes (100 blocks).
Volume 1 on /dev/rfd0
a 306498 /tmp/viosback.vlog_cluster.tar.gz
The total size is 306498 bytes.
Backup finished on Mon Sep 24 05:12:02 EDT 2012; there are 600 blocks on 1 volumes.
$ rmvlog -dbdata -u 99b977dec96860fb09268ff7eba91a00
vtlogs3 deleted
Virtual log 99b977dec96860fb09268ff7eba91a00 deleted.
Fileset access removed. Freeing data owned by fileset...
done
Log files deleted.
$ rmvlog -dbdata -u 99b977dec96860fb8e7938a15a4a09a2
vtlogs1 deleted
Virtual log 99b977dec96860fb8e7938a15a4a09a2 deleted.
Fileset access removed. Freeing data owned by fileset...
done
Log files deleted.
$ rmvlog -dbdata -u 99b977dec96860fbdc3cbfd48a76c44d
vtlogs2 deleted
Virtual log 99b977dec96860fbdc3cbfd48a76c44d deleted.
Fileset access removed. Freeing data owned by fileset...
done
Log files deleted.
$ lsvlog
$ restore -vq
New volume on /dev/rfd0:
Cluster size is 51200 bytes (100 blocks).
The volume number is 1.
The backup date is: Mon Sep 24 05:12:02 EDT 2012
Files are backed up by name.
The user is root.
x 306498 /tmp/viosback.vlog_cluster.tar.gz
The total size is 306498 bytes.
The number of restored files is 1.
$ cluster -rmnode -clustername vlog_cluster -hostname p6-570vio1
Partition p6-570vio1 has been removed from the vlog_cluster cluster
For reference, the current virtual log configuration is displayed by using the lsvlrepo and
lsvlog commands. The Virtual I/O Server configuration is backed up by using the viosbr
-backup -clustername command.
The backup command is used to write the configuration backup to removable media. The
virtual logs are then removed by using the rmvlog command. The lsvlog command is
used to show that all virtual logs are removed.
The backup procedure that is described in “Backing up configuration data” on page 236 is
performed. By using the steps described previously, the repository disk is identified as
hdisk8. The Virtual I/O Server MTM is 9117-MMA02101F170. The Partition Number is 2.
Finally, the lsvlog command is used to show that all virtual log devices are restored.
If you use both local virtual logs and shared storage pool virtual logs, use the viosbr
-restore -clustername command to restore the virtual log repository, virtual log, and
virtual log target device configuration.
The restoration procedure uses the restore command to restore the backup file and the
local virtual log configuration from removable media, followed by a four-step process to
restore data from the cluster backup file. The four steps are the same steps as described
in the previous scenario, in which shared storage pool virtual logs are restored.
Example 6-52 shows a full backup, modify, and restore procedure.
Path:
/var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/
Maximum Log Files: 1
Maximum Log File Size: 104857600
Maximum State Files: 2
Maximum State File Size: 1048576
$ lsvlog
Client Name Log Name UUID VTD
s1 syslog 99b977dec96860fb8e7938a15a4a09a2 vhost0/vtlogs1
s1 audit 99b977dec96860fbdc3cbfd48a76c44d vhost0/vtlogs2
s2 syslog 000000000000000068fe15551286c088 vhost1/vtlog0
s2 audit 0000000000000000a22822a0bb3b67bb vhost1/vtlog1
$ viosbr -backup -clustername vlog_cluster -file /tmp/viosback
Backup of node p6-570vio1 successful
Backup of this node (p6-570vio2) successful
$ find /tmp/viosback.vlog_cluster.tar.gz /vlog6/config | backup -ivq
Backing up to /dev/rfd0.
Cluster 51200 bytes (100 blocks).
Volume 1 on /dev/rfd0
a 283608 /tmp/viosback.vlog_cluster.tar.gz
a 0 /vlog6/config
a 312 /vlog6/config/000000000000000068fe15551286c088.vlog
a 312 /vlog6/config/0000000000000000a22822a0bb3b67bb.vlog
The total size is 284232 bytes.
Backup finished on Mon Sep 24 06:26:01 EDT 2012; there are 600 blocks on 1 volumes.
$ rmvlog -dbdata -u 99b977dec96860fb8e7938a15a4a09a2
vtlogs1 deleted
Virtual log 99b977dec96860fb8e7938a15a4a09a2 deleted.
Fileset access removed. Freeing data owned by fileset...
done
Log files deleted.
$ rmvlog -dbdata -u 99b977dec96860fbdc3cbfd48a76c44d
vtlogs2 deleted
Virtual log 99b977dec96860fbdc3cbfd48a76c44d deleted.
Fileset access removed. Freeing data owned by fileset...
done
Log files deleted.
$ rmvlog -dbdata -u 000000000000000068fe15551286c088
vtlog0 deleted
Virtual log 000000000000000068fe15551286c088 deleted.
Log files deleted.
$ rmvlog -dbdata -u 0000000000000000a22822a0bb3b67bb
vtlog1 deleted
Virtual log 0000000000000000a22822a0bb3b67bb deleted.
Log files deleted.
$ lsvlog
$ restore -vq
New volume on /dev/rfd0:
Cluster size is 51200 bytes (100 blocks).
The volume number is 1.
The backup date is: Mon Sep 24 06:26:01 EDT 2012
Files are backed up by name.
The user is root.
x 283608 /tmp/viosback.vlog_cluster.tar.gz
x 0 /vlog6/config
x 312 /vlog6/config/000000000000000068fe15551286c088.vlog
x 312 /vlog6/config/0000000000000000a22822a0bb3b67bb.vlog
The total size is 284232 bytes.
The number of restored files is 4.
"
$ viosbr -restore -clustername vlog_cluster
-file /tmp/viosback.vlog_cluster.tar.gz
-subfile vlog_clusterMTM9117-MMA02101F170P2.xml
vtlogs1 Available
vtlogs2 Available
vtlog1 Available
vtlog0 Available
Backedup Devices that are unable to restore/change
==================================================
First, the current virtual log configuration is displayed by using the lsvlrepo and lsvlog
commands. The Virtual I/O Server configuration is backed up by using viosbr -backup
-clustername. The backup command is used to write both the configuration backup and
the files in the local virtual log’s config subdirectory to removable media.
The virtual logs are then removed by using the rmvlog command. The lsvlog command is
used to show that all virtual logs are removed.
The backup procedure that is described in “Backing up configuration data” on page 236 is
then performed. By using the steps described previously, the repository disk is identified
as hdisk8. The Virtual I/O Server MTM is 9117-MMA02101F170. The Partition Number is
2.
Finally, the lsvlog command is used to show that all virtual log devices are restored.
This section described Trusted Logging backup scenarios for local and shared storage pool
configurations. The Virtual I/O Server backup and restore commands were used to place the
backups onto removable media. Other options are available, such as mounting a remote
Network File System (NFS) export by using the mount command and placing the required files
onto it.
6.5.9 Deleting virtual logs and virtual log target devices
Virtual log devices on the client LPAR can be deleted by using the rmdev command in the
same manner as any AIX device. Example 6-53 shows a useful combination of commands
that can be used to delete all virtual log devices. Query the list of devices by using the lsdev
command, and use the xargs utility to run rmdev -d on each device in turn.
Example 6-53 Use lsdev, xargs, and rmdev to remove virtual log devices on client LPAR
$ lsdev -F name -t vlog | xargs -L1 rmdev -d -l
Virtual logs and virtual log target devices can be deleted from within the Virtual I/O Server
with the rmvlog command. As described in 6.1.3, “Virtual logs” on page 202, Trusted Logging
connects virtual logs to client LPARs by attaching them to virtual log target devices.
The rmvlog command features the following modes of operation. Each mode removes a
different amount of the specified virtual log infrastructure:
no option Disable virtual log target device
When used without other options beyond the specification of a virtual log target
device (with -dev) or a virtual log (with -uuid), the virtual log target device (or
the device that is associated with the specified virtual log) is moved into the
Disabled state. Therefore, the client LPAR experiences errors if it attempts to
write to it. This behavior is consistent with the rmdev command when used with
no other command options. Specifying a virtual log that does not have an
associated target device results in an error.
-d Delete virtual log target device
When used with the -d option, the virtual log target device that is specified with
the -dev option (or associated with the virtual log that is specified with the -uuid
option) is deleted. Therefore, the client LPAR experiences errors if it attempts to
write to it. This behavior is consistent with the rmdev command when used with
the -d command option. Specifying a virtual log that does not have an
associated target device results in an error.
-db Delete virtual log and associated target device if it exists
When used with the -db option, the virtual log target device that is specified with
the -dev option (or associated with the virtual log that is specified with the -uuid
option, if one exists) is deleted, along with the associated virtual log. Therefore,
the client LPAR experiences errors if it attempts to write to it, and the properties
that are associated with the virtual log (log name, client name, and log and
state file counts and sizes) are lost. The directory that contains the log and
state files is retained. The -dbdata command option must be used instead of
the -db command option if you want to remove this directory.
-dbdata Delete virtual log, target device, and client data
When used with the -dbdata option, the behavior is the same as with the -db
option, except that the directory that contains the log and state files is also
removed.
These four scopes are summarized in Figure 6-8 on page 246, which also shows the
components that are removed for each of the four possible removal options.
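As a hedged summary of these four scopes (the UUID is a placeholder, and the -u form of the UUID
option matches the earlier examples in this chapter), the invocations increase in scope as
follows:
$ rmvlog -u <uuid>
$ rmvlog -d -u <uuid>
$ rmvlog -db -u <uuid>
$ rmvlog -dbdata -u <uuid>
The first command only disables the associated target device, the second deletes the target
device, the third also deletes the virtual log definition, and the last also deletes the log and
state file data.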
6.6 Troubleshooting
This section describes the following common incorrect configurations and complex
interactions that can occur during the configuration and deployment of Trusted Logging:
“The following device packages are required” error occurs when a virtual log device is
detected on the client LPAR.
If the error message that is shown in Example 6-54 occurs when new virtual log devices
are detected, the PowerSC Trusted Logging package is not installed on the client LPAR.
For more information about installing the package, see 6.4.1, “Installing the Client LPAR
component” on page 218.
Example 6-54 Error message on the client LPAR with no PowerSC Trusted Logging
lpar2(root)/> cfgmgr
Method error (/usr/lib/methods/cfg_vclient -l vscsi1 ):
0514-040 Error initializing a device into the kernel.
cfgmgr: 0514-621 WARNING: The following device packages are required for device
support but are not currently installed.
devices.vscsi.tm
Deleting log files on the Virtual I/O Server does not free up as much disk space as you
expected.
As described in 6.1.4, “Virtual log directory and file structure” on page 204, virtual logs
store client and state data in a series of rotating log files.
When connected to a client LPAR, the virtual log target device driver in the Virtual I/O
Server holds the current log file and the current state file open for writing so that incoming
messages can be quickly written to disk.
However, file system semantics dictate that when a file is removed, space is not freed
back to the file system until all processes close the file. As a result, the removal of log or
state files directly with the rm command does not free up the space used by the current log
and state files until the virtual log target device closes the file. Space that is used by other
log and state files is freed immediately, as expected. If a virtual log is not in use by a virtual
log target device, this issue does not occur and space is freed immediately.
Furthermore, the virtual log device driver in the Virtual I/O Server continues to write new
log messages to the opened file. These new messages are inaccessible because the file
is deleted and not visible from the file system.
For these reasons, deleting log and state files manually is not advised. It is better to
configure the log and state file counts and sizes correctly so that the disk space usage of a
virtual log remains acceptable.
However, it is possible to reclaim the space that you want without causing I/O errors on the
client LPAR. Instruct the virtual log target device to close and reopen the log and state files,
which causes the file system to reclaim the space. Example 6-55 shows the problem. It then shows
how the problem can be solved by using the chvlog command to instruct the virtual log target
device to reinitialize, which occurs without failing any I/O on the client LPAR.
Example 6-55 Using chvlog to free space that is used by opened state and log files
$ df -m /vlogs
Filesystem MB blocks Free %Used Iused %Iused Mounted on
/dev/fslv00 2048.00 1614.16 22% 18 1% /vlogs
$ lsvlog -detail -u 000000000000000068fe15551286c088
Client Name: s2
Example 6-56 Multiple writers try to open the same virtual log on the client LPAR
The requested resource is busy.
ksh: /dev/vlog0: 0403-005 Cannot create the specified file.
If you can access the Virtual I/O Server, the state messages can be inspected to identify
which process has the file open. For more information about how to interpret the state
file messages, see 6.3.3, “Messages that are written to the state files” on page 213.
When the auditpr command is used to view audit records on the Virtual I/O Server, the
user and group names are incorrect.
Because the auditpr command uses the local user and group files to map the IDs in the
binary audit logs to string representations, the incorrect strings are used if the Virtual I/O
Server has a different set of users and groups from the client LPAR on which the audit log is
generated. Use the -r command option to the auditpr command to suppress the
conversion of IDs to names, as described in 6.5.3, “Accessing virtual log data on the
Virtual I/O Server” on page 220.
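As an illustrative sketch only (the path is not taken from the earlier examples; it merely
follows the naming pattern that is shown in Example 6-41 for the client named s2), the raw audit
data can be piped through the auditpr command with the -r option from the Virtual I/O Server
root shell:
# cat /var/vio/vlogs/s2/audit/s2_audit.000 | auditpr -r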
Changes to a virtual log configuration take effect on the Virtual I/O Server, but the
changes are not visible on the client LPAR by using the lsattr command.
The lsattr command queries the device for attributes only when it is first detected. To
refresh this list, remove the virtual log device by using the rmdev -d -l command and start
a detection of devices again with the cfgmgr command.
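A minimal sketch of this refresh on the client LPAR, assuming that the device in question is
vlog1:
$ rmdev -d -l vlog1
$ cfgmgr
$ lsattr -El vlog1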
Changes to shared storage pool virtual log configuration do not take effect on all Virtual
I/O Servers in the cluster.
When the configuration of a virtual log in a shared storage pool is modified by using the
chvlog command, the change is communicated to virtual log target devices that run on the
Virtual I/O Server on which the change is requested. The change is not communicated to
virtual log target devices that run on other Virtual I/O Servers in the cluster.
Therefore, virtual log target devices that provide multiple paths to a client LPAR can get
out of sync, which results in unwanted behaviors, such as a path not respecting an
updated log file size change.
The solution is to force a reconfiguration of the device on the Virtual I/O Servers that
provide alternative paths for the same virtual log by issuing a chvlog -state enabled
command. This command does not change any properties of the virtual log. Instead, it
instructs running virtual log target devices on the Virtual I/O Server on which the
command is run to reload their configuration from the virtual log.
Example 6-57 shows how a configuration change on the vios1 Virtual I/O Server with
chvlog (setting the number of log files to 5 by using the -lfs command option) is
propagated to the vios2 Virtual I/O Server by using the chvlog -state enabled command.
Example 6-57 also shows how to ensure that a change to a virtual log's configuration is
propagated to both paths in a shared storage pool configuration.
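The propagation step itself is short. As a hedged sketch (the UUID is a placeholder, and the -u
option follows the UUID option form that is used by the other virtual log commands in this
chapter), run the following command on the second Virtual I/O Server after the attribute change
is made on the first one, so that its running virtual log target devices reread the
configuration:
$ chvlog -state enabled -u <uuid>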
“Fileset ... is actively being used and cannot be deleted” error occurs when you
remove a virtual log from a shared storage pool.
Error messages similar to the messages that are shown in Example 6-58 are shown when
the virtual log is in use by another Virtual I/O Server. This situation is not easily detectable
because the lsvlog command shows only virtual log target devices on the Virtual I/O
Server on which the command is run.
Example 6-58 Error message when trying to remove a virtual log that is still in use
rmvlog Error:
Could not remove logs from the repository.
Virtual log 99b977dec96860fbbd4ede0f5be6a540 deleted.
0967-030 Fileset /var/vio/SSP/vlog_cluster/D_E_F_A_U_L_T_061310/vlogs/s1/logB
is actively being used and cannot be deleted.
Example 6-58 also shows the error message when an attempt is made to remove a virtual
log with the rmvlog command and the virtual log is still in use by another virtual log target
device on a different Virtual I/O Server.
To complete the deletion of the virtual log, locate the virtual log target device that is using
the log and remove it with the rmdev command.
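A hedged sketch of that cleanup (the UUID and the target device name are placeholders; the
lsvlog command must be run on each Virtual I/O Server in the cluster because it lists only local
target devices):
$ lsvlog -u <uuid>
$ rmdev -dev vtlogs1
$ rmvlog -dbdata -u <uuid>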
“Client LPAR is not accessible for VSCSI adapter” error occurs when you create a
virtual log with the mkvlog command.
Errors similar to the message that is shown in Example 6-59 are produced when you create a
virtual log and attach it to a client LPAR that is not running.
Example 6-59 Error when creating a virtual log and attaching it to inactive client LPAR
mkvlog Error:
Client LPAR is not accessible for VSCSI adapter vhost0. Use -client option to
specify a client name for the new Virtual Log.
Starting the client LPAR and waiting until its operating system is fully started rectifies this
problem. Alternatively, use the -client command option to specify the client name
manually.
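As a hedged sketch (the log, client, and adapter names are placeholders, and the option names
other than -client, which is named in the error message in Example 6-59, are assumptions), the
manual form looks similar to the following command:
$ mkvlog -name syslog -vadapter vhost0 -client s2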
Note: The use of firewalls, such as a firewall that is activated with the viosecure
-firewall command, can cause the shared storage pool cluster to malfunction.
One way to see whether the cluster is functioning properly is to use the lssp command to
display the shared storage pools that the Virtual I/O Server can access. Example 6-60
shows the behavior of the lssp command when the shared storage pool cluster is working
properly. It also shows the result when the shared storage pool cluster is impeded by the
enablement of the Virtual I/O Server firewall.
Example 6-60 The lssp command: A shared storage pool cluster and then a firewall
$ lssp -clustername vlog_cluster
POOL_NAME: vlog_ssp
POOL_SIZE: 20352
FREE_SPACE: 19831
TOTAL_LU_SIZE: 0
TOTAL_LUS: 0
POOL_TYPE: CLPOOL
POOL_ID: FFFFFFFFAC1014470000000050535277
$ viosecure -firewall on
$ lssp -clustername vlog_cluster
Unable to connect to Database
Unable to connect to Database
6.7 Conclusion
In this chapter, we introduced the Trusted Logging concept and described how it allows log
data to be consolidated in a secure fashion with minimal configuration.
Many standards, such as the Payment Card Industry Data Security Standard (PCI DSS),
require the secure storage of log data. Trusted Logging is of interest to any organization that
is subject to regulatory compliance.
Even if compliance is not a direct concern, the ability of Trusted Logging to consolidate log
data within a system (by using local virtual logs) or across a data center (by using shared
storage pools) provides enhanced manageability when compared with other means of log
collection and analysis.
Chapter 7. Trusted Boot
This chapter provides an overview of Trusted Boot, including its reference architecture, how to
plan for implementation and installation, and troubleshooting information.
Numerous ways of gaining assurance are available. Regulatory bodies, certifications, and
periodic audits or inspections can all be used to help place trust in a cloud provider. However,
they cannot provide continual, definitive proof that everything is all right.
Consider the disk image that belongs to your virtual machine (VM). How can you know for
certain that your VM boots from the correct device and the disk image is not tampered with?
The nature of the cloud means that this question must be answered remotely, which adds the
following complications:
How can I be sure that I am talking to the correct machine and not a man-in-the-middle
machine?
How do I know that the response I receive is honest and not manipulated to tell me what I
want to hear?
Trusted Boot, which is part of the IBM PowerSC Standard Edition, provides a definitive
answer to these questions. Trusted Boot is based on the Trusted Computing Group's Trusted
Platform Module (TPM) technology. It scrutinizes every step of the boot process, taking
secure measurements (cryptographic hashes) of the software and recording them in a virtual
TPM (vTPM). Recording data in the vTPM is a one-way street: after a value is written, it can
be retrieved, but it cannot be modified or overwritten.
The cryptographic strength of the measurements, coupled with the vTPM capabilities, makes it
impossible to falsify a measurement. Trusted Boot forms an unbreakable chain of trust for
every step of the boot process. For Power Systems, this chain starts at the hypervisor,
continues through the partition firmware, and into AIX and the application layer.
Each link in the chain is responsible for measuring the next link and locking this measurement
away in the vTPM, where it cannot be tampered with. For AIX, the boot image is inspected and
analyzed, and its measurements are locked away where AIX cannot touch them, before AIX has a
chance to run a single instruction of its own code. If the boot image on the disk is modified,
Trusted Boot is aware of this change (see Figure 7-1).
Figure 7-1 PowerVM hypervisor interrogates the AIX boot image before it runs
7.2 Component architecture
It is important to understand the integrity of the boot process and how to classify the boot as a
trusted boot or a non-trusted boot.
You can configure a maximum of 60 vTPM-enabled logical partitions (LPARs) for each
physical system by using the Hardware Management Console (HMC). When configured, the
vTPM is unique to each LPAR. When used with the AIX Trusted Execution technology, the
vTPM provides security and assurance to the following partitions:
Boot image on the disk
Entire operating system
Application layers
An administrator can view trusted and non-trusted systems from a central console that is
installed with the openpts verifier that is available on the AIX expansion pack. The openpts
console manages one or more Power Systems servers. It also monitors or attests the trusted
state of AIX systems throughout the data center. Attestation is the process where the verifier
determines (or attests) if a collector performed a trusted boot.
A partition is said to be trusted if the verifier successfully attests the integrity of the
collector, that is, if the measurements that are recorded within the vTPM match a reference set
that is held by the verifier. The verifier is the remote partition that determines whether a
collector performed a trusted boot. The collector is the AIX partition that has a vTPM attached
and the Trusted Software Stack (TSS) installed.
A trusted boot state indicates whether the partition booted in a trusted manner. This
statement is about the integrity of the system boot process and does not indicate the current
or ongoing level of the security of the system.
A partition enters a non-trusted state if the verifier cannot successfully attest the integrity of
the boot process. The non-trusted state indicates that some aspect of the boot process is
inconsistent with the reference information that is held by the verifier. The possible causes for
a failed attestation include booting from a different boot device, booting a different kernel
image, and changing the existing boot image.
A specialized hardware device that is named TPM is typically used to store these
measurements. A chain of trust is built by having each step of the boot process measure the
next, starting with the code run at power-on.
After these measurements are made and the operating system is fully booted, a user can
request the list of measurements from the VM, which are retrieved from the secure store. The
user can then check this list of measurements against a set of values that are known to be
good to attest that the software components can all be trusted.
The goal of Trusted Boot is to bring this TPM and attestation capability to the POWER
platform. POWER systems do not include a hardware TPM component. The IBM
Watson® Research Lab developed software to virtualize the functionality of a TPM. This
vTPM software is used to provide the secure storage for the measurements of a POWER
LPAR’s Trusted Boot.
Each LPAR on a system must measure its boot sequence independently so each LPAR must
have its own vTPM device. Adjuncts are used to provide the vTPM devices to the LPAR.
Adjuncts provide a reusable, lightweight capability of presenting a hardware device to an
LPAR.
The following prerequisites must be met to provide Trusted Boot for POWER LPARs:
Modify the entire POWER boot process to perform measurements at every stage.
Provide a secure way of storing these measurements.
Provide the facility for users to query these measurements.
Some technical considerations influence the high-level design of the solution. One primary
consideration is that hardware TPM chips contain an amount of storage space (NVRAM) that
that persists across reboots. Therefore, vTPM devices are required to have comparable
persistent storage.
POWER systems contain their own piece of NVRAM, but it cannot scale to provide storage for
every possible VM's vTPM. Therefore, a single machine is limited to 60 partitions that can
contain a vTPM device.
7.3 Detailed implementation
The installation of Trusted Boot involves configuring the collector and the verifier. The
hardware and software configurations that are required to install Trusted Boot are shown in
Figure 7-3.
The Trusted Boot/vTPM-enabled LPAR is called the collector. The LPAR that performs the
attestation is called the verifier. The collector and the verifier have the following
characteristics:
Collector:
– IBM POWER7® hardware that runs on a 740 firmware release.
– IBM AIX 6 with Technology Level 7 or IBM AIX 7 with Technology Level 1.
– Hardware Management Console (HMC) version 7.4 or later (not shown in Figure 7-3).
– The partition is configured with the vTPM and a minimum of 1 GB memory.
– Secure Shell (SSH) is required, specifically OpenSSH or equivalent.
– The tcsd daemon must start at boot time.
– PowerSC vTPM device driver is from the PowerSC Standard Edition CD.
– Trusted Platform Module (TPM) tools are installed by default by AIX.
– terenew is installed by default by AIX.
– bosrenew is installed by default by AIX.
Verifier:
– SSH, specifically OpenSSH or equivalent.
– Network connectivity (through SSH) to the collector.
– Java 1.6 or later to access the openpts console from the graphical interface.
The OpenPTS verifier can be accessed from the command-line interface (CLI) and the
graphical user interface (GUI) that is designed to run on a range of platforms.
You must consider certain prerequisites before you migrate a partition that uses a vTPM. An
advantage of a vTPM over a physical TPM is that it allows the partition to move between
systems while retaining the vTPM. To securely migrate the LPAR, the firmware encrypts the
vTPM data before transmission.
To ensure a secure migration, the following security measures must be implemented before
migration:
Enable IPsec on the Virtual I/O Server that performs the migration.
Set the trusted system key through the HMC to control the managed systems that can
decrypt the vTPM data after migration. The migration destination system must have the
same key as the source system to successfully migrate the data.
7.4 Installation
In this section, we describe how to install PowerSC Trusted Boot on the collector and the
verifier.
Be aware: Collector and verifier components cannot be installed on the same system
(LPAR or VM).
7.4.1 Installing the collector
Complete the following steps to install the collector:
1. Install the openpts.collector package from the AIX base CD by using the smit or
installp command, as shown in Figure 7-5. The required user input is shown in bold.
Figure 7-5 The openpts.collector package from the AIX base CD
2. Install the powerscStd.vtpm package from the PowerSC Standard Edition CD by using the smit
or installp command, as shown in Figure 7-6.
Figure 7-6 The powerscStd.vtpm from the PowerSC CD
7.4.2 Installing the verifier
Complete the following steps to install the verifier:
1. Use the fileset from the PowerSC Standard Edition 1.1.2.0 media.
2. Install the openpts.verifier by using the smit or installp command. This package
installs the command-line version and graphical interface version of the verifier (see
Figure 7-7). The required user input is shown in bold.
Figure 7-7 The openpts.verifier from the AIX expansion pack
Important: Use the ssh-keygen command only once on the verifier. Otherwise, the
private key (id_rsa) and public key (id_rsa.pub) are replaced. Use the same public key
for all clients.
7.5.2 Enabling Virtual Trusted Platform Module (vTPM)
vTPM must be turned on for Trusted Boot to work properly with the collector and verifier.
Complete the following steps to enable vTPM:
1. Shut down the LPAR by using the shutdown command on the chosen LPAR.
2. Access the LPAR’s partition properties by right-clicking the chosen LPAR from the HMC.
Click Properties to open the window (see Figure 7-8).
To initialize the openpts collector for the first time, run the ptsc command from the collector:
ptsc -i
To enroll a system from the command line, use the following command from the verifier:
openpts -i <hostname>
Information about the enrolled partition is in the $HOME/.openpts directory. Each new partition
is assigned with a unique identifier during the enrollment process. Information that relates to
the enrolled partitions is stored in the directory that corresponds to the unique ID.
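As an illustrative check (the UUID shown is simply the collector UUID that appears in the
attestation examples later in this chapter; your enrollment produces its own identifier, and
other verifier files might also be present), list the per-partition directories on the verifier:
verifier> # ls $HOME/.openpts
8572995a-f865-11e1-a0d7-2ae6a0138902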
To enroll a system from the graphical interface, complete the following steps:
1. Start the GUI by using the /opt/ibm/openpts_gui/openpts_GUI.sh command.
2. Select Add & Enroll from the navigation menu on the left.
3. Enter the host name and the SSH credentials of the system.
4. Click Add & Enroll.
7.5.4 Attesting a system
To query the integrity of a system boot, use the following command from the verifier:
openpts <hostname>
To attest a system from the graphical user interface, complete the following steps:
1. Start the GUI by using the /opt/ibm/openpts_gui/openpts_GUI.sh command.
2. Select a category from the navigation menu on the left under All.
3. Select one or more systems to attest.
4. Click Attest.
2. Check the environment after the change by refreshing the status of all servers. Then, use
the openPTS GUI to browse to File → Refresh from the drop-down menu.
Figure 7-10 shows the updated status of our simulated environment after the change and
refresh.
You can also check the change by using the openpts command on the command line, as
shown in Example 7-11.
Example 7-11 Checking the client collector2 by using the command line
verifier> # openpts collector2
Target: collector2
Collector UUID: 8572995a-f865-11e1-a0d7-2ae6a0138902 (date:
2012-09-06-20:57:51)
Manifest UUID: c61a90a4-01d2-11e2-b16d-2ae6a0138902 (date: 2012-09-18-20:52:35)
username(ssh): default
port(ssh): default
policy file: //.openpts/8572995a-f865-11e1-a0d7-2ae6a0138902/policy.conf
property file: //.openpts/8572995a-f865-11e1-a0d7-2ae6a0138902/vr.properties
integrity: valid
---------------------------------------------------------
New Manifest UUID: 5hcwTAHXEeKAPyrmoBOJAg== (date: 2012-09-18-21:29:16)
A new reference manifest exists. Update? [Y/n]
n
Keep current manifest
verifier> #
3. Reboot collector2 to force a boot with an untrusted boot image. To reboot the client, use
the shutdown command, as shown in Example 7-12.
Example 7-12 Rebooting the client to use the changed boot image
collector2> # shutdown -Fr
SHUTDOWN PROGRAM
Tue Sep 18 16:18:22 EDT 2012
Wait for 'Rebooting...' before stopping.
Error reporting has stopped.
Advanced Accounting has stopped...
Process accounting has stopped.
nfs_clean: Stopping NFS/NIS Daemons
0513-004 The Subsystem or Group, nfsd, is currently inoperative.
0513-044 The biod Subsystem was requested to stop.
0513-044 The rpc.lockd Subsystem was requested to stop.
4. Check the environment after the reboot by refreshing the status of all servers. Use the
openPTS GUI to browse to File → Refresh from the drop-down menu.
Figure 7-11 shows the status of our simulated environment after the reboot.
You can also check the change by using the openpts command on the command line, as
shown in Example 7-13.
Example 7-13 Checking the client after the reboot by using the command line
verifier> # openpts collector2
Target: collector2
Collector UUID: 8572995a-f865-11e1-a0d7-2ae6a0138902 (date:
2012-09-06-20:57:51)
Manifest UUID: 859909fa-f865-11e1-a0d7-2ae6a0138902 (date: 2012-09-06-20:57:51)
username(ssh): default
port(ssh): default
policy file: //.openpts/8572995a-f865-11e1-a0d7-2ae6a0138902/policy.conf
property file: //.openpts/8572995a-f865-11e1-a0d7-2ae6a0138902/vr.properties
integrity: invalid
0 Missing Reference Manifest (RM)
1 Collector hostname = collector2
2 Collector UUID = 8572995a-f865-11e1-a0d7-2ae6a0138902
3 Collector RM UUID = ecc902b2-01cd-11e2-9cbd-2ae6a0138902
4 Missing Reference Manifest directory =
//.openpts/8572995a-f865-11e1-a0d7-2ae6a0138902/ecc902b2-01cd-11e2-9cbd-2ae6a01
38902
5 Collector is using another Reference Manifest (RM)
6 Collector hostname = collector2
7 Collector UUID = 8572995a-f865-11e1-a0d7-2ae6a0138902
8 Previous RM UUID = 859909fa-f865-11e1-a0d7-2ae6a0138902, timestamp =
2012-09-06-20:57:51
9 Current RM UUID = ecc902b2-01cd-11e2-9cbd-2ae6a0138902, timestamp =
2012-09-18-20:17:52
10 [RM00-PCR03-PCR3_START] IR validation by RM has failed
11 [RM00-PCR04-EV_EVENT_IPL_LOOP_0] IR validation by RM has failed
12 [QUOTE] verification of PCR Composite has failed, (tscd - bad FSM
configuration in /etc/ptsc.conf)
13 [POLICY-L004] tpm.quote.pcr.3 is G0rSra54/+uuwiS/cI3YJjLiuPs=, not
3ehhn8LqbXgDQ6nP/GdHltKcisw=
14 [POLICY-L014] ibm.pfw.pcr.3.integrity is missing
15 [POLICY-L015] ibm.pfw.pcr.4.integrity is missing
16 [POLICY-L020] tpm.quote.pcrs is invalid, not valid
verifier> #
5. To accept the changed boot image and make the server trusted again, re-enroll the collector
from the verifier by using the openpts -f -i command, as shown in Example 7-14.
Example 7-14 Making the server trusted again by using the command line
verifier> # openpts -f -i collector2
Target: collector2
Collector UUID: 8572995a-f865-11e1-a0d7-2ae6a0138902
Manifest UUID: e617304c-01d7-11e2-803f-2ae6a0138902
Manifest[0]:
//.openpts/8572995a-f865-11e1-a0d7-2ae6a0138902//e617304c-01d7-11e2-803f-2ae6a0
138902/rm0.xml
Manifest[1]:
//.openpts/8572995a-f865-11e1-a0d7-2ae6a0138902//e617304c-01d7-11e2-803f-2ae6a0
138902/rm1.xml
Configuration: //.openpts/8572995a-f865-11e1-a0d7-2ae6a0138902/target.conf
Validation policy: //.openpts/8572995a-f865-11e1-a0d7-2ae6a0138902/policy.conf
verifier> #
You must first consult the information center for Trusted Boot, which contains a brief
troubleshooting section that covers the most obvious attestation issues. This section is
intended to help you in cases where a deeper analysis is needed and a deeper understanding of
what might go wrong with each component is required.
This troubleshooting section first describes some of the pitfalls that might be encountered
during the configuration and normal operation of a Trusted Boot LPAR and how to avoid them.
The following sections describe some of the tools and information to help you successfully
debug attestation issues. The term collector refers to the Trusted Boot/vTPM-enabled LPAR;
the term verifier refers to the LPAR that performs the attestation.
Root cause: vTPM missing from LPAR configuration
Key indicators: On the collector, the vTPM device is missing from /dev.
Fix: From the HMC, shut down the LPAR, enable a vTPM, and reboot.

Root cause: vTPM device driver not installed
Key indicators: On the collector, the vTPM device is missing from /dev.
Fix: Install the powerscStd.vtpm package.

Root cause: vTPM device removed by DLPAR operation
Key indicators: On the collector, the vTPM device is missing from /dev.
Fix: If the LPAR runs on a system that supports vTPM, from the HMC, shut down the LPAR and
activate it with the profile.

Root cause: openpts.collector package not installed
Key indicators: On the collector, /usr/bin/ptsc is missing from the file system.
Fix: From the AIX base pack, install the openpts.collector package.

Root cause: AIX reinstalled on a vTPM-enabled LPAR
Key indicators: On the collector, the file /var/tss/lib/tpm/system.data is missing or empty.
This file is used by the tcsd daemon to store important keys that are used to communicate with
the vTPM. If destroyed, it cannot be re-created without destroying the vTPM.
Fix: From the HMC, shut down the LPAR, disable the vTPM, re-enable the vTPM, and activate the
LPAR.

Root cause: SSH not installed on the vTPM-enabled LPAR
Key indicators: On the verifier, SSH fails to connect to the collector LPAR.
Fix: Install SSH on the collector.

Root cause: SSH not installed on the verifier
Key indicators: On the verifier, SSH fails to connect to the collector LPAR.
Fix: Install SSH on the verifier.

Root cause: Insufficient disk space on the verifier
Key indicators: The home directory that corresponds to the user performing the attestation might
not have enough space to store the OpenPTS verifier configuration and log files.
Fix: Increase the size of the file system on which the user's home directory resides.

Root cause: User entered input at the partition firmware prompt before booting
Key indicators: The verifier fails to validate PCR 5.
Fix: Reboot the collector the same way it was booted originally, when the reference information
was collected by the verifier. In most cases, reboot normally without entering the SMS menu or
the firmware prompt.
7.6.2 Diagnosis
Many ways are available to diagnose the cause of a failed attestation or an integrity invalid
result, such as checking that important configuration files are in the correct state, viewing
the output of various log files, and running various tools on the LPAR.
Configuration files
In general, failed attestations often occur because of configuration issues where the vTPM is
not set up correctly or the correct software packages are not installed. However, occasionally
it might be because of an issue with the configuration files on the collector, as shown in
Table 7-3 on page 270.
These files are not expected to change during the life of the collector because they are tuned
to work on AIX LPARs correctly for all possible LPAR configurations. If these files are not
identical to the files that are in the corresponding packages, the attestations might not work
correctly.
Log files
When you debug failed attestation or integrity invalid issues, it is sometimes necessary to
investigate the contents of various log files on the collector and verifier (see Table 7-4).
/var/adm/ras/bootlog (collector): This file is the AIX initial boot log, which is generated
early in the boot process. It shows the vTPM device (for example, /dev/vtpm0) being initialized
correctly. If not, a problem likely exists loading the device driver.
/var/adm/ras/conslog (collector): This file is the output from many of the scripts and programs
that are run later in the boot process. This file shows output, such as "Waiting for tcsd to
become ready... ok", which means the tcsd daemon started successfully.
/var/adm/ras/trousers/tcsd.err (collector): Shows any errors from the tcsd daemon. If tcsd
started successfully, this file is empty.
Tools
The following tools are available on the collector that can help you address some of the issues
that might occur during attestation:
TPM version information command:
/usr/sbin/tpm_version
When run, this command prints version information that is obtained from the vTPM. If this
command successfully displays the version information, it means that the collector has a
working vTPM and vTPM device driver, and the tcsd daemon is running. Before you debug
any other issue, run this tool first to ensure that it is possible to communicate with
the vTPM.
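A minimal first check on the collector (the ps command is only a generic way to confirm that
the tcsd daemon is running; it is not a PowerSC-specific tool):
collector> # /usr/sbin/tpm_version
collector> # ps -ef | grep tcsd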
Event log:
/usr/bin/iml2text
This command talks to the tcsd daemon to retrieve the event log, which is a record of all
Trusted Boot events that occurred during an AIX boot. The detail that is contained in this
log is used to construct the final PCR values that are tested during attestation.
If the command fails to produce any event log output, it might mean that a problem exists
communicating with the tcsd daemon, or it might indicate that a problem exists generating
the event log at boot time. If the command fails, you can check that the following files are
present on the collector:
– /var/adm/ras/trustedboot.log
This file is a softcopy of the event log that is generated as part of the boot process and
loaded by tcsd at startup. It is initialized by using the /usr/lib/tpm/bin/geteventlog
command early in the boot process.
– /var/adm/ras/teboot.log
This file contains the event log entries that correspond to the Trusted Execution
database, which are generated by the script /etc/rc.teboot.
If either file is missing or empty, it might help to indicate the root of the problem.
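A hedged sketch of that check on the collector, paging the event log output and confirming that
both files exist and are not empty:
collector> # /usr/bin/iml2text | more
collector> # ls -l /var/adm/ras/trustedboot.log /var/adm/ras/teboot.log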
A TPM is a security device, defined by the Trusted Computing Group (TCG), that is used to
securely maintain a record of the boot process of a system. This secure record can then be used
to conditionally release encryption keys (if the record matches an expected value) and to prove
to a third party that the boot of the system ran correctly.
Chapter 8. Trusted Firewall
Trusted Firewall can help you improve performance and reduce network resource
consumption. It also can eliminate the need to use the external network for LPAR-to-LPAR
traffic when these LPARs run on the same server. Trusted Firewall offers the following
benefits:
Saves time and network resources by never going out to a physical network interface
controller (NIC) or to an external firewall.
Keeps virtual traffic local, which improves performance and saves physical resources.
Reduces traffic on the external firewalls significantly.
This chapter describes the Trusted Firewall component architecture, including implementation
and deployment, installation details, how to manage it, and troubleshooting information.
With Virtual I/O Server Version 2.2.1.4, or later, you can configure and manage the Trusted
Firewall feature. By using this feature, LPARs on different VLANs of the same server can
communicate through the shared Ethernet adapters. This section describes the architecture
of Trusted Firewall in more detail. It introduces the important concepts that are required to
build an intuitive understanding of the capabilities of Trusted Firewall and how it works.
A firewall is commonly used to control traffic flow between secure and non-secure networks.
You also can use a firewall to secure one internal network from another within an intranet.
With PowerSC Standard Edition, Trusted Firewall Version 1.1.2 provides a packet filter
firewall, which is also called a network firewall.
Packet firewalls commonly filter traffic by IP address and by TCP or User Datagram Protocol
(UDP) ports. They also incorporate the Internet Control Message Protocol (ICMP). Trusted
Firewall supports IPv4 and IPv6 protocols. Because the IP packet header is inspected, a
packet filter firewall works at Layer 3 of the OSI Stack.
Based on the filtering rules that are configured into the firewall, the firewall typically blocks
these addresses and ports (Deny) unless they are explicitly allowed (Permit).
A packet-filtering firewall usually checks five characteristics of each packet. Because Trusted
Firewall filters inter-VLAN communications, it also checks the VLAN tags of the source and
destination.
A packet is inspected by using the following seven attributes:
Source IP address
Source port number
Destination IP address
Destination port number
IP protocol: UDP, TCP, ICMP, and ICMPV6
VLAN ID of source IP address
VLAN ID of destination IP address
The next section describes how Trusted Firewall manages the denied IP packets.
A denied packet is not allowed to pass through Trusted Firewall directly to the destination
VLAN within the frame. Instead, the packet returns to its default route, which is through the
external network with its associated firewalls or Intrusion Prevention Systems/Intrusion
Detection Systems (IPS/IDS) appliances.
Trusted Firewall sends the denied packet back to its original Shared Ethernet Adapter (SEA).
This SEA then sends the packet through the physical adapter to the external network. The packet
is denied the use of the cross-VLAN capability of Trusted Firewall.
The Deny decision, which is made by Trusted Firewall, means that the IP packet is ineligible,
based on the filtering rules table, to be routed cross-VLAN.
Therefore, the goal of Trusted Firewall Version 1.1.2 is to propose the following paths:
A short path within the frame through the Virtual I/O Servers to IP packets that do not
need to be inspected (Permitted Packets)
A default path (the expected routing to the external network) to IP packets that still require
an examination and inspection (Denied Packets)
Consideration: When Trusted Firewall authorizes packets to go back into the SEA
devices to bridge the VLANs, it opens the shortest path for the permitted packets. On
the other side, the denied packets are sent back to the SEA devices to be exposed to
external firewalls and IPS/IDS appliances. The denied packets are those packets that still
require inspection.
For a system administrator, this deny behavior results in the following benefits:
Trusted Firewall saves network bandwidth and physical network card bandwidth. It
proposes the shortest path only to cross-VLAN secured traffic.
Because Trusted Firewall does not alter the IP traffic in any way, you do not need
extensive logging of the Trusted Firewall activity. In Version 1.1.2, Trusted Firewall does
not provide any logging feature.
The filtering rules definition is simpler and easier. The system administrator focuses on
only the cross-VLAN traffic. The systems administrator wants to grant the shortest route,
which is the secured traffic between VLANs.
The Trusted Firewall is safe to implement and nondisruptive to the network traffic. The
deny behavior does not alter the IP traffic, only the routing paths.
With Trusted Firewall, the IP packets that enter the Virtual I/O Servers (outbound) and the IP
packets that exit the Virtual I/O Servers (inbound-to-external) are not filtered.
Important: The Trusted Firewall filtering scope is limited to only the IP flow between
VLANs that are defined in the same server frame. The outbound traffic and the
inbound-to-external traffic of the server frame are not filtered.
A filtering rule for Trusted Firewall applies to both directions of the IP flow (flow from IP source
to IP destination and from IP destination to IP source). However, it is advised that you create
filtering rules for each IP flow direction.
Filter rules can control many aspects of communications. In Trusted Firewall Version 1.1.2,
the filtering scope is limited to source and destination addresses, source and destination
VLANs, port numbers, and protocol.
By default, Trusted Firewall denies all packets and sends them back to the SEAs to be
externally routed. This network behavior is expected and it is achieved by the Trusted Firewall
default rule (see Table 8-1).
Important: By default, Trusted Firewall denies all packets to cross VLANs within the frame.
All VLAN traffic is exposed to the external network to be inspected.
From the external network, the firewall administrator's tasks follow this path:
1. Define only the cross-VLAN secured traffic: Permit first.
2. Apply the default Denied Traffic rule to the rest of the traffic.
In terms of system administration (rules creation and rules maintenance), this approach
works if the frame handles little secured traffic. The resulting rules table is small and easily
maintained, controlled, and audited. Also, these goals are key in an efficient security
framework.
However, if the frame contains only VLANs that belong to the same level of security, where
most cross-VLAN traffic is secured, the system administrator's task is more difficult. The rules
table is highly populated, difficult to maintain, and prone to maintenance errors. With this
approach, it is difficult to achieve efficiency in terms of security.
When most of the cross-VLAN traffic is secured, the firewall administrator must use the following approach:
1. Define only the non-secured traffic (the packets to be inspected): Deny first.
2. Define generic cross-VLAN authorizations without details.
Table 8-2, Table 8-3 on page 278, Table 8-4 on page 278, and Table 8-5 on page 279 are a
simple planning tool for administrators to gather data about the VLAN connections to allow
through the Trusted Firewall.
Table 8-2 is a filter rules table that is designed as Permit First. It first authorizes the detailed
secured traffic and denies all remaining traffic from the frame in generic form with the default
rule.
In this example, the deny rule between Z-LIFE and U-BOAT must be entered before the
generic permit authorization of VLAN Z - VLAN U, as listed in Table 8-3.
Table 8-4 is a filter rules table that is designed as Deny First: it first denies the detailed
non-secured traffic (to be sent externally) and then authorizes all remaining traffic from the
frame in a generic form. Trusted Firewall does not provide a generic rule to authorize all the
traffic. Therefore, a generic authorization rule must be defined for each VLAN-to-VLAN
connection. In Table 8-4, VLAN C - VLAN D and VLAN E - VLAN F can freely exchange their
IP packets within the frame without external inspection or further control.
The Deny First table also features a special singularity. The following example is based on
Table 8-4 on page 278 and Table 8-5:
VLAN Z - VLAN U are denied the ability to communicate directly (Deny First).
Exception: Two LPARs (Z-LIFE and U-BOAT) can talk together without packet inspection.
In this example, the permit rule between Z-LIFE and U-BOAT must be entered before the
generic deny authorization of VLAN Z - VLAN U. Table 8-5 lists the example in this table
extract.
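If these rules are later entered with the genvfilt command (see 8.5.2, “Configuring the filter
rules” on page 299), the same ordering applies: create the specific permit rule before the
generic deny rule. The VLAN IDs and IP addresses in this sketch are hypothetical placeholders
for VLAN Z, VLAN U, Z-LIFE, and U-BOAT:
$ genvfilt -v 4 -a P -z 20 -Z 30 -s 172.16.20.15 -d 172.16.30.15
$ genvfilt -v 4 -a D -z 20 -Z 30
The first command permits only the Z-LIFE to U-BOAT exception. The second command denies the
remaining VLAN Z - VLAN U traffic so that it stays on its default path to the external network.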
In both approaches (Permit First and Deny First), the order of the rules for singularities is
important. Firewall filtering rules tables often are evaluated by a first-match scan, and
Trusted Firewall is no exception. If a packet is validated by one filter rule, the associated rule
action (Deny or Permit) is taken immediately, and the rules scan stops. If no filter rule
validates the packet, the default action is to deny the packet.
Consideration: The filter rules table must be ordered with respect to the expected scan
order. Rules are processed in order from the top to the bottom of the rules file. By default,
Trusted Firewall uses the first filter rule that matches the packet that it is evaluating.
The study of the VLANs of the frame and the security level that is required in their
connections can help you to choose between the Permit First methodology or the Deny First
methodology to create the filter rules table. It is suggested to prioritize the approach that
results in fewer rules. Fewer rules mean less work to administer and maintain, and the table
is easier to monitor and control.
When these tables are completed, the security policies of the network traffic within the frame
are established. Then, the firewall administrator can use the Trusted Firewall rules
management commands to activate these policies in the Virtual I/O Server.
For more information about these commands, see 8.5.2, “Configuring the filter rules” on
page 299.
How Trusted Firewall is implemented within the Virtual I/O Server is described next.
Because SEA devices act only at OSI Layer 2, a new component is needed for Trusted
Firewall to act as a cross-VLAN router at OSI Layer 3. This component, the Secure Virtual
Machine (SVM), is added to the Virtual I/O Server.
The SVM is implemented as a driver kernel extension. It minimizes the effect on the SEA
device driver by performing VLAN crossing operations on network packets in the SVM kernel
extension. The configuration and setup of Trusted Firewall is performed on the Virtual I/O
Server command line.
SVM enables LPARs on the same system, but on different VLANs, to communicate with each
other by way of the SEA devices.
The SVM kernel extension consists of the following inter-virtual LAN routing functions:
Layer 3 routing
VLANs represent different logical networks. Therefore, a Layer 3 router is required to
connect the traffic between the VLANs.
Network filtering rules
Network filtering rules are required to permit or deny inter-VLAN network traffic. Network
filtering rules can be configured by using the Virtual I/O Server command-line interface.
Figure 8-2 shows the SVM implementation with an inbound network flow. The outbound traffic
that enters the frame is never filtered because it is not initiated by an LPAR in the frame.
Therefore, this traffic is not represented in the figure.
Trusted Firewall ensures the inter-VLAN routing between LPARs in the same frame.
Therefore, the packet filtering rules apply to only the IP addresses that belong to the frame.
Because of the Trusted Firewall implementation, the following configurations are ineligible for
the Trusted Firewall packet filtering:
Redundant SEA Load Sharing
Trunk adapters that are split between Virtual I/O Servers cannot be configured for Trusted
Firewall Filtering (see Figure 8-4).
Figure 8-4 Redundant SEAs for load sharing are not supported
Figure 8-5 Multiple SEAs in multiple Virtual I/O Servers are not supported
Inbound network traffic takes the following forms (as shown in Figure 8-6 on page 285):
LPAR network traffic that goes to the external network.
Trusted Firewall denies the packets if the destination IP address does not belong to the
frame. The deny action returns the packet to the physical network card of the SEA. Therefore,
the traffic remains on its default path to the external network.
LPAR network traffic that goes to another internal VLAN. It is inspected:
– The packet is validated by a filtering rule: it is permitted.
Trusted Firewall resends the packet to the destination SEA. The packet is not exposed
to the external network. It has the benefit of the shortest path. That way, permitted
traffic can bridge different SEAs in the same Virtual I/O Server.
– The packet is not validated by any filtering rule. It is denied.
Trusted Firewall resends the packet to the source SEA to be exposed to the external
network. The deny action returns the packet to the physical network card of the source
SEA. Therefore, the denied traffic remains on its default path to the external network.
Figure 8-6 Description of filtered traffic by SVM
The SEAs with trunk adapters on different Power hypervisor virtual switches
Each trunk adapter is on a different VLAN ID. In this configuration, each SEA still receives
network traffic by using different VLAN IDs as in the previous example when SEAs are on
the same virtual switch.
The same VLAN IDs are reused on the virtual switches. In this case, the traffic for both
SEAs has the same VLAN IDs, as shown in Figure 8-8.
PowerSC versions before version 1.1.1.0 did not include the required fileset to install Trusted
Firewall. Therefore, ensure that you have the PowerSC installation CD or ISO image for
PowerSC Standard Edition Version 1.1.1.0 or later.
PowerSC requires Virtual I/O Server Version 2.2.1.4, or later. Complete the following steps:
1. Log in to your Virtual I/O Server where you want to install Trusted Firewall.
2. Ensure that you are running Virtual I/O Server Version 2.2.1.4 or later by using the
ioslevel command under the restricted shell:
$ ioslevel
2.2.1.4
3. Install any required patches for your Virtual I/O Server and PowerSC Trusted Firewall. It is
an opportunity to review the latest HIPER/Critical patches that were released by IBM. The
patches are available at IBM Support’s Fix Central web page.
Consider the following points:
– For the Virtual I/O Server software, select as Product Group:
Virtualization software and PowerVM Virtual I/O Server
– For the PowerSC Offering, select as Product Group:
Other software and PowerSC Standard Edition1
4. Verify that you have superuser capabilities (PAdmin role) for the Virtual I/O Server where
you are installing Trusted Firewall. Run the lsuser command as shown in the following
example:
$ whoami
padmin
$ lsuser
padmin roles=PAdmin default_roles=PAdmin account_locked=false expires=0
histexpire=0 histsize=0 loginretries=0 maxage=0 maxexpired=-1 maxrepeats=8
minage=0 minalpha=0 mindiff=0 minlen=0 minother=0 pwdwarntime=0 registry=files
SYSTEM=compat
$
8.4 Installation
In the following sections, we describe a step-by-step Trusted Firewall installation and a
step-by-step installation verification.
With Electronic Software Delivery (ESD), you can download IBM i and AIX products when
your software order is processed. You do not have to wait for your software to be delivered to
you.
1 Fix Central is being configured with this area at the time of this writing.
You receive electronic images that are bit-for-bit copies of the software order that you can
download to an IBM i, AIX, or PC. You can burn the images to optical media or install the
electronic images directly onto your system. For a list of products that are available through ESD, see this web page.
ESD is available for PowerSC Standard Edition under the reference “5765-PSE: IBM
PowerSC Std Ed V1.1”.
To order electronic delivery, the software order must include the following feature codes:
United States, EMEA, LA, and AP: feature code 3450
Canada: feature code 3470
Japan: feature code 3471
For more information about registering to use electronic delivery, see this web page.
In this example, we use PowerSC internal images before the product release. Complete the
following steps:
1. To install the fileset, you must escape the Virtual I/O Server restricted shell by using $
oem_setup_env.
2. Download the ISO image of PowerSC Standard Edition and PowerSC Express Edition, as
shown in the following example:
# cd /tmp/ISOIMAGES
# ls
cd.1231A_EXP_PSC.cksum cd.1231A_EXP_PSC.iso cd.1231A_STD_PSC.cksum
cd.1231A_STD_PSC.iso
3. To install from the ISO image of PowerSC Standard Edition, use the loopmount command,
which is introduced in AIX 6.1 TL4. The loopmount command mounts the ISO image as
though it is a CD (burning a CD is unnecessary):
# loopmount -i cd.1231A_STD_PSC.iso -o "-V cdrfs -o ro" -m /mnt
# cd /mnt/installp/ppc; pwd
/mnt/installp/ppc
5. Select Install Software. In the INPUT field (see Figure 8-10), enter the directory of the
licensed program products (LPP) files: /mnt/installp/ppc.
6. Press PF4 to view the ISO image content. To install Trusted Firewall (see Figure 8-11),
select the powerscStd.svm fileset, which is described as “Secure Virtual Machine”.
The last installation pane (see Figure 8-12) shows a successful installation for the
powerscStd.svm.rte fileset at the 1.1.2.0 level.
In this section, we describe how to configure SVM and how to manage the packet filtering
rules.
To load the SVM kernel extension in the Virtual I/O Server, complete the following steps:
1. Load SVM as a kernel extension by using the mksvm command as the padmin role:
$ mksvm
$
2. Verify that a device is created by using the lsdev command. In the output, look for the svm
entry:
$ lsdev -virtual
Note: After the mksvm command is run once, the SVM driver is loaded automatically
by the cfgmgr stage during each restart. The mksvm command can be run under the Virtual
I/O Server restricted shell.
3. Couple the SVM driver with the SEA interfaces to start the packet routing.
By using the vlantfw command options, you can start, stop, and query the status of SVM.
These options are listed in Table 8-6.
Note: When the SVM is active, the Trusted Firewall field value is True. When the SVM
is used as the Trusted Firewall, its capability value is 4.
Note: When the SVM is stopped, the Trusted Firewall field value is False and its
capability value is 0.
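For example, a minimal start-and-check sequence might look like the following sketch. The
status line format follows the vlantfw -q output that is shown later in this chapter; the exact
wording can vary between versions.
Start the SVM Firewall:
$ vlantfw -s
Query the SVM Firewall status:
$ vlantfw -q
vlantfw: TFW=True capability=4
Stop the SVM Firewall when needed, and query the status again:
$ vlantfw -t
$ vlantfw -q
vlantfw: TFW=False capability=0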
We use the command options that are listed in Table 8-7 to display the entries and flush the
content of the SVM Firewall address mapping table.
This SVM registration with all SEA devices causes a configuration constraint when one SEA
device must be removed. SVM must first be unregistered from all SEA devices before any
SEA device is removed. For more information about removing the SVM, see 8.5.3, “Removing
Trusted Firewall” on page 311.
Important: Removing any SEA device before you remove the SVM driver can result in
system failure.
To ensure this Layer 3 routing, SVM must associate each IP address (OSI Layer 3) with its
associated MAC address and VLAN ID (OSI Layer 2). When SVM Firewall starts, it populates
an internal dynamic table with the information that it collected from all the SEA devices.
Table 8-8 lists an example of the SVM address mapping table.
VLAN ID  PVID flag  IP address      MAC address
1        1          172.16.20.121   66:1d:ba:7:38:2
1        1          172.16.20.122   66:1d:b2:c9:1c:2
1        1          172.16.20.123   66:1d:bd:de:51:2
1        1          172.16.20.124   66:1d:bf:53:dc:2
SVM Firewall keeps this table in its internal memory, even when SVM Firewall is stopped.
Consideration: The SVM address mapping table is created empty when the SVM driver is
installed (by using the mksvm command) and is populated when the SVM Firewall is
started (by using the vlantfw -s command).
However, this SVM address mapping table must be refreshed when LPARs have no more
network activity. This lack of activity can be because of LPAR inactivity, LPAR removal, or
LPAR migration. Therefore, this table is dynamic, and the content can be flushed.
Verify how this dynamic table works by completing the following steps. Four LPARs are
running on the server, as shown in Figure 8-14.
Figure 8-14 Virtual I/O Server with Trusted Firewall and four running LPARs
Figure 8-15 Virtual I/O Server with Trusted Firewall and three stopped LPARs
SVM performs a regular lookup of the mapping table entries. To flush the SVM table content,
use the vlantfw -f command. Complete the following steps:
1. Display the table content by using the vlantfw -d command:
$ vlantfw -d
vlantfw: /dev/svm dump dynamic learning IP and MAC: count: 4
0: vid: 1 pvidflag: 1 addr: 172.16.20.121 mac: 66:1d:ba:7:38:2
1: vid: 1 pvidflag: 1 addr: 172.16.20.122 mac: 66:1d:b2:c9:1c:2
2: vid: 1 pvidflag: 1 addr: 172.16.20.123 mac: 66:1d:bd:de:51:2
3: vid: 1 pvidflag: 1 addr: 172.16.20.124 mac: 66:1d:bf:53:dc:2
2. Flush the table content by using the vlantfw -f command:
$ vlantfw -f
$
3. Check that the table content is empty by using the vlantfw -d command:
$ vlantfw -d
vlantfw: /dev/svm dump dynamic learning IP and MAC: count: 0
4. Check the table again; the content is back:
$ vlantfw -d
vlantfw: /dev/svm dump dynamic learning IP and MAC: count: 4
0: vid: 1 pvidflag: 1 addr: 172.16.20.121 mac: 66:1d:ba:7:38:2
1: vid: 1 pvidflag: 1 addr: 172.16.20.122 mac: 66:1d:b2:c9:1c:2
2: vid: 1 pvidflag: 1 addr: 172.16.20.123 mac: 66:1d:bd:de:51:2
3: vid: 1 pvidflag: 1 addr: 172.16.20.124 mac: 66:1d:bf:53:dc:2
The SVM Firewall updated its address mapping table in a few seconds.
Important: The address mapping table is automatically refreshed and maintained by SVM,
based on the traffic that passes through the SEA devices. Stale entries are managed
automatically by SVM.
The SVM VLAN-IP-MAC mapping table is, by itself, useful information for any Virtual I/O
Server system administrator. Permanent activation of the SVM Firewall is not required. The
definition of the packet filtering rules is not required.
8.5.2 Configuring the filter rules
Trusted Firewall ensures the cross-VLAN routing between LPARs in the same frame.
Therefore, the packet filtering rules apply to only the IP addresses that belong to the SVM
address mapping table (for more information, see “Managing the SVM address mapping
table” on page 294).
There is an easy way to remember which type of traffic SVM routes: SVM handles only the
packets that are received by the SEA trunk devices. Consider the following points:
The intra-VLAN traffic does not reach the SEA trunks; SVM cannot filter this traffic.
The external traffic does not arrive through the SEA trunks; SVM does not filter it.
Important: Trusted Firewall does not route the following types of traffic:
The external network traffic that is received by the physical adapters of the Virtual I/O
Server
The intra-VLAN traffic between two LPARs (the same VLAN and the same vSwitch)
Network filtering is applied only to the IP packets that are eligible by SVM (the destination
IP address of the packet is in the frame and therefore in the Address Mapping Table).
It is advised that you limit filter rule creation to only the IP addresses and VLANs that are
displayed in the Address Mapping Table (vlantfw -d command). These two tables and their
precedence order are shown in Figure 8-16 on page 300.
To create and maintain the filtering rules, two filtering tables are available, as shown in
Figure 8-17 on page 301:
The active filtering rule table, which is loaded into SVM, is used for Trusted Firewall
operations. Actions are limited in this table because it is in use to match arriving IP
packets. Consider the following points:
– You can deactivate a filter rule immediately by removing it.
– You can list the table content, which is the active filtering rules.
– You can change a filtering rule interactively.
– You can disable the table by flushing its content. This operation flushes the
repository simultaneously. It is suggested that you have a script to re-create the rules in
the repository.
The inactive filtering rule table is used as a repository that you can maintain. Consider the
following points:
– You can create a rule or modify it in the repository.
– You can list the repository content.
– You can remove a rule.
– You can deactivate the repository by flushing its content. This operation flushes the
active table simultaneously. It is advised that you have a script to re-create the rules in
the repository.
The repository and the active rules table content must remain identical and reflect each other.
For this reason, the rules management commands are applied first to the active rules table,
then to the repository (see Figure 8-17). Versions earlier than Trusted Firewall Version 1.1.2
required that you activate the rules table so that both sides could be updated. This
requirement disappeared in Trusted Firewall Version 1.1.2.
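As a quick consistency check, you can compare both tables with the lsvfilt command that is
used in the examples later in this section. This sketch assumes that rules were already created:
List the rules in the repository:
$ lsvfilt | grep "Filter ID"
List the rules that are loaded in the kernel (the active table):
$ lsvfilt -a | grep "Filter ID"
If the two lists differ, you can reload the repository into the kernel by using the mkvfilt -u
command.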
Therefore, to become familiar with these command options, we use a table that is based on
the model that we introduced in 8.1.4, “Security policies” on page 276.
In Table 8-9 on page 302, each option flag is associated with the generic security policy
column. Then, the possible values for these parameters are in the second line. The third line
shows whether these parameters are required (Req.) or optional (Opt.).
-v -a -z -Z -s -p -o -d -P -O -c
Req. Req. Req. Req. Opt. Opt. Opt. Opt. Opt. Opt. Opt.
Trusted Firewall maintains the IPv4 Filter rules and the IPv6 Filter rules in the same
repository. You can administer both IP protocols by using the same commands.
Important: The IBM documentation about the genvfilt command and chvfilt command
provides icmpv6 as a parameter value for the ICMPv6 protocol. This value is a
typographical error. The correct ICMPv6 parameter value is icmp6.
For more information about the genvfilt and chvfilt commands, see IBM Knowledge
Center.
The first steps to managing the rule repository are based on simple firewall policies that are
defined in Table 8-10. You can define only the minimum required parameters. They define
mainly a generic connection between one VLAN to another VLAN.
-v -a -z -Z -s -p -o -d -P -O -c
4 P 304 404
4 P 404 504
6 D 306 406
6 D 406 506
In our simple firewall policies, we want VLAN 304 and VLAN 404 (IPv4) to communicate
freely through Trusted Firewall. The same firewall policies must be configured for VLAN 404
and VLAN 504 (IPv4). However, any communication between VLAN 306 and VLAN 406
(IPv6) or between VLAN 406 to VLAN 506 (IPv6) must be exposed to the external network for
inspection. Because the rest of the traffic is not described by a permit rule, it is exposed to the
external network (deny state, by default).
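Assuming the genvfilt flags that are associated with Table 8-9, the four policies in Table 8-10
might be created with commands similar to the following sketch. Only the four required flags are
specified; the optional parameters keep their default values:
$ genvfilt -v 4 -a P -z 304 -Z 404
$ genvfilt -v 4 -a P -z 404 -Z 504
$ genvfilt -v 6 -a D -z 306 -Z 406
$ genvfilt -v 6 -a D -z 406 -Z 506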
The commands to manage the filter rule repository are listed in Table 8-11.
chvfilt -n <fid> Change a rule in the repository and in the active rules table
rmvfilt -n <fid> Remove a rule in the repository and in the active rules table
rmvfilt -n all Flush the rules repository and the rules table content
Therefore, for a broader audience, Table 8-12 lists the lsvfilt output values that are
associated with the genvfilt parameter values that you entered. This table can help you
read these output files.
Parameter genvfilt command lsvfilt output value
Changing a rule
Use the chvfilt command to change a rule. The parameters of the chvfilt command are
identical to the parameters of the genvfilt command. The only required parameter is the rule
number in the repository to be modified, followed by one or more parameters that you want to
change.
To keep the active rules table and the repository consistent, earlier versions of the chvfilt
command required that the rules repository be activated. Therefore, while you worked on the
rules repository, you first had to activate the repository that contained the rules that you
wanted to modify. In the most recent version, the chvfilt command does not require that the
repository is active.
If you are using the old version, the chvfilt command produces the following error:
$ chvfilt -n 4 -aP
ioctl(QUERY_FILTER) failed no filter rule err =2
Cannot Change the filter rule.
Upgrade to the latest version of the chvfilt command. You also can use the following
workaround:
1. Stop Trusted Firewall. Traffic is not interrupted; it is routed to the external network because
Trusted Firewall is not disruptive by design:
$ vlantfw -t
$ vlantfw -q
vlantfw: TFW=False capability=0
2. Activate your repository within SVM safely because no other traffic arrives at the SVM. To
activate the repository, use the mkvfilt -u command:
$ mkvfilt -u
$
3. Check whether the repository is active by using the lsvfilt -a command:
$ lsvfilt -a | grep kernel
Number of active filter rules in kernel = 4
$
Now, the repository rules are activated.
Important: You can activate the Filtering Rules when Trusted Firewall is stopped. The
two layers of Trusted Firewall (SVM Address Mapping Table and Filtering Rules Table)
are independent, which makes maintenance easier.
4. Verify the rule 4 content by using the lsvfilt -a | grep -p "Filter ID:4" command:
$ lsvfilt -a | grep -p "Filter ID:4"
Number of active filter rules in kernel = 4
Filter ID:4
Filter Action:2
Source VLANID:406
Removing a rule
The command to remove a rule is the rmvfilt command. The only required parameter is the
rule number in the repository or in the active rules table.
Because the two tables are kept consistent, the rule has the same number. Complete the
following steps:
1. Verify active rule 2 by using the lsvfilt -a command:
$ lsvfilt -a | grep -p "Filter ID:2"
Filter ID:2
Filter Action:1
Source VLANID:404
Destination VLANID:504
Source Address:0.0.0.0
Destination Address:0.0.0.0
Source Port Operation:any
Source Port:0
Destination Port Operation:any
Destination Port:0
Protocol:0
2. Verify rule 2 in the repository by using the lsvfilt command:
$ lsvfilt | grep -p "Filter ID:2"
Filter ID:2
Filter Action:1
Source VLANID:404
Destination VLANID:504
Source Address:0.0.0.0
Destination Address:0.0.0.0
Source Port Operation:any
Source Port:0
Destination Port Operation:any
Destination Port:0
Protocol:0
3. Remove active rule 2 by using the rmvfilt -n command:
$ rmvfilt -n 2
$
4. Check that active rule 2 is removed by using the lsvfilt -a command:
$ lsvfilt -a | grep -p "Filter ID:2"
$
We can see that slot rule 2 is now empty. The rules are not renumbered.
5. Review the active rule table by using the lsvfilt -a command output:
$ lsvfilt -a
Number of active filter rules in kernel = 3
Filter ID:4
Filter Action:1
Source VLANID:406
Destination VLANID:506
Source Address:::
Destination Address:::
Source Port Operation:any
Source Port:0
Destination Port Operation:any
Destination Port:0
Protocol:58
Filter ID:3
Filter Action:2
Source VLANID:306
Destination VLANID:406
Source Address:::
Destination Address:::
Source Port Operation:any
Source Port:0
Destination Port Operation:any
Destination Port:0
Protocol:0
Filter ID:1
Filter Action:1
Source VLANID:304
Destination VLANID:404
Source Address:0.0.0.0
Destination Address:0.0.0.0
Source Port Operation:any
Source Port:0
Destination Port Operation:any
Destination Port:0
Protocol:0
$
Important: The rmvfilt -n command updates the active rules table and the rules
repository simultaneously in Trusted Firewall Version 1.1.2.
However, in earlier versions of the rmvfilt -n command, the removal is effective in the active
rules table only, which forces you to flush all repository content to remove one rule.
Table 8-13 lists the commands to manage the filter rule repository.
chvfilt -n <fid> Change a rule in the repository and in the active rules table
rmvfilt -n <fid> Remove a rule in the repository and in the active rules table
rmvfilt -n all Flush the rules repository and the rules table content
Table 8-14 Commands to manage the active filter rules table of Trusted Firewall
Command Description
lsvfilt -a Lists the loaded active filter rules and their status
rmvfilt -n <fid> Remove a rule in the repository and in the active rules table
rmvfilt -n all Flush the rules table content and the rules repository
For more information about these commands, see the examples that are described in
“Managing the filter rules repository” on page 301.
For more information about TCP/UDP ports, see the IANA website.
2 The IANA is responsible for the global coordination of the Domain Name System (DNS) Root, IP addressing, and
other Internet Protocol resources.
ICMP code
The Internet Control Message Protocol (ICMP) is defined in RFC792, as part of the Internet
Protocol Suite. ICMP has many messages that are identified by a type field. Many of these
type fields include a code field. The code field provides more specific information about the
message type.
For more information about ICMP types and codes, see this IANA web page.
Both tables are reproduced in Appendix A, “Trusted Firewall addendum” on page 315.
Note: The genvfilt and chvfilt commands require the ICMP type value.
ICMPv6 code
The Internet Control Message Protocol version 6 (ICMPv6) is defined in RFC4443, as part of
the Internet Protocol version 6 (IPv6).
ICMPv6 has many messages that are identified by a type field. Many of these type fields include
a code field. The code field value depends on the message type and provides another level of
message granularity.
For more information about ICMPv6 types and codes, see this IANA web page.
Both tables are reproduced in Appendix A, “Trusted Firewall addendum” on page 315.
Note: The genvfilt and chvfilt commands require the ICMPv6 type value.
If you experience execution trouble, you might be facing one of the following types of issues:
Performance issues
The active filtering table is sequentially scanned:
– Optimize the placement of the most used rules without modifying the global semantics
of the table (the first rule match stops the scan).
– Reduce the size of the table to only the necessary rules (be concise in your rule
definitions).
The routed paths are not the expected paths
If this error occurs, validate the firewall routing by using the tcpdump command. Check the
content of the active rules table and not the repository. Verify whether one rule takes
precedence during the packet matching operation.
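A minimal validation sketch, assuming that the SEA’s physical adapter is ent0 and that two
hypothetical test LPARs use 172.16.30.21 (VLAN 304) and 172.16.40.22 (VLAN 404): from the root
shell (oem_setup_env), run the tcpdump command on the Virtual I/O Server while the LPARs
exchange traffic.
# tcpdump -i ent0 -n host 172.16.30.21 and host 172.16.40.22
If a permit rule applies, the cross-VLAN packets stay inside the frame and do not appear on the
physical adapter. If the traffic is denied, the packets appear because they follow their default
path to the external network.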
You rebooted the Virtual I/O Server. The Trusted Firewall SVM is not started, and the Trusted
Firewall rules repository is not active. This result might be normal because the Trusted
Firewall installation process does not modify the reboot sequence of the Virtual I/O Server.
Complete the following steps to integrate Trusted Firewall into your boot sequence:
1. Escape the restricted shell and go to the /etc directory:
$ oem_setup_env
#
2. Create the following rc.trustedfw launch script in the /etc directory with the vi editor:
# pwd
/etc
# ls rc.trusted*
rc.trustedboot rc.trustedfw
Save the file and exit the vi editor by using the :w! and :q commands. The script is shown in
Figure 8-18 on page 313.
#!/usr/bin/ksh
#
# start Trusted Firewall SVM
echo "Starting Trusted Firewall SVM"
/usr/ios/utils/vlantfw -s
if [ $? -ne 0 ]
then
echo "Failed to launch Trusted Firewall SVM."
exit 1
fi
# Activate the Filtering rules of Trusted Firewall
echo "Starting Trusted Firewall SVM"
/usr/ios/utils/mkvfilt -u
if [ $? -ne 0 ]
then
echo "Failed to activate Filtering rules."
exit 1
fi
exit 0
Figure 8-18 rc.trustedfw script in the /etc directory
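One possible way to run this script at each restart, assuming root access through
oem_setup_env, is to make the script executable and register it in /etc/inittab with the mkitab
command. The entry name trustedfw is an arbitrary choice:
# chmod u+x /etc/rc.trustedfw
# mkitab "trustedfw:2:once:/etc/rc.trustedfw >/dev/console 2>&1"
After the next restart, the script should start the SVM Firewall and activate the filtering rules
automatically.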
8.7 Conclusion
In this chapter, we presented the PowerSC Trusted Firewall design, components, and
installation with several supported configurations. We showed you how to define the filtering
rules for your frames and how to keep your rules simple and manageable.
The use of a simple approach is sound and secure. A simple approach is a key success factor
for any security appliance.
You must specify only the type field of these tables to the genvfilt and chvfilt commands.
The code field is not used by Trusted Firewall.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
IBM Redbooks
The IBM Redbooks publication Using the IBM Security Framework and IBM Security
Blueprint to Realize Business-Driven Security, SG24-8100, provides more information about
the topic in this document. Note that this publication might be available in softcopy only.
You can search for, view, download, or order this document and other Redbooks, Redpapers,
Web Docs, drafts, and additional materials at the following website:
ibm.com/redbooks
Online resources
The following websites are also relevant as further information sources:
IBM Fix Central
http://www.ibm.com/support/fixcentral
IBM Knowledge Center: Configuring Virtual I/O Server system security hardening
https://ibm.co/318nA2q
IBM Knowledge Center: Configuring Virtual I/O Server firewall settings
https://ibm.co/2Z0UzUu
SG24-8082-01
ISBN 0738457973
Printed in U.S.A.
ibm.com/redbooks