HOL VCF Notes
HOL-2446-05-HCI
Table of contents
Lab Overview - HOL-2446-05-HCI - VMware Cloud Foundation - Optimize and Modernize Data Centers
Module 1 - Cloud Foundation Overview (15 minutes) Beginner
Module 2 - Lifecycle Management (30 minutes) Beginner
Module 3 - Certificate Management (30 minutes) Beginner
Module 4 - Password Management (30 minutes) Beginner
Module 5 - vSAN SPBM and Availability (30 minutes) Beginner
Module 6 - vSAN - Monitoring, Health, Capacity and Performance (30 minutes) Beginner
Module 7 - Workload Domain Operations (iSIM) (45 minutes) Beginner
Module 8 - Introducing Software Defined Networking: Segments and Distributed Routing (45 Minutes) Intermediate
Module 9 - Changing the Security Game with the Distributed Firewall (45 Minutes) Intermediate
Module 10 - Basic Load Balancing with NSX (15 Minutes) Intermediate
Module 11 - Migrating Workloads (45 minutes) Intermediate
Module 12 - Deploying applications to a pre-existing NSX network (45 minutes) Intermediate
Module 13 - Deploying applications to an on-demand NSX network (45 minutes) Intermediate
Module 14 - Kubernetes Overview and Deploying vSphere Pod VMs (30 minutes) Advanced
Module 15 - Deploying Tanzu Kubernetes Clusters (15 minutes) Advanced
Module 16 - Adding Worker Nodes to a Tanzu Kubernetes Cluster (15 minutes) Advanced
Module 17 - Adding Capacity to a Tanzu Kubernetes Worker Node (15 minutes) Advanced
Module 18 - Upgrading a Tanzu Kubernetes Cluster (15 minutes) Advanced
Module 19 - Enabling the Embedded Image Registry (30 minutes) Advanced
Module 20 - Use Helm to Deploy a Sample Application (15 minutes) Advanced
Conclusion
Lab Overview - HOL-2446-05-HCI - VMware Cloud Foundation - Optimize and Modernize Data Centers
Note: It may take more than 90 minutes to complete this lab. You may only finish 2-3 of the modules during your time. However, you
may take this lab as many times as you want. The modules are independent of each other so you can start at the beginning of any
module and proceed from there. Use the Table of Contents to access any module in the lab. The Table of Contents can be accessed in
the upper right-hand corner of the Lab Manual.
VMware Cloud Foundation provides a complete set of highly secure software-defined services for compute, storage, network, security,
Kubernetes management, and cloud management. The result is agile, reliable, efficient and AI-ready cloud infrastructure that offers
consistent infrastructure and operations across private and public clouds. In addition, VMware Cloud Foundation contains built-in
automated lifecycle management to simplify the administration of the software stack, from initial deployment, to patching and
upgrading. As a cloud-connected offering, VMware Cloud Foundation+ delivers the benefits of public cloud to on-premises workloads by
combining industry-leading full stack cloud infrastructure, an enterprise-ready Kubernetes environment, and high-value cloud services
to transform existing on-premises deployments into SaaS-enabled infrastructure.
In this lab, you will walk through VMware Cloud Foundation and learn how to operate it as a private cloud platform to optimize and
modernize your data center.
•Module 1 - Cloud Foundation Overview (15 minutes) (Beginner) A brief overview of the VCF platform and each of its components.
•Module 2 - Lifecycle Management (30 minutes) (Beginner) Use the Life Cycle Management capabilities to upgrade your VCF infrastructure.
•Module 3 - Certificate Management (30 minutes) (Beginner) Understand how to manage certificates for all external-facing Cloud Foundation component resources, including configuring a certificate authority, generating and downloading CSRs, and installing signed certificates.
•Module 4 - Password Management (30 minutes) (Beginner) Learn how to utilize SDDC Manager to manage component credentials, including updating and rotating passwords.
•Module 5 - vSAN SPBM and Availability (30 minutes) (Beginner) Introduction to VMware vSAN. We will cover the power of Storage Based Policy Management (SPBM) and show you the availability of vSAN.
•Module 6 - vSAN - Monitoring, Health, Capacity and Performance (30 minutes) (Beginner) Shows you how to enable vRealize Operations within vCenter Server. We will cover the vSAN Health Check and how you can monitor your vSAN environment.
•Module 7 - Workload Domain Operations (iSIM) (45 minutes) (Beginner) Walk through the process of adding additional hosts to a Workload Domain.
•Module 8 - Introducing Software Defined Networking: Segments and Distributed Routing (45 Minutes) (Intermediate) Learn how software defined networking removes some barriers to physical networking constructs.
•Module 9 - Changing the Security Game with the Distributed Firewall (45 Minutes) (Intermediate) Implement East-West firewalling to secure traffic between workloads.
•Module 10 - Basic Load Balancing with NSX-T (15 Minutes) (Intermediate) Implement load balancing features built into the platform.
•Module 11 - Migrating Workloads (45 minutes) (Intermediate) Utilize software defined networking and HCX to migrate your workloads.
•Module 12 - Deploying applications to a pre-existing NSX network (45 minutes) (Intermediate) In this module, we show how to use Aria Automation Assembler to deploy an OpenCart instance to pre-defined NSX networks configured on the Cloud Foundation platform.
•Module 13 - Deploying applications to an on-demand NSX network (45 minutes) (Intermediate) In this module, we use Aria Automation Assembler to dynamically deploy software-defined networking objects inside VMware Cloud Foundation's NSX environment.
•Module 14 - Kubernetes Overview and Deploying vSphere Pod VMs (30 minutes) (Advanced) A brief overview of Cloud Foundation with Tanzu and running vSphere Pod VMs in a vSphere Cluster.
•Module 15 - Deploying Tanzu Kubernetes Clusters (15 minutes) (Advanced) Learn how to deploy a Tanzu Kubernetes Cluster.
•Module 16 - Adding Worker Nodes to a Tanzu Kubernetes Cluster (15 minutes) (Advanced) Understand how to add capacity to a Tanzu Kubernetes Cluster by adding worker nodes.
•Module 17 - Adding Capacity to a Tanzu Kubernetes Worker Node (15 minutes) (Advanced) Understand how to expand the resources of a Tanzu Kubernetes worker node.
•Module 18 - Upgrading a Tanzu Kubernetes Cluster (15 minutes) (Advanced) Walk through the upgrade of a Tanzu Kubernetes Cluster.
•Module 19 - Enabling the Embedded Image Registry (15 minutes) (Advanced) Configure and use the embedded Harbor image registry.
•Module 20 - Use Helm to Deploy a Sample Application (15 minutes) (Advanced) Utilize Helm to deploy the OpenCart application.
Lab Captains:
Content Architect:
This lab manual can be downloaded from the Hands-on Labs document site found here:
http://docs.hol.vmware.com
This lab may be available in other languages. To set your language preference and view a localized manual deployed with your lab,
utilize this document to guide you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf
Credentials
The following is a summary of the credentials used for this lab. For your convenience, links to the management interfaces are located in
the bookmark bar of Google Chrome shown in the image.
Additional credentials for components not listed below may be found in the README.txt file located on the desktop of the Main
Console.
•SDDC Manager
◦Username: administrator@vsphere.local
◦Password: VMware123!
•SDDC Manager as Sam Jones
◦Username: sam@vcf.sddc.lab
◦Password: VMware123!
•SDDC Manager as Alex Foster
◦Username: alex@vcf.sddc.lab
◦Password: VMware123!
•SDDC Manager as Ava
◦Username: ava@vsphere.local
◦Password: VMware123!
•vCenter Server Admin Console
◦Username: root
◦Password: VMware123!
•vSphere Web Client
◦Username: administrator@vsphere.local
◦Password: VMware123!
•VMware NSX Manager
◦Username: admin
Welcome! If this is your first time taking a lab navigate to the Appendix in the Table of Contents to review the interface and features
before proceeding.
For returning users, feel free to start your lab by clicking next in the manual.
Please verify that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready",
please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.
VMware Cloud Foundation™ is VMware’s unified SDDC platform for a modern hybrid cloud. This product brings together VMware’s
compute, storage, and network virtualization into a natively integrated stack, and allows you to deliver enterprise-ready cloud
infrastructure with automation and management capabilities for simplified operations that are consistent across private and public
clouds.
A deployed VMware Cloud Foundation™ system includes the following VMware software as standard components:
•SDDC Manager - Virtual appliance that provides administrators with a centralized portal to provision, manage, and monitor the logical and physical resources of a VMware Cloud Foundation deployment.
•NSX - VMware NSX is the network virtualization platform for the Software-Defined Data Center. NSX embeds networking and
security functionality that is typically handled in hardware directly into the hypervisor.
•vSphere with Tanzu - vSphere with Tanzu provides the capability to run Kubernetes workloads directly on ESXi hosts and to
create upstream Tanzu Kubernetes Grid clusters within dedicated resource pools.
The following VMware software components may be optionally deployed as part of VMware Cloud Foundation:
•vRealize Operations - Correlates data from applications to storage in a unified, easy-to-use management tool that provides
control over performance, capacity, and configuration, with predictive analytics driving proactive action, and policy-based
automation.
•vRealize Automation - Automates the delivery of the compute, storage, and network resources on a per-application basis,
delivered through repeatable blueprints and accessed through a self-service user portal.
•vRealize Log Insight – Allows administrators to view, manage, and analyze log information from various points within the
solution.
VMware Cloud Foundation is the hybrid cloud platform for managing VMs and orchestrating containers, built on full-stack hyper-
converged infrastructure (HCI) technology. With a single architecture that’s easy to deploy, VMware Cloud Foundation enables
consistent, secure infrastructure and operations across private and public clouds.
VMware Cloud Foundation consists of two types of Workload Domains that make up the Cloud Foundation Platform. These two
Workload Domains are pools of logical resources. Each pool is a cluster or multiple clusters of ESXi hosts managed by an associated
vCenter Server and NSX Manager. Each cluster manages the resources of all the hosts that are assigned to it. Within each cluster, Cloud
Foundation enables the VMware vSphere® High Availability (HA), VMware vSphere® Distributed Resource Scheduler™ (DRS), and
VMware vSAN capabilities.
Management Domain
There is one management domain that is used to manage the SDDC infrastructure components within a Cloud Foundation deployment.
The management domain is automatically provisioned using the first four hosts when the environment is initially configured for Cloud
Foundation (a process referred to as "Bring Up"). The management domain contains all of the management components of the SDDC
Platform. This includes vCenter, vSAN, NSX Manager Controller Cluster, SDDC Manager, and any of the optional vRealize Suite
components, such as vRealize Suite Lifecycle Manager, vRealize Operations, vRealize Log Insight, vRealize Automation, and Workspace
ONE Access.
A Virtual Infrastructure (VI) Workload Domain is designed to run your business applications. When creating VI Workload Domains,
Cloud Foundation takes the number of hosts specified by the cloud administrator and automatically deploys the VI Workload Domain
with VMware best practices. The first VI Workload Domain has its own vCenter Server and NSX Manager Controller Cluster. This creates a
highly reliable and secure infrastructure for your business applications. Additional VI Workload Domains can be added; each additional
VI Workload Domain has its own vCenter Server, but the customer has the choice to deploy a new NSX Manager Controller Cluster or
share an existing one, depending on the customer’s needs.
Separating the Management Domain from the Workload Domains provides several benefits.
•Separating the Management Domain from the VI Workload Domains allows for dedicated resource management and higher availability of the management components.
•Security is improved by creating a separate role-based access control of the infrastructure components. This separation
allows for more granular control of who has access to or control of resources inside your private cloud.
•Lifecycle management (patching and upgrades) can be completed on different schedules. The management domain will
always be patched first, but the VI Workload Domains can be patched at different intervals that best suit the business
application needs.
You use the SDDC Manager Web interface in a browser for the single point-of-control management of your VMware Cloud Foundation
system. The SDDC Manager provides centralized access as well as an integrated view of both the physical and virtual infrastructure of
the system.
SDDC Manager does not mask the individual component management products. Along with the SDDC Manager Web interface, for
certain tasks you might also use the web interfaces of the associated VMware software components that are part of a VMware SDDC.
All of these interfaces run in a browser, and you can launch many of them from locations within the SDDC Manager Web interface.
We have provided a full VMware Cloud Foundation experience in a virtual environment, however, procedures may have been modified
to account for the simulated environment that the HOL uses or to accelerate time for the user's convenience.
Note: In the Hands-on Labs environment, as you are navigating through the various screens, you may encounter long refresh
operations for extended periods in the order of 1-3 minutes. Please resist the urge to click refresh on the page during these times as it
will most likely extend the wait.
When building the lab we attempted to minimize these loading times, however, in some instances, operations such as timeouts when
waiting for hardware to reply were unavoidable, as this is a nested environment and not connected to physical hardware. Thank you for
your patience!
1. Please ensure that the Lab Status is green and says “Ready”. If it does not, please let a proctor know by raising your virtual
hand.
2.After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.
Once the browser has launched you will see two tabs open by default. The first tab is the SDDC Manager Login, the second is the
vCenter Login.
1. Select the SDDC Manager tab and verify the page URL to ensure you have the correct user interface. Log in to SDDC Manager with the credentials administrator@vsphere.local / VMware123!.
Note: The actual SDDC Manager URL is https://sddc-manager.vcf.sddc.lab/ui/ even though the screenshot shows the vCenter URL.
This is due to the fact that SDDC Manager uses vCenter SSO for authentication.
1. After the successful login to the SDDC Manager, select the second tab in the Chrome browser for the vSphere Client.
This action should allow you to be signed in to the vSphere Client without having to enter any additional login credentials. As we have
already authenticated with the SDDC Manager and since they are both in the same SSO domain, our credentials will carry through to
the second browser tab.
Dashboard
The Dashboard page is the home page that provides the overall administrative view of your system. The Dashboard page provides a
top-level view of the physical and logical resources across all of the physical racks in your system, including available CPU, memory, and
storage capacity. From this page, you can start the process of creating a VI Workload Domain. You use the links on the dashboard to
drill down and examine details about the physical resources and the virtual environments that are provisioned for the management and
workload domains.
On the left side of the interface is the Navigation bar. The Navigation bar provides icons for navigating to the corresponding pages. We
will explore each of these in more detail later in the lab.
1. Select the SDDC Manager Tab at the top of the browser window. Here we can see the dashboard view and recent tasks that have been executed.
2.Due to the resolution of the Hands-On Lab environment, the Tasks tray may need to be resized, or you will need to scroll over
while reviewing the tasks. You also have the option to minimize the Tasks tray by clicking the X.
Rainpole Inc. has just deployed VMware Cloud Foundation. Let’s begin by exploring the Workload Domains.
1. From the left-hand navigation pane, select the Inventory menu item, then select Workload Domains.
From the Workload Domains view, we can see the available CPU, Memory, and Storage capacity. We are also able to see the Workload
Domains and the type of workload domains that have been created within the environment. This new environment currently has one
workload domain provisioned, the mgmt-wld Management Domain. In the future, Rainpole Inc. will deploy additional workload domains
for its applications.
Each of these Workload Domains performs a different function. One, the Management Domain, is responsible for the overall VMware
Cloud Foundation environment. The other Workload Domain will be used to provide resources for virtual server workloads and
applications. VMware recommends that management servers be physically separated from user workloads.
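For readers who prefer to script this, the same workload domain inventory shown in the UI is exposed through the SDDC Manager public API. The sketch below is for reference only and is not part of the lab steps; it assumes the documented /v1/tokens and /v1/domains endpoints, the lab credentials listed earlier, and that curl and jq are available on the machine running it.

# Reference sketch (assumptions noted above); -k skips TLS verification because the lab still uses self-signed certificates.
TOKEN=$(curl -sk -X POST https://sddc-manager.vcf.sddc.lab/v1/tokens \
  -H "Content-Type: application/json" \
  -d '{"username":"administrator@vsphere.local","password":"VMware123!"}' | jq -r '.accessToken')
# List the workload domains and a few key fields; the field names are assumptions based on the public API documentation.
curl -sk https://sddc-manager.vcf.sddc.lab/v1/domains \
  -H "Authorization: Bearer $TOKEN" | jq '.elements[] | {name, type, status}'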
Cloud Foundation now supports the ability for Workload Domains to use vSAN, NFS, vVols, or VMFS on FC as their principal storage
and automates the deployment of the selected storage solution.
1. Use the vertical and horizontal scroll bars at the side and bottom of the page to view more information about the existing
Workload Domain.
You will now explore the Management Workload Domain in greater detail.
2.Click on the Management Workload Domain link labeled mgmt-wld at the bottom of the page.
From the landing page of the mgmt-wld Workload Domain, we get an immediate picture of the status of CPU, Memory, and Storage
consumption by this workload domain. We are also able to determine the capacity of allocated resources as well as how much of that
capacity has been consumed.
Scrolling further down you will see several options along the bottom of the page that allow you to drill further into the status of the
workload domain. Each of these options is detailed below. Explore these by clicking on each in turn.
1. Summary: Provides network and storage information. In the Management Domain, vSAN is the storage type. In a VI Workload Domain, the storage type can be vSAN, NFS, vVols, or VMFS on FC.
2. Services: Displays the FQDN and IP address of all associated components that have been deployed to support the specific Workload Domain.
3. Updates: Shows the pre-check workflow, as well as any updates that have been made available that apply to this specific Workload Domain. Also listed are the specific versions of software for the deployed components within the Workload Domain. Selecting a version number will take you to the Update history for that component.
4. Update History: Shows all updates that have already been applied to the system. You have the option to filter the period over which updates are displayed.
5. Hosts: Displays all the hosts that are part of this specific Workload Domain, including the Cluster that the host belongs to, the FQDN of the host, the Management IP address, Network Pool, Host Status, Resource Usage, and Storage Type (Hybrid or All-Flash).
6. Clusters: Lists all available clusters under a given Workload Domain.
7. Edge Clusters: Lists the Edge Clusters used by NSX for North/South traffic.
8. Certificates: Displays the certificate information for all components of the VMware Cloud Foundation environment. This interface can also automate the replacement of a certificate for all components inside of VMware Cloud Foundation. We will explore certificate management in more detail in Module 3.
NOTE: You may need to scroll to the right to see all of the tabs.
VMware Cloud Foundation uses NSX for both Management and Workload Domains
VMware Cloud Foundation supports the deployment of NSX and multiple storage options for a Workload Domain.
Below is a snippet from the user manual in regards to Workload Domains and support:
In the VI Configuration wizard, you specify the storage, name, compute, and NSX platform details for the VI Workload Domain. Based
on the selected storage, you provide vSAN parameters or NFS share details. You then select the hosts and licenses for the workload
domain and start the creation workflow.
•Deploys an additional vCenter Server Appliance for the new workload domain within the management domain.
•By using a separate vCenter Server instance per workload domain, software updates can be applied without impacting other
workload domains. It also allows for each workload domain to have additional isolation as needed.
•Connects the specified ESXi servers to this vCenter Server instance and groups them into a cluster. Each host is configured with the port groups applicable for the workload domain.
•For the first VI workload domain, the workflow deploys a cluster of three NSX Managers in the management domain and
configures a virtual IP (VIP) address for the NSX Manager cluster. The workflow also configures an anti-affinity rule between
the NSX Manager VMs to prevent them from being on the same host for High Availability. Subsequent VI workload domains
can share an existing NSX Manager cluster or deploy a new one. To share an NSX Manager cluster, the workload domains
must use the same update manager. The workload domains must both use vSphere Lifecycle Manager (vLCM) images, or they must both use baselines.
•Cloud Foundation can optionally create a two-node NSX Edge cluster on the management domain for use by the vRealize
Suite components. You can add additional NSX Edge clusters to the management domain. By default, workload domains do
not include any NSX Edge clusters and are isolated. Add one or more Edge clusters to a workload domain to provide north-
south routing and network services. See Deploying NSX Edge Clusters. Note: Multiple Edge clusters cannot reside on the
same vSphere cluster.
•NSX Managers deployed as part of a VI workload domain are configured to periodically get backed up to an SFTP server. By
default, these backups are written to an SFTP server built into SDDC Manager, but you can register an external SFTP server
for better protection against failures. SDDC Manager uses either the built-in or external SFTP server with all currently deployed NSX Manager clusters.
•Licenses and integrates the deployed components with the appropriate pieces in the Cloud Foundation software stack.
You have completed Module 1 and should now have a good understanding of how to navigate the SDDC Manager web interface. You
should also at this point conceptually understand what a workload domain is and what it is used for. Please continue to Module 2.
In Cloud Foundation, the Life Cycle Management (LCM) capabilities include automated patching and upgrades for both the SDDC
Manager (SDDC Manager and LCM) and other VMware software components (vCenter Server, ESXi, NSX, and vSAN).
The high-level workflow in this module is: 1. Download the update bundle. 2. Run the upgrade precheck. 3. Apply the update. 4. Verify the update.
Even though SDDC Manager may be available while the update is installed, it is recommended that you schedule the update at a time
when it is not being heavily used.
1. Please ensure that the Lab Status is green and says “Ready”.
2.After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.
Once the browser has launched you will see two tabs open by default. The first tab is the SDDC Manager Login, the second is the
vCenter Login.
1. Select the SDDC Manager tab and verify the page URL to ensure you have the correct user interface. Log in to SDDC Manager with the credentials administrator@vsphere.local / VMware123!.
Note: The actual SDDC Manager URL is https://sddc-manager.vcf.sddc.lab/ui/ even though the screenshot shows the vCenter URL.
This is due to the fact that SDDC Manager uses vCenter SSO for authentication.
To check for available updates from VMware, a VMware Customer Connect Account is required to be authorized in SDDC Manager.
1. Select Bundle Management from within the Lifecycle Management menu on the left.
Enter the following credentials to authorize. This is a global setting and will be set for all users once done.
1. Username: sam@vcf.sddc.lab
2.Password: VMware123!
3.Click AUTHORIZE.
An update is now available for the VMware Cloud Foundation deployment. Let’s walk through our options for downloading and
deploying this update.
1. If you are not already in the Bundle Management page, Select Bundle Management from the Lifecycle Management menu on
the left.
2.Wait 1 to 2 minutes and then refresh your browser for the available bundle to appear.
3. Click DOWNLOAD NOW - NOTE: This update may take a minute to start and then another minute or two to download. You may need to refresh the page to see the download progress.
Information such as the severity of the update, the number, and types of software components, the minimum required software
versions, and the bundle release date are shown under the details.
1. When you are done examining the details of the update, click the Exit Details link on the top right corner of the window.
2. Click on mgmt-wld.
3. Scroll down to the bottom of the page and click on PLAN UPGRADE.
In this lab, we will be upgrading to a mock version of VMware Cloud Foundation 5.0.0.1
1. Select VMware Cloud Foundation 5.0.0.1 from the drop down menu.
2.By selecting this version of VMware Cloud Foundation, you can see a summary of all the software component changes. For
lab purposes, this version will upgrade SDDC Manager from version 5.0.0.0 to 5.0.0.1. No other changes will occur.
Precheck
It is good practice to ensure the environment is healthy before performing any upgrade activity.
1. Within the same mgmt-wld workload domain page Updates tab, click on RUN PRECHECK on top of the page.
2.Select the Precheck Scope of SDDC_MANAGER_VCF - 5.0.0.1-22053494 which is the component we will be upgrading.
3.Click RUN PRECHECK to proceed. The precheck will take a few minutes to run.
When the precheck completes, you will be presented with the results of all the checks performed against the environment, highlighting
any areas that could potentially prevent the update or patch from being applied successfully.
1. Review the precheck results. Some checks may show warnings because this is a nested lab environment; they can be ignored for lab purposes.
2.Once you have completed reviewing the details, scroll up and click the BACK TO UPDATES link at the top left of the window.
1. At this point, the bundle download should have completed and show up under the Available Updates section. You may need to refresh the browser.
2.As this bundle applies to the current mgmt-wld domain it is now made available for update. You may review details of this
update.
In the Available Updates section, you are presented with 2 options for executing the deployment of the relevant patches or updates.
1. Choose the SCHEDULE UPDATE option if you'd like to specify a future date and time to execute the update. You may specify any date and time after the current time.
1. Due to time constraints within the lab environment, click the UPDATE NOW button to begin an immediate update.
2. After you click the UPDATE NOW button, you will see a Scheduled message displayed. After a 1-2 minute wait, an update dialog will appear showing the update progress.
2.Scroll down to view more details. Select the drop-down arrow to view more granular details around the status of specific
Common Services. This update will take about 2-5 minutes to complete.
NOTE: You may need to refresh the browser if the screen does not refresh automatically after 3 minutes.
1. As this is an SDDC Manager update, the page may become unresponsive due to SDDC Manager service restarts. If you are logged out, wait a few minutes and log back in to SDDC Manager.
1. Upon completion, a green ribbon will display the date and time the update was completed.
1. From the main SDDC Manager Dashboard interface. Select Workload Domains from the Inventory menu item on the left side
of the page.
1. Select the Update History link to validate that the update you just applied was successful.
2.Clicking on the ACTIONS drop-down link will allow you to download the log files associated with the update or view the
update status.
You have completed the module and should now have a good understanding of the upgrade and patching process within the VMware
Cloud Foundation environment. Please continue to the next module.
Conclusion
In this module, we worked in Cloud Foundation covering Life Cycle Management (LCM) capabilities including automated patching and
upgrades for both the SDDC Manager (SDDC Manager and LCM) and other VMware software components (vCenter Server, ESXi, NSX,
and vSAN).
An easy way to increase the security of an environment, and a common practice for most IT organizations, is to replace the self-signed
certificates that are generated during installation with a certificate signed by the organization's Certificate Authority (CA). VMware Cloud
Foundation simplifies this process allowing customers to easily update and manage these certificates.
You can manage certificates for all external-facing Cloud Foundation component resources, including configuring a certificate authority,
generating and downloading CSRs, and installing them. Cloud Foundation supports the use of Microsoft certificate authority, OpenSSL,
and 3rd-party certificate authorities. Certificates can be managed for components including:
•vCenter Server
•NSX Manager
•SDDC Manager
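For reference, the CSR that SDDC Manager generates collects the same attributes you could supply to any certificate tooling. The sketch below is not a lab step; it shows a roughly equivalent request built with OpenSSL, using the hostname from this lab and the attribute values used in the CSR wizard later in this module. The file names are hypothetical.

# Reference sketch only; run on any machine with OpenSSL installed.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout sddc-manager.key -out sddc-manager.csr \
  -subj "/C=US/ST=CA/O=Rainpole/OU=IT/CN=sddc-manager.vcf.sddc.lab/emailAddress=sam@vcf.sddc.lab"
# Inspect the request before submitting it to your certificate authority.
openssl req -in sddc-manager.csr -noout -text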
1. Please ensure that the Lab Status is green and says “Ready”.
2.After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.
Once the browser has launched you will see two tabs open by default. The first tab is the SDDC Manager Login, the second is the
vCenter Login.
1. Select the SDDC Manager tab and verify the page URL to ensure you have the correct user interface. Log in to SDDC Manager with the credentials administrator@vsphere.local / VMware123!.
Note: The actual SDDC Manager URL is https://sddc-manager.vcf.sddc.lab/ui/ even though the screenshot shows the vCenter URL.
This is due to the fact that SDDC Manager uses vCenter SSO for authentication.
1. After the successful login to the SDDC Manager, select the second tab in the Chrome browser for the vSphere Web Client.
2.Select the URL refresh button in the second browser tab. This action should allow you to be signed in to the vSphere Client
without having to enter any additional login credentials. As we have already authenticated with the SDDC Manager and since
they are both in the same SSO domain, our credentials should carry through to the second browser tab.
The refresh process can take a couple of minutes to complete, but you can continue to the next step in the lab.
1. Select the first browser tab to navigate to the SDDC Manager interface.
As we can see the connection from SDDC Manager to the backend Certificate Authority has already been established.
NOTE: Due to time constraints we will be replacing the SDDC Manager certificate.
NOTE: Review the current date that the certificate is valid through.
Populate the fields in the CSR wizard with the following information.
Algorithm: RSA
Email: sam@vcf.sddc.lab
Organizational Unit: IT
Organization: Rainpole
State: CA
Country: US
1. Click NEXT
If you have any Subject Alternative Names you may enter them here. In this lab, we will leave this blank.
1. Click NEXT
1. Now that the CSR has been generated, click the GENERATE SIGNED CERTIFICATES button.
If you were using a 3rd-party CA, you would click DOWNLOAD CSR after step 1 and submit the CSR to the 3rd-party certificate provider.
Note: If the INSTALL CERTIFICATES button is not activated, refresh the browser to get the latest update.
Due to the formatting of the Hands-On Lab environment, you may need to scroll over to the right to see the status of the sddc-
manager.vcf.sddc.lab certificate replacement.
This process takes a couple of minutes to replace the certificate in the Hands-On Lab Environment. While this is running please
proceed in the lab, you can come back to check this status later if you wish to do so.
Verify that the Certificate Installation Status for the sddcmanager shows SUCCESSFUL.
1. Launch PuTTY.
2. Select the saved sddc-manager session (or enter sddc-manager.vcf.sddc.lab as the host name).
3. Click Open and log in, then run the following script to restart the SDDC Manager services:
sh /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh
4. Enter Y to proceed.
After the service restart, you will need to log back into SDDC Manager. It may take 2-3 minutes for the services to fully restart.
1. Select the SDDC Manager tab and verify the page URL to ensure you have the correct user interface. Log in to SDDC Manager with the credentials administrator@vsphere.local / VMware123!.
Note: The actual SDDC Manager URL is https://sddc-manager.vcf.sddc.lab/ui/ even though the screenshot shows the vCenter URL.
This is due to the fact that SDDC Manager uses vCenter SSO for authentication.
1. Click Certificates.
2. Verify that the Valid to date is 2 years from the current date.
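If you want an independent confirmation of the new certificate, one option (not required by the lab) is to query SDDC Manager directly with OpenSSL from any machine that can reach it; the command below assumes OpenSSL is available and uses the hostname from this lab.

# Print the subject, issuer, and validity dates of the certificate currently served on port 443.
echo | openssl s_client -connect sddc-manager.vcf.sddc.lab:443 -servername sddc-manager.vcf.sddc.lab 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates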
Congratulations. You have completed the module on Certificate Management. We have demonstrated how Cloud Foundation can help
easily replace certificates. Please continue to the next module.
VMware Cloud Foundation provides the ability to manage passwords for logical and physical entities on all racks in your system. The
process of password rotation generates randomized passwords for the selected accounts.
•ESXi
•vCenter / PSC
•NSX Manager
•NSX Edges
1. Please ensure that the Lab Status is green and says “Ready”.
2.After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.
Once the browser has launched you will see two tabs open by default. The first tab is the SDDC Manager Login, the second is the
vCenter Login.
1. Select the SDDC Manager tab and verify the page URL to ensure you have the correct user interface. Log in to SDDC Manager with the credentials administrator@vsphere.local / VMware123!.
Note: The actual SDDC Manager URL is https://sddc-manager.vcf.sddc.lab/ui/ even though the screenshot shows the vCenter URL.
This is due to the fact that SDDC Manager uses vCenter SSO for authentication.
1. Click Security
1. Click the three vertical dots for the user root on host esxi-1.vcf.sddc.lab
2.Select UPDATE
Once the Update Password dialog box is open, fill in the password you would like it changed to.
1. Enter the new password: HOLR0cks!
2. Click UPDATE
Monitor the progress of the task by opening the Tasks window in the lower left.
1. Click the Tasks link.
Once the password update has been completed successfully we will validate that the password change has occurred.
1. In the browser open a new tab, from the bookmarks shortcut bar, select ESXi Hosts and then select ESXi-1
Note: You may need to click on the Advanced link in the browser to proceed if there is a precautionary security warning
presented.
Once the page opens use the following credentials to validate the password change was successful.
◦Username: root
◦Password: HOLR0cks! (or the password you supplied in the previous step when changing the root user password)
The other option is to rotate instead of update. We can test this by navigating back to the first tab for SDDC Manager
1. Click Security
Rotate
1. Click the three vertical dots for the user root on host esxi-1.vcf.sddc.lab, select ROTATE, and then click the ROTATE button again in the confirmation pop-up dialog box.
This will rotate the password to a randomly generated password that will be stored in the SDDC Manager database.
There are two ways to look up the password once it has been rotated. You may either (1) SSH into the SDDC Manager and, following the
admin guide, use the lookup_passwords command (this requires SSH access to the host), or (2) use the API to look up the
credentials. We will do the latter in this exercise.
1. In the SDDC Manager Developer Center (API Explorer), expand the Credentials APIs and select GET /v1/credentials.
2. Click EXECUTE.
3. Expand PageOfCredential.
4. Expand the second Credential (GUID) and view the password information (see yellow box below). Your password will be different.
NOTE: Two Credentials are returned: the first is for the VCF service account that has access to the host, and the second is for the root
user account. For this lab, use the second (root) credential.
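The same lookup can also be scripted outside the Developer Center. The sketch below is for reference only; it assumes the documented /v1/tokens and /v1/credentials endpoints (including the resourceName filter), the lab credentials, and that curl and jq are available.

# Reference sketch: fetch an API token, then list the stored credentials for esxi-1.
TOKEN=$(curl -sk -X POST https://sddc-manager.vcf.sddc.lab/v1/tokens \
  -H "Content-Type: application/json" \
  -d '{"username":"administrator@vsphere.local","password":"VMware123!"}' | jq -r '.accessToken')
curl -sk "https://sddc-manager.vcf.sddc.lab/v1/credentials?resourceName=esxi-1.vcf.sddc.lab" \
  -H "Authorization: Bearer $TOKEN" | jq '.elements[] | {username, credentialType, password}'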
Once the page opens, use the following credentials to validate that the password change was successful.
1. In the browser, open a new tab and, from the bookmarks shortcut bar, select ESXi Hosts and then ESXi-1.
2. If a precautionary security warning is presented, click the Advanced link in the browser to proceed.
3. Enter:
◦Username: root
◦Password: <password copied from previous step> (this is the password shown in the Developer Center)
4. Click the Log In button to confirm that the password rotation was successful.
This concludes this module. We explored how we can utilize SDDC Manager to update and rotate passwords. Please continue to the
next module.
Introduction
vSAN delivers flash-optimized, secure shared storage with the simplicity of a VMware vSphere-native experience for all your critical
virtualized workloads. vSAN runs on industry-standard x86 servers and components that help lower TCO by up to 50% versus
traditional storage. It delivers the agility to easily scale IT with a comprehensive suite of software solutions and offers the first native
software-based, FIPS 140-2 validated HCI encryption. A VMware vSAN SSD storage solution powered by Intel® Xeon® Scalable
processors with Intel® Optane™ technology can help companies optimize their storage solution in order to gain fast access to large data
sets. View video to learn more.
vSAN 7 delivers a new HCI experience architected for the hybrid cloud with operational efficiencies that reduce time to value through a
new, intuitive user interface, and provides consistent application performance and availability through advanced self-healing and
proactive support insights. Seamless integration with VMware's complete software-defined data center (SDDC) stack and leading hybrid
cloud offerings make it the most complete platform for virtual machines, whether running business-critical databases, virtual desktops
or next-generation applications.
Before we jump in the Lab, let's take a moment to review What's New with vSAN 8.
With vSAN 8, we are continuing to build on the robust features that make vSAN a high performing general-purpose infrastructure.
vSAN 8 makes it easy for you to standardize on a single storage operational model with three new capabilities: integrated file services,
enhanced cloud-native storage, and simpler lifecycle management. You can now unify block and file storage on hyperconverged
infrastructure with a single control plane, which reduces costs and simplifies storage management.
Cloud-native applications also benefit from these updates, which include integrated file services, vSphere with Kubernetes support, and
increased data services. Finally, vSAN 8 also simplifies HCI lifecycle management by reducing the number of tools required for Day 2
operations, while simultaneously increasing update reliability.
•Enhanced Cloud-Native Storage
vSAN supports file-based persistent volumes for Kubernetes on vSAN datastore. Developers can dynamically create file shares for their
applications and have multiple pods share data.
•Integrated File Services
In vSAN 8, integrated file services make it easier to provision and share files. Users can now provision a file share from their vSAN
cluster, which can be accessed via NFS 4.1 and NFS 3 and SMB. A simple workflow reduces the amount of time it takes to stand up a file
share.
•Simpler Lifecycle Management
Consistent operations with a unified Lifecycle Management tool. vSAN 8 provides a unified vSphere Lifecycle Manager tool (vLCM) for
Day 2 operations for software and server hardware. vLCM delivers a single lifecycle workflow for the full HCI server stack: vSphere,
vSAN, drivers and OEM server firmware. vLCM constantly monitors and automatically remediates compliance drift. The vLCM
component is driven and performed by SDDC Manager in VMware Cloud Foundation.
•Increased Visibility into vSAN Used Capacity
Replication objects are now visible in vSAN monitoring for customers using VMware Site Recovery Manager and vSphere Replication.
The objects are labeled “vSphere Replicas” in the “Replication” category.
•Uninterrupted Application Run Time
vSAN 8 enhances uptime in Stretched Clusters by introducing the ability to redirect VM I/O from one site to another in the event of a
capacity imbalance. Once the disks at the first site have freed up capacity, customers can redirect I/O back to the original site without
disruption.
VMware’s solution stack offers the level of flexibility needed for today’s rapidly changing needs. It’s built on a foundation
of VMware vSphere, paired with vSAN. This provides the basis for a fully software-defined storage and virtualization platform that
removes dependencies on legacy solutions using physical hardware. Next is VMware Cloud Foundation, the integrated solution that
provides the full stack of tools for an automated private cloud. And finally, there is VMware Cloud on AWS: the same software that you
already know, running in Amazon Web Services and providing consistent operations for all of the existing workflows used in your private
clouds. The result is a complete solution regardless of whether the topology sits on-premises or in the cloud.
As an abstraction layer, Storage Policy Based Management (SPBM) abstracts storage services delivered by Virtual Volumes, vSAN, I/O
filters, or other storage entities. Multiple partners and vendors can provide Virtual Volumes, vSAN, or I/O filters support. Rather than
integrating with each individual vendor or type of storage and data service, SPBM provides a universal framework for many types of
storage entities.
• Advertisement of storage capabilities and data services that storage arrays and other entities, such as I/O filters, offer.
• Bi-directional communications between ESXi and vCenter Server on one side, and storage arrays and entities on the other.
vSAN requires that the virtual machines deployed on the vSAN Datastore are assigned at least one storage policy. When provisioning a
virtual machine, if you do not explicitly assign a storage policy to the virtual machine the vSAN Default Storage Policy is assigned to the
virtual machine.
The default policy contains vSAN rule sets and a set of basic storage capabilities, typically used for the placement of virtual machines
deployed on vSAN Datastore.
• The vSAN default storage policy is assigned to all virtual machine objects if you do not assign any other vSAN policy when you
provision a virtual machine.
• The vSAN default policy only applies to vSAN datastores. You cannot apply the default storage policy to non-vSAN datastores, such
as NFS or a VMFS datastore.
• You can clone the default policy and use it as a template to create a user-defined storage policy.
1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.
2. Open the vSphere Web Client from the second browser tab (or the bookmarks bar).
3. Enter the credentials administrator@vsphere.local / VMware123!.
4. Click LOGIN
3.Select Rules
Here we can see that the mgmt-vsan is compatible with this storage policy. (not pictured)
1. Select Menu
2.Select Inventory
We will deploy a VM from a template called tmpl-ubuntu to the mgmt-vsan vSAN Datastore and apply the Default Storage Policy.
2. Enter holostorage as the name for the new virtual machine.
3. Click Next
2.Click NEXT
1. Click on mgmt-vsan
2.For the VM Storage Policy dropdown, select vSAN Default Storage Policy
The resulting list of compatible datastores will be presented, in our case the mgmt-vsan.
In the lower section of the screen we can see the vSAN storage consumption would be 13.33 GB disk space and 0.00 B reserved Flash
space. You may need to scroll down to see this section of the screen.
Since we have a VM with a 10 GB disk and the Default Storage Policy is RAID 5, the vSAN disk consumption will be 13.33 GB.
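The 13.33 GB figure follows from how vSAN lays out RAID-5 erasure coding: data is written as a 3+1 stripe (three data segments plus one parity segment), so the raw-capacity overhead factor is 4/3:

10 GB virtual disk x 4/3 (RAID-5, 3 data + 1 parity) ≈ 13.33 GB of vSAN capacity

By comparison, the RAID-1 mirroring policy created later in this module consumes 2x, or about 20 GB for the same 10 GB disk.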
3.Click NEXT then, click NEXT on the Deploy From Template (not pictured), then click FINISH (not pictured)
1. Check the Recent Tasks for a status update on the Clone virtual machine task.
3.Select Summary
4.Scroll down
5.View Related Objects
Here we can see that the VM Storage Policy for this VM is set to vSAN Default Storage Policy and the policy status is Compliant.
2.Select Configure
3.Select Policies
Here we can see the VM Storage Policy that is applied to VM Home Object and the Hard Disk Object.
2.Select Monitor
4.Expand holostorage
Verify that the Object State is Healthy and vSAN Default Storage Policy is applied. You may need to scroll to the right to see the Storage
Policy.
Here we can see the Component layout for the Hard Disk.
2.Click CLOSE
Consider these guidelines when you configure RAID 5 or RAID 6 erasure coding in a vSAN cluster.
•You can achieve additional space savings by enabling deduplication and compression on the vSAN cluster.
First, we need to create a VM Storage Policy that will define the Failure Tolerance method of Raid5/6.
2. Click on CREATE
Name the new VM Storage Policy PFTT=1-Raid1.
2.Click NEXT
2.Click NEXT
1. In the Storage rules tab, Select All flash for the Storage tier
Review the options that are available here, but leave at the default settings.
1. Click NEXT
1. Click NEXT
Here we can see the rules that make up our VM Storage Policy.
Now that we have created a new VM Storage Policy , let's assign that policy to an existing VM on the vSAN Datastore.
2.Select Inventory
2.Select Configure
3.Select Policies
Here we can see that the vSAN Default Storage Policy is assigned to this VM.
1. Change the VM Storage Policy from the dropdown list to PFTT=1-Raid1. You should notice that the total vSAN storage consumption increases, since RAID-1 mirroring consumes 2x the virtual disk size (about 20 GB for this 10 GB disk).
Verify that the VM Storage Policy has been changed and that the VM is compliant against the new storage Policy. You might have to hit
the refresh button to see the change.
2.Select Monitor
Here we can see the new revised Component layout for the VM with the Raid-1 Storage Policy (2 vSAN components and a Witness on
3 hosts, you may see reconfiguring like in the screenshot if the components haven't had time to rebalance before you go to this step).
1. Click CLOSE
In vSAN 7.0 U1 and higher, the “number of disk stripes per object” storage policy rule attempts to improve performance by
distributing data contained in a single object (such as a VMDK) across more capacity devices.
Commonly known as “stripe width,” this storage policy rule will tell vSAN to split the objects into chunks of data (known in vSAN as
“components”) across more capacity devices.
Refer to the following Blog Post for a more detailed description of these changes: https://blogs.vmware.com/virtualblocks/2021/01/21/stripe-width-improvements-in-vsan-7-u1/
The optimizations introduced to the stripe width storage policy rule in vSAN 7 U1 help provide more appropriate levels of disk striping
when using storage policies based on RAID-5/6 erasure codes.
In the next task we will change the Storage Policy to Raid 5 to have a Stripe Width value of 4.
2.Click NEXT
2.Click Next
4.Click NEXT
The VM storage policy is in use by 1 virtual machine(s). Changing the VM storage policy will make it out of sync with those 1 virtual
machine(s).
2.Select Yes
1. Select VM Compliance
2. You will see that the Compliance Status of the VM (holostorage) has now changed to Out of Date since we have changed the storage policy.
3.Click REAPPLY
Reapplying the selected VM storage policy might take significant time and system resources because it will affect 1 VM(s) and will move
data residing on vSAN datastore.
The changes in the VM storage policies will lead to changes in the storage consumption on some datastores. The storage impact can be
predicted only for vSAN datastores, but datastores of other types could also be affected.
After you reapply the VM storage policies, the storage consumption of the affected datastores is shown.
1. Click CLOSE
1. Once the VM Storage Policy has been reapplied, verify that the VM is in a Compliant state again with the VM Storage Policy.
2.Select Inventory
2.Select Monitor
Here we can see the new revised Components layout for the VM with the Raid-5 Storage Policy (4 components on 4 hosts).
We now have components spread across 4 ESXi hosts with Raid 5. (Given that this is a 4 node cluster this is the default configuration, at
5 hosts you may see objects across 4 hosts in RAID 0 on the component level)
1. Click CLOSE
We now have the ability to control the amount of capacity that is reserved for both rebuild operations and transient operations, such as
the temporary capacity needed to perform a policy change on an object.
By default, the Capacity Reserve feature is disabled, meaning all vSAN capacity is available for workloads. You can enable capacity
reservations for internal cluster operations and host failure rebuilds. Reservations are soft-thresholds designed to prevent user-driven
provisioning activity from interfering with internal operations, such as data rebuilds, rebalancing activity, or policy re-configurations. The
capacity required to restore a host failure matches the total capacity of the largest host in the cluster, and a minimum of four hosts is
required.
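As a worked example of that sizing rule (the numbers are illustrative, not taken from this lab): in a four-host cluster where each host contributes 20 TB of raw vSAN capacity (80 TB in total), the host rebuild reserve would be 20 TB, or 25% of the cluster's raw capacity, because that is the amount needed to re-protect data after the loss of the largest host.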
By enabling reserved capacity in advance, vSAN prevents you from using that space to create workloads, keeping the capacity
available in the cluster.
If there is enough free space in the vSAN cluster, you can enable the operations reserve and/or the host rebuild reserve.
•Operations Reserve - Reserved space in the cluster for vSAN internal operations.
•Host Rebuild Reserve - Reserved space for vSAN to be able to repair in case of a single host failure.
Reserved capacity is not supported on a stretched cluster, a cluster with fault domains or nested fault domains, a ROBO cluster, or a
cluster with fewer than four hosts.
Now that we have 4 hosts in the cluster, we can start configuring the Reserve Capacity.
1. Select Menu
2.Select Inventory
Once we edit the Enable Capacity reserve, you will be shown how much of the total vSAN datastore capacity (by default) is allocated to
each reserve.
2.Select Configure
It looks like we are running too low on capacity to enable the Operations reserve. Let's address this by creating another disk group.
1. Click Cancel
Here we can see 2 disk types that are installed on each host in the cluster. The smaller of the drives has been identified as the Cache
Tier and the larger the Capacity Tier.
1. After waiting 30 seconds or less your screen will update and show that 4/4 disks are in use on each host.
Now that the capacity in the cluster has been increased let's see if we can configure our Reserve Capacity again.
Once we edit the Enable Capacity reserve, you will be shown how much of the total vSAN datastore capacity (by default) is allocated to
each reserve.
2.Select Configure
1. Hover over the Warning icon. You will see a warning notification when storage consumption reaches 70%.
2. Hover over the Error icon. You will see an error alert at 90% of storage consumption.
3. Select Customize alerts. You can set the warning and error alerts, but we will keep them at the defaults.
4. Click on APPLY
Conclusion
Storage Policy Based Management (SPBM) is a major element of your software-defined storage environment. It is a storage policy
framework that provides a single unified control plane across a broad range of data services and storage solutions.
The framework helps to align storage with application demands of your virtual machines.
Module 6 - vSAN - Monitoring, Health, Capacity and Performance (30 minutes) Beginner
Introduction
A critical aspect of enabling a vSAN Datastore is validating the Health of the environment. vSAN has over a hundred out of the box
Health Checks to not only validate initial Health but also report ongoing runtime Health. vSAN 8 introduces exciting new ways to
monitor the Health, Capacity and Performance of your Cluster via vRealize Operations within vCenter, all within the same User Interface
that VI Administrators use today.
One of the ways to monitor your vSAN environment is to use the vSAN Health Check.
The vSAN Health service runs a comprehensive health check on your vSAN environment to verify that it is running correctly, alerts you
if it finds any inconsistencies, and offers options on how to fix them.
Running individual commands from one host to all other hosts in the cluster can be tedious and time consuming. Fortunately, since
vSAN 6.0, vSAN has a health check system, part of which tests the network connectivity between all hosts in the cluster. One of the first
tasks to do after setting up any vSAN cluster is to perform a vSAN Health Check. This will reduce the time to detect and resolve any
networking issue, or any other vSAN issues in the cluster.
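For context, these are the kinds of per-host checks the health service automates. The sketch below is for reference only and is not a lab step; it would be run from an ESXi shell, and the vmkernel adapter number and peer address are placeholders rather than values from this lab.

# Reference sketch of manual checks the vSAN Health service performs for you.
vmkping -I vmk1 <vsan-ip-of-another-host>   # basic vSAN network reachability between two hosts
esxcli vsan cluster get                     # this host's membership and state in the vSAN cluster
esxcli vsan health cluster list             # summarized health check results as seen from this host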
1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.
2. Open the vSphere Web Client from the second browser tab (or the bookmarks bar).
3. Enter the credentials administrator@vsphere.local / VMware123!.
4. Click LOGIN
1. Select Menu
2.Select Inventory
2. Select Monitor
3. Select the vSAN Health view and note the cluster health score (a score below 100 is to be expected in this nested environment)
You can view the history of the health of the vSAN Cluster and when an event was unhealthy.
Note that some of the Health Checks are in a Warning State. This is due to the fact that we are running a vSAN cluster in a nested
virtualized environment. In addition some alerts have been silenced due to the nested environment.
Let's induce a vSAN Health Check failure to test the health Check.
2.Select Connection
3.Select Disconnect
2.Select Monitor
4.Click the RETEST button if the Health Check for Hosts disconnected from VC does not show as red.
Here we will see a vSAN Network Health Check that has failed.
Here we can see that the ESXi host called esxi-4.vcf.sddc.lab is showing as Disconnected.
Each details view under the Info tab also contains an Ask VMware button where appropriate, which will take you to a VMware
Knowledge Base article detailing the issue, and how to troubleshoot and resolve it.
2.Select Connection
3.Select Connect
2.Select Monitor
5.The Host connection has been restored and our cluster health score increased.
Click the RETEST button if the Health Checks do not show an equivalent cluster health.
Conclusion
You can use the vSAN health checks to monitor the status of cluster components, diagnose issues, and troubleshoot problems. The
health checks cover hardware compatibility, network configuration and operation, advanced vSAN configuration options, storage
device health, and virtual machine objects.
The capacity of the vSAN Datastore can be monitored from a number of locations within the vSphere Client. First, one can select the
Datastore view, and view the summary tab for the vSAN Datastore. This will show you the capacity, used and free space.
3.Click Summary
2.Select mgmt-cluster
3.Select Monitor
The Capacity Overview displays the storage capacity of the vSAN Datastore, including used space and free space. The Deduplication
and Compression Overview indicates storage usage before and after space savings are applied, including a Ratio indicator.
These are all the different object types one might find on the vSAN Datastore. We have VMDKs, VM Home namespaces, and swap
objects for virtual machines. We also have performance management objects when the vSAN performance logging service is enabled.
There are also the overheads associated with on-disk format file system, and checksum overhead. Other (not shown) refers to objects
such as templates and ISO images, and anything else that doesn't fit into a category above.
It's important to note that the percentages shown are based on the current amount of Used vSAN Datastore space. These percentages
will change as more Virtual Machines are stored within vSAN (e.g. the File system overhead % will decrease, as one example).
3.Note the data distribution across all 5 hosts in the cluster. vSAN is managing the distribution
Here we can see the amount of Used Capacity per ESXi Host.
A healthy vSAN environment is one that is performing well. vSAN includes many graphs that provide performance information at the
cluster, host, network adapter, virtual machine, and virtual disk levels. There are many data points that can be viewed such as IOPS,
throughput, latency, packet loss rate, write buffer free percentage, cache de-stage rate, and congestion. Time range can be modified to
show information from the last 1-24 hours or a custom date and time range. It is also possible to save performance data for later
viewing.
With vSAN 7, the performance service is automatically enabled at the cluster level. The performance service is responsible for collecting
and presenting Cluster, Host and Virtual Machine performance related metrics for vSAN powered environments. The performance
service is integrated into ESXi, running on each host, and collects the data in a database, as an object on a vSAN Datastore. The
performance service database is stored as a vSAN object independent of vCenter Server. A storage policy is assigned to the object to
control space consumption and availability of that object. If it becomes unavailable, performance history for the cluster cannot be
viewed until access to the object is restored.
Performance Metrics are stored for 90 days and are captured at 5 minute intervals.
1. Select mgmt-cluster
2.Select Configure
2.Note that the Stats DB is using the vSAN Default Storage Policy (RAID 5 - Erasure Coding) and is reporting Compliant status
Let's examine the various Performance views next at a Cluster, Host and Virtual Machine level.
1. Select mgmt-cluster
2.Select Monitor
“Front End” VM traffic is defined as the type of storage traffic being generated by the VMs themselves (the reads they are requesting,
and the writes they are committing). “Back End” vSAN traffic accounts for replica traffic (I/Os in order to make the data redundant/
highly available), as well as synchronization traffic. Both of these traffic types take place on the dedicated vSAN vmkernel interface(s)
per vSphere Host.
2.Select Monitor
Note that we can view the various Performance views at the Host level (you can also customize the Time Range if desired).
In this view, we can see more Performance related metrics at the Host level vs. Cluster. Feel free to examine the various categories
indicated in Step 4 to get a feel for the information that is available.
2.Select Monitor
4.Note that we can choose to view VM and Virtual Disks Performance views at the Virtual Machine level (you can also customize
the Time Range if desired). (If no results show up, click Show Results in this box to refresh; you may need to wait 1-2 minutes.)
Conclusion [166]
In this module, we showed you how to validate vSAN Health, Monitor vSAN Capacity & Performance as well as utilize vRealize
Operations for vCenter and vRealize Operations Dashboards.
Introduction [169]
This module contains a total of four different interactive simulations (iSIMs) that are all related to Workload Domain Operations. These demonstrations will walk
you through the following processes:
•Host Commissioning
You can hide the manual to use more of the screen for the simulation.
NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.
Hands-on Labs Interactive Simulation: Create a vLCM Image-Based Workload Domain [171]
You can hide the manual to use more of the screen for the simulation.
NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.
You can hide the manual to use more of the screen for the simulation.
NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.
Hands-on Labs Interactive Simulation: Cluster Upgrades with vLCM Images [173]
You can hide the manual to use more of the screen for the simulation.
NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.
Conclusion [174]
You have completed Module 4 and should now have a good understanding of some operations including the creation and lifecycle
management required to grow and maintain a Workload Domain. Please continue to Module 5.
Module 8 - Introducing Software Defined Networking: Segments and Distributed Routing (45
Minutes) Intermediate
Software-Defined Networking in VMware Cloud Foundation is provided by VMware NSX. NSX operates as an “Overlay Network”,
where the networking capabilities are delivered in software, and “encapsulated” within standard TCP/IP packets transported by a
standard IP “underlay” network.
NSX enables customers to create elastic, logical networks that span physical network boundaries. NSX abstracts the physical network
into a pool of capacity and separates the consumption of these services from the underlying physical infrastructure. This model is similar
to the model vSphere uses to abstract compute capacity from the server hardware to create virtual pools of resources that can be
consumed as a service.
This exercise will deploy the necessary networking components to support a simple two-tier application called “Opencart”. Opencart
is an open-source E-Commerce platform that uses an Apache front end, and a MySQL backend. This lab uses preconfigured Apache
and MySQL VMs that we will attach to newly created SDN segments.
1. Please ensure that the Lab Status is green and says “Ready”.
2.After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.
Once the browser has launched you will see two tabs open by default. The first tab is the SDDC Manager Login, the second is the
vCenter Login.
1. Open a new tab and verify the page URL to ensure you have the correct user interface. The NSX Manager login URL should read https://mgmt-nsx.vcf.sddc.lab/login.jsp
3.What you see are the existing segments and routers that are used elsewhere in the lab environment. By the end of this lab exercise, you will create the configuration shown below, with 2 new segments connected to a Tier-1 gateway, with a load balancer.
OC-Web-Segment
•10.1.1.0/27
•Gateway 10.1.1.1/27
•OC-Apache-A 10.1.1.18
•OC-Apache-B 10.1.1.19
OC-DB-Segment
•10.1.1.32/27
•Gateway 10.1.1.33/27
•OC-MySQL 10.1.1.50
1. All other settings should remain default scroll to the bottom, click SAVE
2.You will see your segment has been successfully created. Click NO on the Want to continue configuring this segment?
1. All other settings should remain default scroll to the bottom, click SAVE
2.You will see your segment has been successfully created. Click NO on the Want to continue configuring this segment?
1. Select the vCenter tab and verify the page URL to ensure you have the correct user interface. The vCenter login URL should
read https://mgmt-vcenter.vcf.sddc.lab
This step will attach our two Apache web server VMs to the OC-Web-Segment.
1. From the vCenter Server Hosts and Clusters view, find OC-Apache-A in the left-side scroll list.
2.Right-click on OC-Apache-A
2.Select Browse
NOTE: The network segment that you just created in NSX is now visible in vSphere.
1. Click on OC-Web-Segment
1. From the vCenter Server Hosts and Clusters view, find OC-Apache-B in the left-side scroll list.
2.Right-click on OC-Apache-B
2.Select Browse
1. Click on OC-Web-Segment
With 2 VMs on the segment, we can test connectivity between the VMs. IP Assignment is as follows:
•OC-Apache-A 10.1.1.18
•OC-Apache-B 10.1.1.19
1. Open a web console on OC-Apache-B by clicking the LAUNCH WEB CONSOLE button
You may need to hit enter in the window to get a login prompt
1. Login with:
◦Username: ocuser
◦Password: VMware123!
1. Check the interface configuration by typing ifconfig at the prompt and hitting Enter
1. Test connectivity with OC-Apache-A by typing ping 10.1.1.18 at the prompt and hitting Enter
Notice we can communicate from OC-Apache-B to OC-Apache-A on the network we just created.
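For reference, the console session might look similar to the following (a sketch only; the interface name and exact output depend on the guest OS configuration, so treat ens192 as an assumption):

  ocuser@oc-apache-b:~$ ifconfig
  ens192: inet 10.1.1.19  netmask 255.255.255.224  ...
  ocuser@oc-apache-b:~$ ping 10.1.1.18
  64 bytes from 10.1.1.18: icmp_seq=1 ttl=64 time=0.4 ms
  (press Ctrl+C to stop the ping)

The /27 subnet mask (255.255.255.224) and the replies from 10.1.1.18 confirm both VMs are attached to the OC-Web-Segment we just created.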
5.Notice the “N” showing this is an NSX segment versus a Standard Port Group
6.The segment ID is shown along with the transport zone the segment is a part of
8.Lower on the screen you can see which VDS this segment is configured on
NOTE: With vSphere 7 and NSX-T 3.1 and higher versions, NSX segments are an extension of the vSphere Distributed Switch and are
completely visible to the vSphere team
2.Notice there is a port per virtual machine we attached to the segment, along with MAC addresses for the interfaces on the VMs.
NOTE: Our OC-Web-Segment is connected to each ESXi host in the transport zone. When a segment is created, it is accessible to all
hosts in the transport zone.
◦Username: admin
◦Password: VMware123!VMware123!
Note: You may need to drag the screen to the left and zoom in to see the right side of the topology display. Look for a
Notice the 2 VMs you configured on the OC-Web-Segment in the topology map.
Summary [205]
This lab shows how simple it is to create an overlay network using NSX Manager. In this example, we created a fully functional IP subnet
on an overlay network segment in just a few steps. Unlike traditional VLANs in vSphere, segments do not require any underlying VLAN
configuration.
In this exercise, we will use the Traceflow capability in NSX to view traffic moving between virtual machines on the same host, on the
same segment.
Traceflow injects packets into a vSphere distributed switch (VDS) port and provides observation points along the packet’s path as it
traverses the overlay and underlay network. Observation points include entry and exit of distributed firewalls, host and edge TEPs,
logical routers, etc. This allows you to identify the path (or paths) a packet takes to reach its destination or, conversely, where a packet
is dropped along the way. Each entity reports the packet handling on input and output, so you can determine whether issues occur
when receiving a packet or when forwarding the packet. Traceflow is not the same as a ping request/response that goes from guest-VM
stack to guest-VM stack. With the NSX prepped VDS in VCF, NSX Traceflow can inject and monitor packets at the point where a VM
vNIC connects to the VDS switch port. This means that a Traceflow can be successful even when the guest VM is powered down. Note:
If the VM has not been powered on since attaching the NSX segment, the NSX control plane cannot know which host to use to inject
packets from that VM as the source and that test will fail.
▪Username: administrator@vsphere.local
▪Password: VMware123!
Use the same steps in the previous page to locate the host OC-Apache-B is on. If it is running on the same host as OC-Apache-A, skip
the next steps to migrate the VM.
NOTE: If OC-Apache-B is not on the same host as OC-Apache-A, initiate a vMotion to move it to the same host.
▪Username: admin
▪Password: VMware123!VMware123!
Notice the path the data packets take on the resulting topology view. We can see that packets move from OC-Apache-A to OC-
Apache-B via the OC-Web-Segment.
•The physical hop count is 0, indicating that the packet did not leave the host
•The packet was injected at the network adapter for OC-Apache-A virtual machine
•It is then received at the distributed firewall at the VDS port for OC-Apache-A
•With no rule blocking, the packet is then forwarded on from the sending VDS port
•The packet is then received on the distributed firewall at the receiving VDS port for OC-Apache-B.
•With no rule blocking forwarding, the packet is then forwarded to the destination
•The last step shows the packet is delivered to the network adapter for the OC-Apache-B VM
Summary [218]
This section shows how ICMP packets travel between the VDS ports for 2 virtual machines running on the same ESXi host. You can see the packet enter the VDS at the source, pass through the source-side distributed firewall, get forwarded to the destination-side distributed firewall, and finally arrive at the destination VDS port.
◦Username: administrator@vsphere.local
◦Password: VMware123!
NOTE: If OC-Apache-B is on the same host as OC-Apache-A, initiate a vMotion to move OC-Apache-B to a different host. As we are
using vSAN storage, you only need to vMotion compute resources.
◦Username: admin
◦Password: VMware123!VMware123!
1. If your most recent Traceflow is still on screen between OC-Apache-A and OC-Apache-B, click on the RETRACE button
1. If the previous Traceflow is not correct, Click on the NEW TRACE button
NOTE: Notice the path the data packets take on the resulting topology view. We can see that packets move from OC-Apache-A to OC-Apache-B via the OC-Web-Segment.
•The physical hop count increments to 1 part way through the flow, indicating that the packet left the host
•The packet was injected at the network adapter for OC-Apache-A virtual machine
•It is then received at the distributed firewall at the VDS port for OC-Apache-A
•With no rule blocking, the packet is then forwarded on from the sending VDS port
•The packet then hits the physical layer to transmit to the second host. Notice the local and remote endpoints are shown
•The packet is then received on the second host. Notice the inverse local and remote endpoint IPs. The local and remote endpoints are the “Tunneling End Points” (TEP). When the OC-Apache-B virtual machine was migrated to another host, NSX Manager updated all hosts in the transport zone with the new TEP for the virtual machine
•The packet is then received on the distributed firewall at the receiving VDS port for OC-Apache-B.
•With no rule blocking forwarding, the packet is then forwarded to the destination
•The last step shows the packet is delivered to the network adapter for the OC-Apache-B VM
3.Click on Hosts
Note: Each host has 2 TEP interfaces in the Host TEP VLAN. In the lab configuration, Host TEP addresses are DHCP assigned on the
172.16.254.1/24 network with Cloud Builder acting as the DHCP server for ESXi hosts connecting to the Host TEP Network.
Compare this to the logical layout of the environment. Notice each host has two TEP interfaces on the DHCP-based Host TEP Network.
The NSX Manager, interfaced with a vCenter Server instance, is responsible for updating all transport nodes in the transport zone any
time a VM powers on or is migrated. This provides a mapping of VM to TEP addresses to send the overlay traffic for a specific VM. As a
“Tunnel End Point,” the NSX prepped vSphere Distributed switch is responsible for de-encapsulating the overlay traffic to a VM and
encapsulating the traffic to communicate on the overlay. This is transparent to the VM and the underlay network.
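If you want to confirm the TEP addresses directly on a host, one option (a sketch only, assuming SSH access to an ESXi host is enabled in your lab; the vmkernel interface names NSX creates for TEPs, such as vmk10 and vmk11, vary by host preparation) is:

  esxcli network ip interface ipv4 get

In the command output, look for the two DHCP-assigned addresses on the 172.16.254.x Host TEP network described above.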
Summary [229]
In this section, we show how ICMP packets travel between the VDS ports for 2 virtual machines running on different ESXi hosts. Like
section two, you first see the packet passes from where it enters the VDS at the source, through the source side firewall. The packet
then gets encapsulated and placed on the segment (overlay network) via the source side TEP and forwarded to the destination TEP.
The destination TEP de-encapsulates the packet and passes to the destination side distributed firewall, and finally to the VDS port. This
is a very simple example of the power of overlay networks in VCF. The source and destination physical machines do not have to be in
the same subnet, as would be common in multi-rack configuration with Spine/Leaf physical networking.
While we created two segments earlier in the lab, they are currently not connected to each other or to other parts of the network. In this section, we will create an NSX Tier-1 router, connect the OC-Web-Segment and OC-DB-Segment to the T1 router, then connect the T1 router to the existing T0 router in the lab configuration.
▪Username: admin
▪Password: VMware123!VMware123!
9.Enable All Connected Segments & Service Ports by ensuring the toggle is green
3.Select Edit
2.From the vCenter Server, click on the Hosts and Clusters view
3.Find the OC-MySQL virtual machine in the left-side scroll list and right-click it
2.Select Browse...
1. Click on OC-DB-Segment
4.Review your newly created Tier-1 Router and see the segments that are connected
2.From the vCenter Server, click on the Hosts and Clusters view
You may need to click enter on your keyboard to bring up a login prompt
1. Login with:
◦Username: ocuser
◦Password: VMware123!
3.Successful ping replies mean that OC-MySQL can communicate with OC-Apache-A via the OC-T1 router.
2.From the NSX Manager interface click on the Plan & Troubleshoot tab
Your output should look like this. Notice the communications go through the OC-T1 router we created.
•The packet was injected at the network adapter for OC-Apache-A virtual machine
•It is then received at the distributed firewall at the VDS port for OC-Apache-A
•With no rule blocking, the packet is then forwarded on from the sending VDS port
•The packet then hits the OC-T1 router and gets forwarded to the OC-DB-Segment
•Since OC-Apache-A and OC-MySQL are running on different ESXi hosts, you notice the physical hop between TEPs
•The packet is then received on the distributed firewall at the receiving VDS port for OC-MySQL
•With no rule blocking forwarding, the packet is then forwarded to the destination, the last step shows the packet is delivered
•Username: ocuser
•Password: VMware123!
By successfully connecting PuTTY from the lab console to OC-MySQL we have tested the entire SDN connection. In this lab, the NSX
Edge Cluster connects via BGP to the pod router where our lab console is connected. SSH traffic flows from our Windows console to
the pod router, over BGP links to the Tier-0 router, to the OC-T1 router, and finally to the OC-MySQL VM on the OC-DB-Segment, and
back.
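If you prefer a command line over PuTTY and an OpenSSH client is available on the lab console (an assumption for this sketch), the same end-to-end test can be run as:

  ssh ocuser@10.1.1.50

A successful login to OC-MySQL (password VMware123!) confirms the full path from the console through the Tier-0 and OC-T1 routers to the OC-DB-Segment.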
Summary [253]
In this section, we show how packets travel between the VDS ports for 2 virtual machines running on different ESXi hosts across two
segments connected by a Tier-1 router. The important distinction is this router functionality was distributed across all hosts versus a
physical device cabled somewhere else in the data center.
This section will validate that our web server VMs and database VM are working correctly before moving on to load balancing and
security modules.
1. Login with
◦Username: ocuser
◦Password: VMware123!
3.You should see the OC-Apache-A web page (the web servers for this lab module were modified to show the name of the host serving the page)
Summary [260]
Congratulations! In just the time it took to get this far in the lab, you have deployed 2 brand new overlay network segments, a software-
defined router, and connected them all so you can access applications from outside the network.
You have completed Module 1 and should now have a good understanding of how to create NSX Segments and configure Distributed
Routing. You should also understand how to test basic connectivity and how to use the Traceflow tool for further troubleshooting.
Please continue to Module 2 - "Changing the Security Game with Microsegmentation"
Module 9 - Changing the Security Game – Distributed Firewall (45 Minutes) Intermediate
Over the past 10+ years, traffic in the data center has changed. More and more traffic stays within the data center, moving between
distributed application components. This traffic, known as “East-West”, is difficult to secure using traditional perimeter firewalls,
which were predominantly designed for traditional “North-South” traffic.
Micro-segmentation enables administrators to increase the agility and efficiency of the data center while maintaining an acceptable
security posture. Micro-segmentation decreases the level of risk and increases the security posture of the modern data center.
Micro-segmentation with NSX in VCF is applied at the vNIC to VDS interface. Packets are inspected as they enter and leave each virtual
machine. Micro-segmentation is effectively a centralized packet filtering solution that acts on every machine.
This section will explore the use of tagging to create groups of VMs to apply specific distributed firewall rules to. In small environments,
creating groups based on VM names may suffice. However, as your environment grows, tagging may be a better alternative.
•Tags - A virtual machine is not directly managed by NSX; however, NSX allows the attachment of tags to a virtual machine. This tagging enables tag-based grouping of objects (e.g., you can apply a Tag called “AppServer” to all application servers).
•Security Groups - Security Groups enable you to assign security policies, such as distributed firewall rules, to a group of objects, such as virtual machines. In addition to Tags, you can also create groups based on VM attributes such as VM Name.
•Security Policies - Each firewall rule contains policies that act as instructions that determine whether a packet should be allowed or blocked, which protocols it is allowed to use, which ports it is allowed to use, etc. Policies can be either stateful or stateless.
Tagging in NSX is distinct from tagging in vCenter Server, and at this time vCenter Server tags cannot be used to create grouping in
NSX. In a larger, more automated environment customers would use a solution such as Aria Automation to deploy virtual machines and
containers with security tagging set at the time of creation.
Given that the demo Opencart application only has two webservers and one database server, we’re going to create two tags as
criteria for two groups. This might seem somewhat redundant, creating one tag per group, but it’s essential to remember:
•This is a small sample 2-tier application. For applications leveraging micro-services, you’ll be able to group more than one
machine under one tag, and better leverage the security groups
•The advantage of using tags/groups is also an operational one. Once you create your infrastructure around Security Groups
that contain tags, the moment you tag a machine with a specific tag, it immediately inherits the specific Security Group,
Firewall rules, and so on. This brings us closer to the cloud delivery model.
•The downside is that a certain level of caution is needed when working with tags and security groups: while it is easy to add a machine to an existing security group and avoid the complication that comes with setting up the firewall rules, it is just as easy to undermine good security by giving the new machine too many permissions
To show the capability of tags we will set up OC-Apache-A with the appropriate tag and a security group. Then we’ll have OC-
Apache-B inherit the web tag and see how easy it is to apply all the appropriate rules to “a new machine”. VM->Tag->Group
mapping is as follows
1. Please ensure that the Lab Status is green and says “Ready”.
2.After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.
1. Once the browser has launched you will see two tabs open by default. The first tab is the SDDC Manager Login, the second is
the vCenter Login.
2.Open a new tab and verify the page URL to ensure you have the correct user interface. The NSX Manager login URL should read https://mgmt-nsx.vcf.sddc.lab/login.jsp
1. Click on Inventory
2.Click on Tags
Note: Don’t hit save yet. As you can see, the note says the tag must be assigned to at least one object first
1. Select the OC-Apache-A virtual machine (you may need to scroll down)
1. The “Assigned To” value has incremented to 1
2.Click on the SAVE button
1. Select the OC-MySQL virtual machine (you may need to scroll down)
1. Click on Inventory
2.Click on Tags
2.Click on OK
4.Click on “Set” to add group members. In this example, we will use the tags we just created to populate the group
3.Click on “Set” to add group members. In this example, we will use the tags we just created to populate the group
NOTE: Each group should have a single VM as a member (we can ignore IP addresses, Ports, and VIF for now).
Summary [292]
This section shows how to create tagging and grouping in NSX. This capability allows the creation and management of a scalable set of
distributed firewall rules.
This section will show configuring the distributed firewall to limit access in our OpenCart Application. For this lab, we will create the
following rules.
Keep in mind that this all happens at the distributed firewall level, where firewall rules are implemented at the VM switch port versus
needing the services of a routed (perimeter) firewall to implement. Since we have created groups in the previous section, now we can
create access rules based on these groups.
1. Ensure you are on the NSX Manager tab in the Chrome Browser
2.Click Security
NOTE: The default is the entire distributed firewall, however, we want this rule to apply to the groups we created in the previous labs.
2.Search for OC
1. Click on the three vertical dots on the left of the Opencart policy
1. Click on the three vertical dots on the left of the inbound-web-80 rule
Notice at this point we have two rules in place that are defaulted to Allow, and we have not yet published the rule changes. Leave this
as is for the moment. Next, we will test that both ports are currently active on our web server.
1. Return to the NSX Manager tab. Click the arrow next to Allow on inbound-web-8080
NOTE: The Publish button is grayed out, showing there are no uncommitted changes. The green Success indicator is set at the policy
level, and our two rules now have ID numbers showing they have been activated.
1. Go to the OC-Apache-A port 8080 browser tab and refresh the page
1. Test operations of the OC-Apache-B webserver on port 80 using the bookmark in the bookmark bar
Observe that the Reject rule on port 8080 does not extend to OC-Apache-B.
2.Click on Inventory
3.Click on Tags
4.Click on Filter, select Tag
1. Click SAVE
1. Return to the browser tab for OC-Apache-B web server on port 8080 and refresh browser tab
2.As soon as the tag was applied to the OC-Apache-B VM, it immediately became a part of the Opencart Policy, because it became a member of the web Security Group
This step will allow communications from the Apache web servers to the MySQL server.
2.Browse to http://oc-apache-a.vcf.sddc.lab
NOTE: This will allow us to see the impact of the firewall blocking access from Apache to MySQL which will be useful later in the lab.
1. Reset the Web-DB rule to Allow and Publish before moving on to the next step
This step will implement a Deny All Inbound rule which will deny all inbound traffic we have not explicitly allowed. This step will also show the order of rule evaluation within a security policy.
2.Browse to http://oc-apache-a.vcf.sddc.lab
3.The web page load should fail. This is because the Deny-All rule is evaluated before our Allow rules.
2.Move the Deny-All-Inbound rule down by left-clicking the mouse and holding down with the cursor anywhere on the Deny-
All-Inbound rule line and dragging the rule below our inbound-web-80 rule
NOTE: We now have a hard Deny-All rule in our policy. Unless port 8080 is explicitly allowed, it will be blocked, so the earlier explicit Reject rule for port 8080 is no longer needed.
2.Browse to http://oc-apache-a.vcf.sddc.lab:8080
This step will implement a rule to allow SSH connections to our Apache and MySQL VMs, but only from our inside admin network
(10.0.0.0/24).
1. Click on IP Addresses
3.Login with
◦Username: ocuser
◦Password: VMware123!
NOTE: Observe that we have blocked SSH between the web servers but allow it from our admin network to the web servers.
This section will implement a rule to allow ICMP (Ping) to our Apache and MySQL VMs, but only from inside our admin network (10.0.0.0/24), as Ping is often used to determine host accessibility in many security threat situations.
1. Traceflow should show the ICMP packet dropped at the first firewall point (OC-Apache-A) before being placed on the
segment.
1. Review the observations panel and notice the packet was dropped at OC-Apache A between the VM NIC and OC-Web-
Segment.
1. On the Traceflow screen, click on the EDIT button to reconfigure the trace then
•The packet was injected at the network adapter for OC-Apache-A virtual machine
•It is then received at the distributed firewall at the VDS port for OC-Apache-A
•With no rule blocking, the packet is then forwarded on from the sending VDS port
•The packet then hits the OC-T1 router and gets forwarded to the OC-DB-Segment
•Since OC-Apache-A and OC-MySQL are running on different ESXi hosts, you notice the physical hop between TEPs
•The packet is then received on the distributed firewall at the receiving VDS port for OC-MySQL
•With no rule blocking forwarding, the packet is then forwarded to the destination, the last step shows the packet is delivered
Notice the flexibility of Traceflow to allow us to troubleshoot our distributed firewall and distributed routing using appropriate
communications protocols.
This module shows the power of the distributed firewall capability in NSX. Using tagging and grouping, we were able to create a
scalable set of rules for our Opencart application that only allows necessary communications for application operation, while blocking all
other traffic. This was all done directly at the vSphere VDS switch port level, versus a piece of hardware elsewhere in the data center.
Please continue to Module 3 - "Load Balancing"
In this module, we will use NSX to create a basic load balancer for HTTP traffic to our OpenCart web servers. We will configure a Layer 7 load balancer on the NSX Tier-1 router created in the earlier networking module.
1. Please ensure that the Lab Status is green and says “Ready”.
2.After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.
Once the browser has launched you will see two tabs open by default. The first tab is the SDDC Manager Login, the second is the
vCenter Login.
1. Open a new tab and verify the page URL to ensure you have the correct user interface. The NSX Manager login URL should read https://mgmt-nsx.vcf.sddc.lab/login.jsp
To support stateful services, such as a Layer 3-7 firewall or a load balancer, we need to configure OC-T1 as a Services Router (SR). This simply means associating OC-T1 with our existing NSX Edge cluster in the management domain.
4.Select Edit
A server pool is a set of servers that can share the same content, in this example it is our Apache web servers.
A Virtual Server is an IP address that acts as the front end for a Server Pool.
2.Login with
◦Username: ocuser
◦Password: VMware123!
Note: The Active Monitor has detected the failure of OC-Apache-A quickly and will no longer send requests to it
Refresh the browser for the OC-VIP tab several times in a row. You should see both OC-Apache-A and OC-Apache-B web pages, as the Active Monitor quickly detects the return of OC-Apache-A.
Summary [378]
In this section we configured all the required components for a load balancer to function in NSX. You also had the opportunity to test its
functionality including basic monitoring.
In this module, we explored how quickly a load balancer can be instantiated on the NSX Tier-1 router.
•A load balancer is connected to a Tier-1 logical router. The load balancer hosts single or multiple virtual servers.
•A virtual server is an abstract of an application service, represented by a unique combination of IP, port, and protocol. The
•A server pool consists of a group of servers. The server pools include individual server pool members.
◦Health check monitors - Active monitors, which include HTTP, HTTPS, TCP, UDP, and ICMP, and passive monitors
This module provides an introduction to migrating application workloads from existing vSphere 7.x environment to VMware Cloud
Foundation. For the purposes of this lab, imagine that we are working on a datacenter consolidation project. Our source site is in
Rhinelander, WI and our destination lab is in Brussels, Belgium. We have been tasked with migrating the application VMs from our
source site to the destination site. Because we have VMware Cloud Foundation, we can use VMware HCX to migrate the application
VMs without downtime, and without changing the IPs. Within this lab HCX has been installed and configured. We will review the
topology of both sites. Then we will begin the process of migrating VMs and finish by migrating the Gateway. At the end of the lesson
we will review the installation and configuration of the HCX appliances to see how this all works behind the scenes.
You can hide the manual to use more of the screen for the simulation.
NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.
Hands-on Labs Interactive Simulation: HCX installation, Activation, & Site Pairing [384]
You can hide the manual to use more of the screen for the simulation.
NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.
Hands-on Labs Interactive Simulation: HCX Network Profile, Compute Profile, Service Mesh [385]
You can hide the manual to use more of the screen for the simulation.
NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.
You can hide the manual to use more of the screen for the simulation.
NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.
Conclusion [387]
In this lab, you have seen the powerful migration capabilities with VMware HCX. VMware HCX streamlines application migration,
workload rebalancing, and business continuity across data centers and clouds. We have demonstrated how HCX can move applications
seamlessly between environments at scale and avoid the cost and complexity of refactoring applications. We dove into the details of
how to install, configure and extend networks using HCX. When your business is ready to migrate your applications between any
VMware-based cloud, you now know how easy this can be with VMware HCX. Only VMware gives you the freedom to run your applications wherever you need them without downtime.
In this lab, we show how to use Aria Automation Assembler to deploy an OpenCart instance to pre-defined NSX networks configured
on the Cloud Foundation management domain.
•Creating NSX Segments with DHCP services and connecting them to the OC-T1 Tier-1 router
In this exercise, we will create two new NSX segments to host the OpenCart web and database servers. Each segment will use a /24
subnet and reserve a part of the address space for Aria Automation deployed services like load balancers and the remainder for DHCP
boot of hosts.
1. Please ensure that the Lab Status is green and says “Ready”.
2.After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.
◦Username: admin
◦Password: VMware123!VMware123!
From the NSX Manager, start by creating a new software-defined network and assigning the IP subnet 172.16.20.0/24.
Within the topology view, you should see the OC-Auto-T1 router with only 1 segment below it; you may need to zoom in to see the Tier-1 names. We are going to add more segments to this Tier-1 router for use by Aria Automation Assembler. The next steps will create the
following networks:
OC-DB-Auto-Seg
•10.1.3.0/24
•Gateway 10.1.3.1/24
OC-Web-Auto-Seg
•10.1.4.0/24
•Gateway 10.1.4.1/24
From the NSX Manager, start by creating a new software-defined network and assigning the IP subnet 10.1.3.0/24.
Observe that a new segment with the name “OC-DB-Auto-Seg” has been created.
The topology view shows the new segment with its network path for routing out to the external network.
From the NSX Manager, start by creating a new software-defined network and assigning the IP subnet 10.1.4.0/24.
2.Set the DHCP Profile to the only option starting with DHCP.....
1. If you hover your mouse over where it says '1 Service on OC-DB-Auto-Seg', you will see that it shows DHCP service. This is
In this exercise we will configure a new Network Profile in Aria Automation Assembler for the OC-DB-Auto-Seg segment and its associated IP address range.
For purposes of the lab, the prerequisite steps of adding a VMware Cloud Foundation Cloud Account to Aria Automation, and creating
the associated Project, Cloud Zones, and Image and Flavor mappings have already been accomplished. Feel free to click through the
configuration and review these settings. However, be careful to not make any changes.
Next, we assign the network that will be used with this Network Profile.
Next, we assign a range of IP addresses that Assembler can use when deploying network services (such as a virtual server on the load
balancer).
Note: you may be presented with a self-signed certificate warning and will need to accept it to proceed
Login with:
Aria Automation Assembler allows cloud administrators to define application workloads, using cloud templates, which can be deployed
across different VMware Clouds.
Begin by creating a Network Policy for the OC-DB-Auto-Seg NSX Segment that was created in the previous exercises.
Next, we assign the network that will be used with this Network Profile.
Next, we assign a range of IP addresses that Cloud Assembly can use when deploying network services (such as a virtual server on the
load balancer).
Verify the creation of the IP range by making sure the network field is not blank.
1. Click CLOSE
Note: In the first section we created a DHCP server inside NSX and assigned it the IP address range 10.1.3.100 - 10.1.3.253. This range
represents the IPs that will be assigned to the VMs that get deployed on the OC-DB-Auto-Seg segment. Here we are assigning a
different IP range (10.1.3.2 - 10.1.3.99) to Aria Automation Assembler. This range represents the IPs that Assembler will assign to NSX
services that get created as part of the cloud template deployments. For example, IPs in this range will be assigned to any virtual
servers created on the NSX load balancer.
Next, we identify the associated Edge Cluster for the network along with its Tier 0 gateway router.
1. Click CREATE
The Network Profile is created. With the network and related services created inside NSX, and the network profile defined in Cloud
Assembly we are ready to deploy application workloads. To do this we will use a Cloud Template.
Begin by creating a Network Policy for the OC-Web-Auto-Seg NSX Segment that was created in the previous exercises.
1. For the Account / Region, from the drop down select: HOL-Site-1-Mgmt / mgmt-datacenter
2.In the Name field enter: OC-Web-Auto-Seg
1. Select OC-Web-Auto-Seg
Verify the creation of the IP range by making sure the network field is not blank.
Note, in the first section we created a DHCP server inside NSX and assigned it the IP address range 10.1.4.100 - 10.1.4.253. This range
represents the IPs that will be assigned to the VMs that get deployed on the OC-Web-Auto-Seg segment. Here we are assigning a
different IP range (10.1.4.2 - 10.1.4.99) to Assembler. This range represents the IPs that Assembler will assign to NSX services that get
created as part of the cloud template deployments. For example, IPs in this range will be assigned to any virtual servers created on the
NSX load balancer.
Next, we identify the associated Edge Cluster for the network along with its Tier 0 gateway router.
1. Click CREATE
The Network Profile is created. With the network and related services created inside NSX, and the network profile defined in Assembler
we are ready to deploy application workloads. To do this we will use a Cloud Template.
In the previous exercises, we created a new NSX segment with an associated DHCP. We then added a network profile in Assembler for
this new network. In this exercise, we will use a Cloud Template to deploy an application onto the network.
Note: for purposes of the lab the cloud template has been pre-configured. Feel free to explore the template, however, be careful that
you don’t make any changes.
•Two network resources that connect deployed virtual machines to the correct networks
•A Cloud NSX Load Balancer which configures the virtual server for this instance of OpenCart on the existing OC-Auto-LB load
balancer specified as part of the template
•One or more Apache web servers (number of servers set when the user deploys the template)
2.This highlights the relevant part of the YAML file for this cloud template
Note the OC-Web-Auto-Seg resource is looking for an existing network with a capability tag of oc-fixed-network:oc-web.
These are known as “constraints”. Assembler needs to find a Network Profile with Capabilities that meet these Constraints
when deploying this template
2.This highlights the relevant part of the YAML file for this cloud template. The DB NSX-Network has constraints of oc-fixed-
network:oc-db
2.The load balancer resource will create virtual server resources on the OC-Web-Auto-Seg segment, with members of the server pool (instances) based on the number of OC-Apache-Auto web servers this template deploys. The load balancer is configured to listen on Port 80 (Protocol and Port), and talk to the backend Apache server on Port 80 (InstanceProtocol and InstancePort)
2.This resource creates an Apache server from a basic Ubuntu template using the extensive “Cloud Init” functionality built into
Assembler. Notice this resource uses both Flavor and Image mapping.
The remainder of the Apache resource definition will add needed Linux packages, configure users, and then configure the
Apache Webserver for our OpenCart application
2.This resource creates the MySQL database server from a basic Ubuntu template using the extensive “Cloud Init”
functionality built into Assembler. Notice this resource uses both Flavor and Image mapping.
Note: Leave the default Values of small Node Size and medium Front End Cluster Size
2.Click on the TEST button
The test function will evaluate the blueprint against the infrastructure to verify that things are properly configured in the lab. Ensure the
test is successful. If the test fails, work with the lab moderator to resolve any problems. Typical problems at this point are related to the
network profile names and capabilities.
Aria Automation Assembler will deploy the template. This will take approximately 10 minutes in the lab.
You can monitor the progress from Assembler as well as by connecting to the vSphere Client to watch as the VMs are deployed.
Note: If the deployment fails, attempt to restart it by navigating to the Deployments tab, and for the failed deployment, select
“Update” from the three-dot menu. Contact the lab moderator for assistance.
2.Click on the History tab to see more details about what is happening as it proceeds through deployment
1. The topmost box describes the item to be created. In this case, we are allocating network space from an existing segment
2.The second box shows which project this template is derived from. Access to resources can be controlled with projects
3.The bottom row shows the process Assembler walks through to choose where to allocate this network. In effect, Assembler
chooses the first Network Profile it finds that meets the constraints of the object being provisioned.
•The other Network Profile does not meet the constraints and is ineligible
3.Notice how the Network Profile that meets the constraints for OC-DB-Auto-Seg changes
1. Click the > to explore the components of the OpenCart-Fixed Demo deployment
2.Observe two deployed OC-Apache-Auto-XXX web servers on the 10.1.4.x network, with IP addresses in the range controlled
by NSX for DHCP on the OC-Web-Auto-Seg. (Note: The numeric suffix after the resources name is set by Assembler to keep
resource names unique. This naming mechanism was chosen during the initial Assembler set up in this environment).
4.An NSX Load Balancer on the 10.1.4.x network, with IP address in the range controlled by Assembler on the OC-Web-Auto-
Seg
5.Observe this IP address of the load balancer as we will be using it to test our application
Note: The IP address of your load balancer may be different than shown in the above screenshot
2.Enter the IP address of the NSX Load Balancer (http://<ip>) in the URL field
Note: As you refresh the page, the name of the host shown in the “Connected to:” field is updated. With this, we can confirm that the
NSX load balancer is distributing the connections across the two web servers.
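As a quick alternative to refreshing the browser, you could exercise the virtual server from any shell that has curl available and can reach the VIP (a sketch only; substitute the load balancer IP you noted earlier for the placeholder):

  for i in 1 2 3 4; do curl -s http://<load-balancer-ip>/ | grep "Connected to"; done

Seeing the “Connected to:” value alternate between the two OC-Apache-Auto web servers confirms the NSX load balancer is distributing requests across the pool.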
2.Log in with:
◦Username: administrator@vsphere.local
◦Password: VMware123!
1. Navigate to the Hosts and Clusters View. Observe the two “OC-Apache-Auto-###” VMs, as well as the single “OC-MySQL-Auto-###” VM running in our inventory. These VMs were deployed from Assembler as part of the template.
Note: Due to the nested nature of the lab environment it is common to see vSAN alerts/alarms in the vSphere Client. These can be safely ignored.
•CPU and Memory sizes match “Flavor = Small” from the Assembler Flavor Mapping. You can inspect Flavor Mapping in
Assembler by navigating to Assembler>Infrastructure (top bar)> Configure>Flavor Mappings (left bar), and then clicking on
small
•The VM is connected to OC-Web-Auto-Seg based on the OC-Web-Auto-Seg Network Profile selected for this VM. This was selected by the constraint oc-fixed-network:oc-web being matched in the network profile
•Finally, note that the VMs may be in different resource pools; this is defined in Assembler and can be inspected by navigating to Assembler>Infrastructure (top bar)>Configure>Cloud Zones> Select mgmt domain>Select Compute (top bar). Currently, any resource pool is allowed; however, this can be changed to only include specific resource pools. Feel free to change this and test; however, you will want to delete the existing deployed application first.
◦Username: admin
◦Password: VMware123!VMware123!
1. Click the OC-Auto-LB-###-pool-1 under the server pool to inspect the members
1. Here we can see the two OC-Apache-Auto-### servers that are listed as members on this Load Balancer. This will allow the load balancer to distribute requests across both web servers.
2.Click on the Close button, and then the Close button again, returning to the load balancer overview
1. Click the VMware Cloud Services browser tab. If you happen to have been logged out, log in again using the info below:
Credentials: configadmin
Password: VMware123!
5.Click on Delete
Note: It will take ~2 minutes for Cloud Assembly to delete the application.
In this module, we showed how to use Aria Automation Assembler to deploy application workloads onto a pre-defined NSX network.
We began by creating an NSX Segment. We then created a DHCP Server and associated it with the segment. Next, we associated an
NSX Load Balancer that was previously created with the tier 1 logical router. After the network and related services had been created,
we defined a Network Profile in Assembler for the network. Finally, we used a Cloud Template to deploy a sample web server
application on a new network, that was provisioned in minutes through software-defined networking. In fact, the networking team
didn’t even need to be engaged to create the network in the core.
In this module, we use Aria Automation Assembler to dynamically deploy software-defined networking objects inside VMware Cloud Foundation's NSX implementation as part of an application deployment. We begin by reviewing the
vCenter and NSX inventory. We then create a network profile inside Assembler. We then deploy a template to demonstrate how an on-
demand NSX segment is deployed along with a corresponding tier 1 router, DHCP Server, and load balancer.
Prior to beginning the exercises, please close all the windows on the desktop.
A network profile defines a group of networks and network settings that are available for a cloud account in a particular region or
Datacenter in VMware Aria Automation.
You typically define network profiles to support a target deployment environment, for example a small test environment where an
existing network has outbound access only or a large load-balanced production environment that needs a set of security policies. Think
of a network profile as a collection of workload-specific network characteristics.
1. Please ensure that the Lab Status is green and says “Ready”.
2.After you have verified that the lab is ready, please launch Google Chrome using the shortcut on the desktop.
Note: you may be presented with a self-signed certificate warning and will need to accept it to proceed
Login with:
Aria Automation Assembler allows cloud administrators to define application workloads, using cloud templates, which can be deployed
across different VMware Clouds.
Aria Automation Assembler has built-in integration with VMware NSX-T. This integration allows for the creation of software-defined
objects inside NSX-T directly from Assembler. This built-in integration includes support for creating tier-1 logical routers, segments,
DHCP Servers, and load balancers.
To enable this integration, you create a network profile. In the Network Profile:
- We identify the cloud location (i.e. endpoint) where the software-defined networking components will be created (in the lab this is the
VCF management domain).
- We specify the network isolation type. In this module, we will use a network isolation type of “on-demand”. This indicates to
Assembler that it will interface with NSX to create the software-defined objects as part of the application deployment.
- We identify the parameters (i.e. NSX transport zone, Tier-0 gateway, NSX Edge Cluster) that Cloud Assembly will use to connect to
NSX and create the software-defined objects.
Begin by creating a new network profile for the Cloud Foundation management domain.
Note: You may need to scroll down to locate it in the Resources section
3.Click on VCF-edge_mgmt-edge-cluster_segment_uplink1_11, we will be updating both edge networks but will be doing this
one first.
2.Click on VCF-edge_mgmt-edge-cluster_segment_uplink2_12
4.Click SAVE
2.Select both VCF-edge segments shown (This allows Cloud Assembly created and routed networks to reach the outside via the
Tier-0 uplinks)
7. Leave Source at Internal (VRA will act as IPAM for this segment)
1. Here we can see the New Database Network Profile has been created.
2.Select both VCF-edge segments shown (This allows Cloud Assembly created and routed networks to reach the outside via the
Tier-0 uplinks)
7. Leave Source at Internal (VRA will act as IPAM for this segment)
1. Here we can see the New Web Server Network Profile has been created.
With the network profile created, we are ready to upload and deploy new Templates where we have defined resources for deploying an
on-demand NSX network and related objects.
1. Click Close
5.The upload wizard should default to the downloads directory, select the Opencart Cloud Network.yaml file
•Two network resources that create on-demand NSX networks and T1 routers
•On-demand NSX Load Balancer and virtual servers for this instance of OpenCart
•One or more Apache web servers (number of servers set when the user deploys the template)
•Two Security objects are attached to respective virtual machines, to create on-demand security policies per VM type
2.This highlights the relevant part of the YAML file for this cloud template
◦Note the OC-Web-Cloud-Seg resource will create a new “routed” network. It will match with a network profile
2.The OC-DB-Cloud-Seg has a constraint of oc-fixed-network:oc-db that will need to be matched by a corresponding network
profile
2.The load balancer resource will create a new load balancer and virtual server resources on the OC-Web-Cloud-Seg segment,
with members of the server pool (instances) based on the number of OC-Apache-Cloud web servers this template deploys.
The load balancer is configured to listen on Port 80 Protocol and Port), and talk to the backend Apache server on Port 80
2.This resource creates an on-demand distributed firewall policy that applies to virtual machines created by the OC-Apache-
Cloud VM resource.
◦This creates a set of rules similar to what you have created and used in the previous OpenCart lab modules.
2.This resource creates a second on-demand distributed firewall policy that applies to the database virtual machine created by the MySQL VM resource.
◦This creates a set of rules similar to what you have created and used in the previous OpenCart lab modules.
1. The test function will evaluate the cloud template against the infrastructure to verify that things are properly configured in the
lab. Ensure the test is successful. If the test fails, work with the lab moderator to resolve any problems.
Assembler will deploy the cloud template. This will take approximately 15 minutes in the lab.
You can monitor the progress from Assembler as well as by connecting to the vSphere Client to watch as the VMs are deployed.
Note: If the deployment fails, attempt to restart it by navigating to the Deployments tab, and for the failed deployment, select
“Update” from the three-dot menu. Contact the lab moderator for assistance.
1. Once complete you can see how long the deployment has taken. (If this reaches 10 minutes or more, try refreshing your browser.)
1. Use the scroll bar to review the historical information about the application deployment
The provisioning diagram is one of the best troubleshooting tools available for diagnosing failing deployments. This exercise will only
show the initial network allocation to familiarize you with navigating the provisioning diagram
The initial screen presented will default to the first network provisioned, which in this lab is OC-Web-Cloud-Seg
1. The topmost box describes the item to be created. In this case, we are creating a new network space due to the type
ROUTED
2.The second box shows the project that this template is a part of. Access to resources can be controlled with projects
3.The bottom row shows the process Assembler walks through to choose where to allocate this network. In effect, Assembler
chooses the first Network Profile it finds that meets the constraints of the object being provisioned.
◦The remaining Network Profiles do not meet the constraints and are ineligible
3.Notice how the Network Profile that meets the constraints for OC-DB-Cloud-Seg changes
5.Click on the Close button again (button not shown in the screenshot)
2.Two deployed OC-Apache-Cloud-XXX web servers on the 10.1.6.x network, with IP addresses controlled by Assembler for
DHCP on the OC-Web-Cloud-Seg. (Note: The numeric suffix after the resources name is set by Assembler to keep resource
names unique. This naming mechanism was chosen during the initial Assembler set up in this environment).
4.An NSX Load Balancer on the 10.1.6.x network, with an IP address in the range controlled by Assembler on the OC-Web-
Cloud-Seg.
Note down this IP address as you will need it in the following step.
2.Enter the IP address of the NSX Load Balancer (http://<ip>) noted in the previous step; yours may be different than 10.1.6.2
With the shopping cart application running, we will now look at the underlying components that were deployed in vSphere and NSX.
◦Username: administrator@vsphere.local
◦Password: VMware123!
4.From the hosts and clusters view, Select one of the OC-Apache-Cloud webservers identified in the Assembler Deployment
5.Notice:
◦CPU and Memory sizes match “Flavor = Small” from Assembler Flavor Mapping.
◦The VM is connected to OC-Web-Cloud-Seg based on the OC-Web-Cloud-Seg Network Profile selected for this
VM. This was selected by the constraint oc-cloud-network:oc-web being matched in the network profile
Note the NSX segment (i.e. OC-Web-Cloud-Seg-###) the VM is connected to. We will connect to the NSX Manager and view details
for this network.
Username: admin
Password: VMware123!VMware123!
Click on the Login button
(Optional) Click zoom or scroll the mouse wheel; you can also click and drag around
Note that a new segment has been created with a name similar to “OC-Web-Cloud-Seg-###”. This segment was created by Assembler as part of the application deployment.
You will be able to locate the OC-Web-Cloud-Seg-### and the OC-DB-Cloud-Seg-### Tier-1 Gateways that were dynamically created
with the use of the routed network type selected in the YAML file.
1. Click on 2 Services, and you will see a Load Balancer and Firewall rules have been provisioned
2.Click on 2 VMs at the bottom to expand out to see the VMs as shown above
Both of these VMs were given DHCP address leases by a DHCP server that was dynamically provisioned by Assembler
Here we can see the virtual server, its associated IP address, ports, etc
1. Here we can see the list of both OC-Apache-Cloud-### servers, their IP address, port, weight, etc.
5.All of these rules and their states were derived from the cloud template creating a dynamic security policy. This ensures from
the time that this application is deployed to the time it is retired it adheres to the security policy set forth by the organization.
1. Click on the Aria Automation Assembler browser tab. (note you may need to re-authenticate)
Credentials: configadmin
Password: VMware123!
Optional: If you have a vCenter Server window open during the delete process, you will see virtual machines power off and
being deleted
In this module, we showed how to use Aria Automation Assembler to deploy application workloads on to an on-demand NSX network.
We began by creating a Network Profile in Assembler that identified the network attributes to be used. We then deployed a sample
shopping cart application on the new network from a template. We then explored the objects inside vSphere and NSX to become
familiar with the components that were deployed. Based on your environment would you see an improvement in turnaround times
delivering infrastructure back to the business?
Module 14 - Kubernetes Overview and Deploying vSphere Pod VMs (30 minutes) Advanced
This module provides an introduction to Cloud Foundation with Tanzu and shows how to run vSphere Pod VMs on a vSphere Cluster.
Cloud Foundation (VCF) with Tanzu introduces a new construct that is called vSphere Pod, which is the equivalent of a Kubernetes pod.
A vSphere Pod is a special type of virtual machine with a small footprint that runs one or more Linux containers. Each vSphere Pod is
sized precisely for the workload that it accommodates and has explicit resource reservations for that workload. It allocates the exact
amount of storage, memory, and CPU resources required for the workload to run. vSphere Pods are only supported with Supervisor
Clusters that are configured with NSX-T Data Center as the networking stack.
Note that while Pod VMs are unique to vSphere, they are deployed and managed the same as any upstream conformant Kubernetes
pod.
This module contains six exercises. The exercises are successive and must be completed in order.
It is estimated that it will take ~20 minutes to complete all six exercises.
1. Open Chrome and click on the vCenter tab and verify the page URL to ensure you have the correct user interface. The
1. Navigate to Menu
2.Select Inventory
Observe that there are five ESXi hosts configured in a vSphere cluster named "mgmt-cluster". Running on this cluster are two NSX Edge transport nodes (mgmt-edge01, mgmt-edge02). Kubernetes is enabled on the cluster, as evident by the three "SupervisorControlPlane" virtual machines.
Note: the term "supervisor cluster" is used to denote a vSphere cluster on which Kubernetes has been enabled.
1. Click ns01
vSphere Pod VMs and Tanzu Kubernetes Clusters get deployed inside vSphere Namespaces. Namespaces control developer access
and define resource boundaries. Namespaces are isolated from each other, enabling a degree of multi-tenancy.
In the lab, we see that the "devteam" group has been granted edit permissions to the "ns01" namespace. Also, developers working in
this namespace have access to the "K8s Storage Policy". There are currently no resource limits set.
To access the Kubernetes instance running on this vSphere cluster, developers use the control plane IP address. To get the Kubernetes
Control Plane IP address:
1. Navigate to Menu
Note: In the vSphere client, the Kubernetes features are enabled under "Workload Management". Enabling Kubernetes is referred to as
enabling the "Workload Control Plane (WCP)".
The 'Control Plane Node IP Address' shown for "k8s-lab" is 172.16.10.2. This is the address the developer will use to connect to the
Kubernetes control plane.
In this exercise, we reviewed the lab configuration. We saw that the lab consists of a Cloud Foundation domain named "mgmt-wld"
containing a four-node vSphere cluster named "mgmt-cluster". An NSX-T Edge Cluster has been configured on the cluster
and Workload Management (i.e. Kubernetes) has been enabled. The Kubernetes control plane IP is 172.16.10.2.
In this exercise, we use Putty to log in to the developer's Linux workstation where we will authenticate to the Kubernetes control plane
and choose a vSphere Namespace in preparation for deploying the container-based application.
2.Click CentOS
3.Click Load
4.Click Open
◦Login: root
◦Password: VMware123!
From the Linux workstation, we use the "kubectl" command to connect to the Kubernetes control plane running on the vSphere cluster.
Authentication is done using vCenter Single Sign-On (SSO).
In the lab, we use the account "sam" that is in the "vsphere.local" SSO domain. This account is a member of the "devteam" group which
has been assigned "edit" privileges to the "ns01" namespace.
Notes:
•Before developers can authenticate, they need to download the vSphere CLI tools to their workstation. The CLI tools provide
a version of the "kubectl" command that includes a vCenter SSO authentication plug-in.
•The IP address passed as part of the "--server" flag is the IP address of the Kubernetes control plane that we looked at in the
previous exercise.
Run the following “kubectl vsphere login …” command to log into the Kubernetes Control Plane:
Next, set the Kubernetes context to the “ns01” namespace by running the command “kubectl config use-context ns01”.
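The exact commands are not reproduced in this extract. A representative form, using the control plane IP and developer account described above (the flags shown are the standard ones provided by the vSphere kubectl plugin), is:
kubectl vsphere login --server=172.16.10.2 --vsphere-username sam@vsphere.local --insecure-skip-tls-verify
kubectl config use-context ns01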
The developer Sam has successfully authenticated and set his context to the "ns01" namespace. Any Kubernetes objects deployed by
Sam will be created in the "ns01" namespace.
In this exercise, we saw how developers access the Kubernetes instance running on the vSphere Cluster. Developers must first
download the vSphere CLI Tools, which includes a version of the "kubectl" command that includes a vSphere authentication plugin.
They then use the “kubectl” command to authenticate and set their context.
We will now deploy a container-based application that will run as a vSphere Pod VM on the vSphere cluster.
1. cd ~/demo
2.cat demo1.yaml
The "demo1.yaml
demo1.yaml" manifest deploys a simple container-based application that is comprised of a single "nginx
nginx" web server. The
application is implemented as a Kubernetes "deployment" comprised of a single pod and includes a "load balancer" service that will be
implemented inside of NSX.
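The command used to apply the manifest is not captured in this extract; it is typically a single kubectl command along these lines:
kubectl apply -f demo1.yaml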
The output shows the creation of both the deployment and service. Wait ~60 seconds to allow the image to be pulled and the container
to start. Then run the following commands to view details about the deployment:
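The commands themselves are not shown in this extract; they are typically the standard kubectl queries, for example:
kubectl get deployments
kubectl get pods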
Wait for the pod to enter a running state before continuing. In the lab, this can take two or three minutes.
1. Navigate to Menu
2.Select Inventory
1. Select ns01
Observe the vSphere Pod VM is now shown in the vCenter inventory under the ns01 namespace.
Follow the steps below to view additional details about the Kubernetes components:
1. Select ns01
Follow the steps below to view additional details about the Kubernetes components:
3.Click Deployments
Note: Both the VCF administrator and the developer have full visibility into the vSphere Pod VMs deployed on the vSphere cluster.
In this exercise, we deployed a simple web server using the "nginx" container image. We saw a sample YAML manifest and the steps
developers use to deploy Kubernetes-based workloads directly on the vSphere cluster.
In the previous exercise a single vSphere Pod VM was deployed as part of a Kubernetes "deployment". Let's scale this up so that we
have three pods to provide redundancy for the webserver.
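The scale command is not captured in this extract. A representative form (the deployment name "demo1" is an assumption based on the manifest and service names used in this module) is:
kubectl scale deployment demo1 --replicas=3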
Wait 30 seconds and then query the deployments and pods. Note that there are now three pods running.
Wait for all three pods to enter a running state. It may take two or three minutes. Feel free to re-run the query from step 4 until all pods
show a status of "Running".
Observe that all three pods are shown in the vSphere client.
This exercise demonstrated how developers can interact with Kubernetes running on the vSphere cluster. VCF with Tanzu provides a
native Kubernetes experience to developers.
The YAML manifest we used to deploy the vSphere Pod VMs includes a "service" object. This object is used to enable external network
access to the web server. This access is achieved using an NSX load balancer that was deployed by Kubernetes at the time of the
deployment.
Note: the load balancer was automatically created when the Kubernetes deployment was created. There is no requirement to manually
create NSX objects or do any configuration inside NSX prior to deploying Pod VMs.
Run the "kubectl get services" command to view details about the service.
Note the EXTERNAL IP 172.16.10.4 has been assigned to the demo1 service. This is the IP address that has been assigned to the NSX
load balancer. We access the web server using this IP.
3.Login:
◦Username: admin
◦Password: VMware123!VMware123!
4.Click LOG IN
Click Save if you get the Customer Experience Improvement Program screen
Note: You may notice that there is a warning that a 3-node cluster is recommended. This is expected in the lab to minimize resource
usage.
Note: NSX is installed and configured with each workload domain. This includes the configuration of the NSX Container Plugin (NCP) at
the time that Kubernetes is enabled. The NCP provides NSX integration with Kubernetes.
1. Enter the Load Balancer Ingress IP address 172.16.10.4 in the search field and press enter
The NSX load balancer that is serving requests to the Nginx web server running on the three pods is shown. Again, the NSX Load
Balancer was automatically created by Kubernetes when we created the deployment. This was done using the "Service" resource
defined in the demo1.yaml manifest.
3.Click the load balancer name link (domain-c8-<id>) to go to the Load Balancer configuration page.
Here we see the details for the virtual server, including the IP address, port, and type.
1. Click the link in the Server Pool column to view the Server Pool.
We see the three pods listed as members of the server pool. Let's add more pods to the deployment and watch the server pool
members list get updated.
Next, we return to the Putty terminal where we will scale the deployment again in order to show how the server pool in the NSX load
balancer is automatically updated as the size of the Kubernetes deployment increases.
Run the following commands to scale the deployment to 10 pods and verify all pods reach the "running" state:
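A sketch of these commands, again assuming the deployment is named "demo1":
kubectl scale deployment demo1 --replicas=10
kubectl get pods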
Note: In the Linux shell, you can use the up-arrow key to scroll back through previous commands and then edit and modify them to
avoid retyping the full command.
We see the Server Pool Members list automatically updated with the new pods (you may need to scroll down to see all 10).
Run the following commands to scale the deployment back down to 2 pods:
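For example, assuming the deployment is named "demo1":
kubectl scale deployment demo1 --replicas=2
kubectl get pods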
Confirm the Server Pool Members list is updated to include the 2 remaining Pod VMs. Your IPs may differ from what is shown.
1. Click Close.
In this exercise, we looked at the NSX integration that comes with VCF with Tanzu. NSX makes it easy for developers to configure
network-related services (such as NAT and load balancers) for their container-based applications. The developers define the objects in
a YAML manifest. When the manifests are applied, Kubernetes instantiates the objects defined in it. This includes interfacing with the
NCP plug-in to automate the creation of the related network objects inside NSX.
Wait ~30 seconds and then run the following commands to verify the Kubernetes objects are deleted:
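The commands are not captured in this extract. After deleting the objects (for example with "kubectl delete -f demo1.yaml"), the verification queries are typically:
kubectl get deployments
kubectl get pods
kubectl get services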
In this module, we saw how vSphere with Tanzu enables customers to run containers directly on vSphere. We saw how the
configuration and access to the environment is controlled by the VCF administrator from the vSphere client. We also saw how
developers authenticate and use YAML manifests to deploy Pod VMs. Pod VMs created by the developer are visible to the VCF admin.
We also looked at how VCF and NSX make it easy for developers to expose services on the network.
•VCF with Tanzu introduces a new construct called a vSphere Pod, which is the equivalent of a Kubernetes pod.
•vSphere Namespaces with SSO authentication enable the admin to restrict access and control resource consumption.
•VCF with Tanzu makes it easy to deploy Kubernetes networking services, such as Load Balancer (L4) and Ingress (L7).
•To the admin, VCF with Tanzu looks and feels like vSphere. To the developer, VCF with Tanzu looks and feels like Kubernetes.
This module shows how to deploy a Tanzu Kubernetes Cluster (TKC) inside a vSphere Namespace.
The vSphere supervisor cluster provides a management layer from which Tanzu Kubernetes Clusters (TKCs) are built. The Tanzu
Kubernetes Grid Service is a custom controller manager with a set of controllers that runs on the supervisor cluster. One of the roles of
the Tanzu Kubernetes Grid Service is to provision Tanzu Kubernetes clusters.
While there is a one-to-one relationship between the Supervisor Cluster and the vSphere cluster, there is a one-to-many relationship
between the supervisor cluster and Tanzu Kubernetes clusters. You can provision multiple Tanzu Kubernetes clusters within a single
supervisor cluster. The workload management functionality provided by the supervisor cluster gives you control over the cluster
configuration and lifecycle while allowing you to maintain concurrency with upstream Kubernetes.
You deploy one or more Tanzu Kubernetes clusters to a vSphere namespace. Resource quotas and storage policy are applied to a
vSphere Namespace and inherited by the Tanzu Kubernetes clusters deployed there.
When you provision a Tanzu Kubernetes cluster, a resource pool and VM folder are created in the vSphere Namespace. The Tanzu
Kubernetes cluster control plane and worker node VMs are placed within this resource pool and VM folder. Using the vSphere Client,
you can view this hierarchy by selecting the Hosts and Clusters perspective and selecting the VMs and Templates view.
It is estimated that it will take ~25 minutes to complete this exercise. It is recommended that you close all browsers or Putty windows on
the desktop prior to beginning this module.
In this exercise, we deploy a TKC named "tkc02" in the "ns01" namespace. The TKC is deployed with one control plane and one worker
node.
Note, that this configuration is intentionally small to facilitate the lab. It is not recommended for production-grade TKC deployments,
which should always have three control plane VMs with multiple worker nodes.
Login:
1. Username: administrator@vsphere.local
2.Password: VMware123!
3.Click LOGIN
TKC clusters are deployed inside vSphere namespaces. Here we see a TKC named "tkc01" that is deployed in the "ns01" namespace.
We will add a second TKC named "tkc02".
•Login to the Kubernetes Control Plane and set the context to the vSphere namespace.
2.Click CentOS
3.Click Load
4.Click Open
◦Login: root
◦Password: VMware123!
From the Putty terminal, run the following commands to view the TKC manifest that will be used to deploy the TKC.
1. cd ~/tkc
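The second command in this step is not captured in this extract; it simply displays the manifest file, for example (the file name is illustrative):
2. cat tkc02.yaml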
Below is an example of the YAML manifest file that will be used in this exercise. Take a few minutes to familiarize yourself with the
contents of this file.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc02
  namespace: ns01
spec:
  distribution:
    version: 1.23.8
  topology:
    controlPlane:
    workers:
  settings:
    network:
      cni:
        name: antrea
      services:
      pods:
    storage:
Notes:
The manifest file specifies the name of the TKC along with the name of the vSphere namespace where it will be deployed. It is also
where the developers indicate the version of Kubernetes to use, along with the number and size of the control plane and worker nodes,
as well as network and storage settings.
The storage classes in the YAML manifest map directly to vSphere storage policies that are part of the vSphere Cloud Native Storage
(CNS). CNS is a vSphere feature that makes K8s aware of how to provision storage on vSphere on-demand, in a fully automated,
scalable fashion as well as providing visibility into container volumes from the vSphere client.
While the developer is able to specify the size (e.g. "class") and the number of virtual machines that get deployed as part of a TKC, they
are bound by the resource limits set for the vSphere namespace. While TKCs allow developers to self-provision Kubernetes clusters on
vSphere, the vSphere administrator is able to control and manage the amount of vSphere cluster resources (CPU, memory, storage)
they can consume.
With the YAML manifest in place, we are ready to connect to the Kubernetes Control Plane and set the context to the vSphere
namespace "ns01".
Begin by querying for a list of the currently deployed TKCs in the "ns01" namespace.
We can monitor the progress of the TKC deployment from the Linux workstation using these commands:
Note, that you may need to run these commands a few times as it can take a couple of minutes for the deployment to begin in the lab.
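The commands are not captured in this extract. A representative sequence (the manifest file name is illustrative) is:
kubectl get tkc
kubectl apply -f tkc02.yaml
kubectl get tkc
kubectl describe tkc tkc02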
You can also monitor the progress of the virtual machine deployments from the vSphere Web Client.
Note that under the "ns01" namespace there is now a new TKC object named "tkc02".
Monitor the control plane and worker node OVF deployments from the vSphere client. A few minutes after the control plane node is
deployed, the worker node will be deployed. Wait for both virtual machines to be deployed and powered on.
Note, in the lab, it typically takes ~15 minutes. However, when the cloud back-end is highly congested this can take upwards of ~30
minutes to complete in the lab. Please be patient.
After the control plane and worker virtual machines have deployed and powered on, return to the Putty Terminal and confirm the TKC is
running.
Note, wait for all nodes to show a status of "ready". It may take several minutes before the worker node reaches the ready state.
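The command used to remove the cluster is not captured in this extract; deleting and re-checking is typically done along these lines:
kubectl delete tkc tkc02
kubectl get tkc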
Wait ~30 seconds and then verify "tkc02" has been removed. Note, it may take a few (i.e. up to 5) minutes for the delete tkc command
to complete.
Return to the vSphere web client and verify the TKC cluster "tkc02" is no longer shown under the namespace "ns01".
This module showed how to deploy a Tanzu Kubernetes Cluster (TKC) inside a vSphere Namespace.
VCF with Tanzu enables developers to deploy one or more Tanzu Kubernetes clusters to a vSphere Namespace. Resource quotas and
storage policy are applied to a vSphere Namespace and inherited by the Tanzu Kubernetes clusters deployed there.
When you provision a Tanzu Kubernetes cluster, a resource pool and VM folder are created in the vSphere Namespace. The Tanzu
Kubernetes cluster control plane and worker node VMs are placed within this resource pool and VM folder.
Once deployed, TKCs can be expanded by adding additional worker nodes, resized by increasing the CPU and memory assigned to
each node, upgraded, and deleted. These functions are explored in the next sections of this lab.
•The Tanzu Kubernetes Grid Service is a custom controller manager with a set of controllers that is part of the supervisor
cluster. The purpose of the Tanzu Kubernetes Grid Service is to provision Tanzu Kubernetes clusters.
•There is a one-to-many relationship between the supervisor cluster and Tanzu Kubernetes clusters. You can provision multiple Tanzu Kubernetes clusters within a single supervisor cluster.
•The workload management functionality provided by the supervisor cluster gives developers control over the cluster
configuration and lifecycle while allowing you to maintain concurrency with upstream Kubernetes.
•You deploy one or more Tanzu Kubernetes clusters to a vSphere namespace. Resource quotas and storage policy are applied
to a vSphere Namespace and inherited by the Tanzu Kubernetes clusters deployed there.
Module 16 - Adding Worker Nodes to a Tanzu Kubernetes Cluster (15 minutes) Advanced
In this exercise, we will expand a Tanzu Kubernetes Cluster (TKC) by adding an additional worker node.
Being able to dynamically allocate capacity on demand, and add additional capacity over time is a critical capability of a modern private
cloud. When running Kubernetes workloads on top of Cloud Foundation with Tanzu it's easy to allocate infrastructure for hosting TKCs
and to expand the infrastructure over time.
1. You can scale the TKC horizontally by adding more worker nodes.
2.You can scale the TKC vertically by increasing the size of the worker nodes.
In this exercise, we will show how to add additional worker nodes to an existing TKC.
Note: While the developer is able to increase the size and number of virtual machines that get deployed as part of a TKC, the vSphere
administrator controls the amount of vSphere cluster resources (CPU, memory, storage) they can consume.
To avoid confusion while navigating through the lab it is recommended that you close all browsers and putty windows on the desktop
prior to starting this exercise.
Login:
1. Username: administrator@vsphere.local
2.Password: VMware123!
3.Click LOGIN
We start by viewing the details of the TKC from the vSphere client. We then switch to the Linux workstation to show how easy it is for
developers to add additional worker nodes.
Note, to facilitate the lab we use a TKC with a single control plane and worker node. This configuration is not recommended for
production-grade TKC deployments which should always have three control plane VMs with multiple worker nodes.
1. Click ns01.
Note the details for the TKC. The status is "Running", the version is v1.23.8, and the Control Plane address is 172.16.10.3.
1. Click Menu
The details for k8s-lab are shown. Note the supervisor cluster Control Plane IP address (172.16.10.2). This is the IP address the
developer will use to connect and expand the TKC.
2.Click CentOS
3.Click Load
4.Click Open
◦Login: root
◦Password: VMware123!
Log into the Kubernetes Control Plane and set the context to the "ns01" namespace:
The developer sees the same information that was shown to the vSphere admin in the vSphere client. Again, we are able to observe
that the TKC currently has two VMs - one Control Plane and one Worker node.
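The command that opens the cluster manifest for editing is not captured in this extract; it is typically of the form:
kubectl edit tkc tkc01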
Note: The vi editor has two modes. By default, you start in the “command mode”. While in this mode you use the arrow keys to scroll
through the doc and are able to run search commands. To edit the contents of the file you need to switch to “entry mode”. After
making changes, press the “esc” key to switch back to command mode to save the file and exit the vi editor.
•Press the escape key (esc) to verify you are in command mode.
◦This is done by typing “/” followed by the string “workers”. Note that the command is displayed at the bottom
of the putty window. You may need to press 'n' to jump to the next occurrence.
1. Use the arrow keys, move the cursor down to the “workers:” section, and position the cursor on the “1” in the string
“replicas: 1”
1. Press the “r” key (r = replace) then press “2”. The “1” will be replaced by “2”.
2.Press the escape key (esc) to verify you are in command mode.
3.Type “:wq” and press enter to save the change and exit the vi editor.
Kubernetes automatically detects the change and begins the process to add the second worker node.
Note the TKC status shows "updating" while the additional node is being added and that the TKC now has three VMs.
1. Click the “vSphere – Clusters” browser tab to return to the vSphere client
It will take approximately 5 minutes for the new worker node to be deployed.
After the worker node has been deployed and powered on, return to the Putty terminal.
Run the "kubectl describe tkc tkc01" command to confirm the TKC now has two worker nodes.
Repeat running the "kubectl describe tkc tkc01" command until the worker node status is "ready". This takes approximately 5 minutes.
Next, we will shrink the TKC back to its original size by reducing the number of worker nodes back down to one.
Note the TKC currently has one control plane node and two worker nodes. Edit the configuration a second time to reduce the worker
node count to one.
Note: The vi editor has two modes. By default you start in the “command mode”. While in this mode you use the arrow keys to scroll
through the doc and are able to run search commands. To edit the contents of the file you need to switch to “entry mode”. After
making changes, press the “esc” key to switch back to command mode to save the file and exit the vi editor.
1. Press the escape key (esc) to verify you are in command mode.
◦This is done by typing “/” followed by the string “replicas: 2”. Note that the command is displayed at the bottom of the putty window.
1. Use the arrow keys to move the cursor down to the “workers:” section and position the cursor on the “2” in the string
“replicas: 2”
2.Press the escape key (esc) to verify you are in command mode.
3. Type “:wq” and press enter to save the change and exit the vi editor.
Wait ~30 seconds and then re-run the commands to query the TKC configuration.
Return to the vSphere Client. Verify there are now two VMs in the "tkc01" TKC. One control plane node and one worker node. In the
Recent Tasks you can see the deletion of the tkc-worker vm.
Being able to dynamically allocate capacity on demand, and add additional capacity over time is a critical capability for any modern
cloud. When running Kubernetes workloads on top of Cloud Foundation with Tanzu it's easy to not only allocate infrastructure for
hosting TKCs, but also to resize TKCs as needed.
1. You can scale the TKC horizontally by adding more worker nodes.
2.You can scale the TKC vertically by increasing the size of the worker nodes.
In this exercise, you saw how to add capacity to a TKC by adding an additional worker node.
•Being able to dynamically allocate capacity on demand, and add additional capacity over time, is a critical capability of a modern private cloud.
•When running Kubernetes workloads on top of Cloud Foundation with Tanzu it's easy to allocate infrastructure for hosting TKCs and to expand the infrastructure over time.
•While the developer is able to increase the size and number of virtual machines that get deployed as part of a TKC, the
vSphere administrator is able to control and manage the amount of vSphere cluster resources (CPU, memory, storage) they
can consume.
Module 17 - Adding Capacity to a Tanzu Kubernetes Worker Node (15 minutes) Advanced
In this exercise, we will add capacity to a Tanzu Kubernetes Cluster (TKC) by resizing the worker nodes.
Being able to dynamically allocate capacity on demand, and later grow that capacity over time is a critical capability for any modern
cloud. When running Kubernetes workloads on top of Cloud Foundation with Tanzu it's easy to not only allocate infrastructure for
hosting TKCs, but also to resize TKCs as needed.
1. You can scale the TKC horizontally by adding more worker nodes.
2.You can scale the TKC vertically by increasing the size of the worker nodes.
In this exercise, we will show how to increase the size of the TKC worker nodes.
While the developer is able to increase the size and number of virtual machines that get deployed as part of a TKC, the vSphere
administrator controls the amount of vSphere cluster resources (CPU, memory, storage) they can consume.
We start by viewing the details of the TKC from the vSphere client. We then switch to the Linux workstation to show how easy it is for
developers to increase the size of the TKC worker nodes.
To avoid confusion while navigating through the lab it is recommended that you close all browsers and putty windows on the desktop
prior to starting this exercise.
Login:
1. Username: administrator@vsphere.local
2.Password: VMware123!
3.Click LOGIN
Under the "ns01" namespace note the TKC cluster is named "tkc01". The TKC is made up of two VMs, one control plane node, and one
worker node.
Note, that the lab uses a TKC with a single control plane and a single worker node. This configuration is not recommended for
production-grade TKC deployments which should always have three control plane VMs with multiple worker nodes.
Note that the worker node is currently configured with 2 vCPUs and 2GB of memory.
We will resize the worker nodes by adding an additional 2GB of memory so that each has a total of 4GB.
Get the IP address for the Supervisor Cluster. This is the IP address the developer uses to connect to the Kubernetes Control Plane.
1. Click Menu
The details for the Supervisor Cluster "k8s-lab" are shown. Note the Control Plane IP address is 172.16.10.2.
2.Click CentOS
3.Click Load
4.Click Open
◦Login: root
◦Password: VMware123!
Log onto the Kubernetes Control Plane as the user "sam@vsphere.local" and set the context to the "ns01" namespace:
The developer sees the same information that was shown to the vSphere admin in the vSphere client. Again, we are able to observe
that the TKC currently has two VMs - one Control Plane and one Worker node.
We will now increase the size of the worker node in the TKC by changing the virtual machine "class".
Note: The vi editor has two modes. By default, you start in the “command mode”. While in this mode you use the arrow keys to scroll
through the doc and are able to run search commands. To edit the contents of the file you need to switch to “entry mode”. After
making changes, press the “esc” key to switch back to command mode to save the file and exit the vi editor.
1. Press the escape key (esc) to verify you are in command mode.
1. This is done by typing “/” followed by the string “replicas: 1”. Note that the command is displayed at the bottom of the putty window.
1. Use the arrow keys to move the cursor down to the “workers:” section. Position the cursor on the “x” in the string
“vmClass: best-effort-xsmall”.
2. With the cursor on the “x”, press the “x” key to delete the letter “x”. The line should now read “vmClass: best-effort-small”.
3.Press the escape key (esc) to verify you are in command mode.
4. Type “:wq” and press enter to save the change and exit the vi editor.
Kubernetes automatically detects the change and will immediately begin work to deploy a new worker node with the increased sizing
and to remove the original worker node. The new node will replace the original worker node.
Note: The worker node resize is achieved by deploying new VMs based on the new “class” size. These new VMs are then
added to the Kubernetes cluster, after which the original VMs (based on the old class size) are removed.
To monitor the progress of the resize operation query the TKC cluster:
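The query commands are not shown in this extract; they are typically:
kubectl get tkc tkc01
kubectl describe tkc tkc01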
A new virtual machine is deployed with the new capacity settings and joined to the Kubernetes clusters. After a few minutes, the original
worker node (based on the old settings) will be removed.
This will take approximately 10 minutes. If the cloud back-end is highly congested this can take as long as 20 minutes.
Return to the Putty window after the worker node has been removed and the total virtual machine count goes back to two.
From the Linux workstation re-run the "kubectl describe ..." command:
Repeat the "kubectl describe tkc tkc01" command until all VMs are in a "ready" state.
Note that the worker node now has 2 vCPU and 4GB of memory assigned (previously it had 2 vCPUs and 2GB of memory).
1. Click OK
Being able to dynamically allocate capacity on demand, and later grow that capacity over time is a critical capability for any modern
cloud. When running Kubernetes workloads on top of Cloud Foundation with Tanzu it's easy to not only allocate infrastructure for
hosting TKCs, but also to resize TKCs as needed.
1. You can scale the TKC horizontally by adding more worker nodes.
2.You can scale the TKC vertically by increasing the size of the worker nodes.
In this exercise, you saw how to add capacity to a TKC by increasing the size of the TKC worker nodes.
•Being able to dynamically allocate capacity on demand, and add additional capacity over time, is a critical capability of a modern private cloud.
•When running Kubernetes workloads on top of Cloud Foundation with Tanzu it's easy to allocate infrastructure for hosting TKCs and to resize TKCs as needed.
•While the developer is able to increase the size and number of virtual machines that get deployed as part of a TKC, the
vSphere administrator is able to control and manage the amount of vSphere cluster resources (CPU, memory, storage) they
can consume.
This module shows how to upgrade a Tanzu Kubernetes Cluster (TKC) running inside a vSphere Namespace.
The Supervisor Cluster provides a Kubernetes control plane from which Tanzu Kubernetes clusters are built. The Tanzu Kubernetes Grid
Service is a custom controller manager with a set of controllers that is part of the Supervisor Cluster. The purpose of the Tanzu
Kubernetes Grid Service is to provision and lifecycle manage Tanzu Kubernetes clusters.
You can provision multiple Tanzu Kubernetes clusters within a single Supervisor Cluster. The workload management functionality
provided by the Supervisor Cluster gives you control over the cluster configuration and lifecycle while allowing you to maintain
concurrency with upstream Kubernetes.
You deploy one or more Tanzu Kubernetes clusters to a vSphere Namespace. Resource quotas and storage policy are applied to a
vSphere Namespace and inherited by the Tanzu Kubernetes clusters deployed there.
When you upgrade a Tanzu Kubernetes cluster, a new control plane and worker nodes are deployed (using the newer Kubernetes
version) and swapped out with the existing nodes. After the swap, the old nodes will be removed. The nodes are updated sequentially.
To avoid confusion while navigating through the lab it is recommended that you close all browsers and putty windows on the desktop
prior to starting this exercise.
We start with a deployed TKC named "tkc01" that is running Kubernetes version 1.23.8. We will upgrade the TKC to Kubernetes version
1.24.9.
Notes:
To facilitate the lab we use a TKC with a single control plane and one worker node. This configuration is not recommended for
production-grade TKC deployments, which should always have three control plane VMs with multiple worker nodes.
Prior to updating a TKC it may be necessary to first update the vSphere Namespace. The steps to do this are not covered in this
exercise. For purposes of this exercise, we assume the vSphere Namespace is up to date.
Open the Chrome browser and navigate to the Hosts and Clusters view
Login:
1. Username: administrator@vsphere.local
2.Password: VMware123!
3.Click LOGIN
We see that the TKC is comprised of two virtual machines, one control plane node, and one worker node.
1. Click ns01
Note, that there are multiple ways to upgrade a TKC. In this exercise, we will use the "kubectl edit ..." command. Refer to the
documentation for information on alternative upgrade methods.
2.Click CentOS
3.Click Load
4.Click Open
◦Login: root
◦Password: VMware123!
Connect to the Kubernetes control plane running on the supervisor cluster and set the context to the vSphere namespace "ns01".
Next, query for the list of available Kubernetes versions. This list is derived from the available OVF templates saved to the vSphere
Content Library that is associated with the namespace.
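The command itself is not captured in this extract; the shortened form referenced below is:
kubectl get tkr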
The command above uses the shortened syntax for the command "kubectl get tanzukubernetesreleases".
Note, that the available Kubernetes versions are determined by querying the available Kubernetes OVF images that have been saved in
the vSphere content library. If you don't see the version you expect, check the content library settings and ensure the necessary OVF
image is present.
Edit the "tkc01" cluster and change the distribution version in the .spec.distribution.version and .spec.distribution.fullVersion properties
of the cluster manifest.
Note: The vi editor has two modes. By default, you start in the “command mode”. While in this mode you use the arrow keys to scroll
through the doc and are able to run search commands. To edit the contents of the file you need to switch to “entry mode”. After
making changes, press the “esc” key to switch back to command mode to save the file and exit the vi editor.
1. Press the escape key (esc) to verify you are in command mode.
1. This is done by typing “?” followed by the string “controlPlane”. Note that the command is displayed at the bottom of the putty window.
Edit the manifest by changing the tkr reference name under both the controlPlane and the nodePools sections.
•Use the arrow keys to position the cursor on the first letter (“v”) in the string: “v1.23.8---vmware.3-tkg.1”
•Press “i” to enter input mode and type the string “v1.24.9---vmware.1-tkg.4”
•Use the arrow keys to position the cursor on the first letter (“v”) in the string: “v1.23.8---vmware.3-tkg.1”
•Press “i” to enter input mode and type the string “v1.24.9---vmware.1-tkg.4”
Type “:wq
:wq” and press enter to save the change and exit the vi editor.
Example:
From:
topology:
  controlPlane:
    replicas: 1
    storageClass: k8s-storage-policy
    tkr:
      reference:
        name: v1.23.8---vmware.3-tkg.1
    vmClass: guaranteed-xsmall
  nodePools:
  - name: workers
    replicas: 1
    storageClass: k8s-storage-policy
    tkr:
      reference:
        name: v1.23.8---vmware.3-tkg.1
    vmClass: best-effort-xsmall
To:
topology:
  controlPlane:
    replicas: 1
    storageClass: k8s-storage-policy
    tkr:
      reference:
        name: v1.24.9---vmware.1-tkg.4
    vmClass: guaranteed-xsmall
  nodePools:
  - name: workers
    replicas: 1
    storageClass: k8s-storage-policy
    tkr:
      reference:
        name: v1.24.9---vmware.1-tkg.4
    vmClass: best-effort-xsmall
You can also monitor the upgrade progress from the vSphere client.
Watch as the new VMs are deployed and the old VMs removed. Monitor the upgrade from the vSphere client. Wait until both the
control plane and worker nodes have been upgraded.
It will take approximately 10 minutes for both the control plane and worker nodes to be upgraded. If the cloud back-end is highly
congested this can take as long as 40 minutes.
1. Within the host and clusters view, expand mgmt-vcenter.vcf.sddc.lab -> mgmt-datacenter -> mgmt-cluster
When the phase status changes from Upgrading to Running with the target version of v1.24.9, the upgrade deployment is complete. The
Kubernetes cluster itself may still be upgrading, in which case we can continue to monitor it from the console.
Confirm the TKC has been upgraded and is now running Kubernetes version 1.24.9.
The new version can also be verified from the Linux workstation. We can also confirm that no additional updates are available.
This module showed how to upgrade a Tanzu Kubernetes Cluster (TKC) inside a vSphere Namespace.
When you upgrade a Tanzu Kubernetes cluster, new virtual machines get deployed with the newer Kubernetes version. These new
nodes replace the existing nodes, which are subsequently removed. Both the control plane and worker nodes are updated using this
approach. The nodes are updated sequentially.
•When you upgrade a Tanzu Kubernetes cluster, a new control plane and worker nodes are deployed (using the newer
Kubernetes version) and swapped out with the existing nodes. After the swap, the old nodes will be removed. The nodes are
updated sequentially.
VMware Cloud Foundation with Tanzu comes with an embedded image registry that can be used to store and serve container images.
The embedded registry provides a secure image repository from which administrators and developers can push and pull the container
images that will be deployed as vSphere Pod VMs and inside Tanzu Kubernetes Clusters.
A separate registry is deployed for each supervisor cluster. A Harbor Project automatically gets created inside the registry for each
vSphere namespace.
Developers must copy the registry's root SSL certificate to their workstation and use the "docker-credential-vsphere" command to
authenticate. Once authenticated, they use docker commands to download, tag, and push images.
Notes:
•The embedded registry is different from the open-source Harbor Registry project. The embedded registry has a limited
feature set and is tailored for use with vSphere with Tanzu and deploying vSphere Pod VMs.
To avoid confusion while navigating through the lab it is recommended that you close all browsers and putty windows on the desktop
prior to starting this exercise.
Login:
1. Username: administrator@vsphere.local
2.Password: VMware123!
3.Click LOGIN
You use the vSphere client to enable the embedded image registry using the steps below.
For this exercise, we will be using the default k8s-storage-policy storage policy for Harbor components. It will take ~5-10 minutes for
the registry to be created.
2.Click OK
The embedded registry runs on vSphere Pod VMs inside a special vSphere namespace. To view the vSphere Pod VMs:
The embedded registry is comprised of seven vSphere Pod VMs. Click through the list of vSphere Pod VMs to view details about each.
The namespace and Pod VMs are created when the image registry is enabled. These objects will automatically be removed if/when the
registry is disabled.
The embedded registry is accessed and managed using the Harbor UI. A link to the UI is available from the Image Registry tab.
3.When the registry is up and running, the status will turn to Running.
4.Click the https://172.16.10.4 link to access the Harbor UI (note that the IP may be different in your lab)
The registry UI opens in a new browser tab and you are prompted to log in. The embedded image registry uses vCenter SSO for user
authentication.
If prompted about the connection not being private, click Advanced and proceed to the page https://172.16.10.4.
Username: sam@vsphere.local
Password: VMware123!
2.Click LOG IN
We see a project named "ns01". This project corresponds to the vSphere "ns01" namespace. Projects in Harbor are automatically
created for each new vSphere namespace.
We will now create a second namespace named "ns02" from the vSphere UI to demonstrate its effect on the Harbor registry.
1. Within the Workload Management page, click on Namespaces on the left navigation pane
3.Click CREATE
4.Click OK
We will add storage for the ns02 namespace to consume by selecting a storage policy.
In this lab, we will not be adding any capacity limits but it can be done by editing limits from the Capacity and Usage widget.
2.Click OK
2.Click on the browser refresh button. If prompted, login again with the following credentials
Username: sam@vsphere.local
Password: VMware123!
3.Confirm the newly created namespace ns02 has been automatically created in Harbor registry with the matching Project
Name
To push and pull images to and from the embedded registry, you first need to enable access. This is done by downloading the SSL
certificate and using the "docker-credential-vsphere" command to create an authentication token.
4.The certificate is saved with the file name "ca.crt" to the "Downloads" folder.
Next, copy the "ca.crt" certificate to the /root folder on the Linux workstation. In the lab, we will do this once. In a customer
environment, this is a step that would need to be repeated on each developer's workstation where images will be pushed.
Password: VMware123!
3.Click Login
3.Drag the ca.crt file from the Windows download directory to the /root directory on the Linux Workstation.
2.Click CentOS
3.Click Load
4.Click Open
◦Login: root
◦Password: VMware123!
1. cd /etc/docker/certs.d
2.ls -l
3.mkdir 172.16.10.4
Note the directory "172.16.10.4" corresponds to the IP assigned to the embedded registry (this is the same IP used to connect to the
Harbor UI). You may need to use a different IP in your lab.
Copy the ca.crt file from the /root directory into this directory:
1. cd /etc/docker/certs.d/172.16.10.4
2.cp /root/ca.crt .
3.ls -l
Next, use the "docker-credential-vsphere" command to authenticate to the harbor registry and create a token.
For the lab, the "docker-credential-vsphere" tool has already been downloaded to the Linux workstation. Note that it uses vCenter SSO
for user authentication.
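The exact command is not reproduced in this extract. The credential helper is typically invoked against the registry IP (the same IP used for the Harbor UI), for example:
docker-credential-vsphere login 172.16.10.4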
Username: sam@vsphere.local
Password: VMware123!
With the embedded registry enabled, the certificate copied to the developer’s workstation, and a token successfully created we are
now ready to upload images.
Note, that a valid docker account is needed to pull images from docker, as such we will not pull the images from docker in the lab. The
images have already been downloaded. The steps used to do this are shown below for informational purposes.
Do not run these commands in the lab. They are for reference only, intended to show how the images were pulled from docker and
pushed to the embedded registry.
Login to Docker
Download Images:
docker images
Tag Images:
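The individual commands are not fully captured in this extract. For reference, the overall sequence generally looks like the following, where the image name is illustrative and 172.16.10.4 is the registry IP used in this lab:
docker login
docker pull nginx
docker images
docker tag nginx 172.16.10.4/ns01/nginx:latest
docker push 172.16.10.4/ns01/nginx:latest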
2.Click on the browser refresh button. If prompted, login again with the following credentials
Username: sam@vsphere.local
Password: VMware123!
3.If not already in the ns01 project view as shown, click on ns01 to enter the ns01 project view.
5. Here we see the three images uploaded to the "ns01" project. Developers can use these images to deploy containers on the vSphere cluster.
VMware Cloud Foundation with Tanzu comes with an embedded image registry that provides a secure image repository from which
administrators and developers can push and pull the container images deployed as vSphere Pod VMs and inside Tanzu Kubernetes
Clusters.
A separate registry is deployed for each supervisor cluster. A Harbor Project automatically gets created inside the registry for each
vSphere namespace.
Developers must copy the registry's root SSL certificate to their workstation and use the "docker-credential-vsphere" command to
authenticate. Once authenticated, they then use docker commands to download, tag, and push images.
•The embedded registry provides a secure image repository from which administrators and developers can push and pull the
container images that will be deployed as vSphere Pod VMs and inside Tanzu Kubernetes Clusters.
•A separate registry is deployed for each supervisor cluster. A Harbor Project automatically gets created inside the registry for each vSphere namespace.
•Developers must copy the registry's root SSL certificate to their workstation and use the "docker-credential-vsphere"
command to authenticate. Once authenticated, they use docker commands to download, tag, and push images.
•The embedded harbor registry is different from the open-source Harbor Registry project. The embedded version has a
limited feature set and is tailored for use with vSphere with Tanzu and deploying vSphere Pod VMs.
In this module we demonstrate how a developer using Helm charts (https://helm.sh/) can quickly deploy a sample container-based
application inside a Tanzu Kubernetes Cluster (TKC) running on a vSphere cluster that is part of a Cloud Foundation workload domain.
In this exercise we will use an open source OpenCart application that is freely available from the Bitnami Application Catalog
(https://bitnami.com/stack/opencart). We will deploy the OpenCart application inside an existing TKC (tkc01).
To avoid confusion while navigating through the lab it is recommended that you close all browser and putty windows on the desktop
prior to starting this exercise.
Next, run “kubectl get pods” to verify the OpenCart application is running. Wait for both pods to enter a “running” state.
2.Click CentOS
3.Click Load
4.Click Open
◦Login: root
◦Password: VMware123!
Next, run the helm repo add command to add the public Bitnami repository https://charts.bitnami.com/bitnami to the local Helm
configuration. Then run the helm repo list command to verify the repository was successfully added.
Commands:
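A sketch of these commands (the repository alias "bitnami" is the conventional one):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo list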
With the Helm repository added, we are ready to login to the TKC and deploy the OpenCart chart.
Run the kubectl vsphere login command to log on to the Kubernetes control plane followed by the kubectl config use-context
command to set the context to tkc01:
Commands:
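The commands are not reproduced in this extract. Logging in to a TKC adds the cluster name and namespace flags to the standard login, for example:
kubectl vsphere login --server=172.16.10.2 --vsphere-username sam@vsphere.local --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace ns01 --tanzu-kubernetes-cluster-name tkc01
kubectl config use-context tkc01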
Once we have successfully authenticated and set our context to “tkc01” we are ready to use Helm to deploy the OpenCart chart. To
do this, run “helm install …” and provide a name for the OpenCart instance (myopencart) along with the path to the chart in the
Bitnami repository (bitnami/opencart).
Commands:
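For example, using the release name and chart path given above:
helm install myopencart bitnami/opencart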
Next, run the kubectl get pods command and verify that the mariadb pod is running. Note that in some situations it can take three or
four minutes for the mariadb pod to enter a running state. Re-run the kubectl get pods command until that state shows running.
Once the mariadb pod is in a running state, paste the four export commands into the Linux shell. These four commands query for the
generated passwords and assign them to four environment variables: APP_HOST, APP_PASSWORD, DATABASE_ROOT_PASSWORD, and
APP_DATABASE_PASSWORD.
Note that when installing the OpenCart chart, Helm presents us with several additional commands that are needed to complete the
deployment.
Note: Helm begins by first deploying a mariadb pod. As part of this pod deployment, unique passwords are generated. The additional
commands are needed to (1) query for the generated passwords and assign them to environment variables in the Linux shell so we can
(2) run the helm upgrade command to deploy the frontend pod and complete the installation.
1. export APP_HOST=$(kubectl get svc --namespace default myopencart --template "{{ range (index
Finally, run the helm upgrade command, passing in the names of the environment variables containing the passwords.
In the putty window scroll up to find the helm upgrade command that was displayed in the output of the helm install command.
Highlight the helm upgrade command to copy it to the clipboard. You can also click and drag the following command into the Putty
window.
opencartHost=$APP_HOST,opencartPassword=$APP_PASSWORD,mariadb.auth.rootPassword=$DATABASE_ROOT_PASSWORD,ma
2.Copy the url or take note. In this case the url would be http://172.16.10.5/ however you may have a slightly different URL
provided.
Note: if you did not use myopencart as the release name, you will need to update that section of the command.
With the OpenCart application running, we are ready to access the shopping cart.
Double click the Google Chrome icon on the desktop to open the Chrome browser. (not pictured)
1. In the browser paste the URL for the OpenCart application (copied from the output of the helm upgrade command).
With the OpenCart application running, we can use the kubectl and helm commands to view additional details about the container-
based application.
Commands:
4.helm list
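The first three commands in this step were not captured in this extract; based on the outputs described in the next paragraph, they are likely standard queries such as:
kubectl get svc
kubectl get pvc
kubectl get pods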
Based upon the above outputs we can see additional information such as Cluster IP, Load Balancer, Ports being used, PVC consumed,
and finally all applications deployed via Helm.
In this module we used Helm to deploy the OpenCart application, available from the Bitnami Application Catalog (https://bitnami.com/
stack/opencart), in order to show how easy it is for a developer to deploy a container-based application inside a Tanzu Kubernetes
Cluster (TKC) running on a vSphere cluster that is part of a Cloud Foundation workload domain.
•TKCs are fully conformant Kubernetes clusters that are easily accessed using existing developer tools (i.e., Helm).
•Developers do not require any vSphere knowledge or skills to deploy, configure, and run container-based workloads inside a
TKC.
•TKCs are an ideal place for developers to develop, run, and deploy container-based workloads.
Conclusion
You have completed and reached the end of our lab on Optimizing and Modernizing Data Centers powered by VMware Cloud
Foundation Environment. Please take a few minutes to provide feedback on your experience taking the lab, as this will help with future
updates.