Apache CloudStack 4.2.0 Release Notes
Apache CloudStack
Legal Notice
Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE
file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you
under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS
IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
Apache CloudStack is an effort undergoing incubation at The Apache Software Foundation (ASF).
Incubation is required of all newly accepted projects until a further review indicates that the infrastructure,
communications, and decision making process have stabilized in a manner consistent with other successful ASF
projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does
indicate that the project has yet to be fully endorsed by the ASF.
CloudStack is a registered trademark of the Apache Software Foundation.
Apache CloudStack, the CloudStack word design, the Apache CloudStack word design, and the cloud monkey logo are
trademarks of the Apache Software Foundation.
Abstract
Release notes for the Apache CloudStack 4.2.0 release.
Preface
1. Document Conventions
2. Feedback
1. Welcome to CloudStack 4.2
2. What's New in 4.2.0
2.1. Features to Support Heterogeneous Workloads
2.2. Third-Party UI Plugin Framework
2.3. Networking Enhancements
2.4. Host and Virtual Machine Enhancements
2.5. Monitoring, Maintenance, and Operations Enhancements
2.6. Issues Fixed in 4.2.0
2.7. Known Issues in 4.2.0
3. Upgrade Instructions for 4.2
3.1. Upgrade from 4.1.x to 4.2.0
3.2. Upgrade from 3.0.x to 4.2.0
Preface
1. Document Conventions
This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of
information.
In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set. The Liberation Fonts set is
also used in HTML editions if the set is installed on your system. If not, alternative but equivalent typefaces are displayed.
Note: Red Hat Enterprise Linux 5 and later includes the Liberation Fonts set by default.
The mount -o remount file-system command remounts the named file system. For example, to
remount the /home file system, the command is mount -o remount /home.
To see the version of a currently installed package, use the rpm -q package command. It will return a
result as follows: package-version-release.
Note the words in bold italics above: file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For
example:
Publican is a DocBook publishing system.
Output sent to a terminal is set in mono-spaced roman and presented thus:
Desktop Desktop1 documentation downloads drafts images mss notes photos scripts stuff svgs svn
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[])
       throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object         ref    = iniCtx.lookup("EchoBean");
      EchoHome       home   = (EchoHome) ref;
      Echo           echo   = home.create();

      System.out.println("Created Echo");
      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}
Note
Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should have no negative
consequences, but you might miss out on a trick that makes your life easier.
Important
Important boxes detail things that are easily missed: configuration changes that only apply to the current session,
or services that need restarting before an update will apply. Ignoring a box labeled 'Important' will not cause data
loss but may cause irritation and frustration.
Warning
Warnings should not be ignored. Ignoring warnings will most likely cause data loss.
2. Feedback
to-do
2.1.1. Regions
To increase reliability of the cloud, you can optionally group resources into geographic regions. A region is the largest
available organizational unit within a cloud deployment. A region is made up of several availability zones, where each
zone is equivalent to a datacenter. Each region is controlled by its own cluster of Management Servers, running in one of
the zones. The zones in a region are typically located in close geographical proximity. Regions are a useful technique for
providing fault tolerance and disaster recovery.
By grouping zones into regions, the cloud can achieve higher availability and scalability. User accounts can span regions,
so that users can deploy VMs in multiple, widely-dispersed regions. Even if one of the regions becomes unavailable, the
services are still available to the end-user through VMs deployed in another region. And by grouping communities of
zones under their own nearby Management Servers, the latency of communications within the cloud is reduced
compared to managing widely-dispersed zones from a single central Management Server.
Usage records can also be consolidated and tracked at the region level, creating reports or invoices for each geographic
region.
Note
If you are upgrading from a previous CloudStack version, and your existing deployment contains a zone with
clusters from multiple VMware Datacenters, that zone will not be forcibly migrated to the new model. It will continue
to function as before. However, any new zone-wide operations, such as zone-wide primary storage, will not be
available in that zone.
2.3.1. IPv6
CloudStack 4.2 introduces initial support for IPv6. This feature is provided as a technical preview only. Full support is
planned for a future release.
In a VPC, you can configure two types of load balancing: external LB and internal LB. An external LB rule redirects the traffic received at a public IP of the VPC virtual router; the traffic is load balanced within a tier based on your configuration. Citrix NetScaler and the VPC virtual router are supported for external LB. When you use the internal LB service, traffic received at a tier is load balanced across different VMs within that tier. For example, traffic that reaches the Web tier is redirected to another VM in that tier. External load balancing devices are not supported for internal LB. The service is provided by an internal LB VM configured on the target tier.
2.3.3.2.1. Load Balancing Within a Tier (External LB)
A CloudStack user or administrator may create load balancing rules that balance traffic received at a public IP to one or
more VMs that belong to a network tier that provides load balancing service in a VPC. A user creates a rule, specifies an
algorithm, and assigns the rule to a set of VMs within a tier.
2.3.3.2.2. Load Balancing Across Tiers
CloudStack supports sharing workload across different tiers within your VPC. Assume that multiple tiers are set up in
your environment, such as Web tier and Application tier. Traffic to each tier is balanced on the VPC virtual router on the
public side. If you want the traffic coming from the Web tier to the Application tier to be balanced, use the internal load
balancing feature offered by CloudStack.
2.3.3.2.3. Netscaler Support for VPC
Citrix NetScaler is supported for external LB. The certified version for this feature is NetScaler 10.0 Build 74.4006.e.
Note
You cannot change a VLAN once it's assigned to the network. The VLAN remains with the network for its entire life
cycle.
Note
Ensure that you check whether the required range is available and conforms to account limits. The maximum IPs
per account limit cannot be exceeded.
Egress firewall rules were previously supported on virtual routers, and now they are also supported on Juniper SRX
external networking devices.
Egress traffic originates from a private network to a public network, such as the Internet. By default, the egress traffic is
blocked, so no outgoing traffic is allowed from a guest network to the Internet. However, you can control the egress traffic
in an Advanced zone by creating egress firewall rules. When an egress firewall rule is applied, the traffic specific to the
rule is allowed and the remaining traffic is blocked. When all the firewall rules are removed the default policy, Block, is
applied.
Note
Egress firewall rules are not supported on Shared networks. They are supported only on Isolated guest networks.
If the range being removed includes the IP address on which the DHCP server is running, CloudStack acquires a new IP from the same subnet. If no IP is
available in the subnet, the remove operation fails.
Note
The feature can only be implemented on IPv4 addresses.
2.3.18. Enhanced Load Balancing Services Using External Provider on Shared VLANs
Network services like Firewall, Load Balancing, and NAT are now supported in shared networks created in an advanced zone. In effect, the following network services are made available to a VM in a shared network: Source NAT, Static NAT, Port Forwarding, Firewall, and Load Balancing. A subset of these services can be chosen while creating a network offering for shared networks. The services available in a shared network are defined by the network offering and the services chosen in the network offering. For example, if the network offering for a shared network has the Source NAT service enabled, a public IP is provisioned and source NAT is configured on the firewall device to provide public access to the VMs on the shared network. Static NAT, Port Forwarding, Load Balancing, and Firewall services are available only on the acquired public IPs associated with a shared network.
Additionally, the NetScaler and Juniper SRX firewall devices can be configured in inline or side-by-side mode.
Note
This feature is supported only on NetScaler version 10.0 and beyond.
(NetScaler load balancer only) A load balancer rule distributes requests among a pool of services (a service in this
context means an application running on a virtual machine). When creating a load balancer rule, you can specify a health
check which will ensure that the rule forwards requests only to services that are healthy (running and available). When a
health check is in effect, the load balancer will stop forwarding requests to any resources that it has found to be
unhealthy. If the resource later becomes available again, the periodic health check (periodicity is configurable) will
discover it and the resource will once again be made available to the load balancer.
To configure how often the health check is performed by default, use the global configuration setting
healthcheck.update.interval. This default applies to all the health check policies in the cloud. You can override this value
for an individual health check policy.
Note
Limitation: When used with VMware hosts, this feature works only for the following versions: vSphere ESXi 5.1 and
ESXi 5.0 Patch 4.
Note
Best Practice: It is advisable for VMware clusters in CloudStack to be smaller than the VMware hypervisor's
maximum size. A cluster size of up to 8 hosts has been found optimal for most real-world situations.
Note
Archived alerts or events cannot be viewed in the UI, or by using the API. They are maintained in the database for
auditing or compliance purposes.
To control the API request throttling, use the following new global configuration settings:
api.throttling.enabled - Enable/Disable API throttling. By default, this setting is false, so API throttling is not enabled.
api.throttling.interval (in seconds) - Time interval during which the number of API requests is to be counted. When the
interval has passed, the API count is reset to 0.
api.throttling.max - Maximum number of API requests that can be placed within the api.throttling.interval period.
api.throttling.cachesize - Cache size for storing API counters. Use a value higher than the total number of accounts
managed by the cloud. One cache entry is needed for each account, to store the running API total for that account
within the current time window.
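To illustrate how api.throttling.interval and api.throttling.max interact, here is a small fixed-window counter in shell. This is an illustration only, not CloudStack code, and the numbers are made up:

```shell
# Illustration: a fixed-window counter of the kind described by
# api.throttling.interval and api.throttling.max. With max=3, the fourth
# request arriving inside the same interval window is rejected.
max=3
count=0
allowed=0
rejected=0
for request in 1 2 3 4; do
  if [ "$count" -lt "$max" ]; then
    count=$((count + 1))
    allowed=$((allowed + 1))
  else
    rejected=$((rejected + 1))
  fi
done
echo "allowed=$allowed rejected=$rejected"
# When the interval elapses, the count is reset to 0 and counting starts
# over for the next window.
```

Raising api.throttling.max or shortening api.throttling.interval both increase the effective request rate an account is allowed.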
Warning
cloud-bugtool collects information which might be considered sensitive and confidential. Using the --nodb option
to avoid the cloud database can reduce this concern, though it is not guaranteed to exclude all sensitive data.
2.5.7. Snapshotting, Backups, Cloning and System VMs for RBD Primary Storage
Note
These new RBD features require at least librbd 0.61.7 (Cuttlefish) and libvirt 0.9.14 on the KVM hypervisors.
This release of CloudStack leverages the features of RBD format 2, which allows creating snapshots and backing those
snapshots up.
Backups of snapshots to Secondary Storage are full copies of the RBD snapshot; they are not RBD diffs. This is because
when restoring a backup of a snapshot, it is not mandatory that the backup be deployed on RBD again; it could also be
restored to NFS Primary Storage.
Another key feature of RBD format 2 is cloning. With this release templates will be copied to Primary Storage once and by
using the cloning mechanism new disks will be cloned from this parent template. This saves space and decreases
deployment time for instances dramatically.
Before this release, NFS Primary Storage was still required for running the System VMs. The reason was a so-called
'patch disk', generated by the hypervisor, which contained metadata for the System VM. The scripts generating this disk
didn't support RBD, so System VMs had to be deployed from NFS. With 4.2, a VirtIO serial console is used instead of the
patch disk to pass meta information to System VMs. This enables the deployment of System VMs on RBD Primary
Storage.
Apache CloudStack uses Jira to track its issues. All new features and bugs for 4.2.0 have been tracked in Jira, and have a
standard naming convention of "CLOUDSTACK-NNNN" where "NNNN" is the issue number.
For the list of issues fixed, see Issues Fixed in 4.2.
Overprovisioning cautions
If the CloudStack instance you are upgrading is leveraging overprovisioning you need to read and understand
Section 2.4.11, CPU and Memory Over-Provisioning. The overprovisioning factors are now cluster specific and
should you be overprovisioned, your new system VMs may not start up.
4. If you are running a usage server or usage servers, stop those as well:
# service cloudstack-usage stop
5. Make a backup of your MySQL database. If you run into any issues or need to roll back the upgrade, this will assist
in debugging or restoring your existing environment. You'll be prompted for your password.
# mysqldump -u root -p cloud > cloudstack-backup.sql
6. (KVM Only) If primary storage of type local storage is in use, the path for this storage needs to be verified to ensure
it passes new validation. Check local storage by querying the cloud.storage_pool table:
# mysql -u cloud -p -e "select id,name,path from cloud.storage_pool where pool_type='Filesystem'"
If local storage paths are found to have a trailing forward slash, remove it:
# mysql -u cloud -p -e 'update cloud.storage_pool set path="/var/lib/libvirt/images" where path="/var/lib/libvirt/images/"'
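The update shown above simply removes the trailing slash. For other local storage paths, the corrected value can be derived with shell parameter expansion (an illustration; the path is the example used above):

```shell
# Strip exactly one trailing slash from a storage path (illustration only).
p='/var/lib/libvirt/images/'
fixed="${p%/}"
echo "$fixed"
```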
7. If you are using Ubuntu, follow this procedure to upgrade your packages. If not, skip to step 10.
Community Packages
This section assumes you're using the community supplied packages for CloudStack. If you've created
your own packages and APT repository, substitute your own URL for the ones used in these examples.
a. The first order of business will be to change the sources list for each system with CloudStack packages.
This means all management servers, and any hosts that have the KVM agent. (No changes should be
necessary for hosts that are running VMware or Xen.)
Start by opening /etc/apt/sources.list.d/cloudstack.list on any systems that have CloudStack
packages installed.
This file should have one line, which contains:
deb http://cloudstack.apt-get.eu/ubuntu precise 4.0
If you are using the community provided package repository, change this line to use precise 4.2. If you're
using your own package repository, change this line to read as appropriate for your 4.2.0 repository.
b. Now update your apt package list:
$ sudo apt-get update
c. Now that you have the repository configured, it's time to install the cloudstack-management package.
This will pull in any other dependencies you need.
$ sudo apt-get install cloudstack-management
d. On hosts running the KVM agent, install the cloudstack-agent package:
$ sudo apt-get install cloudstack-agent
During the installation of cloudstack-agent, APT will copy your agent.properties, log4j-cloud.xml,
and environment.properties from /etc/cloud/agent to /etc/cloudstack/agent.
When prompted whether you wish to keep your configuration, say Yes.
e. Verify that the file /etc/cloudstack/agent/environment.properties has a line that reads:
paths.script=/usr/share/cloudstack-common
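If you use the community repository, the sources-list edit in step (a) above is a one-line change from precise 4.0 to precise 4.2. A sed sketch of that substitution, shown on the line itself (in practice you would run the in-place variant with sudo against /etc/apt/sources.list.d/cloudstack.list):

```shell
# Sketch: the sources.list change from the 4.0 to the 4.2 community repo.
# In-place variant (run with sudo on each affected system):
#   sed -i 's/precise 4.0/precise 4.2/' /etc/apt/sources.list.d/cloudstack.list
old='deb http://cloudstack.apt-get.eu/ubuntu precise 4.0'
new=$(printf '%s\n' "$old" | sed 's/precise 4.0/precise 4.2/')
echo "$new"
```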
8. (VMware only) Additional steps are required for each VMware cluster. These steps will not affect running guests in
the cloud. These steps are required only for clouds using VMware clusters:
a. Stop the Management Server:
service cloudstack-management stop
Store the output from this step; you will need to add it to the cluster_details and vmware_data_center
tables in place of the plain-text password.
c. Find the ID of the row of cluster_details table that you have to update:
mysql -u <username> -p<password>
select * from cloud.cluster_details;
f. Find the ID of the correct row of vmware_data_center that you want to update
select * from cloud.vmware_data_center;
9. (KVM only) Additional steps are required for each KVM host. These steps will not affect running guests in the
cloud. These steps are required only for clouds using KVM as hosts and only on the KVM hosts.
a. Configure the CloudStack yum repository as detailed above.
b. Stop the running agent.
# service cloud-agent stop
10. If you are using CentOS or RHEL, follow this procedure to upgrade your packages. If not, skip to step 12.
Community Packages
This section assumes you're using the community supplied packages for CloudStack. If you've created
your own packages and yum repository, substitute your own URL for the ones used in these examples.
a. The first order of business will be to change the yum repository for each system with CloudStack
packages. This means all management servers, and any hosts that have the KVM agent.
(No changes should be necessary for hosts that are running VMware or Xen.)
Start by opening /etc/yum.repos.d/cloudstack.repo on any systems that have CloudStack packages
installed.
This file should have content similar to the following:
[apache-cloudstack]
name=Apache CloudStack
baseurl=http://cloudstack.apt-get.eu/rhel/4.0/
enabled=1
gpgcheck=0
If you are using the community provided package repository, change the baseurl to http://cloudstack.apt-get.eu/rhel/4.2/
If you're using your own package repository, change this line to read as appropriate for your 4.2.0
repository.
b. Now that you have the repository configured, it's time to install the cloudstack-management package by
upgrading the older cloudstack-management package.
$ sudo yum upgrade cloudstack-management
c. For KVM hosts, you will need to upgrade the cloudstack-agent package:
$ sudo yum upgrade cloudstack-agent
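Similarly, for the community yum repository the baseurl edit in step (a) above is a single substitution from rhel/4.0 to rhel/4.2. A sed sketch, shown on the line itself (in practice you would run the in-place variant against /etc/yum.repos.d/cloudstack.repo on each affected host):

```shell
# Sketch: flip the baseurl from the 4.0 to the 4.2 community repository.
# In-place variant:
#   sed -i 's|rhel/4.0|rhel/4.2|' /etc/yum.repos.d/cloudstack.repo
old='baseurl=http://cloudstack.apt-get.eu/rhel/4.0/'
new=$(printf '%s\n' "$old" | sed 's|rhel/4.0|rhel/4.2|')
echo "$new"
```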
12. Once you've upgraded the packages on your management servers, you'll need to restart the system VMs. Ensure
that the admin port is set to 8096 by using the "integration.api.port" global parameter. This port is used by the
cloud-sysvmadm script at the end of the upgrade procedure. For information about how to set this parameter, see
"Setting Global Configuration Parameters" in the Installation Guide. Changing this parameter will require
management server restart. Also make sure port 8096 is open in your local host firewall to do this.
There is a script that will do this for you; all you need to do is run the script and supply the IP address of your
MySQL instance and your MySQL credentials:
# nohup cloudstack-sysvmadm -d IP address -u cloud -p -a > sysvm.log 2>&1 &
You can monitor the log for progress. The process of restarting the system VMs can take an hour or more.
# tail -f sysvm.log
3.2. Upgrade from 3.0.x to 4.2.0
1.
Note
The following upgrade instructions apply only if you're using VMware hosts. If you're not using VMware
hosts, skip this step and move on to step 3.
In each zone that includes VMware hosts, you need to add a new system VM template.
a. While running the existing 3.0.x system, log in to the UI as root administrator.
b. In the left navigation bar, click Templates.
c. In Select view, click Templates.
d. Click Register template.
The Register template dialog box is displayed.
e. In the Register template dialog box, specify the following values (do not change these):
Hypervisor
Description
XenServer
Name: systemvm-xenserver-4.2
Description: systemvm-xenserver-4.2
URL: http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2
Zone: Choose the zone where this hypervisor is used
Hypervisor: XenServer
Format: VHD
OS Type: Debian GNU/Linux 7.0 (32-bit) (or the highest Debian
release number available in the dropdown)
Extractable: no
Password Enabled: no
Public: no
Featured: no
KVM
Name: systemvm-kvm-4.2
Description: systemvm-kvm-4.2
URL: http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2
Zone: Choose the zone where this hypervisor is used
Hypervisor: KVM
Format: QCOW2
OS Type: Debian GNU/Linux 7.0 (32-bit) (or the highest Debian
release number available in the dropdown)
Extractable: no
Password Enabled: no
Public: no
Featured: no
VMware
Name: systemvm-vmware-4.2
Description: systemvm-vmware-4.2
URL: http://download.cloud.com/templates/4.2/systemvmtemplate-4.2-vh7.ova
Zone: Choose the zone where this hypervisor is used
Hypervisor: VMware
Format: OVA
OS Type: Debian GNU/Linux 7.0 (32-bit) (or the highest Debian
release number available in the dropdown)
Extractable: no
Password Enabled: no
Public: no
Featured: no
f. Watch the screen to be sure that the template downloads successfully and enters the READY state. Do not
proceed until this is successful.
2. (KVM on RHEL 6.0/6.1 only) If your existing CloudStack deployment includes one or more clusters of KVM hosts
running RHEL 6.0 or RHEL 6.1, perform the following:
a. Ensure that you upgrade the operating system version on those hosts before upgrading CloudStack.
To do that, change the yum repository for each system with CloudStack packages: all the
Management Servers and any hosts that have the KVM agent.
b. Open /etc/yum.repos.d/cloudstack.repo on any systems that have CloudStack packages installed.
c. Edit as follows:
[upgrade]
name=rhel63
baseurl=url-of-your-rhel6.3-repo
enabled=1
gpgcheck=0
[apache-cloudstack]
name=Apache CloudStack
baseurl=http://cloudstack.apt-get.eu/rhel/4.0/
enabled=1
gpgcheck=0
If you are using the community provided package repository, change the baseurl to http://cloudstack.apt-get.eu/rhel/4.2/
If you are using your own package repository, change this line to read as appropriate for your 4.2.0
repository.
d. Now that you have the repository configured, upgrade the host operating system from RHEL 6.0 to 6.3:
# yum upgrade
3. Stop all Usage Servers if running. Run this on all Usage Server hosts.
# service cloud-usage stop
4. Stop the Management Servers. Run this on all Management Server hosts.
# service cloud-management stop
5. On the MySQL master, take a backup of the MySQL databases. We recommend performing this step even in test
upgrades. If there is an issue, this will assist with debugging.
In the following commands, it is assumed that you have set the root password on the database, which is a
CloudStack recommended best practice. Substitute your own MySQL root password.
# mysqldump -u root -pmysql_password cloud > cloud-backup.dmp
# mysqldump -u root -pmysql_password cloud_usage > cloudusage-backup.dmp
6. Either build RPM/DEB packages as detailed in the Installation Guide, or use one of the community provided
yum/apt repositories to gain access to the CloudStack binaries.
7. If you are using Ubuntu, follow this procedure to upgrade your packages. If not, skip to step 8.
Community Packages
This section assumes you're using the community supplied packages for CloudStack. If you've created
your own packages and APT repository, substitute your own URL for the ones used in these examples.
a. The first order of business will be to change the sources list for each system with CloudStack packages.
This means all management servers, and any hosts that have the KVM agent. (No changes should be
necessary for hosts that are running VMware or Xen.)
Start by opening /etc/apt/sources.list.d/cloudstack.list on any systems that have CloudStack
packages installed.
This file should have one line, which contains:
deb http://cloudstack.apt-get.eu/ubuntu precise 4.0
If you are using the community provided package repository, change this line to use precise 4.2. If you're
using your own package repository, change this line to read as appropriate for your 4.2.0 repository.
b. Now update your apt package list:
$ sudo apt-get update
c. Now that you have the repository configured, it's time to install the cloudstack-management package.
This will pull in any other dependencies you need.
$ sudo apt-get install cloudstack-management
d. On hosts running the KVM agent, install the cloudstack-agent package:
$ sudo apt-get install cloudstack-agent
During the installation of cloudstack-agent, APT will copy your agent.properties, log4j-cloud.xml,
and environment.properties from /etc/cloud/agent to /etc/cloudstack/agent.
When prompted whether you wish to keep your configuration, say Yes.
e. Verify that the file /etc/cloudstack/agent/environment.properties has a line that reads:
paths.script=/usr/share/cloudstack-common
g. During the upgrade, log4j-cloud.xml was simply copied over, so the logs will continue to be added to
/var/log/cloud/agent/agent.log. There's nothing wrong with this, but if you prefer to be consistent,
you can change this by copying over the sample configuration file:
cd /etc/cloudstack/agent
mv log4j-cloud.xml.dpkg-dist log4j-cloud.xml
service cloudstack-agent restart
h. Once the agent is running, you can uninstall the old cloud-* packages from your system:
sudo dpkg --purge cloud-agent
8. If you are using CentOS or RHEL, follow this procedure to upgrade your packages. If not, skip to step 9.
Community Packages
This section assumes you're using the community supplied packages for CloudStack. If you've created
your own packages and yum repository, substitute your own URL for the ones used in these examples.
a. The first order of business will be to change the yum repository for each system with CloudStack
packages. This means all management servers, and any hosts that have the KVM agent. (No changes
should be necessary for hosts that are running VMware or Xen.)
Start by opening /etc/yum.repos.d/cloudstack.repo on any systems that have CloudStack packages
installed.
This file should have content similar to the following:
[apache-cloudstack]
name=Apache CloudStack
baseurl=http://cloudstack.apt-get.eu/rhel/4.0/
enabled=1
gpgcheck=0
If you are using the community provided package repository, change the baseurl to http://cloudstack.apt-get.eu/rhel/4.2/
If you're using your own package repository, change this line to read as appropriate for your 4.2.0
repository.
b. Now that you have the repository configured, it's time to install the cloudstack-management package by
upgrading the older cloud-client package.
$ sudo yum upgrade cloud-client
c. For KVM hosts, you will need to upgrade the cloud-agent package, similarly installing the new version as
cloudstack-agent.
$ sudo yum upgrade cloud-agent
During the installation of cloudstack-agent, the RPM will copy your agent.properties, log4j-cloud.xml, and environment.properties from /etc/cloud/agent to /etc/cloudstack/agent.
d. Verify that the file /etc/cloudstack/agent/environment.properties has a line that reads:
paths.script=/usr/share/cloudstack-common
9. If you have made changes to your copy of /etc/cloud/management/components.xml, the changes will be
preserved in the upgrade. However, you need to perform the following steps to place these changes in a new
version of the file which is compatible with version 4.2.0.
a. Make a backup copy of /etc/cloud/management/components.xml. For example:
# mv /etc/cloud/management/components.xml /etc/cloud/management/components.xml-backup
b. Copy the new version of the file into place:
# cp -ap /etc/cloud/management/components.xml.rpmnew /etc/cloud/management/components.xml
c. Merge your changes from the backup file into the new components.xml.
# vi /etc/cloudstack/management/components.xml
Note
If you have more than one management server node, repeat the upgrade steps on each node.
10. After upgrading to 4.2, API clients are expected to send plain-text passwords for login and user creation, instead of
an MD5 hash. If API client changes are not acceptable, make the following changes for backward compatibility:
Modify componentContext.xml, and make PlainTextUserAuthenticator the default authenticator (the first entry in the
userAuthenticators adapter list is the default):
<!-- Security adapters -->
<bean id="userAuthenticators" class="com.cloud.utils.component.AdapterList">
<property name="Adapters">
<list>
<ref bean="PlainTextUserAuthenticator"/>
<ref bean="MD5UserAuthenticator"/>
<ref bean="LDAPUserAuthenticator"/>
</list>
</property>
</bean>
Wait until the databases are upgraded and confirm that the database upgrade is complete. Then start the
other Management Servers one at a time by running the same command on each node.
Note
If the Management Server fails to restart, there is a problem with the upgrade. If it restarts without any
issues, the upgrade completed successfully.
12. Start all Usage Servers (if they were running on your previous version). Perform this on each Usage Server host.
# service cloudstack-usage start
13. Additional steps are required for each KVM host. These steps will not affect running guests in the cloud. These
steps are required only for clouds using KVM as hosts and only on the KVM hosts.
a. Configure a yum or apt repository containing the CloudStack packages as outlined in the Installation
Guide.
b. Stop the running agent.
# service cloud-agent stop
c. Update the agent software with one of the following command sets as appropriate for your environment.
# yum update cloud-*
# apt-get update
# apt-get upgrade cloud-*
d. Edit /etc/cloudstack/agent/agent.properties to change the resource parameter from
"com.cloud.agent.resource.computing.LibvirtComputingResource" to
"com.cloud.hypervisor.kvm.resource.LibvirtComputingResource".
e. Upgrade all the existing bridge names to new bridge names by running this script:
# cloudstack-agent-upgrade
g. Restart libvirtd.
# service libvirtd restart
i. When the Management Server is up and running, log in to the CloudStack UI and restart the virtual router
for proper functioning of all the features.
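The agent.properties change in step (d) above can also be scripted. A sed sketch of the class-name substitution, shown on the line itself (back up agent.properties before editing it in place):

```shell
# Sketch: rewrite the resource= line from the old class name to the new one.
# In-place variant (keeps a .bak backup):
#   sed -i.bak \
#     's/com\.cloud\.agent\.resource\.computing/com.cloud.hypervisor.kvm.resource/' \
#     /etc/cloudstack/agent/agent.properties
old='resource=com.cloud.agent.resource.computing.LibvirtComputingResource'
new=$(printf '%s\n' "$old" | \
  sed 's/com\.cloud\.agent\.resource\.computing/com.cloud.hypervisor.kvm.resource/')
echo "$new"
```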
14. Log in to the CloudStack UI as administrator, and check the status of the hosts. All hosts should come to Up state
(except those that you know to be offline). You may need to wait 20 or 30 minutes, depending on the number of
hosts.
Note
Troubleshooting: If login fails, clear your browser cache and reload the page.
Do not proceed to the next step until the hosts show in Up state.
15. If you are upgrading from 3.0.x, perform the following:
a. Ensure that the admin port is set to 8096 by using the "integration.api.port" global parameter.
This port is used by the cloud-sysvmadm script at the end of the upgrade procedure. For information about
how to set this parameter, see "Setting Global Configuration Parameters" in the Installation Guide.
b. Restart the Management Server.
Note
If you don't want the admin port to remain open, you can set it to null after the upgrade is done and
restart the management server.
16. Run the cloudstack-sysvmadm script to stop, then start, all Secondary Storage VMs, Console Proxy VMs, and
virtual routers. Run the script once on each management server. Substitute your own IP address of the MySQL
instance, the MySQL user to connect as, and the password to use for that user. In addition to those parameters,
provide the -c and -r arguments. For example:
# nohup cloudstack-sysvmadm -d 192.168.1.5 -u cloud -p password -c -r > sysvm.log 2>&1 &
# tail -f sysvm.log
This might take up to an hour or more to run, depending on the number of accounts in the system.
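Because the script runs in the background under nohup, one way to confirm it finished cleanly is to wait for the job and then scan sysvm.log. Treating lines containing "ERROR" as failures is an assumption about the log format, and the log content written below is a stand-in for the real output:

```shell
# Sketch: wait for the nohup'd cloudstack-sysvmadm job, then scan its log.
# The sample log written here stands in for the real sysvm.log.
printf 'Stopping and starting 1 secondary storage vm(s)...\nDone stopping and starting secondary storage vm(s)\n' > sysvm.log
wait 2>/dev/null || true      # no-op once the background job has already exited
if grep -q 'ERROR' sysvm.log; then status=failed; else status=ok; fi
echo "cloudstack-sysvmadm run: $status"
```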
17. If needed, upgrade all Citrix XenServer hypervisor hosts in your cloud to a version supported by CloudStack 4.2.0.
The supported versions are XenServer 5.6 SP2 and 6.0.2. Instructions for upgrade can be found in the CloudStack
4.2.0 Installation Guide under "Upgrading XenServer Versions."
18. Now apply the XenServer hotfix XS602E003 (and any other needed hotfixes) to XenServer v6.0.2 hypervisor hosts.
a. Disconnect the XenServer cluster from CloudStack.
In the left navigation bar of the CloudStack UI, select Infrastructure. Under Clusters, click View All. Select
the XenServer cluster and click Actions - Unmanage.
This may fail if there are hosts not in one of the states Up, Down, Disconnected, or Alert. You may need to
fix that before unmanaging this cluster.
Wait until the status of the cluster has reached Unmanaged. Use the CloudStack UI to check on the status.
When the cluster is in the unmanaged state, there is no connection to the hosts in the cluster.
b. To clean up the VLAN, log in to one XenServer host and run:
/opt/xensource/bin/cloud-clean-vlan.sh
c. Now prepare the upgrade by running the following on one XenServer host:
/opt/xensource/bin/cloud-prepare-upgrade.sh
If you see a message like "can't eject CD", log in to the VM and unmount the CD, then run this script again.
d. Upload the hotfix to the XenServer hosts. Always start with the Xen pool master, then the slaves. Using your
favorite file copy utility (e.g. WinSCP), copy the hotfixes to the host. Place them in a temporary folder such
as /tmp.
On the Xen pool master, upload the hotfix with this command:
xe patch-upload file-name=XS602E003.xsupdate
Make a note of the output from this command, which is a UUID for the hotfix file. You'll need it in another
step later.
Note
(Optional) If you are applying other hotfixes as well, you can repeat the commands in this section
with the appropriate hotfix number. For example, XS602E004.xsupdate.
e. Manually live migrate all VMs on this host to another host. First, get a list of the VMs on this host:
# xe vm-list
Then use this command to migrate each VM. Replace the example host name and VM name with your
own:
# xe vm-migrate live=true host=host-name vm=VM-name
Troubleshooting
If you see a message like "You attempted an operation on a VM which requires PV drivers to be
installed but the drivers were not detected," run:
/opt/xensource/bin/make_migratable.sh b6cf79c8-02ee-050b-922f-49583d9f1a14.
f. Apply the hotfix. First, get the UUID of this host:
# xe host-list
Then use the following command to apply the hotfix. Replace the example host UUID with the current host
ID, and replace the hotfix UUID with the output from the patch-upload command you ran on this machine
earlier. You can also get the hotfix UUID by running xe patch-list.
xe patch-apply host-uuid=host-uuid uuid=hotfix-uuid
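If you did not note the UUID when you ran patch-upload, it can be recovered from the `xe patch-list` output. A sketch that parses sample output inlined below rather than querying a live host; on a pool master, pipe the real `xe patch-list` output through the same awk:

```shell
# Sketch: pull a hotfix UUID out of `xe patch-list`-style output by patch name.
# The printf lines stand in for live output from: xe patch-list
hotfix=XS602E003
uuid=$(printf '%s\n' \
  'uuid ( RO)            : 11111111-2222-3333-4444-555555555555' \
  '     name-label ( RO): XS602E003' \
  '' \
  'uuid ( RO)            : aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' \
  '     name-label ( RO): XS602E004' \
  | awk -v n="$hotfix" '/^uuid/ {u=$NF} $0 ~ "name-label.*: " n "$" {print u}')
echo "$uuid"
```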
g. Copy the following files from the CloudStack Management Server to the host.
Copy /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/xenserver60/NFSSR.py to /opt/xensource/sm/NFSSR.py
Copy /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/setupxenserver.sh to /opt/xensource/bin/setupxenserver.sh
Copy /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/make_migratable.sh to /opt/xensource/bin/make_migratable.sh
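The copies in step g can be driven as a loop over source:destination pairs. A sketch using scratch directories and plain cp; on a real deployment the sources live under the Management Server paths above and the copy to the XenServer host would typically use scp:

```shell
# Sketch: drive the copy table as src:dest pairs. Scratch dirs and cp stand in
# for the real Management Server paths and the scp to the XenServer host.
srcdir=$(mktemp -d); hostdir=$(mktemp -d)
touch "$srcdir/NFSSR.py" "$srcdir/setupxenserver.sh" "$srcdir/make_migratable.sh"
mkdir -p "$hostdir/sm" "$hostdir/bin"
for pair in "NFSSR.py:sm/NFSSR.py" \
            "setupxenserver.sh:bin/setupxenserver.sh" \
            "make_migratable.sh:bin/make_migratable.sh"; do
  src=${pair%%:*}; dst=${pair#*:}
  cp "$srcdir/$src" "$hostdir/$dst"
done
ls "$hostdir/sm" "$hostdir/bin"
```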
h. (Only for hotfixes XS602E005 and XS602E007) You need to apply a new Cloud Support Pack.
Download the CSP software onto the XenServer host from one of the following links:
For hotfix XS602E005: http://coltrane.eng.hq.xensource.com/release/XenServer-6.x/XS6.0.2/hotfixes/XS602E005/56710/xe-phase-2/xenserver-cloud-supp.tgz
For hotfix XS602E007: http://coltrane.eng.hq.xensource.com/release/XenServer-6.x/XS6.0.2/hotfixes/XS602E007/57824/xe-phase-2/xenserver-cloud-supp.tgz
Extract the file:
# tar xf xenserver-cloud-supp.tgz
Run the following script:
# xe-install-supplemental-pack xenserver-cloud-supp.iso
If the XenServer host is part of a zone that uses basic networking, disable Open vSwitch (OVS):
# xe-switch-network-backend bridge
i. Reboot this XenServer host.
j. Run the following:
/opt/xensource/bin/setupxenserver.sh
Note
If the message "mv: cannot stat `/etc/cron.daily/logrotate': No such file or directory" appears, you can
safely ignore it.
k. Run the following:
for pbd in `xe pbd-list currently-attached=false | grep ^uuid | awk '{print $NF}'`; do xe pbd-plug uuid=$pbd ; done
l. On each slave host in the Xen pool, repeat these steps, starting from "manually live migrate VMs."
Troubleshooting Tip
If passwords which you know to be valid appear not to work after upgrade, or other UI issues are seen, try clearing
your browser cache and reloading the UI page.
KVM Hosts
If KVM hypervisor is used in your cloud, be sure you completed the step to insert a valid username and
password into the host_details table on each KVM node as described in the 2.2.14 Release Notes. This
step is critical, as the database will be encrypted after the upgrade to 4.2.0.
3. While running the 2.2.14 system, log in to the UI as root administrator.
4. Using the UI, add a new System VM template for each hypervisor type that is used in your cloud. In each zone, add
a system VM template for each hypervisor used in that zone.
a. In the left navigation bar, click Templates.
b. In Select view, click Templates.
c. Click Register template.
The Register template dialog box is displayed.
d. In the Register template dialog box, specify the following values depending on the hypervisor type (do not
change these):
Hypervisor
Description
XenServer
Name: systemvm-xenserver-4.2
Description: systemvm-xenserver-4.2
URL: http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2
Zone: Choose the zone where this hypervisor is used
Hypervisor: XenServer
Format: VHD
OS Type: Debian GNU/Linux 7.0 (32-bit) (or the highest Debian
release number available in the dropdown)
Extractable: no
Password Enabled: no
Public: no
Featured: no
KVM
Name: systemvm-kvm-4.2
Description: systemvm-kvm-4.2
URL: http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2
Zone: Choose the zone where this hypervisor is used
Hypervisor: KVM
Format: QCOW2
OS Type: Debian GNU/Linux 7.0 (32-bit) (or the highest Debian
release number available in the dropdown)
Extractable: no
Password Enabled: no
Public: no
Featured: no
VMware
Name: systemvm-vmware-4.2
Description: systemvm-vmware-4.2
URL: http://download.cloud.com/templates/4.2/systemvmtemplate-4.2-vh7.ova
Zone: Choose the zone where this hypervisor is used
Hypervisor: VMware
Format: OVA
OS Type: Debian GNU/Linux 7.0 (32-bit) (or the highest Debian
release number available in the dropdown)
Extractable: no
Password Enabled: no
Public: no
Featured: no
5. Watch the screen to be sure that the template downloads successfully and enters the READY state. Do not
proceed until this is successful.
6. WARNING: If you use more than one type of hypervisor in your cloud, be sure you have repeated these steps to
download the system VM template for each hypervisor type. Otherwise, the upgrade will fail.
7. (KVM on RHEL 6.0/6.1 only) If your existing CloudStack deployment includes one or more clusters of KVM hosts
running RHEL 6.0 or RHEL 6.1, perform the following:
a. Ensure that you upgrade the operating system version on those hosts before upgrading CloudStack.
To do that, change the yum repository on each system that has CloudStack packages installed; this means
all the Management Servers and any hosts that have the KVM agent.
b. Open /etc/yum.repos.d/cloudstack.repo on any systems that have CloudStack packages installed.
c. Edit as follows:
[upgrade]
name=rhel63
baseurl=url-of-your-rhel6.3-repo
enabled=1
gpgcheck=0
[apache-cloudstack]
name=Apache CloudStack
baseurl=http://cloudstack.apt-get.eu/rhel/4.0/
enabled=1
gpgcheck=0
If you are using the community provided package repository, change the baseurl to http://cloudstack.apt-get.eu/rhel/4.2/
If you are using your own package repository, change this line to read as appropriate for your 4.2.0
repository.
d. Now that you have the repository configured, upgrade the host operating system from RHEL 6.0 to 6.3:
# yum upgrade
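After the yum upgrade finishes, it is worth confirming the host really is on 6.3 before continuing. A sketch that parses the release string; a scratch file stands in for /etc/redhat-release here:

```shell
# Sketch: extract the release number from /etc/redhat-release-style content.
relfile=$(mktemp)
echo 'Red Hat Enterprise Linux Server release 6.3 (Santiago)' > "$relfile"  # stand-in
ver=$(sed -n 's/.*release \([0-9.]*\).*/\1/p' "$relfile")
echo "detected RHEL $ver"
```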
8. Stop all Usage Servers if running. Run this on all Usage Server hosts.
# service cloud-usage stop
9. Stop the Management Servers. Run this on all Management Server hosts.
# service cloud-management stop
10. On the MySQL master, take a backup of the MySQL databases. We recommend performing this step even in test
upgrades. If there is an issue, this will assist with debugging.
In the following commands, it is assumed that you have set the root password on the database, which is a
CloudStack recommended best practice. Substitute your own MySQL root password.
# mysqldump -u root -pmysql_password cloud > cloud-backup.dmp
# mysqldump -u root -pmysql_password cloud_usage > cloudusage-backup.dmp
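Before relying on the dumps, a quick integrity check helps: mysqldump normally ends a successful dump with a "Dump completed" comment on the last line (behavior assumed here; a fake dump file stands in for the real one):

```shell
# Sketch: treat a dump as usable only if it is non-empty and carries
# mysqldump's closing "Dump completed" comment on its last line.
check_dump() {
  [ -s "$1" ] && tail -n 1 "$1" | grep -q 'Dump completed'
}
printf -- '-- fake dump\n-- Dump completed on 2013-10-01 12:00:00\n' > cloud-backup.dmp
check_dump cloud-backup.dmp && echo "cloud-backup.dmp looks complete"
```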
11. Either build RPM/DEB packages as detailed in the Installation Guide, or use one of the community provided
yum/apt repositories to gain access to the CloudStack binaries.
12. If you are using Ubuntu, follow this procedure to upgrade your packages. If not, skip to step 13.
Community Packages
This section assumes you're using the community supplied packages for CloudStack. If you've created
your own packages and APT repository, substitute your own URL for the ones used in these examples.
a. The first order of business will be to change the sources list for each system with CloudStack packages.
This means all management servers, and any hosts that have the KVM agent. (No changes should be
necessary for hosts that are running VMware or Xen.)
Start by opening /etc/apt/sources.list.d/cloudstack.list on any systems that have CloudStack
packages installed.
This file should have one line, which contains:
deb http://cloudstack.apt-get.eu/ubuntu precise 4.0
If you're using your own package repository, change this line to read as appropriate for your 4.2.0
repository.
b. Now update your apt package list:
$ sudo apt-get update
c. Now that you have the repository configured, it's time to install the cloudstack-management package.
This will pull in any other dependencies you need.
$ sudo apt-get install cloudstack-management
d. On KVM hosts, you will need to manually install the cloudstack-agent package:
$ sudo apt-get install cloudstack-agent
During the installation of cloudstack-agent, APT will copy your agent.properties, log4j-cloud.xml,
and environment.properties from /etc/cloud/agent to /etc/cloudstack/agent.
When prompted whether you wish to keep your configuration, say Yes.
e. Verify that the file /etc/cloudstack/agent/environment.properties has a line that reads:
paths.script=/usr/share/cloudstack-common
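The check in step e can be scripted as a grep that fails loudly. A scratch copy of the file stands in for /etc/cloudstack/agent/environment.properties here:

```shell
# Sketch: verify the paths.script line is present and exact.
printf 'paths.script=/usr/share/cloudstack-common\n' > environment.properties  # scratch copy
if grep -q '^paths.script=/usr/share/cloudstack-common$' environment.properties; then
  echo "environment.properties OK"
else
  echo "paths.script missing or wrong" >&2
fi
```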
g. During the upgrade, log4j-cloud.xml was simply copied over, so the logs will continue to be added to
/var/log/cloud/agent/agent.log. There's nothing wrong with this, but if you prefer to be consistent,
you can change this by copying over the sample configuration file:
cd /etc/cloudstack/agent
mv log4j-cloud.xml.dpkg-dist log4j-cloud.xml
service cloudstack-agent restart
h. Once the agent is running, you can uninstall the old cloud-* packages from your system:
sudo dpkg --purge cloud-agent
13. If you are using CentOS or RHEL, follow this procedure to upgrade your packages. If not, skip to step 14.
Community Packages
This section assumes you're using the community supplied packages for CloudStack. If you've created
your own packages and yum repository, substitute your own URL for the ones used in these examples.
a. The first order of business will be to change the yum repository for each system with CloudStack
packages. This means all management servers, and any hosts that have the KVM agent. (No changes
should be necessary for hosts that are running VMware or Xen.)
Start by opening /etc/yum.repos.d/cloudstack.repo on any systems that have CloudStack packages
installed.
This file should have content similar to the following:
[apache-cloudstack]
name=Apache CloudStack
baseurl=http://cloudstack.apt-get.eu/rhel/4.0/
enabled=1
gpgcheck=0
If you are using the community provided package repository, change the baseurl to http://cloudstack.apt-get.eu/rhel/4.2/
If you're using your own package repository, change this line to read as appropriate for your 4.2.0
repository.
b. Now that you have the repository configured, it's time to install the cloudstack-management package by
upgrading the older cloud-client package.
$ sudo yum upgrade cloud-client
c. For KVM hosts, you will need to upgrade the cloud-agent package, similarly installing the new version as
cloudstack-agent.
$ sudo yum upgrade cloud-agent
During the installation of cloudstack-agent, the RPM will copy your agent.properties, log4j-cloud.xml, and environment.properties from /etc/cloud/agent to /etc/cloudstack/agent.
d. Verify that the file /etc/cloudstack/agent/environment.properties has a line that reads:
paths.script=/usr/share/cloudstack-common
14. If you have made changes to your existing copy of the file components.xml in your previous-version CloudStack
installation, the changes will be preserved in the upgrade. However, you need to do the following steps to place
these changes in a new version of the file which is compatible with version 4.2.0.
Note
How will you know whether you need to do this? If the upgrade output in the previous step included a
message like the following, then some custom content was found in your old components.xml, and you
need to merge the two files:
c. Merge your changes from the backup file into the new components.xml file.
# vi /etc/cloudstack/management/components.xml
15. After upgrading to 4.2, API clients are expected to send plain-text passwords for login and user creation, instead of
an MD5 hash. If API client changes are not acceptable, make the following changes for backward compatibility:
Modify componentsContext.xml to make PlainTextUserAuthenticator the default authenticator (the first entry in the
userAuthenticators adapter list is the default):
<!-- Security adapters -->
<bean id="userAuthenticators" class="com.cloud.utils.component.AdapterList">
<property name="Adapters">
<list>
<ref bean="PlainTextUserAuthenticator"/>
<ref bean="MD5UserAuthenticator"/>
<ref bean="LDAPUserAuthenticator"/>
</list>
</property>
</bean>
/etc/cloud/management/db.properties:
# cp -ap /etc/cloud/management/db.properties.rpmnew /etc/cloud/management/db.properties
c. Merge your changes from the backup file into the new db.properties file.
# vi /etc/cloudstack/management/db.properties
17. On the management server node, run the following command. It is recommended that you use the command-line
flags to provide your own encryption keys. See Password and Key Encryption in the Installation Guide.
# cloudstack-setup-encryption -e encryption_type -m management_server_key -k database_key
When run without arguments, the default encryption type and keys are used.
(Optional) For encryption_type, use file or web to indicate the technique used to pass in the database
encryption password. Default: file.
(Optional) For management_server_key, substitute the default key that is used to encrypt confidential
parameters in the properties file. Default: password. It is highly recommended that you replace this with a
more secure value
(Optional) For database_key, substitute the default key that is used to encrypt confidential parameters in the
CloudStack database. Default: password. It is highly recommended that you replace this with a more secure
value.
18. Repeat steps 10 - 17 on every management server node. If you provided your own encryption key in step 17, use
the same key on all other management servers.
19. Start the first Management Server. Do not start any other Management Server nodes yet.
# service cloudstack-management start
Wait until the databases are upgraded. Ensure that the database upgrade is complete. You should see a
message like "Complete! Done." After confirmation, start the other Management Servers one at a time by running
the same command on each node.
20. Start all Usage Servers (if they were running on your previous version). Perform this on each Usage Server host.
# service cloudstack-usage start
21. (KVM only) Perform the following additional steps on each KVM host.
These steps will not affect running guests in the cloud and are required only on the KVM hosts.
a. Configure your CloudStack package repositories as outlined in the Installation Guide
b. Stop the running agent.
# service cloud-agent stop
c. Update the agent software with the command set appropriate for your environment.
On RHEL/CentOS:
# yum update cloud-*
On Ubuntu:
# apt-get update
# apt-get upgrade cloud-*
d. Change the resource parameter in the agent.properties file by using the following command:
# sed -i 's/com.cloud.agent.resource.computing.LibvirtComputingResource/com.cloud.hypervisor.kvm.resource.LibvirtComputingResource/g' /etc/cloudstack/agent/agent.properties
e. Upgrade all the existing bridge names to new bridge names by running this script:
# cloudstack-agent-upgrade
f. Install a libvirt hook with the following commands:
# mkdir /etc/libvirt/hooks
# cp /usr/share/cloudstack-agent/lib/libvirtqemuhook /etc/libvirt/hooks/qemu
# chmod +x /etc/libvirt/hooks/qemu
g. Restart libvirtd.
# service libvirtd restart
h. Start the agent.
# service cloudstack-agent start
i. When the Management Server is up and running, log in to the CloudStack UI and restart the virtual router
for proper functioning of all the features.
22. Log in to the CloudStack UI as admin, and check the status of the hosts. All hosts should come to Up state
(except those that you know to be offline). You may need to wait 20 or 30 minutes, depending on the number of
hosts.
Do not proceed to the next step until the hosts show in the Up state. If the hosts do not come to the Up state,
contact support.
23. Run the following script to stop, then start, all Secondary Storage VMs, Console Proxy VMs, and virtual routers.
a. Run the command once on one management server. Substitute your own IP address of the MySQL
instance, the MySQL user to connect as, and the password to use for that user. In addition to those
parameters, provide the "-c" and "-r" arguments. For example:
# nohup cloudstack-sysvmadm -d 192.168.1.5 -u cloud -p password -c -r > sysvm.log 2>&1 &
# tail -f sysvm.log
This might take up to an hour or more to run, depending on the number of accounts in the system.
b. After the script terminates, check the log to verify correct execution:
# tail -f sysvm.log
24. If you would like additional confirmation that the new system VM templates were correctly applied when these
system VMs were rebooted, SSH into the System VM and check the version.
Use one of the following techniques, depending on the hypervisor.
XenServer or KVM:
SSH in by using the link local IP address of the system VM. For example, in the command below, substitute your
own path to the private key used to log in to the system VM and your own link local IP.
Run the following commands on the XenServer or KVM host on which the system VM is present:
# ssh -i private-key-path link-local-ip -p 3922
# cat /etc/cloudstack-release
ESXi
SSH in using the private IP address of the system VM. For example, in the command below, substitute your own
path to the private key used to log in to the system VM and your own private IP.
Run the following commands on the Management Server:
# ssh -i private-key-path private-ip -p 3922
# cat /etc/cloudstack-release
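To turn the manual version check into a pass/fail test, compare the contents of /etc/cloudstack-release against the expected release. The file's exact wording is an assumption, so a stand-in string is used in this sketch:

```shell
# Sketch: compare the system VM's reported release against the expected one.
expected=4.2.0
release='Cloudstack Release 4.2.0 (64-bit)'   # stand-in for: cat /etc/cloudstack-release
case "$release" in
  *"$expected"*) status=current ;;
  *)             status=stale ;;
esac
echo "system VM template is $status"
```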
25. If needed, upgrade all Citrix XenServer hypervisor hosts in your cloud to a version supported by CloudStack 4.2.0.
The supported versions are XenServer 5.6 SP2 and 6.0.2. Instructions for upgrade can be found in the
CloudStack 4.2.0 Installation Guide.
26. Apply the XenServer hotfix XS602E003 (and any other needed hotfixes) to XenServer v6.0.2 hypervisor hosts.
a. Disconnect the XenServer cluster from CloudStack.
In the left navigation bar of the CloudStack UI, select Infrastructure. Under Clusters, click View All. Select
the XenServer cluster and click Actions - Unmanage.
This may fail if there are hosts not in one of the states Up, Down, Disconnected, or Alert. You may need to
fix that before unmanaging this cluster.
Wait until the status of the cluster has reached Unmanaged. Use the CloudStack UI to check on the status.
When the cluster is in the unmanaged state, there is no connection to the hosts in the cluster.
b. To clean up the VLAN, log in to one XenServer host and run:
/opt/xensource/bin/cloud-clean-vlan.sh
c. Now prepare the upgrade by running the following on one XenServer host:
/opt/xensource/bin/cloud-prepare-upgrade.sh
If you see a message like "can't eject CD", log in to the VM and unmount the CD, then run this script again.
d. Upload the hotfix to the XenServer hosts. Always start with the Xen pool master, then the slaves. Using your
favorite file copy utility (e.g. WinSCP), copy the hotfixes to the host. Place them in a temporary folder such
as /root or /tmp.
On the Xen pool master, upload the hotfix with this command:
xe patch-upload file-name=XS602E003.xsupdate
Make a note of the output from this command, which is a UUID for the hotfix file. You'll need it in another
step later.
Note
(Optional) If you are applying other hotfixes as well, you can repeat the commands in this section
with the appropriate hotfix number. For example, XS602E004.xsupdate.
e. Manually live migrate all VMs on this host to another host. First, get a list of the VMs on this host:
# xe vm-list
Then use this command to migrate each VM. Replace the example host name and VM name with your
own:
# xe vm-migrate live=true host=host-name vm=VM-name
Troubleshooting
If you see a message like "You attempted an operation on a VM which requires PV drivers to be
installed but the drivers were not detected," run:
/opt/xensource/bin/make_migratable.sh b6cf79c8-02ee-050b-922f-49583d9f1a14.
f. Apply the hotfix. First, get the UUID of this host:
# xe host-list
Then use the following command to apply the hotfix. Replace the example host UUID with the current host
ID, and replace the hotfix UUID with the output from the patch-upload command you ran on this machine
earlier. You can also get the hotfix UUID by running xe patch-list.
xe patch-apply host-uuid=host-uuid uuid=hotfix-uuid
g. Copy the following files from the CloudStack Management Server to the host.
Copy /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/xenserver60/NFSSR.py to /opt/xensource/sm/NFSSR.py
Copy /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/setupxenserver.sh to /opt/xensource/bin/setupxenserver.sh
Copy /usr/lib64/cloudstack-common/scripts/vm/hypervisor/xenserver/make_migratable.sh to /opt/xensource/bin/make_migratable.sh
h. (Only for hotfixes XS602E005 and XS602E007) You need to apply a new Cloud Support Pack.
Download the CSP software onto the XenServer host from one of the following links:
For hotfix XS602E005: http://coltrane.eng.hq.xensource.com/release/XenServer-6.x/XS6.0.2/hotfixes/XS602E005/56710/xe-phase-2/xenserver-cloud-supp.tgz
For hotfix XS602E007: http://coltrane.eng.hq.xensource.com/release/XenServer-6.x/XS6.0.2/hotfixes/XS602E007/57824/xe-phase-2/xenserver-cloud-supp.tgz
Extract the file:
# tar xf xenserver-cloud-supp.tgz
Run the following script:
# xe-install-supplemental-pack xenserver-cloud-supp.iso
If the XenServer host is part of a zone that uses basic networking, disable Open vSwitch (OVS):
# xe-switch-network-backend bridge
i. Reboot this XenServer host.
j. Run the following:
/opt/xensource/bin/setupxenserver.sh
Note
If the message "mv: cannot stat `/etc/cron.daily/logrotate': No such file or directory" appears, you can
safely ignore it.
k. Run the following:
for pbd in `xe pbd-list currently-attached=false | grep ^uuid | awk '{print $NF}'`; do xe pbd-plug uuid=$pbd ; done
l. On each slave host in the Xen pool, repeat these steps, starting from "manually live migrate VMs."
4.1.2. VM Snapshot
createVMSnapshot (Creates a virtual machine snapshot; see Section 2.4.16, Virtual Machine Snapshots for VMware)
deleteVMSnapshot (Deletes a virtual machine snapshot)
listVMSnapshot (Shows a virtual machine snapshot)
revertToVMSnapshot (Returns a virtual machine to the state and data saved in a given snapshot)
4.1.7. NIC
addNicToVirtualMachine (Adds a new NIC to the specified VM on a selected network; see Section 2.3.14, Configuring
Multiple IP Addresses on a Single NIC)
removeNicFromVirtualMachine (Removes the specified NIC from a selected VM.)
updateDefaultNicForVirtualMachine (Updates the specified NIC to be the default one for a selected VM.)
addIpToNic (Assigns secondary IP to a NIC.)
removeIpFromNic (Removes a secondary IP from a NIC.)
listNics (Lists the NICs associated with a VM.)
4.1.8. Regions
addRegion (Registers a Region into another Region; see Section 2.1.1, Regions)
updateRegion (Updates Region details: ID, Name, Endpoint, User API Key, and User Secret Key.)
removeRegion (Removes a Region from current Region.)
listRegions (Get all the Regions. They can be filtered by using the ID or Name.)
4.1.9. User
getUser (This API can only be used by the Admin. Get user account details by using the API Key.)
getApiLimit (Show number of remaining APIs for the invoking user in current window)
resetApiLimit (Resets the API count. For root admin: if the accountId parameter is passed, it resets the count for
that particular account; otherwise, it resets all counters.)
4.1.11. Locking
lockAccount (Locks an account)
lockUser (Locks a user account)
4.1.12. VM Scaling
scaleVirtualMachine (Scales the virtual machine to a new service offering.)
4.1.26. Simulator
configureSimulator (Configures a simulator.)
4.1.31. Portable IP
createPortableIpRange (Adds a range of portable IPs to a Region.)
deletePortableIpRange (Deletes a range of portable IPs associated with a Region.)
listPortableIpRanges (Lists portable IP ranges.)
Changed API Commands
listNetworkACLs
copyTemplate
listRouters
updateConfiguration
listVolumes
suspendProject
listRemoteAccessVpns
registerTemplate
addTrafficMonitor
createTemplate
migrateVolume
The following new request parameter is added: livemigrate (optional)
The following new response parameter is added: displayvolume
createAccount
updatePhysicalNetwork
listTrafficMonitors
attachIso
listProjects
enableAccount
listPublicIpAddresses
enableStorageMaintenance
listLoadBalancerRules
stopRouter
listClusters
attachVolume
updateVPCOffering
resetSSHKeyForVirtualMachine
updateCluster
listPrivateGateways
ldapConfig
The following parameters have been made optional: searchbase, hostname, queryfilter
The following new response parameter is added: ssl
listTemplates
listNetworks
restartNetwork
prepareTemplate
rebootVirtualMachine
changeServiceForRouter
ldapRemove
updateServiceOffering
updateStoragePool
listFirewallRules
updateUser
updateProject
updateTemplate
disableUser
activateProject
createNetworkACL
enableStaticNat
registerIso
createVolume
startRouter
listCapabilities
createServiceOffering
restoreVirtualMachine
createNetwork
createVlanIpRange
createZone
deployVirtualMachine
createNetworkOffering
listNetworks
listNetworkOfferings
addF5LoadBalancer
configureNetscalerLoadBalancer
addNetscalerLoadBalancer
listF5LoadBalancers
configureF5LoadBalancer
listNetscalerLoadBalancers
listRouters
listVirtualMachines
listRouters
listZones
listFirewallRules
createFirewallRule
listUsageRecords
deleteIso
addCluster
updateCluster
createStoragePool
listStoragePools
updateDiskOffering
changeServiceForVirtualMachine
listCapabilities
createRemoteAccessVpn
startVirtualMachine
detachIso
updateVPC
associateIpAddress
listProjectAccounts
disableAccount
listPortForwardingRules
migrateVirtualMachine
cancelStorageMaintenance
createPortForwardingRule
addVpnUser
createVPCOffering
assignVirtualMachine
listConditions
createPrivateGateway
updateVirtualMachine
destroyRouter
listServiceOfferings
listUsageRecords
createProject
createLoadBalancerRule
updateAccount
copyIso
uploadVolume
createDomain
stopVirtualMachine
listAccounts
createSnapshot
updateIso
listIpForwardingRules
updateNetwork
destroyVirtualMachine
createDiskOffering
rebootRouter
listConfigurations
createUser
listDiskOfferings
detachVolume
deleteUser
listSnapshots
markDefaultZoneForAccount
restartVPC
updateHypervisorCapabilities
updateLoadBalancerRule
listVlanIpRanges
listHypervisorCapabilities
updateNetworkOffering
createVirtualRouterElement
listVpnUsers
listUsers
listSupportedNetworkServices
listIsos