ViPR Controller 3.6.2
Version 3.6.2
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS-IS." DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381
www.DellEMC.com
Use this roadmap as a starting point for ViPR Controller installation and configuration.
You must perform the following high-level sequence of steps to install and configure
ViPR Controller. These steps must be completed for each instance of a ViPR
Controller virtual data center. Once ViPR Controller is installed and configured, you
can automate block and file storage provisioning tasks within the ViPR Controller
virtual data center.
1. Review the ViPR Controller readiness checklist.
2. Obtain the EMC ViPR Controller license file.
3. Determine which method you will be using to deploy ViPR Controller, and follow
the installation instructions:
l Install ViPR Controller on VMware as a vApp
l Install ViPR Controller on VMware without a vApp
l Install ViPR Controller on Hyper-V
4. Optionally:
l Install the ViPR Controller CLI
l Deploy a compute image server
5. Once you have installed the ViPR Controller, refer to the ViPR Controller User
Interface Tenants, Projects, Security, Users and Multisite Configuration Guide to:
l Add users into ViPR Controller via authentication providers.
l Assign roles to users.
l Create multiple tenants (optional)
l Create projects.
6. Prepare to configure the ViPR Controller virtual data center, as described in the
ViPR Controller Virtual Data Center Requirements and Information Guide.
7. Configure the ViPR Controller virtual data center as described in the ViPR
Controller User Interface Virtual Data Center Configuration Guide.
Use this checklist as an overview of the information you will need when you install and
configure the EMC ViPR Controller virtual appliance.
For the specific models and versions supported by ViPR Controller, and for ViPR Controller resource requirements, see the ViPR Controller Support Matrix.
l Identify a VMware or Hyper-V instance on which to deploy ViPR Controller.
l Make sure all ESXi servers (or all Hyper-V servers) on which ViPR Controller will be installed are synchronized with accurate NTP servers.
l Collect credentials to access the VMware or Hyper-V instance.
Deploying ViPR Controller requires credentials for an account that has privileges
to deploy on the VMware or Hyper-V instance.
l Refer to the ViPR Controller Support Matrix to understand the ViPR Controller
VMware or Hyper-V resource requirements, and verify that the VMware or Hyper-
V instance has sufficient resources for ViPR Controller deployment.
l If deploying on VMware, it is recommended to deploy ViPR Controller on a minimum of a 3-node ESXi DRS cluster, and to set an anti-affinity rule of "Separate Virtual Machines" among the ViPR Controller nodes on the available ESXi nodes. Refer to the VMware vSphere documentation for instructions on setting up ESX/ESXi DRS anti-affinity rules.
l Identify 4 IP addresses for a 3-node deployment or 6 IP addresses for a 5-node deployment. The addresses are needed for the ViPR Controller VMs and for the virtual IP by which REST clients and the UI access the system. The addresses can be IPv4 or IPv6.
Note
In dual mode, all controllers and VIPs must have both IPv6 and IPv4 addresses.
l A supported browser.
l Download the ViPR Controller deployment files from support.EMC.com.
l For each ViPR Controller VM, collect: IP address, IP network mask, IP network
gateway, and optionally IPv6 prefix length and IPv6 default gateway.
l Two or three DNS servers
l The DNS servers configured for ViPR Controller deployment must be configured
to perform both forward and reverse lookup for all devices that will be managed by
ViPR Controller.
l Two or three NTP servers.
l ViPR Controller requires that the ICMP protocol be enabled for installation and normal usage.
l FTP/FTPS or CIFS/SMB server for storing ViPR Controller backups remotely. You
need the URL of the server and credentials for an account with read and write
privileges on the server. Plan for 6 GB per backup initially, then monitor usage and
adjust as needed.
l A valid SMTP server and email address.
l An Active Directory or LDAP server and related attributes.
ViPR Controller validates added users against an authentication server. To use accounts other than the built-in user accounts, you must specify an authentication provider.
Starting with ViPR Controller 3.0, a new licensing model was deployed.
Overview
Starting with Release 3.0, ViPR Controller implemented a new licensing model. The
new model supports a new-format managed capacity license and a raw, usable, frame-
based capacity license. With the raw capacity single license file, each license file can
include multiple increments, both array-type and tiered.
The new licensing model is not compatible with the old-format managed capacity
license used with older versions of ViPR Controller.
ViPR Controller 3.6.2 new installation
l For a fresh ViPR 3.6.2 installation with a new license, you should encounter no
problem and may proceed normally.
l If you try to do a fresh ViPR 3.6.2 installation with an old license, you will receive
an error message "Error 1013: License is not valid" and will not be able to proceed
with the installation. You must open a Service Request (SR) ticket to obtain a new
license file.
ViPR Controller 3.6.2 upgrade installation
l For an upgrade ViPR 3.6.2 installation with an old license, ViPR 3.6.2 will continue
to use the old-format license, but the license will say "Legacy" when viewing the
Version and License section of the Dashboards in the ViPR GUI. There is no
automatic conversion to the new-format license. To convert to the new-format
license, you must open a Service Request (SR) ticket to obtain a new license file.
After you upload the new-format license, the GUI display will show "Licensed".
Pre-3.0 versions of ViPR Controller
l Pre-3.0 versions of ViPR Controller will accept the new-format license file.
However, they will only recognize the last increment in the new file.
l After you upgrade to Version 3.0 or greater, you will need to upload the new-
format license again.
The license file is needed during initial setup of ViPR Controller or when adding
capacity to your existing ViPR Controller deployment. Initial setup steps are described
in the deployment sections of this guide.
In order to obtain the license file you must have the License Authorization Code
(LAC), which was emailed to you.
Note
There is a new licensing model for EMC ViPR Controller Version 3.0 and above. For
details, refer to the chapter "Licensing Model" in the EMC ViPR Controller Installation,
Upgrade, and Maintenance Guide, which can be found on the ViPR Controller Product
Documentation Index.
If you are adding a ViPR Controller license to an existing deployment, follow these
steps to obtain a license file.
Procedure
1. Go to support.EMC.com
2. Select Service Center.
3. Select Product Registration & Licenses > Manage Licenses and Usage
Intelligence.
4. Select ViPR Controller from the list of products.
5. On the LAC Request page, enter the LAC code and Activate.
6. Select the entitlements to activate and Start Activation Process.
7. Select Add a Machine to specify any meaningful string for grouping licenses.
The "machine name" does not have to be a machine name at all; enter any
string that will help you keep track of your licenses.
8. Enter the quantities for each entitlement to be activated, or select Activate All.
Click Next.
If you are obtaining licenses for a multisite (geo) configuration, distribute the
controllers as appropriate to obtain individual license files for each virtual data
center.
For a System Disaster Recovery environment, you do NOT need extra licenses
for Standby sites. The Active site license is shared between the sites.
9. Optionally specify an addressee to receive an email summary of the activation
transaction.
10. Click Finish.
11. Click Save to File to save the license file (.lic) to a folder on your computer.
Note
The following separate software products are pre-installed on the image: ViPR
Controller 3.7 SOFTWARE IMAGE (453-011-026) and SLES12 SW GPL3 OPEN
SOURCE SOFTWARE IMAGE (453-011-027).
l vipr-<version>-controller-2+1.ova
Deploys on 3 VMs. One VM can go down without affecting availability of the
virtual appliance.
l vipr-<version>-controller-3+2.ova
Deploys on 5 VMs. Two VMs can go down without affecting availability of the
virtual appliance.
This option is recommended for deployment in production environments.
9. If resource pools are configured (not required for ViPR Controller), select one.
10. Select the datastore or datastore cluster for your appliance.
11. Select a disk format:
l Thick Provision Lazy Zeroed (Default)
l Thick Provision Eager Zeroed (Recommended for production deployment)
l Thin Provision
12. On the Network Mapping page, map the source network to a destination
network as appropriate.
(If you are running vSphere Web Client, you can disregard the "IP protocol:
IPv4" indicator; it is part of the standard screen text. In fact this deployment is
used for both IPv4 and IPv6.)
13. Enter values for the properties.
Note that when entering IP addresses, you must enter values for the IPv4
properties, or IPv6 properties, or both (if dual stack), according to the mode
you need to support.
Server n IPv4 address
Key name: network_n_ipaddr
One IPv4 address for public network. Each Controller VM requires either a
unique, static IPv4 address in the subnet defined by the netmask, or a
unique static IPv6 address, or both.
Note that an address conflict across different ViPR Controller installations
can result in ViPR Controller database corruption that would need to be
restored from a previous good backup.
Network netmask
Key name: network_netmask
IPv4 netmask for the public network interface.
Virtual IPv6 address
IPv6 address used for UI and REST client access. See also Avoid conflicts in EMC ViPR network virtual IP addresses.
15. Wait 7 minutes after powering on the VM before you follow the next steps. This
will give the ViPR Controller services time to start up.
16. Open https://ViPR_virtual_ip with a supported browser and log in as root.
Initial password is ChangeMe.
The ViPR_virtual_IP is the ViPR Controller public virtual IP address, also known
as the network.vip (the IPv4 address) or the network.vip6 (IPv6). Either value,
or the corresponding FQDN, can be used for the URL.
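For example, any of the following URL forms would work (the addresses and host name are illustrative values, not defaults):
https://192.0.2.50
https://[2001:db8::50]
https://vipr.example.com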
17. Browse to and select the license file that was downloaded from the EMC license
management web site, then Upload License.
18. Enter new passwords for the root and system accounts.
The passwords must meet these requirements:
l at least 8 characters
l at least 1 lowercase
l at least 1 uppercase
l at least 1 numeric
l at least 1 special character
l no more than 3 consecutive repeating characters
l at least 2 characters changed from the previous password (settable)
l not one of the last 3 passwords (settable)
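For example, a password such as Vipr#2018pwd (illustrative only) satisfies these rules: it is at least 8 characters long and contains lowercase, uppercase, numeric, and special characters, with no more than 3 consecutive repeating characters.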
The ViPR Controller root account has all privileges that are needed for initial
configuration; it is also the same as the root user on the Controller VMs. The
system accounts (sysmonitor, svcuser, and proxyuser) are used internally by
ViPR Controller.
19. For DNS servers, enter two or three IPv4 or IPv6 addresses (not FQDNs),
separated by commas.
20. For NTP servers, enter two or three IPv4 or IPv6 addresses (not FQDNs),
separated by commas.
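For example (illustrative addresses): 192.0.2.10,192.0.2.11,192.0.2.12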
21. Select a transport option for ConnectEMC (FTPS (default), SMTP, or none)
and enter an email address (user@domain) for the ConnectEMC Service
notifications.
If you select the SMTP transport option, you must specify an SMTP server
under SMTP settings in the next step. "None" disables ConnectEMC on the
ViPR Controller virtual appliance.
If TLS/SSL encryption is used, the SMTP server must have a valid CA certificate.
23. Finish.
At this point ViPR Controller services restart (this can take several minutes).
After you finish
You can now set up Authentication Providers as described in ViPR Controller User
Interface Tenants, Projects, Security, Users and Multisite Configuration Guide, and set up
your virtual data center as described in ViPR Controller User Interface Virtual Data
Center Configuration Guide. Both guides are available from the ViPR Controller Product
Documentation Index.
l Be prepared to provide new passwords for the ViPR Controller root and system
accounts.
l You need IPv4 and/or IPv6 addresses for DNS and NTP servers.
l Optionally, you need the name of an SMTP server. If TLS/SSL encryption is used,
the SMTP server must have a valid CA certificate.
l You need access to the ViPR Controller license file.
l For details about redeploying ViPR Controller minority nodes see the EMC ViPR
Controller System Disaster Recovery, Backup and Restore Guide, which is available
from the ViPR Controller Product Documentation Index.
Procedure
1. Log in to a Linux or Windows computer that has IP access to the vCenter
Server or to a specific ESXi server.
2. Download vipr-<version>-controller-vsphere.zip from the ViPR download page
on support.emc.com.
3. Unzip the ZIP file.
4. Open a bash command window on Linux, or a PowerShell window on Windows,
and change to the directory where you unzipped the installer.
5. To deploy ViPR Controller, run the vipr-<version>-deployment installer script.
You can run the script in interactive mode or from the command line. Interactive mode guides you through the installation, and the interactive script encodes the vCenter username and password for you, so you are not required to encode special characters manually. Interactive mode can be run from a bash shell or from PowerShell.
If you choose to deploy the ViPR Controller from the command line, you will
need to manually enter the deployment parameters, and escape special
characters if any are used in the vCenter username and password.
The following are examples of deploying ViPR Controller from the command line, in a bash shell and in PowerShell. See the following table for complete syntax.
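A minimal sketch of such an invocation is shown below; the .sh and .ps1 file extensions are assumptions, and the placeholder stands for the parameters documented in the table that follows:
bash shell:
./vipr-<version>-deployment.sh -mode install <additional parameters from the table below>
PowerShell:
.\vipr-<version>-deployment.ps1 -mode install <additional parameters from the table below>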
Option Description
-help Optional. Displays the list of parameters and their descriptions.
-mode install Required for initial install.
-mode redeploy Required to redeploy a node for restore. For details see
the EMC ViPR Controller System Disaster Recovery, Backup
and Restore Guide, which is available from the ViPR
Controller Product Documentation Index.
-interactive Optional for install and redeploy.
Prompts for user input, one parameter at a time. Do not use delimiters (no single or double quotes) in interactive mode.
-vmprefix Optional for install, and redeploy.
Prefix of virtual machine name.
You can use either -vmprefix, or -vmname, but not both.
Option Description
esxi_host_uri is My-vcenter-or-ESXi.example.com/datacenter-name/host/host-name/Resources/resource-pool
Entering the username and password in the target URI is
optional. If you do not enter the user name and password
in the Target URI you will go into interactive mode, and be
prompted to enter them during installation. An example
for entering the URI without a user name and password is:
My-vcenter-or-ESXi.example.com/ViPR-DataCenter/host/ViPR-Cluster/Resources/ViPR-Pool
If you choose to enter the username and password in the URI, you must escape special characters using % followed by their ASCII hex value when you use URIs as locators. For example, if the username requires a backslash (for example, domain\username), use %5c instead of \ (that is, use domain%5cusername), for example:
vi://mydomain.com
%5cmyuser1:password1@vcenter1.emc.com:443/My-
Datacenter/host/ViPR-Cluster/Resources/ViPR-Pool
For details refer to the VMware OVF Tool User Guide.
6. If redeploying a failed node, for the remaining steps refer to the EMC ViPR
Controller System Disaster Recovery, Backup and Restore Guide, which is available
from the ViPR Controller Product Documentation Index.
If installing ViPR Controller for the first time, repeat steps 1 - 5 for each node
you are installing.
You will need to enter all of the information required to install the first node; however, you will not need to enter all of the information for the additional nodes. A .settings file is created during installation of the first node. The settings file is used to supply the configuration information for the remaining nodes.
For each subsequent node, you only need to change the parameters that differ, such as the node ID, VM name, or target datastore.
8. When the installer script indicates successful deployment and the VMs are
powered on, open the ViPR Controller UI with a supported browser and log in as
root.
l The initial password is ChangeMe.
l The ViPR_virtual_IP is the ViPR Controller public virtual IP address, which is
the vip or vip6 value. You can also use the corresponding FQDN for the URL.
9. Browse to and select the license file that was downloaded from the EMC license
management web site, then Upload License.
10. Enter new passwords for the root and system accounts.
The passwords must meet these requirements:
l at least 8 characters
l at least 1 lowercase
l at least 1 uppercase
l at least 1 numeric
l at least 1 special character
l no more than 3 consecutive repeating characters
l at least 2 characters changed from the previous password (settable)
l not one of the last 3 passwords (settable)
The ViPR Controller root account has all privileges that are needed for initial
configuration; it is also the same as the root user on the Controller VMs. The
system accounts (sysmonitor, svcuser, and proxyuser) are used internally by
ViPR Controller.
11. For DNS servers, enter two or three IPv4 or IPv6 addresses (not FQDNs),
separated by commas.
12. For NTP servers, enter two or three IPv4 or IPv6 addresses (not FQDNs),
separated by commas.
13. Select a transport option for ConnectEMC (FTPS (default), SMTP, or none)
and enter an email address (user@domain) for the ConnectEMC Service
notifications.
If you select the SMTP transport option, you must specify an SMTP server
under SMTP settings in the next step. "None" disables ConnectEMC on the
ViPR Controller virtual appliance.
In an IPv6-only environment, use SMTP for the transport protocol. (The
ConnectEMC FTPS server is IPv4-only.)
14. (Optional) Specify an SMTP server and port for notification emails (such as
ConnectEMC alerts, ViPR Controller approval emails), the encryption type
(TLS/SSL or not), a From address, and authentication type (login, plain,
CRAM-MD5, or none).
Optionally test the settings and supply a valid addressee. The test email will be
from the From Address you specified and will have a subject of "Mail Settings
Test".
If TLS/SSL encryption is used, the SMTP server must have a valid CA certificate.
You can now set up Authentication Providers as described in ViPR Controller User Interface Tenants, Projects, Security, Users and Multisite Configuration Guide, and set up your virtual data center as described in ViPR Controller User Interface Virtual Data Center Configuration Guide. Both guides are available from the ViPR Controller Product Documentation Index.
l You need credentials to log in to the System Center Virtual Machine Manager (SCVMM).
l Be prepared to provide new passwords for the ViPR Controller root and system
accounts.
l You need IPv4 and/or IPv6 addresses for DNS and NTP servers.
l You need the name of an SMTP server. If TLS/SSL encryption is used, the SMTP
server must have a valid CA certificate.
l You need access to the ViPR Controller license file.
l Note the following restrictions on ViPR Controller VMs in a Hyper-V deployment:
n Hyper-V Integration Services are not supported. Do not install Integration
Services on ViPR Controller VMs.
n Restoring from a Hyper-V virtual machine checkpoint or clone is not supported.
n Modifying VM memory, CPU, or data disk size requires powering off the whole cluster before making the change with SCVMM.
Procedure
1. Log in to the SCVMM server using the Administrator account, and copy the zip
file to the SCVMM server node.
2. Unzip the ZIP file.
3. Open a PowerShell window and change to the directory where you unzipped the file.
4. To deploy ViPR Controller, run the vipr-<version>-deployment installer script.
You can run the script in interactive mode, or through the command line.
Interactive mode will easily guide you through the installation, or you can use
the command line to enter the parameters on your own.
For interactive mode enter:
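A minimal sketch, assuming the installer is a PowerShell script (the .ps1 extension is an assumption; -mode and -interactive are described in the table below):
.\vipr-<version>-deployment.ps1 -mode install -interactive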
From the command line, you enter the deployment parameters yourself. See the following table for complete syntax.
Option Description
-help Optional. Displays the list of parameters and their descriptions.
-mode install Required for initial install.
-mode redeploy Required to redeploy a node for restore. For details see the EMC ViPR Controller System Disaster Recovery, Backup and Restore Guide, which is available from the ViPR Controller Product Documentation Index.
-interactive Optional for install, and redeploy.
Prompts for user input, one parameter at a time. Do not use delimiters (no single or double quotes) in interactive mode.
Option Description
address assigned to this node will be the address specified in ipaddrs_3.
For example, when deploying a ViPR Controller 2+1 on different hosts of a Hyper-V cluster, you run the installer script 3 times, using different values each time for the -nodeid and -vmpath options. The order of IP addresses for the -ipaddrs_n option must be the same each time.
Option Description
Name of the virtual machine. Enter a different value for each node, for example, vipr1, vipr2, vipr3.
You can use either -vmprefix or -vmname, but not both.
5. If redeploying a failed node, for the remaining steps, refer to the EMC ViPR
Controller System Disaster Recovery, Backup and Restore Guide, which is available
from the ViPR Controller Product Documentation Index.
If installing ViPR Controller for the first time, repeat steps 1 - 4 for each node
you are installing.
You will need to enter all of the information required to install the first node; however, you will not need to enter all of the information for the additional nodes. A .settings file is created during installation of the first node. The settings file is used to supply the configuration information for the remaining nodes.
The ViPR_virtual_IP is the ViPR Controller public virtual IP address, also known
as the network.vip (the IPv4 address) or the network.vip6 (IPv6). Either value,
or the corresponding FQDN, can be used for the URL.
8. Browse to and select the license file that was downloaded from the EMC license
management web site, then Upload License.
9. Enter new passwords for the root and system accounts.
The passwords must meet these requirements:
l at least 8 characters
l at least 1 lowercase
l at least 1 uppercase
l at least 1 numeric
l at least 1 special character
l no more than 3 consecutive repeating characters
l at least 2 characters changed from the previous password (settable)
l not one of the last 3 passwords (settable)
The ViPR Controller root account has all privileges that are needed for initial
configuration; it is also the same as the root user on the Controller VMs. The
system accounts (sysmonitor, svcuser, and proxyuser) are used internally by
ViPR Controller.
10. For DNS servers, enter two or three IPv4 or IPv6 addresses (not FQDNs),
separated by commas.
11. For NTP servers, enter two or three IPv4 or IPv6 addresses (not FQDNs),
separated by commas.
12. Select a transport option for ConnectEMC (FTPS (default), SMTP, or none)
and enter an email address (user@domain) for the ConnectEMC Service
notifications.
If you select the SMTP transport option, you must specify an SMTP server
under SMTP settings in the next step. "None" disables ConnectEMC on the
ViPR Controller virtual appliance.
If TLS/SSL encryption is used, the SMTP server must have a valid CA certificate.
14. Finish.
At this point ViPR Controller services restart. This can take several minutes.
After you finish
You can now set up Authentication Providers as described in ViPR Controller User
Interface Tenants, Projects, Security, Users and Multisite Configuration Guide, and set up
your virtual data center as described in ViPR Controller User Interface Virtual Data
Center Configuration Guide. Both guides are available from the ViPR Controller Product
Documentation Index.
Note
For sites with self-signed certificates or where issues are detected, optionally
use http://<ViPR_Controller_VIP>:9998/cli only when you are inside
a trusted network. <ViPR_Controller_VIP> is the ViPR Controller public virtual
IP address, also known as the network vip. The CLI installation bundle is
downloaded to the current directory.
4. Use tar to extract the CLI and its support files from the installation bundle.
tar -xvf <cli_install_bundle>
5. Run the CLI installation program.
python setup.py install
6. Change directory to /opt/storageos/cli or to the directory where the CLI
is installed.
7.
Note
Perform this step only when you have not provided the correct input in step 5.
Edit the viprcli.profile file using the vi command, set the VIPR_HOSTNAME environment variable to the ViPR Controller public virtual IP address and the VIPR_PORT environment variable to 4443, and save the file.
# vi viprcli.profile
#!/usr/bin/sh
:wq
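After editing, the relevant lines of viprcli.profile might look like the following sketch; the IP address is an illustrative value, and the export line is an assumption about how the variables are made available to the shell:
#!/usr/bin/sh
VIPR_HOSTNAME=192.0.2.50
VIPR_PORT=4443
export VIPR_HOSTNAME VIPR_PORT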
8. Run the source command to set the path environment variable for the ViPR
Controller executable.
source ./viprcli.profile
9. From the command prompt run the viprcli -h command.
If the help for viprcli is displayed, then the installation is successful.
10. Authenticate (log into) the ViPR Controller instance with the viprcli to confirm
that your installation was successful.
Note
For sites with self-signed certificates or where issues are detected, optionally
use http://<ViPR_Controller_virtual_IP>:9998/cli only when you
are inside a trusted network. <ViPR_Controller_virtual_IP> is the ViPR
Controller public virtual IP address, also known as the network vip.
l If your browser prompts you to save the ViPR-cli.tar.gz file, save it to
the temporary CLI installer directory that you created in step 2. For example,
c:\cli\temp.
l If your browser automatically downloads the ViPR-cli.tar.gz file,
without giving you the opportunity to select a directory, then copy the
downloaded ViPR-cli.tar.gz file to the temporary CLI installer directory
that you created in step 2.
4. Open a command prompt and change to the directory you created in step 2,
where you saved or copied the ViPR-cli.tar.gz file. This example will use
c:\cli\temp.
5. Enter the python console by typing python at the command prompt:
c:\cli\temp>python
Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64
bit (AMD64)] on win
32
Type "help", "copyright", "credits" or "license" for more
information.
>>>
6. Using the tarfile module, open and extract the files from the ViPR-cli.tar.gz file.
>>> import tarfile
>>> tfile = tarfile.open('ViPR-cli.tar.gz')
>>> tfile.extractall('.')
>>> exit()
7. Since you are already in the directory to which the files have been extracted,
run the python setup.py install command. Follow the installation
instructions and provide the required information.
Note
You can also enter y to select the defaults for the installation directory (C:\EMC\ViPR\cli) and the port number (4443).
8. (Optional) If incorrect information was provided in the previous step, edit the
viprcli.profile.bat file and set the following variables.
Variable Value
SET VIPR_HOSTNAME The ViPR Controller hostname, set to the fully
qualified domain name (FQDN) of the ViPR
Controller host, or the virtual IP address of your
ViPR Controller configuration.
SET VIPR_PORT The ViPR Controller port. The default value is 4443.
9. Change directories to the location where the viprcli was installed. The default is:
C:\EMC\ViPR\cli.
10. Run the viprcli.profile.bat command.
11. Authenticate (log into) the ViPR Controller instance with the viprcli to confirm
that your installation was successful.
See Authenticating with viprcli.
C:/>
viprcli authenticate -u root -d c:\tmp
C:/>
viprcli -hostname <fqdn, or host ip> authenticate -u
root -d c:\tmp
Do not end the directory path with a '\'. For example, use c:\tmp, not c:\tmp\.
Type the password when prompted.
Authenticate on Linux
To log into the default ViPR Controller instance use:
#
viprcli authenticate -u root -d /tmp
#
viprcli -hostname <fqdn, or host ip> authenticate -u root
-d /tmp
Note
The non-root users must have read, write, and execute permissions to use the CLI
installed by root. However, they don't need all these permissions for installing and
running the CLI in their home directory.
2. If you do not have the original files that you used to install the ViPR Controller
CLI, then follow the steps to extract the CLI and its support files that are
appropriate for your platform:
3. In the directory to which you extracted the CLI files, run the CLI uninstall
program.
python setup.py uninstall
4. When prompted, provide the directory where the CLI is installed, for
example /opt/storageos/cli.
For information about ViPR Controller support for a Vblock system, see the ViPR Controller Virtual Data Center Requirements and Information Guide, which is available from the ViPR Controller Product Documentation Index.
11. Select a disk format: Thick Provision Lazy Zeroed, Thick Provision Eager
Zeroed, or Thin Provision.
12. On the Network Mapping page, specify a destination network for the
Management Network and for the private OS Install Network.
13. Enter the values for the properties:
Property Description
Appliance fully qualified name: FQDN of the image server host name.
Management Network IP Address: IPv4 address for the Management Network interface.
Management Network Netmask: IPv4 netmask for the Management Network interface.
Management Network Gateway: IPv4 address for the Management Network gateway.
Private OS Install Network IP address: IPv4 address for the OS Install Network interface.
DNS Server(s): IPv4 addresses for one or more DNS servers.
Search Domain(s): One or more domains for directing searches.
Time Zone: Select the time zone where the image server resides.
/etc/sysconfig/network/ifcfg-eth1
DEVICE='eth1'
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='12.0.55.10'
NETMASK='255.255.255.0'
/etc/dhcpd.conf
ddns-update-style none;
ignore client-updates;
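A subnet declaration for the private OS install network typically accompanies these directives in /etc/dhcpd.conf. The following is an illustrative sketch only; the address range and the pxelinux.0 boot file name are assumptions that mirror the eth1 example above:
subnet 12.0.55.0 netmask 255.255.255.0 {
  range 12.0.55.100 12.0.55.200;
  next-server 12.0.55.10;
  filename "pxelinux.0";
}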
/etc/sysconfig/dhcpd
# listen on eth1 only
DHCPD_INTERFACE="eth1"
/etc/xinetd.d/tftp
service tftp
{
socket_type = dgram
protocol = udp
wait = yes
user = root
server = /usr/sbin/in.tftpd
server_args = -s /opt/tftpboot/ -vvvvvvv
disable = no
per_source = 11
cps = 100 2
flags = IPv4
}
l SSH access
n User account must have permissions to write to TFTPBOOT directory.
n User account must have permissions to execute mount/umount commands
l Python
l Enough disk space to store multiple OS images - at least 50 GB is recommended
l No firewall blocking standard SSH, DHCP, TFTP ports and HTTP on 44491 (or a
custom port chosen for HTTP).
l wget binary must be installed.
Review the login and user role requirements before setting up ViPR Controller.
Note
In Geo-federated Environment:
l Has Security Administrator privileges on authentication providers, which
are global resources.
Note
System Monitor
l Has read-only access to all resources in the ViPR Controller virtual data center. Has no visibility into security-related resources, such as authentication providers, ACLs, and role assignments.
l Retrieves bulk event and statistical records for the ViPR Controller
virtual data center.
l Retrieves ViPR Controller virtual data center status and health
information.
l (API only) Can create an alert event, with error logs attached, as an aid
to troubleshooting. The alert event is sent to ConnectEMC.
l View the Database Housekeeping Status.
l View the minority node recovery status from the ViPR Controller UI, and
CLI.
l List backups from external server.
l Check upload status of a backup.
l Check restore status.
l Create/Modify File Protection Policy templates
l Create vPools
l File Protection Policy assignment at project and file system levels (also
need System Administrator privileges)
System Auditor Has read-only access to the ViPR Controller virtual data center audit logs.
Tenant-level roles
Tenant roles are used to administer tenant-specific settings, such as the service catalog and projects, and to assign additional users to tenant roles. The following table lists the authorized actions for each user role at the tenant level.
Note
In Geo-federated Environment:
l Has Tenant Administrator privileges on tenants, which are global
resources.
Project Administrator
l Creates projects in their tenant and obtains an OWN ACL on the created project.
l Pre-upgrade planning
l Upgrade ViPR Controller
l Upgrade Virtual Hardware
l Add the Node ID property in VMware after upgrading the ViPR Controller vApp
l Change ScaleIO storage provider type and parameters after upgrading ViPR Controller
l Upgrade the ViPR Controller CLI
Pre-upgrade planning
Some pre-upgrade steps are required and you should prepare for ViPR Controller to be
unavailable for a period of time.
l For supported upgrade paths and most recent environment and system
requirements, see the EMC ViPR Controller Release Notes, which are available
from the ViPR Controller Product Documentation Index. If the release notes
indicate that you need interim upgrades, you should first follow the upgrade
instructions for those prior releases.
l To ensure your environment is compliant with the latest support matrix, review the
ViPR Controller Support Matrix.
l Determine whether you will upgrade from an EMC-based repository, or from an internal location to which you first download the ViPR Controller installation files.
n If upgrading from an EMC-based repository, configure the ViPR Controller to
point to the EMC-based repository as described in: Configuring ViPR Controller
for upgrade from an EMC-based repository.
n If your site cannot access the EMC repository, and you will be installing from an
internal location refer to Configuring ViPR Controller for an upgrade from an
internal location.
l Verify that the ViPR Controller status is Stable from the ViPR Controller UI
System > Dashboard.
l In a multisite (geo) configuration, don't start an upgrade under these conditions:
n if there are add, remove, or update VDC operations in progress on another
VDC.
n if an upgrade is already in progress on another VDC.
n if any other VDCs in the federation are unreachable, or have been manually
disconnected, or if the current VDC has been disconnected.
In these cases, you should manually disconnect the unreachable VDC, and
reconnect any disconnected VDC.
n Also, make sure that the ports that are used for IPSec in ViPR Controller 3.6
are open (not blocked by a firewall) in the customer environment between the
datacenters.
l Before upgrading, make a backup of the ViPR Controller internal databases using a
supported backup method so that in the unlikely event of a failure, you will be able
to restore to the previous instance. Refer to the version of ViPR Controller backup
documentation that matches the version of ViPR Controller you are backing up.
For ViPR Controller versions 2.4 and later, backup information is provided in the
EMC ViPR Controller Disaster Recovery, Backup and Restore Guide. For earlier
versions, backup information is provided in the EMC ViPR Controller Installation,
Upgrade, and Maintenance Guide.
l Prepare for the ViPR Controller virtual appliance to be unavailable for provisioning
operations for 6 minutes plus approximately 1 minute for every 10,000 file shares,
volumes, block mirrors, and block snapshots in the ViPR Controller database.
System Management operations will be unavailable for a period of 8 minutes (for a
2+1 Controller node deployment) or 12 minutes (for a 3+2 Controller node
deployment) plus approximately 1 minute for every 10,000 file shares, volumes,
block mirrors, and block snapshots in the ViPR Controller database.
l Verify that all ViPR Controller orders have completed before you start the
upgrade.
Note
ViPR Controller does not support spaces in project names; therefore, spaces are not supported in XtremIO folder names.
Option Description
Repository URL URL to the EMC upgrade repository. One value only.
Default value is https://colu.emc.com/soap/rpc.
Option Description
Proxy HTTP/HTTPS proxy required to access the EMC upgrade
repository. Leave empty if no proxy is required.
Username Username to access EMC Online Support.
Password Password to access EMC Online Support.
Check Frequency Number of hours between checks for new upgrade
versions.
3. Click Save.
After you finish
Use the following command to configure the ViPR Controller for an upgrade from an
EMC-based repository using the ViPR Controller CLI:
Note
If you have modified the viprcli.profile file appropriately, you do not need to
append -hostname <vipr_ip_address> to the command.
For complete details refer to the ViPR Controller CLI Reference Guide which is available
from the ViPR Controller Product Documentation Index.
Note
If you have modified the viprcli.profile file appropriately, you do not need
to append -hostname <vipr_ip_address> to the command.
For complete details refer to the ViPR Controller CLI Reference Guide which is
available from the ViPR Controller Product Documentation Index.
3. Enter the following to upload the image file to a location on the ViPR Controller
virtual appliance where it will be found by ViPR Controller to upgrade:
For details about using the ViPR Controller CLI see: ViPR Controller CLI
Reference Guide, which is available from the ViPR Controller Product
Documentation Index.
4. Proceed to the next section to upgrade to the new version.
Wait for the system state to be Stable before making provisioning or data
requests.
5. If you are upgrading on a ViPR Controller instance that was deployed as a
VMware vApp, then continue to add the Node ID property as described in Add
the Node ID property in VMware after upgrading the ViPR Controller vApp.
After you finish
To upgrade ViPR Controller from the ViPR Controller CLI use the following command:
For complete details refer to the ViPR Controller CLI Reference Guide which is available
from the ViPR Controller Product Documentation Index.
Note the following about ViPR Controller after an upgrade:
l Modified ViPR Controller catalog services are always retained on upgrade, but to
obtain new services, and original versions of modified services, go to Edit Catalog,
and click Update Catalog.
l After upgrading to version 2.4 or higher, any array with meta volumes needs to be rediscovered before you attempt to ingest those meta volumes.
l After upgrading to version 2.4 or higher, rediscover your RecoverPoint Data
Protection Systems. This refreshes ViPR Controller's system information and
avoids inconsistencies when applying RecoverPoint protection with ViPR
Controller 2.4 or higher.
Note
For more information about upgrading the virtual machine hardware version, refer to https://kb.vmware.com/s/article/1010675.
property in VMware after upgrading to ViPR Controller 2.4 or higher. You do not have
to perform this action if this is a new installation, and not an upgrade.
Note
Failure to perform this operation after upgrading from ViPR Controller versions 2.3.x or earlier will cause ViPR Controller operational failures if, at any time, you use vSphere to rename the original ViPR Controller vApp node names.
Procedure
1. From VMware vSphere, power off the ViPR Controller vApp.
2. Right click on the first virtual machine in the ViPR Controller vApp, and choose
Edit Settings.
3. Go to the Options > vApp Options > Advanced menu.
4. Open the Properties, and create a new property with the following settings:
l Enter a Label, optionally name it Node ID.
l Leave the Class ID empty.
l Enter "node_id" for the ID. The name "node_id" is required for the id name,
and cannot be modified.
l Leave the Instance ID empty.
l Optionally enter a Description of the ViPR Controller node.
l Type: string.
l Enter the Default value, which must be the node ID set by ViPR Controller during deployment; for example, vipr1 for the first ViPR Controller node, vipr2 for the second ViPR Controller node.
ViPR Controller values for a 3 node deployment are vipr1, vipr2, vipr3, and
for a 5 node deployment are vipr1, vipr2, vipr3, vipr4, and vipr5.
l Check User Configurable.
5. Repeat steps 2 through 4 for each virtual machine deployed with the ViPR
Controller vApp.
6. Power on the ViPR Controller vApp.
Change ScaleIO storage provider type and parameters after upgrading ViPR Controller
Procedure
1. Navigate to Physical > Storage Providers.
2. Select the ScaleIO storage provider.
The Edit Storage Provider screen appears.
3. Change Type to ScaleIO Gateway.
4. Change Host to the FQDN or IP Address of the ScaleIO Gateway host.
5. Change Port to the port used to communicate with the ScaleIO REST API service.
l With SSL enabled, the default is 443.
l With SSL disabled, the default is 80.
6. Select Save.
7. Navigate to Physical Assets > Storage Systems.
8. For each of the storage systems associated with the updated ScaleIO storage
provider:
a. Select the ScaleIO storage system.
b. Click Rediscover.
You may need to change IP addresses or customize node names in order to avoid
conflicts.
Change the IP address of a ViPR Controller node on VMware without vApp, or Hyper-V, using the ViPR Controller UI
If you are not changing your subnet, you will be able to log back into ViPR
Controller 5 to 15 minutes after the configuration change has been made. Only
perform steps 6 and 7 if you are changing your network adapter settings in the
VM management console.
6. Go to your VM management console (vSphere for VMware or SCVMM for
Hyper-V), and change the network settings for each virtual machine.
7. Power on the VMs from the VM management console.
You should be able to log back into ViPR Controller 5 to 15 minutes after powering on the VMs.
If you changed the ViPR Controller virtual IP address, remember to log in with the new virtual IP. ViPR Controller will not redirect you from the old virtual IP to the new virtual IP.
5. As the node powers on, select the 2nd option in the GRUB boot menu:
Configuration of a single ViPR(vipr-x.x.x.x.x) Controller node.
Be aware that you will only have a few seconds to select this option before the
virtual machine proceeds with the default boot option.
6. On the Cluster Configuration screen, select the appropriate ViPR node id and
click Next.
7. On the Network Configuration screen, enter the new IP addresses for all nodes
that need to change in the appropriate fields and click Next.
You will only need to type new IP addresses in one node, and then accept new
configuration on subsequent nodes in steps 12-13.
8. On the Deployment Confirmation screen, click Config.
9. Wait for the "Multicasting" message at the bottom of the console next to the
Config button, then power on the next ViPR Controller node.
10. As the node powers on, right-click the node and select Open Console.
11. On the next node, select the new VIP.
Note: if you changed the VIP in a previous step, you will see two similar options.
One has the old VIP, the other has the new VIP. Be sure to select the new VIP.
12. Confirm the Network Configuration settings, which are prepopulated.
13. On the Deployment Confirmation screen, click Config.
14. Wait for the "Multicasting" message at the bottom of the console next to the
Config button, then power on the next ViPR Controller node.
15. Repeat steps 10 through 14 for the remaining nodes.
16. When the "Multicasting" message has appeared for all nodes, select Reboot
from the console, for each ViPR node.
After you finish
At this point the IP address change is complete. Note that the virtual machine will fail
to boot up after an IP address change if the ViPR Controller is part of a multi-VDC
(geo) configuration. In this case you would need to revert the IP address change.
Procedure
1. From the ViPR UI, shutdown all VMs (Dashboards > Health > Shutdown All).
2. Open the SCVMM UI on the SCVMM Server that hosts the ViPR Controller.
3. On the SCVMM UI, right-click the ViPR Controller node whose IP address you
want to change and select Power On.
4. On the SCVMM UI, as the node powers on, right-click the node and select
Connect or View > Connect via Console.
5. On the console GRUB menu, select the 2nd option, Configuration of a single
node.
Be aware that you will only have a few seconds to select this option before the
virtual machine proceeds with the default boot option.
6. On the Cluster Configuration screen, select the appropriate ViPR Controller
node id and click Next.
7. On the Network Configuration screen, enter the new IP addresses for all nodes
that need to change in the appropriate fields and click Next.
You will only need to type new IP addresses in one node, and then accept new
configuration on subsequent nodes in steps 12-13.
8. On the Deployment Confirmation screen, click Config.
9. Wait for the "Multicasting" message at the bottom of the console next to the
Config button, then power on the next ViPR Controller node.
10. On the SCVMM UI, as the node powers on, right-click the node and select
Connect or View > Connect via Console.
11. On the next node, select the new VIP for the cluster configuration.
Note
If you changed the VIP in a previous step, you will see two similar options. One has the old VIP, the other has the new VIP. Be sure to select the new VIP.
Custom node names can be used to identify the nodes in the ViPR Controller UI, REST API, and ViPR Controller logs. The custom node names can also be used to SSH between the ViPR Controller nodes.
By default, ViPR Controller is installed with the following node IDs, which are also the default node names: vipr1, vipr2, and vipr3 for a 3-node deployment, or vipr1 through vipr5 for a 5-node deployment.
During initial deployment, the default names are assigned to the nodes in ViPR
Controller, vSphere for VMware installations, and SCVMM for Hyper-V installations.
Note
Node ids cannot be changed. Only the node names can be changed.
Note
l Whether you change the node name or not, if you have deployed ViPR Controller
on VMware with a vApp, and you are upgrading from ViPR Controller versions
2.3.x or lower, then you will need to add the node_id property in VMware after
upgrading to ViPR Controller 2.4 or higher, as described in Add the Node ID
property in VMware after upgrading the ViPR Controller vApp. You do not have to
perform this action if this is a new installation and not an upgrade.
# cat nodenames-file.txt
node_1_name=mynode1.domain.com
node_2_name=mynode2.domain.com
node_3_name=mynode3.domain.com
node_4_name=mynode4.domain.com
node_5_name=mynode5.domain.com
use_short_node_name=true
Where node_n_name sets the node name for the associated ViPR Controller node ID. For example:
l The value for node_1_name will replace the node name for vipr1
l The value for node_2_name will replace the node name for vipr2
l The value for node_3_name will replace the node name for vipr3
l The value for node_4_name will replace the node name for vipr4
l The value for node_5_name will replace the node name for vipr5
You can change the node names for as many nodes as are deployed, in either a 3-node or a 5-node deployment.
2. Run the CLI command to update properties, and pass the file as an argument:
PUT https://ViPR_Controller_VIP:4443/config/properties/
<property_update>
<properties>
<entry>
<key>node_1_name</key>
<value>mynode1.domain.com</value>
</entry>
<entry>
<key>node_2_name</key>
<value>mynode2.domain.com</value>
</entry>
<entry>
<key>node_3_name</key>
<value>mynode3.domain.com</value>
</entry>
<entry>
<key>node_4_name</key>
<value>mynode4.domain.com</value>
</entry>
<entry>
<key>node_5_name</key>
<value>mynode5.domain.com</value>
</entry>
<entry>
<key> use_short_node_name </key>
<value>true</value>
</entry>
</properties>
</property_update>
Where the node name key sets the node name for the associated ViPR Controller node ID. For example:
l The value for node_1_name will replace the node name for vipr1
l The value for node_2_name will replace the node name for vipr2
l The value for node_3_name will replace the node name for vipr3
l The value for node_4_name will replace the node name for vipr4
l The value for node_5_name will replace the node name for vipr5
You can change the node names for as many nodes as are deployed, in either a 3-node or a 5-node deployment.
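As an illustration, the property update shown above could be submitted with any REST client; the following curl sketch assumes you have already obtained an authentication token and saved the XML payload to a file (the header name, -k flag, and file name are assumptions about your client setup, not prescribed by this guide):
curl -k -X PUT \
  -H "X-SDS-AUTH-TOKEN: <auth_token>" \
  -H "Content-Type: application/xml" \
  -d @property_update.xml \
  https://ViPR_Controller_VIP:4443/config/properties/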
For more details about using the ViPR Controller REST API, see the ViPR Controller
REST API Reference, available as a zip file from the ViPR Controller Product
Documentation Index.
If ViPR Controller was deployed as separate VMs (that is, no vApp), the
individual VMs are visible in the VMs and Templates view.
e. Adjust the Processor and Memory settings. Refer to the EMC ViPR Support
Matrix for recommended processor and memory sizes.
f. Repeat for each ViPR Controller node.
Use identical settings for processor and memory on all ViPR Controller nodes.
3. In the SCVMM UI, power up the ViPR Controller VM.
ConnectIN is used by EMC Support to interact with ViPR Controller. ConnectIN uses
the ESRS protocol for communications. ConnectIN functionality is generic and does
not require configuration in ViPR Controller. After you register ViPR Controller, EMC
engineers will be able to establish an ESRS tunnel to your ViPR Controller instance
and start an SSH or UI session.
Option Description
SMTP server SMTP server or relay for sending email.
Port Port on which the SMTP service on the SMTP server is listening for
connections. "0" indicates the default SMTP port is used (25, or
465 if TLS/SSL is enabled).
Encryption Use TLS/SSL for the SMTP server connections.
Authentication Authentication type for connecting the SMTP server.
Username Username for authenticating with SMTP server.
Password Password for authenticating with SMTP server.
From address From email address to send email messages (user@domain).
Once these settings have been enabled, you can continue to configure ViPR Controller
for ConnectEMC, Tenant Approver, and user email notifications.
To receive email from ConnectEMC
Configure the ConnectEMC email from the System > General Configuration >
ConnectEMC tab.
To send email to Tenant Approvers
Configure the Tenant Approver email from the Tenants > Approval Settings page.
To send email to root users
You must be logged in as root. Open the root drop-down menu in the right corner of
the ViPR Controller UI title bar, and select Preferences.
To send email to provisioning users
You must be logged in as the provisioning user. Open the user drop-down menu in the
right corner of the ViPR Controller UI title bar, and select Preferences.
System Disaster Recovery provides email alerts for two types of issue:
1. Network issue (the Active site has lost communication with a Standby site)
2. A Standby site has become Degraded, due to a loss of connection with the Active
site for ~15 minutes.
Example 1:
From: "vipr210@vipr.com" <vipr210@vipr.com>
Date: Wednesday, February 10, 2016 5:55 PM
To: Corporate User <root.user@emc.com>
Subject: ATTENTION - standby1-214 network is broken
Your standby site: standby1-214's network connection to Active site has been broken.
Please note that this could be reported for the following reasons. 1) Network
connection between standby site and active site was lost. 2) Standby site is powered
off. 3) Network latency is abnormally large and could cause issues with disaster
recovery operations.
Thank you, ViPR
Example 2:
From: "vipr210@vipr.com" <vipr210@vipr.com>
Date: Wednesday, February 10, 2016 5:55 PM
To: Corporate User <root.user@emc.com>
Subject: ATTENTION - standby 10.247.98.73 is degraded
Your Standby site 10.247.98.73_name has been degraded by Active site at 2016-04-05
10:28:27. This could be caused by following reasons (including but not limited to):1)
Network connection between Standby site and Active site was lost.2) Majority of
nodes in Standby site instance are down.3) Active or Standby site has experienced an
outage or majority of nodes and not all nodes came back online (its controller status is
"Degraded").
Please verify network connectivity between Active site and Standby Site(s), and make
sure Active and Standby Site's controller status is "STABLE".NOTE: If Active site or
Standby site temporarily experienced and outage of majority of nodes, the Standby
site can only return to synchronized state with Active when ALL nodes of Active and
Standby site(s) are back and their controller status is "STABLE".
Thank you, ViPR
4. In the License File field, click Browse and select the license file that was saved
to your local host.
5. Click Upload License File.
5. Send.
Note
System logs and alerts are site-specific. In a System Disaster Recovery environment,
logs can be viewed and collected separately on the Active site and the Standby
site(s).
Each ViPR Controller service on each virtual machine logs messages at an appropriate
level (INFO, DEBUG, WARN and ERROR) and the service logs can be viewed when a
problem is suspected. However, the log messages may not provide information that
can be acted on by a System Administrator, and may need to be referred to EMC.
System alerts are a class of log message generated by the ViPR Controller system
management service aimed at System Administrators and reflect issues, such as
environment configuration and connectivity, that a System Administrator should be
able to resolve.
Download ViPR Controller System logs
The download button enables you to download a zip file containing the logs that correspond to the current filter setting. In addition to the logs directory, the zip file also contains an info directory, containing the configuration parameters currently applied, and an orders directory showing all orders that have been submitted.
1. From the ViPR Controller UI go to the System > Logs page.
2. Click Download and specify the content that will be packaged in the zip file
containing the logs.
A logs archive (.zip) file called logs-<date>-<time>.zip will be downloaded. The
logs archive contains all log, system configuration, and order information. You can
identify the service log file for a specific node in the zip file by the log file name. The .log files are named as follows: servicename_nodeid_nodename.log, for example:
l apisvc.vipr1.mynodename.log is a log file of the API service operations run on the first node of a ViPR Controller, where mynodename is the custom node name provided by the user.
If a custom node name was not provided, then the node ID appears in place of the node name, for example:
l apisvc.vipr1.vipr1.log.
Note
If you complete the Downloads > Download System Logs > Orders field and specify
a date range longer than 30 days, you get a zip file which includes only the most
recent 30 days of orders. If you want orders for a longer time period, go to Catalog >
All Orders to download orders older than 30 days.
system events, the log level that you want to retrieve, the time span over which logs should be considered, and a string that any filtered message must contain.
Audit Log
The System > Audit Log page displays the recorded activities performed by
administrative users for a defined period of time.
The Audit Log table displays the Time at which the activity occurred, the Service Type
(for example, vdc or tenant), the User who performed the activity, the Result of the
operation, and a Description of the operation.
Filtering the Audit Log Display
1. Select System > Audit Log. The Audit Log table defaults to displaying activities
from the current hour on the current day and with a Result Status of ALL STATUS (both SUCCESS and FAILURE).
2. To filter the Audit Log table, click Filter.
3. In the Filter System Logs dialog box, you can specify the following filters:
l Result Status: Specify ALL STATUS (the default), SUCCESS, or FAILURE.
l Start Time: To display the audit log for a longer time span, use the calendar
control to select the Date from which you want to see the logs, and use the
Hour control to select the hour of day from which you want to display the audit
log.
l Service Type: Specify a Service Type (for example, vdc or tenant).
l User: Specify the user who performed the activity.
l Keyword: Specify a keyword term to filter the Audit Log even further.
4. Select Update to display the filtered Audit Log.
Downloading Audit Logs
1. Select System > Audit Log. The Audit Log table defaults to displaying activities
from the current hour on the current day and with a Result Status of ALL STATUS
(both SUCCESS and FAILURE).
2. To download audit logs, click Download.
3. In the Download System Logs dialog box, you can specify the following filters:
l Result Status: Specify ALL STATUS (the default), SUCCESS, or FAILURE.
l Start Time: Use the calendar control to select the Date from which you want
to see the logs, and use the Hour control to select the hour of day from which
you want to display the audit log.
l End Time: Use the calendar control to select the Date to which you want to
see the logs, and use the Hour control to select the hour of day to which you
want to display the audit log. Check Current Time to use the current time of
day.
l Service Type: Specify a Service Type (for example, vdc or tenant).
l User: Specify the user who performed the activity.
l Keyword: Specify a keyword term to filter the downloaded system logs even
further.
4. Select Download to download the system logs to your system as a zip file.
Set log retention policy and forward all real-time log events to a remote Syslog server
This feature lets you set the log retention policy and forward and consolidate all
real-time log events to one or more common, configured, remote Syslog servers, which
helps you analyze the log events. After successful configuration, all logs from all ViPR
services except Nginx (for example, syssvc, apisvc, dbsvc) are forwarded in real time
to the remote Syslog server. Audit logs are also forwarded.
Procedure
1. Specify the number of days for which controller logs should be retained. Click
the down-arrow to the right of Log Retention in days and select a value from 7
to 30. The default value is 30 days.
Note
Logs will be removed even before this period if they exceed 10% of the data
disk size.
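To check how much of the data disk is currently in use on a node, a standard
disk-usage command can be run on the controller (the /data mount point below is an
assumption based on the directory used elsewhere in this guide):
# df -h /data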
9. Configure the format/template for how and where the logs are saved by
editing /etc/rsyslog.conf. Following are two examples:
a. Example 1: Configure a central location to save the logs (/var/log/
syslog/TemplateLogs):
$template MyTemplate, "[ViPR] - <%pri%> - %timestamp% - %FROMHOST% - %HOSTNAME% -## %PROGRAMNAME% ##- %syslogtag% -MM- %msg%\n"
local2.* -/var/log/syslog/TemplateLogs;MyTemplate
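After editing /etc/rsyslog.conf, the rsyslog daemon must re-read its configuration
before the template takes effect. On most Linux distributions one of the following
restarts it (a generic sketch; use whichever service manager your Syslog host runs):
# systemctl restart rsyslog
or, on older init-based systems:
# service rsyslog restart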
Option Description
Syslog Settings
l Enable Remote Syslog: Select True to specify and enable a remote Syslog server.
Remote Server Settings
l Syslog Transport Protocol: Specify the Syslog transport protocol. Select UDP, TCP,
or "TCP with encryption" (TLS). For a UDP or TCP connection, you specify a Syslog
Server IP or FQDN and a Port. For a TLS connection, you specify a Syslog Server IP
or FQDN, a Port, a Security Certificate, and a ViPR Controller Security Certificate.
Remote Syslog Servers & Ports
l Server: The IP address of the remote Syslog server. You can obtain this from the
Syslog server Administrator.
l Port: The port number for the server. The ports on which syslog services typically
accept connections are 514/10514.
l Certificate: This field appears only if you selected "TCP with encryption" (TLS) as
the Syslog Transport Protocol. This field contains the certificate file from the remote
Syslog server. Paste the entire content of server.crt (including the --Start and --End
strings), generated in step 1, if TLS is enabled.
l Add: Click this button to add additional remote Syslog servers.
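If you want to confirm that the certificate content you are about to paste is valid, a
quick check on any host with OpenSSL installed looks like the following (a generic
illustration, not a ViPR-specific tool; server.crt is the file name referenced in the
table above):
# openssl x509 -in server.crt -noout -subject -dates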
13. Click the Test button to validate the Syslog server input before saving.
14. Click Save.
Confirm the remote Syslog server setup:
15. Confirm that the remote Syslog server is saving the ViPR Controller logs as
expected.
a. Login to the remote Syslog server.
b. Verify that the Syslog server is running and listening on the configured port
and protocol. In the example below, it is UDP on port 514:
# netstat -uanp|grep rsyslog
udp 0 0 0.0.0.0:514 0.0.0.0:* 21451/rsyslogd
udp 0 0 :::514 :::* 21451/rsyslogd
c. Determine the location of the logs specified in the template section
of /etc/rsyslog.conf. In the example below it is /var/log/syslog/
TemplateLogs.
$template MyTemplate, "[ViPR] - <%pri%> - %timestamp% - %FROMHOST% - %HOSTNAME% -## %PROGRAMNAME% ##- %syslogtag% -MM- %msg%\n"
local2.* -/var/log/syslog/TemplateLogs;MyTemplate
d. Go to the directory defined in /etc/rsyslog.conf and confirm that logs
are written to that directory. Note that the format of the saved files depends
on the templates defined by the Syslog server System Administrator.
Example 1:
# tail -f /var/log/syslog/TemplateLogs
…
[ViPR] - <150> - Aug 19 09:28:33 - lglw2022.lss.emc.com -
vipr2 -## ...lable_versions><available_ver
##- ...lable_versions><available_ver -MM-
on><new_version>vipr-3.6.0...
[ViPR] - <150> - Aug 19 09:28:33 - lglw2023.lss.emc.com -
vipr3 -## vipr ##- vipr -MM- vipr3 syssvc 2016-08-19
09:28:33 INFO DrUtil:531 - get local coordinator mode from
vipr3:2181
[ViPR] - <150> - Aug 19 09:28:33 - lglw2023.lss.emc.com -
vipr3 -## vipr ##- vipr -MM- vipr3 syssvc 2016-08-19
09:28:33 INFO DrUtil:543 - Get current zookeeper mode
leader
…
Example 2:
# tail -f /var/log/syslog/AuditLog.log
…
2016-08-19T07:56:56+00:00 vipr2 vipr vipr2 AuditLog
2016-08-19 07:56:56 INFO AuditLog:114 - audit log is
config null SUCCESS "Update system property
(config_version=1471593416583,network_syslog_remote_server
s_ports=10.247.102.30:514) succeed."
2016-08-19T07:58:04+00:00 vipr2 vipr vipr2 AuditLog
2016-08-19 07:58:04 INFO AuditLog:114 - audit log is
config null SUCCESS "Update system property
(config_version=1471593484027,network_syslog_remote_server
s_ports=lglw2030.lss.emc.com:
514,system_syslog_transport_protocol=TCP) succeed."
…
# tail -f /var/log/syslog/syssvcLog.log
…
2016-08-19T09:37:39+00:00 vipr4 vipr vipr4 syssvc
2016-08-19 09:37:39 INFO DrUtil:543 - Get current
zookeeper mode follower
2016-08-19T09:37:39+00:00 vipr4 vipr vipr4 syssvc
2016-08-19 09:37:39 INFO DrDbHealthMonitor:55 - Current
node is not ZK leader. Do nothing
Supported Storage Systems The storage driver may support both a storage system and
a storage provider. When this happens, the Supported Storage Systems column has two
items separated by a comma.
Status In-Use or Ready. When set to In-Use, the driver may be upgraded, but not
deleted. When set to Ready, the driver may be upgraded or deleted. If a driver is
In-Use and you want to delete it, you must first delete the storage systems that use
the driver. The status then changes to Ready.
Procedure
1. Select the row with the driver to be upgraded or deleted.
Check the Actions column to confirm your choice is possible.
2. Click Upgrade or Delete depending upon which task you want to perform.
When you click Upgrade, a page displays where you can browse to the new
driver .jar file and upload it. If you want to bypass the version check, select the
Force box. Then click Upgrade.
A dialog displays asking you to confirm that controller services will be restarted
by the upgrade process.
Results
A message, Upgrade request initiated, displays. When finished, the status
changes to Ready.
Diagutils
Diagutils is a built-in ViPR Controller utility program that enables you to easily
download critical troubleshooting information, including a database dump, logs, ZK
properties, and ViPR system properties.
Command
# /opt/storageos/bin/diagutils
Note
To execute the diagutils utility, you must enable root console login and permit root
SSH access; both are disabled by default.
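How you enable these is environment-specific and governed by your site's security
policy; purely as a generic Linux illustration (not a ViPR-specific procedure), root
SSH access is typically controlled by the PermitRootLogin directive of the SSH daemon:
# grep PermitRootLogin /etc/ssh/sshd_config
PermitRootLogin yes
# systemctl restart sshd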
Syntax
diagutils
<-all|-quick>
<-min_cfs|-all_cfs|-zk|-backup|-logs|-properties|-health>
[-ftp <ftp/ftps server url> -u <user name> -p <password>]
[-conf <config file>]
Options
-all
Includes the output gathered by options: -backup (with default backup name), -
zk, -logs, -properties, -health, and -all_cfs.
Equivalent to
diagutils -backup -zk -logs -properties -health -all_cfs
-ftp and -conf are the only two other options allowed in conjunction with -all.
Note
diagutils collects a large amount of data when the -all option is used, and will
take longer to run.
-quick
Includes the output gathered by options: -zk, -logs, -properties, -health,
and -min_cfs.
Equivalent to
diagutils -zk -logs -properties -health -min_cfs
-ftp and -conf are the only two other options allowed in conjunction with -quick.
-min_cfs
Collect a minimum set of column families via the output of dbutils list and/or cqlsh,
as shown in the example after the option descriptions.
The default cfs list includes:
BlockConsistencyGroup, BlockMirror, BlockSnapshot, Cluster, ExportGroup,
ExportMask, FCZoneReference, Host, Initiator, Network, NetworkSystem,
ProtectionSet, ProtectionSystem, StorageProvider, StoragePool, StoragePort,
StorageSystem, Vcenter, VirtualArray, VirtualDataCenter, VirtualPool, Volume.
-all_cfs
Collect all column families via the output of dbutils list and/or cqlsh.
Note
diagutils collects a large amount of data when the -all_cfs option is used, and
will take longer to run.
-zk
Collect zk jobs and queues through zkutils.
-logs
Collect all system logs (/var/log/messages) and ViPR logs, including rotated logs
and the orders from the last 30 days.
Note
Collecting orders is possible only with ViPR Controller 3.6 and above.
Note
ViPR Controller keeps a maximum of 30 days of logs. diagutils will take longer to
run if there is a large amount of logs.
-properties
Collect system properties (version, node count, node names, etc.).
-health
Collect system health information (for example, node and service status) and
performance data for the local node from top output.
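For reference, the column families listed under -min_cfs can also be inspected
individually on a controller node with dbutils; for example, to list the Volume column
family (illustrative; run as root on a ViPR Controller node):
# /opt/storageos/bin/dbutils list Volume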
Examples
diagutils -all -ftp ftp://10.247.101.11/logs -u usera -p xxx
diagutils -quick
diagutils -min_cfs -zk -logs
Output
diagutils creates a compressed zip archive of several formatted output text files.
When it completes running, it provides the name and location of the output file
generated.
For example: "The output file is: /data/diagutils-data/
diagutils-20161104011601.zip"
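To review the collected data, copy the archive off the node and extract it, for
example (the archive name is taken from the sample output above; the destination
directory is arbitrary):
# unzip /data/diagutils-data/diagutils-20161104011601.zip -d /tmp/diagutils
# ls /tmp/diagutils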
Exclusions and Limitations
l When a majority of nodes are down and the database and ZK are not accessible,
diagutils can only collect logs on live nodes and system properties.
l diagutils collects information only from the ViPR Controller system/cluster.
l diagutils generates a warning if the /data directory is 50% full, and it will exit if
the /data directory reaches 80% full.