Clustered Data ONTAP - NFS Best Practice and Implementation Guide
Executive Summary
This report serves as an NFSv3 and NFSv4 operational guide and an overview of the NetApp
clustered Data ONTAP operating system, with a primary focus on NFSv4. The report details
steps in the configuration of an NFS server, NFSv4 features, and the differences between
clustered Data ONTAP and Data ONTAP operating in 7-Mode.
TABLE OF CONTENTS
1 Introduction .............................................................................................................................................. 7
1.1 Scope ..............................................................................................................................................................7
Architecture ........................................................................................................................................... 9
Umask ...........................................................................................................................................................44
General Best Practices for NDO with NFS in Clustered Data ONTAP ..........................................................60
NFSv4.0 ........................................................................................................................................................68
NFSv4.1 ........................................................................................................................................................98
LIST OF TABLES
Table 1) Benefits of a cluster namespace. ...................................................................................................................11
Table 2) Export examples.............................................................................................................................................19
Table 3) Pros and cons for volume-based multitenancy based on design choice. .......................................................38
Table 4) Directory tree structure for volume-based multitenancy. ................................................................................38
Table 5) Export policy rule attributes. ...........................................................................................................................42
Table 6) Supported authentication types for ro, rw, and superuser. .............................................................................43
Table 7) Octal values in umask. ...................................................................................................................................45
Table 8) Caches and time to live (TTL). .......................................................................................................................51
Table 9) Replay cache NDO behavior. .........................................................................................................................57
Table 10) Lock state NDO behavior. ............................................................................................................................58
Table 11) NFSv4.x lock terminology. ............................................................................................................................72
Table 12) NFS lease and grace periods. ......................................................................................................................93
Table 13) Referrals versus migration versus pNFS. .....................................................................................................97
Table 14) NFSv4.1 delegation benefits. .....................................................................................................................101
Table 15) Limits on local users and groups in clustered Data ONTAP. ......................................................................120
Table 16) 7-Mode to clustered Data ONTAP mapping. ..............................................................................................122
Table 17) Limitations of existing security styles..........................................................................................................123
Table 18) Mixed versus unified security style. ............................................................................................................124
Table 19) NFS well-known principal definitions. .........................................................................................................125
Table 20) Mixed mode versus unified security style. ..................................................................................................128
Table 21) Common mount failures. ............................................................................................................................136
Table 22) Common access issues. ............................................................................................................................139
Table 23) Files written as nobody in NFSv4.............................................................................................................140
Table 24) Stale file handle on NFS mount. .................................................................................................................141
Table 25) Virtual machine statistic masks. .................................................................................................................145
Table 26) NFS options in clustered Data ONTAP. .....................................................................................................147
Table 27) Export policy rule options. ..........................................................................................................................154
Table 28) NFSv3 configuration options in clustered Data ONTAP. ............................................................................156
Table 29) NFSv4 configuration options in clustered Data ONTAP. ............................................................................157
LIST OF FIGURES
Figure 1) Cluster namespace. ......................................................................................................................................10
Figure 2) Client request to mount a file system in NFSv4.............................................................................................18
Figure 3) Server sends file handle to complete request. ..............................................................................................19
Figure 4) Symlink example using vsroot. ......................................................................................................................23
Figure 5) Volume-based multitenancy using junctioned volumes. ................................................................................36
Figure 6) Volume-based multitenancy using qtrees. ....................................................................................................37
Figure 7) UNIX permissions. ........................................................................................................................................44
Figure 8) RPC packet with 16 GIDs. ............................................................................................................................64
Figure 9) NFSv4.x read and write ops: no multiprocessor. ...........................................................................................67
Figure 10) NFSv4.x read and write ops: with multiprocessor. ......................................................................................67
Figure 11) pNFS data workflow. .................................................................................................................................100
Figure 12) Example of setting NFSv4 audit ACE........................................................................................................105
Figure 13) Multiprotocol user mapping. ......................................................................................................................113
Figure 14) Mixed-style (left) and unified-style (right) mode bit display on Windows. ..................................................124
Figure 15) UNIX permission in an NTFS ACL in unified style.....................................................................................126
1 Introduction
As more and more data centers evolve from application-based silos to server virtualization and scale-out
systems, storage systems have evolved to support this change. NetApp clustered Data ONTAP provides
shared storage for enterprise and scale-out storage for various applications, including databases, server
virtualization, and home directories. Clustered Data ONTAP provides a solution for emerging workload
challenges in which data is growing in size and becoming more complex and unpredictable.
Clustered Data ONTAP is unified storage software that scales out to provide efficient performance and
support of multitenancy and data mobility. This scale-out architecture provides large scalable containers
to store petabytes of data. The architecture also upgrades, rebalances, replaces, and redistributes load
without disruption, which means that the data is perpetually alive and active.
1.1 Scope
This document covers the following topics:
7-Mode and clustered Data ONTAP differences and similarities for NFS access-cache implementation
Configuration of NFSv4 features in clustered Data ONTAP, such as user ID mapping, delegations, ACLs, and referrals
Note: This document is not intended to provide information about migration from 7-Mode to clustered Data ONTAP; it is specifically about NFSv3 and NFSv4 implementation in clustered Data ONTAP and the steps required to configure it.
1.2
This technical report is for storage administrators, system administrators, and data center managers. It
assumes basic familiarity with the following:
Note:
This document contains advanced and diag-level commands. Exercise caution when using these
commands. If there are questions or concerns about using these commands, contact NetApp
Support for assistance.
Capacity Scaling
Capacity expansion in traditional storage systems might require downtime, either during physical
installation or when redistributing existing data across the newly installed capacity.
Performance Scaling
Standalone storage systems might lack the I/O throughput to meet the needs of large-scale
enterprise applications.
Availability
Traditional storage systems often have single points of failure that can affect data availability.
Right-Sized SLAs
Not all enterprise data requires the same level of service (performance, resiliency, and so on).
Traditional storage systems support a single class of service, which often results in poor utilization or
unnecessary expense.
Cost
With rapid data growth, storage is consuming a larger and larger portion of shrinking IT budgets.
Complicated Management
Discrete storage systems and their subsystems must be managed independently. Existing resource
virtualization does not extend far enough in scope.
2.2
Clustered Data ONTAP helps achieve results and get products to market faster by providing the
throughput and scalability needed to meet the demanding requirements of high-performance computing
and digital media content applications. Clustered Data ONTAP also facilitates high levels of performance,
manageability, and reliability for large Linux, UNIX, and Microsoft Windows clusters.
Features of clustered Data ONTAP include:
Scale-up, scale-out, and scale-down are possible with numerous nodes using a global namespace.
Storage virtualization with storage virtual machines (SVMs) eliminates the physical boundaries of a
single controller (memory, CPU, ports, disks, and so on).
Nondisruptive operations (NDO) are available when you redistribute load or rebalance capacity
combined with network load balancing options within the cluster for upgrading or expanding its nodes.
NetApp storage efficiency features such as NetApp Snapshot copies, thin provisioning, space
efficient cloning, deduplication, data compression, and NetApp RAID DP technology are also
available.
You can address solutions for the previously mentioned business challenges by using the scale-out
clustered Data ONTAP approach.
Scalable Capacity
Grow capacity incrementally, on demand, through the nondisruptive addition of storage shelves and
growth of storage containers (pools, LUNs, file systems). Support nondisruptive redistribution of
existing data to the newly provisioned capacity as needed using volume moves.
High Availability
Leverage highly available pairs to provide continuous data availability in the face of individual
component faults.
Unified Management
Provide a single point of management across the cluster. Leverage policy-based management to
streamline configuration, provisioning, replication, and backup. Provide a flexible monitoring and
reporting structure implementing an exception-based management model. Virtualize resources
across numerous controllers so that volumes become simple-to-manage logical entities that span
storage controllers for performance and dynamic redistribution of data.
3 Architecture
3.1
An SVM is a logical file system namespace capable of spanning beyond the boundaries of physical
nodes in a cluster.
Clients can access virtual servers from any node in the cluster, but only through the associated
logical interfaces (LIFs).
Each SVM has a root volume under which additional volumes are mounted, extending the
namespace.
It is associated with one or more logical interfaces; clients access the data on the virtual server
through the logical interfaces that can live on any node in the cluster.
A LIF is essentially an IP address with associated characteristics, such as a home port, a list of ports
for failover, a firewall policy, a routing group, and so on.
Client network data access is through logical interfaces dedicated to the SVM.
An SVM can have more than one LIF. You can have many clients mounting one LIF or one client
mounting several LIFs.
This means that IP addresses are no longer tied to a single physical interface.
Aggregates
An aggregate is a RAID-level collection of disks; it can contain more than one RAID group.
Aggregates serve as resources for SVMs and are shared by all SVMs.
Flexible Volumes
A volume is a logical unit of storage. The disk space that a volume occupies is provided by an
aggregate.
Each volume is associated with one individual aggregate, and therefore with one physical node.
3.2
Volumes can be moved from aggregate to aggregate with the NetApp DataMotion for Volumes
feature, without loss of access to the client. This capability provides more flexibility to move
volumes within a single namespace to address issues such as capacity management and load
balancing.
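For example, a volume can be relocated to another aggregate nondisruptively with a single command. The following is a minimal sketch; the SVM, volume, and aggregate names are placeholders:
cluster::> volume move start -vserver vs0 -volume vol1 -destination-aggregate aggr2
cluster::> volume move show -vserver vs0 -volume vol1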
The appendix contains a table that covers the various options used for NFS servers, the version of
clustered Data ONTAP in which they are available, the privilege level, and their use. All NFS server
options can be viewed using the nfs server show command or through NetApp OnCommand
System Manager.
Best Practice 1: NFS Server Options Recommendation (See Best Practice 2)
The best practice for setting NFS server options is to evaluate each option's relevance in an
environment on a case-by-case basis. The defaults are recommended in most cases, particularly in
all NFSv3 environments. Some use cases might arise that require options to be modified, such as
enabling NFSv4.0 to allow NFSv4 access. There is not a one-size-fits-all configuration for all
scenarios, so each use case should be evaluated at the time of implementation.
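For example, if NFSv4 access is required, enabling NFSv4.0 is a single NFS server option change (the SVM name here is a placeholder):
cluster::> vserver nfs modify -vserver vs0 -v4.0 enabled
cluster::> vserver nfs show -vserver vs0 -fields v4.0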
3.3 Cluster Namespace
A cluster namespace is a collection of file systems hosted from different nodes in the cluster. Each SVM
has a file namespace that consists of a single root volume. The SVM namespace consists of one or more
volumes linked by means of junctions that connect from a named junction inode in one volume to the root
directory of another volume. A cluster can have more than one SVM.
All the volumes belonging to the SVM are linked into the global namespace in that cluster. The cluster
namespace is mounted at a single point in the cluster. The top directory of the cluster namespace within a
cluster is a synthetic directory containing entries for the root directory of each SVM namespace in the
cluster.
Figure 1) Cluster namespace.
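As an illustration of how the namespace is stitched together, the junction paths of an SVM can be listed and changed with the volume commands. The following is a sketch with placeholder names:
cluster::> volume show -vserver vs0 -fields junction-path
cluster::> volume mount -vserver vs0 -volume vol2 -junction-path /vol1/vol2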
3.4
NetApp assumes that the following configuration steps were completed before you proceed with setting
up a clustered Data ONTAP NFS server.
Aggregate creation
SVM creation
LIF creation
Volume creation
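The following is a minimal sketch of those prerequisite steps; all names, sizes, and network values are placeholders, and the exact parameters depend on the environment and the clustered Data ONTAP version in use:
cluster::> storage aggregate create -aggregate aggr1 -diskcount 24
cluster::> vserver create -vserver vs0 -rootvolume rootvol -aggregate aggr1 -rootvolume-security-style unix
cluster::> network interface create -vserver vs0 -lif data1 -role data -data-protocol nfs -home-node node1 -home-port e0c -address 10.63.3.68 -netmask 255.255.255.0
cluster::> volume create -vserver vs0 -volume nfsvol -aggregate aggr1 -size 100g -junction-path /nfsvol
cluster::> vserver nfs create -vserver vs0 -v3 enabled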
Note: NFS server creation and options are explained in detail in the File Access and Protocols Management Guide for the version of clustered Data ONTAP being used.
3.5
Clustered Data ONTAP 8.3 allows storage administrators to provide the following benefits:
Nondisruptive operations
Why SVMs?
SVMs are logical storage containers that own storage resources such as flexible volumes, logical
interfaces (LIFs), exports, CIFS shares, and so on. Think of them as a storage blade center in your
cluster. These SVMs share physical hardware resources in the cluster with one another, such as network
ports/VLANs, aggregates with physical disk, CPU, RAM, switches, and so on. As a result, load for SVMs
can be balanced across a cluster for maximum performance and efficiency or to leverage SaaS
functionality, among other benefits.
Cluster Considerations
A cluster can comprise several HA pairs of nodes (4 HA pairs/8 nodes with SAN, 12 HA pairs/24 nodes
with NAS). Each node in the cluster has its own copy of a replicated database with the cluster and SVM
configuration information. Additionally, each node has its own set of user space applications that handle
cluster operations and node-specific caches, not to mention its own set of RAM, CPU, disks, and so on.
So while a cluster operates as a single entity, it does have the underlying concept of individualized
components. As a result, it makes sense to take under consideration the physical hardware in a cluster
when implementing and designing.
Ability to spread the load across nodes and leverage all the available hardware (CPU, RAM,
and so on)
If you load up all your NAS traffic on one node through one data LIF, you are not realizing the value of
the other nodes in the cluster. Spreading network traffic enables all available physical entities to be
used. Why pay for hardware you do not use?
Note:
Keep in mind that the preceding points are merely recommendations and not requirements unless
you use data locality features such as pNFS.
In some circumstances, the number of ports used to mount or for NFS operations might be exhausted,
which then causes subsequent mount and NFS operations to hang until a port is made available.
If an environment has thousands of clients that are mounted through NFS and generating I/O, it is
possible to exhaust all ports. One such scenario is with ESX using NFS datastores, because some best
practices call for a data LIF/IP address per datastore. For the most recent best practices for ESX/NFS
datastores, see TR-4333: VMware vSphere 5 on NetApp Clustered Data ONTAP.
This situation affects the source port (client-side) only: The mountd, portmapper, NFS, and nlm ports for
the NFS server are designated by the server. In clustered Data ONTAP, they are controlled by the
following options:
cluster::*> nfs server show -fields nlm-port,nsm-port,mountd-port,rquotad-port -vserver NFS83
vserver mountd-port nlm-port nsm-port rquotad-port
------- ----------- -------- -------- ------------
NFS83   635         4045     4046     4049
As such, the enabling or disabling of the rootonly options will hinge upon need. Does the environment
require more ports to allow NFS to function properly? Or is it more important to prevent untrusted clients
from accessing mounts?
One potential compromise is to make use of NFSv4.x and/or Kerberos authentication for a higher level of
secured access to NFS exports. TR-4073: Secure Unified Authentication covers how to use NFSv4.x and
Kerberos in detail.
In these scenarios, using the mount-rootonly and/or nfs-rootonly options can alleviate these
issues.
To check port usage on the client:
# netstat -na | grep [IP address]
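As a sketch, the current settings can be viewed and changed through the NFS server options (the SVM name is a placeholder); disabling mount-rootonly allows clients to use nonreserved source ports (greater than 1024) for MOUNT calls:
cluster::> vserver nfs show -vserver vs0 -fields mount-rootonly,nfs-rootonly
cluster::> vserver nfs modify -vserver vs0 -mount-rootonly disabled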
3.6
Clustered Data ONTAP introduces dynamic NAS TCP autotuning, which enables the NAS protocol stack
to adjust buffer sizes on the fly to the most optimal setting. This capability is needed because static
methods to set TCP buffer sizes do not consider the dynamic nature of networks nor the range of different
types of connections made to a system at one time. Autotuning is used to optimize the throughput of NAS
TCP connections by computing the application data read rate and the rate of the data being received by
the system to compute optimal buffer size. The feature is not configurable and only increases buffer
sizes; buffers never decrease. The starting value for this is 32K. Autotuning applies to individual TCP
connections, rather than on a global scale.
Best Practice 2: NFS Block Size Changes (See Best Practice 3)
If these values are adjusted, they affect only new mounts. Existing mounts maintain the block size
that was set at the time of the mount. If the sizes are changed, existing mounts can experience
rejections of write operations or smaller responses for reads than requested.
Whenever you change block size options, ensure that clients are unmounted and remounted to
reflect those changes. See bug 962596 for more information.
These options are not the same as the max transfer size values included under the NFS server options:
[-tcp-max-xfer-size <integer>] - TCP Maximum Transfer Size (privilege: advanced)
This optional parameter specifies the maximum transfer size (in bytes) that the storage system
negotiates with the client for TCP transport of data for NFSv2 and NFSv4 protocols. The range is
8192 to 65536. The default setting is 65536 when created.
[-v3-tcp-max-read-size <integer>] - NFSv3 TCP Maximum Read Size (privilege: advanced)
This optional parameter specifies the maximum transfer size (in bytes) that the storage system
negotiates with the client for TCP transport of data for NFSv3 read requests. The range is 8192
to 1048576. The default setting is 65536 when created.
[-v3-tcp-max-write-size <integer>] - NFSv3 TCP Maximum Write Size (privilege: advanced)
This optional parameter specifies the maximum transfer size (in bytes) that the storage system
negotiates with the client for TCP transport of data for NFSv3 write requests. The range is 8192
to 65536. The default setting is 65536 when created.
Note:
The NFS TCP size settings can be modified (8.1 and later only), but NetApp generally does not
recommend doing so.
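If the current values need to be confirmed, they can be viewed at the advanced privilege level; for example (the SVM name is a placeholder):
cluster::> set advanced
cluster::*> vserver nfs show -vserver vs0 -fields tcp-max-xfer-size,v3-tcp-max-read-size,v3-tcp-max-write-size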
3.7 NAS Flowcontrol
Clustered Data ONTAP also adds NAS flowcontrol. This flowcontrol mechanism is separate from the TCP
flowcontrol enabled on the NICs and switches of the data network. It is always on and implemented at the
NAS protocol stack to prevent rogue clients from overloading a node in the cluster and creating a denial
of service (DoS) scenario. This flowcontrol affects all NAS traffic (CIFS and NFS).
How It Works
When a client sends too many packets to a node, the flowcontrol adjusts the window size to 0 and tells
the client to wait on sending any new NAS packets until the other packets are processed. If the client
continues to send packets during this zero window, then the NAS protocol stack flowcontrol mechanism
sends a TCP reset to that client. The reset portion of the flowcontrol and the threshold for when a reset
will occur are configurable per node as of clustered Data ONTAP 8.2 using the following commands:
cluster::> node run [nodename] options ip.tcp.fcreset_enable [on|off]
cluster::> node run [nodename] options ip.tcp.fcreset_thresh_high [numerical value]
Note:
These values should be adjusted only if necessary and at the guidance of NetApp Support.
Run the command in increments to see if the numbers increase. Seeing packets in extreme flowcontrol
does not necessarily signify a problem. Contact NetApp Support if you suspect a performance problem.
To avoid potentially causing denial of service on a cluster node, modify clients running RHEL 6.3
and later to use, at most, 128 RPC slots.
To do this, run the following on the NFS client (alternatively, edit the /etc/modprobe.d/sunrpc.conf
file manually to use these values):
# echo "options sunrpc udp_slot_table_entries=64 tcp_slot_table_entries=128
tcp_max_slot_table_entries=128" >> /etc/modprobe.d/sunrpc.conf
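The new values take effect after the sunrpc module is reloaded or the client is rebooted. As a quick check on clients that expose these settings, the active values can be read back; the paths below can vary by distribution and kernel version:
# cat /proc/sys/sunrpc/tcp_slot_table_entries
# cat /proc/sys/sunrpc/tcp_max_slot_table_entries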
3.8
Clustered Data ONTAP has removed the /vol requirement for exported volumes and instead uses a more
standardized approach to the pseudo file system. Because of this, you can now seamlessly integrate an
existing NFS infrastructure with NetApp storage because / is now truly / and not a redirector to
/vol/vol0 as it was in 7-Mode. A pseudo file system applies only in clustered Data ONTAP if the
permissions flow from more restrictive to less restrictive. For example, if the vsroot (mounted to /) has
more restrictive permissions than a data volume (such as /volname) does, then pseudo file systems
apply. Currently, -actual is not supported in clustered Data ONTAP.
The lack of -actual support in clustered Data ONTAP can be problematic if storage administrators wish
to ambiguate mount paths to their clients. For instance, if /storage/vol1 is exported by the storage
administrator, NFS clients have to mount /storage/vol1. If the intent is to mount clients to a pseudo
path of /vol1, then the only currently available course of action is to mount the volume to /vol1
instead.
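For example, a volume currently junctioned at /storage/vol1 can be remounted in the namespace at /vol1 (names are placeholders; clients that already mounted the old path must remount):
cluster::> volume unmount -vserver vs0 -volume vol1
cluster::> volume mount -vserver vs0 -volume vol1 -junction-path /vol1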
If you are making the transition from 7-Mode to clustered Data ONTAP, where actual is present in the
/etc/exports file and there are qtrees present, then you might need to architect the cluster to convert
qtrees in 7-Mode to volumes to maintain the pseudo path. If this is the case, clusterwide volume limits
must be considered. See limits documentation for details on clusterwide volume limits.
This means that deployers cannot simply drop a NetApp 7-Mode system into the place of an existing NFS server without changing the client mounts, depending on how things are implemented in /etc/vfstab or the automounter. In NFSv3, if the complete path from /vol/vol0 is not used and <NetApp storage>:/ is mounted, the mount point is <NetApp storage>:/vol/vol0. That is, if the path does not begin with /vol in NFSv3, then Data ONTAP assumes that /vol/vol0 is the beginning of the path. This does not get users into the desired areas of the NFS file system.
In clustered Data ONTAP, there is no concept of /vol/vol0. Volumes are junctioned below the root of the
SVM, and nested junctions are supported. Therefore, in NFSv3, there is no need to modify anything when
cutting over from an existing NFS server. It simply works.
In NFSv4, if the complete path from /vol/vol0 is not used and <NetApp storage>:/ is mounted, that mount is considered the root of the pseudo file system and not /vol/vol0. Data ONTAP does not add /vol/vol0 to the beginning of the path, unlike NFSv3. Therefore, if <NetApp storage>:/ is mounted at /n/NetApp_storage using NFSv3 and the same path is then mounted using NFSv4, a different file system is mounted.
This is why Data ONTAP 7-Mode has the /vol prefix in the exported global namespace and that feature
represents an instance of the NFSv4 pseudo file system namespace. The traversal from the pseudo file
system namespace to those of actual exported volumes is marked by a change in file system ID (fsid). In
the Data ONTAP implementation of the NFSv4 pseudo file system, the paths "/" and "/vol" are always
present and form the common prefix of any reference into the pseudo file system. Any reference that
does not begin with /vol is invalid in 7-Mode.
In clustered Data ONTAP, the notion of a pseudo file system integrates seamlessly with junction paths
and the unified namespace, so no additional pathing considerations are needed when leveraging NFSv4.
The NFSv4 server has a known root file handle for the server's available exported file systems, which are visible from this global server root by means of ordinary NFSv4 operations such as LOOKUP and GETATTR used within the pseudo file system. The mountd protocol is not supported in NFSv4; it is
replaced by PUTROOTFH, which represents ROOT all the time. PUTFH represents the location of the
pointer in the directory tree under ROOT. When a request to mount a file system comes from the client, the
request traverses the pseudo file system (/ and /vol) before it gets to the active file system. While it is
traversing from the pseudo file system to the active file system, the FSID changes.
In clustered Data ONTAP, there is a diag-level option on the NFS server to enable preservation of the
FSID in NFSv4. This is on by default and should not be changed in most cases.
cluster::> set diag
cluster::*> nfs server modify -vserver vs0 -v4-fsid-change
Export Path            Exported Object
/vol1                  Volume junctioned at /vol1
/NFSvol                Volume junctioned at /NFSvol
/vol1/NFSvol           Volume junctioned below /vol1
/vol1/qtree            Qtree in the volume junctioned at /vol1
/vol1/NFSvol/qtree1    Qtree in the volume junctioned at /vol1/NFSvol
One use case for -actual that is not inherently covered by the clustered Data ONTAP NFS architecture
is -actual for qtrees. For instance, if a storage administrator wants to export a qtree to a path such as
/qtree, there is no way to do this natively using the NFS exports in the SVM.
Sample export from 7-Mode:
/qtree -actual=/vol/vol1/qtree,rw,sec=sys
In clustered Data ONTAP, the path for a qtree that NFS clients mount is the same path as the qtree is
mounted to in the namespace. If this is not desirable, then the workaround is to leverage symlinks to
mask the path to the qtree.
What Is a Symlink?
Symlink is an abbreviation for symbolic link. A symbolic link is a special type of file that contains a
reference to another file or directory in the form of an absolute or relative path. Symbolic links operate
transparently to clients and act as actual paths to data.
When mounting a folder using NFS, it is better to use a relative path with symlinks, because there is no
guarantee that every user will mount to the same mount point on every client. With relative paths,
symlinks can be created that work regardless of what the absolute path is.
cluster::> qtree show -vserver flexvol -volume unix2 -qtree nfstree -instance
                      Vserver Name: flexvol
                       Volume Name: unix2
                        Qtree Name: nfstree
  Actual (Non-Junction) Qtree Path: /vol/unix2/nfstree
                    Security Style: unix
                       Oplock Mode: enable
                  Unix Permissions: ---rwxr-xr-x
                          Qtree Id: 1
                      Qtree Status: normal
                     Export Policy: volume
        Is Export Policy Inherited: true
The parent volume is unix2 (/unix/unix2), which is mounted to volume unix (/unix), which is
mounted to vsroot (/).
cluster::> vol show -vserver flexvol -volume unix2 -fields junction-path
(volume show)
vserver volume junction-path
------- ------ -------------
flexvol unix2  /unix/unix2
Some storage administrators might not want to expose the entire path of /unix/unix2/nfstree,
because it can allow clients to attempt to navigate other portions of the path. To allow the masking of that
path to an NFS client, a symlink volume or folder can be created and mounted to a junction path. For
example:
cluster::> vol create -vserver flexvol -volume symlinks -aggregate aggr1 -size 20m -state online
-security-style unix -junction-path /NFS_links
The volume size can be small (minimum of 20MB), but that depends on the number of symlinks in the
volume. Each symlink is 4k in size. Alternatively, create a folder under vsroot for the symlinks.
After the volume or folder is created, mount the vsroot to an NFS client to create the symlink.
# mount -o nfsvers=3 10.63.3.68:/ /symlink
# mount | grep symlink
10.63.3.68:/ on /symlink type nfs (rw,nfsvers=3,addr=10.63.3.68)
Note:
If using a directory under vsroot, mount vsroot and create the directory.
To create a symlink to the qtree, use the -s option (s = symbolic). The link path needs to include a
relative path that directs the symlink to the correct location without needing to specify the exact path. If
the link is inside a folder that does not navigate to the desired path, then ../ needs to be added to the
path.
For example, if a folder named NFS_links is created under / and the volume unix is also mounted under
/, then navigating to /NFS_links and creating a symlink will cause the relative path to require a redirect
to the parent folder.
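A minimal sketch of creating that symlink from the client, using the paths in this example, follows; the link name LINK matches the example referenced below:
# cd /symlink/NFS_links
# ln -s ../unix/unix2/nfstree LINK
# cd LINK
# pwd
/symlink/NFS_links/LINK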
Note that despite the fact that the symlink points to the actual path of /unix/unix2/nfstree, pwd
returns the path of the symlink, which is /symlink/NFS_links/LINK. The file you_are_here has the
same date and timestamp across both paths.
Note:
Because the path includes ../, this symlink cannot be directly mounted.
Again, despite the fact that the actual path is /unix/unix2/nfstree, we see an ambiguated path of
/symlink/LINK1. The file you_are_here has the same date and timestamp across both paths.
Additionally, the symlink created can be mounted instead of the vsroot path, adding an extra level of
ambiguity to the export path:
# mount -o nfsvers=3 10.63.3.68:/LINK1 /mnt
# mount | grep mnt
10.63.3.68:/LINK1 on /mnt type nfs (rw,nfsvers=3,addr=10.63.3.68)
# cd /mnt
# pwd
/mnt
One use case for this setup is with automounters. Every client can mount the same path and never
actually know where in the directory structure they are. If clients will mount the SVM root volume (/), be
sure to lock down the volume to non-administrative clients.
For more information about locking down volumes to prevent listing of files and folders, see the section in
this document on how to limit access to the SVM root volume.
The following figure shows a sample of how a namespace can be created to leverage symlinks to create
ambiguation of paths for NAS operations.
Figure 4) Symlink example using vsroot.
Note:
Export policies and rules can be applied to volumes and qtrees, but not symlinks. This fact should
be taken into consideration when creating symlinks for use as mount points. Symlinks will instead
inherit the export policy rules of the parent volume in which the symlink resides.
Export policies and rules can be set for every volume under the vsroot. For more information about
configuring export policies and rules, as well as specific use cases for securing the vsroot volume, see
the section in this document detailing those steps.
Each volume has only one export policy, although numerous volumes can use the same export policy. An
export policy can contain several rules to allow granularity in access control. With this flexibility, a user
can choose to balance workload across numerous volumes, yet can assign the same export policy to all
volumes. Export policies are simply containers for export policy rules.
Best Practice 4: Export Policy Rule Requirement (See Best Practice 5)
If a policy is created with no rule, that policy effectively denies access to everyone. Always create a
rule with a policy to control access to a volume. Conversely, if you wish to deny all access, remove the
policy rule.
Export policy and export policy rule creation (including examples) is specified in detail in the File Access
and Protocols Management Guide for the version of clustered Data ONTAP being used.
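As a brief sketch, the workflow is to create a policy, add one or more rules to it, and assign the policy to a volume; the policy, SVM, volume, and client match values below are placeholders:
cluster::> vserver export-policy create -vserver vs0 -policyname engineering
cluster::> vserver export-policy rule create -vserver vs0 -policyname engineering -clientmatch 10.10.10.0/24 -rorule sys -rwrule sys -superuser none -ruleindex 1
cluster::> volume modify -vserver vs0 -volume engvol -policy engineering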
Use the vserver export-policy commands to set up export rules; this is equivalent to the
/etc/exports file in 7-Mode.
All exports are persistent across system restarts, and this is why temporary exports cannot be
defined.
There is a global namespace per virtual server; this maps to the actual=path syntax in 7-Mode. In
clustered Data ONTAP, a volume can have a designated junction path that is different from the
volume name. Therefore, the -actual parameter found in the /etc/exports file is no longer
applicable. This rule applies to both NFSv3 and NFSv4. For more information, see the section on
pseudo file systems and -actual support in this document.
In clustered Data ONTAP, an export rule has the granularity to provide different levels of access to a
volume for a specific client or clients, which has the same effect as fencing in the case of 7-Mode.
Export policy rules affect CIFS access in clustered Data ONTAP by default in versions earlier than 8.2.
In clustered Data ONTAP 8.2 and later, export policy rule application to CIFS operations is disabled
by default. However, if upgrading from 8.1.x to 8.2, export policies and rules still apply to CIFS until it
is disabled. For more information about how export policies can be applied to volumes hosting CIFS
shares, see the File Access and Protocols Management Guide for the version of clustered Data
ONTAP being used.
Refer to Table 26 for NFSv3 config options that are modified in clustered Data ONTAP.
Note:
Older Linux clients (such as Fedora 8) might not understand AUTH_NULL as an authentication
type for NFS mounts. As such, export policy rules must be configured using explicit authentication
types, such as sys, to enable access to these clients.
Note:
If using Kerberos with NFSv3, the export policy rule must allow ro and rw access to sys in
addition to krb5. This requirement is because of the need to allow NLM access to the export and
the fact that NLM is not kerberized in krb5 mounts.
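For example, an export policy rule for Kerberized NFSv3 access might allow both security types as follows (the SVM, policy name, and client match are placeholders):
cluster::> vserver export-policy rule create -vserver vs0 -policyname krb5_policy -clientmatch 10.10.10.0/24 -rorule krb5,sys -rwrule krb5,sys -superuser none -ruleindex 1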
4.1
The appendix of this document offers a table that lists the various options used for export policy rules and
what they are used for. All export policy rule options can be viewed using the export-policy rule
show command or using OnCommand System Manager.
4.2
Clustered Data ONTAP exports do not follow the 7-Mode model of file-based access definition, in which
the file system path ID is described first and then the clients who want to access the file system path are
specified. Clustered Data ONTAP export policies are sets of rules that describe access to a volume.
Exports are applied at the volume level, rather than to explicit paths as in 7-Mode.
Policies can be associated with one or more volumes.
For example, in 7-Mode, exports could look like this:
/vol/test_vol
-sec=sys,rw=172.17.44.42,root=172.17.44.42
/vol/datastore1_sata
-sec=sys,rw,nosuid
7-Mode Data ONTAP supports subvolume or nested exports; for example, both /vol/volX and /vol/volX/dir can be exported. Clustered Data ONTAP currently does not support subvolume or nested exports. The concept of subvolume exports does not exist because the export path applicable for a particular client's access is specified at mount time based on the mount path.
Clustered Data ONTAP did not support qtree exports earlier than 8.2.1. In previous releases, a qtree could not be a junction in the namespace independent of its containing volume because the "export permissions" were not specified separately for each qtree. The export policy and rules of the qtree's parent volume were used for all the qtrees contained within it. This implementation is different from the 7-Mode qtree implementation, in which each qtree is a point in the namespace where export policies can be specified.
In 8.2.1 and later versions of clustered Data ONTAP 8.2.x, qtree exports are available for NFSv3 exports
only. Qtree exports in clustered Data ONTAP 8.3 support NFSv4.x. The export policy can be specified at
the qtree level or inherited from the parent volume. By default, the export policy is inherited from the
parent volume, so if it is not modified, the qtree behaves in the same way as the parent volume. Qtree
export policies and rules work exactly the way volume export policies and rules work.
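For example, in clustered Data ONTAP 8.2.1 and later, a nondefault export policy can be applied directly to a qtree and then verified; the names below are placeholders:
cluster::> volume qtree modify -vserver vs0 -volume vol1 -qtree qtree1 -export-policy qtree_policy
cluster::> volume qtree show -vserver vs0 -volume vol1 -qtree qtree1 -fields export-policy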
4.3
The UID and GID that a cluster will leverage depends on how the SVM has been configured with regard
to name mapping and name switch. In clustered Data ONTAP 8.2 and earlier, the name service switch
(ns-switch) option for SVMs specifies the source or sources that are searched for network information and
the order in which they are searched. Possible values include nis, file, and ldap. This parameter provides
the functionality of the /etc/nsswitch.conf file on UNIX systems. The name mapping switch (nm-switch)
option for SVMs specifies the sources that are searched for name mapping information and the order in
which they are searched. Possible values include file and ldap.
In clustered Data ONTAP 8.3 and later, the ns-switch and nm-switch parameters have been moved under
the vserver services name-service command set:
cluster::vserver services name-service>
dns  ldap  netgroup  nis-domain  ns-switch  unix-group  unix-user
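For example, the lookup sources and their order for a given database can be viewed and changed per SVM; the SVM name and source order below are placeholders:
cluster::> vserver services name-service ns-switch show -vserver vs0
cluster::> vserver services name-service ns-switch modify -vserver vs0 -database passwd -sources files,ldap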
For more information about the new name services functionality, see the section in this document
regarding name-service changes or TR-4073: Secure Unified Authentication.
If NIS or LDAP is specified for name services and/or name mapping, then the cluster will contact the
specified servers for UID and GID information. Connectivity to NIS and LDAP will attempt to use a data
LIF in the SVM by default. Therefore, data LIFs must be routable to name service servers in 8.2.x and
earlier. Clustered Data ONTAP 8.3 and later introduce improved SecD routing logic, so it is no longer
necessary to have a LIF that routes to name services on every node. SecD will figure out the data LIF to
use and pass traffic over the cluster network to the data LIF. Management LIFs will be used in the event a
data LIF is not available to service a request. If data LIFs are not able to communicate with name service
servers, then there might be some latency in authentication requests that will manifest as latency in data
access.
If desired, name service and name mapping communication can be forced over the management network
by default. This can be useful in environments in which an SVM does not have access to name service
and name mapping servers.
To force all authentication requests over the management network in clustered Data ONTAP 8.2.x and
earlier:
cluster::> set diag
cluster::> vserver modify -vserver vs0 -protocol-services-use-data-lifs false
Note:
This option is no longer available in clustered Data ONTAP 8.3 and later.
NetApp recommends leaving this option as true because management networks are often more
bandwidth-limited than data networks (1Gb versus 10Gb), which can result in authentication latency in
some cases.
If local files are used, then the cluster will leverage the unix-user and unix-group tables created for the
specified SVM. Because no remote servers are being used, there will be little to no authentication latency.
However, in large environments, managing large lists of unix-users and groups can be daunting and
mistake prone.
Best Practice 6: Name Services Recommendation (See Best Practice 7)
NetApp recommends leveraging either NIS or LDAP (preferably LDAP) for name services in larger
environments.
Unix-users and groups are not created by default when creating an SVM using the vserver create
command. However, using System Manager or the vserver setup command will create the default
users of root (0), pcuser (65534), and nobody (65535) and default groups of daemon (1), root (0), pcuser
(65534), and nobody (65535).
cluster::> unix-user show -vserver vs0
  (vserver services unix-user show)
               User            User   Group  Full
Vserver        Name            ID     ID     Name
-------------- --------------- ------ ------ ----
vs0            nobody          65535  65535
vs0            pcuser          65534  65534
vs0            root            0      1
NetApp recommends using OnCommand System Manager when possible to avoid configuration
mistakes when creating new SVMs.
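If a required default user or group is missing from an SVM, it can be added manually; for example (the SVM name is a placeholder):
cluster::> vserver services unix-user create -vserver vs0 -user pcuser -id 65534 -primary-gid 65534
cluster::> vserver services unix-group create -vserver vs0 -name pcuser -id 65534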
4.4
The anon user ID specifies a UNIX user ID or user name that is mapped to client requests that arrive
without valid NFS credentials. This can include the root user. Clustered Data ONTAP determines a user's file access permissions by checking the user's effective UID against the SVM's specified name-mapping and name-switch methods. After the effective UID is determined, the export policy rule is leveraged to
determine the access that UID has.
The anon option in export policy rules allows specification of a UNIX user ID or user name that is
mapped to client requests that arrive without valid NFS credentials (including the root user). The default
value of anon, if not specified in export policy rule creation, is 65534. This UID is normally associated
with the user name nobody or nfsnobody in Linux environments. NetApp appliances use 65534 as the
user pcuser, which is generally used for multiprotocol operations. Because of this difference, if using
local files and NFSv4, the name string for users mapped to 65534 might not match. This discrepancy
might cause files to be written as the user specified in the /etc/idmapd.conf file on the client (Linux)
or /etc/default/nfs file (Solaris), particularly when using multiprotocol (CIFS and NFS) on the same
datasets.
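For reference, the client-side behavior for unmapped users is typically controlled by entries such as the following in /etc/idmapd.conf on Linux; the domain and fallback user shown here are examples only:
[General]
Domain = nfsv4domain.netapp.com
[Mapping]
Nobody-User = pcuser
Nobody-Group = pcuser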
4.5
The "root" user must be explicitly configured in clustered Data ONTAP to specify which machine has
"root" access to a share, or else "anon=0 must be specified. Alternatively, the -superuser option can
be used if more granular control over root access is desired. If these settings are not configured properly,
"permission denied" might be encountered when accessing an NFS share as the "root" user (0). If the
anon option is not specified in export policy rule creation, the root user ID is mapped to the "nobody" user
(65534). There are several ways to configure root access to an NFS share.
AUTH Types
When an NFS client authenticates, an AUTH type is sent. An AUTH type specifies how the client is
attempting to authenticate to the server and depends on client-side configuration. Supported AUTH types
include:
AUTH_NONE/AUTH_NULL
This AUTH type specifies that the request coming in has no identity (NONE or NULL) and will be
mapped to the anon user. See http://www.ietf.org/rfc/rfc1050.txt and http://www.ietf.org/rfc/rfc2623.txt
for details.
AUTH_SYS/AUTH_UNIX
This AUTH type specifies that the user is authenticated at the client (or system) and will come in as
an identified user. See http://www.ietf.org/rfc/rfc1050.txt and http://www.ietf.org/rfc/rfc2623.txt for
details.
AUTH_RPCGSS
This is kerberized NFS authentication.
Squashing Root
The following examples show how to squash root to anon in various configuration scenarios.
Example 1: Root is squashed to the anon user using superuser for all NFS clients using sec=sys;
other sec types are denied access.
cluster::> vserver export-policy rule show -policyname root_squash -instance
(vserver export-policy rule show)
Vserver: vs0
Policy Name: root_squash
Rule Index: 1
Access Protocol: nfs (only NFS is allowed, NFSv3 and v4)
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0 (all clients)
RO Access Rule: sys (only AUTH_SYS is allowed)
RW Access Rule: sys (only AUTH_SYS is allowed)
User ID To Which Anonymous Users Are Mapped: 65534 (mapped to 65534)
Superuser Security Types: none (superuser (root) squashed to anon user)
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
ls -la
root    106496 Apr 24  2013 .
root      4096 Apr 24 11:24 ..
daemon    4096 Apr 18 12:54 junction
nobody       0 Apr 24 11:33 root_squash
Example 2: Root is squashed to the anon user using superuser for a specific client; sec=sys and
sec=none are allowed.
cluster::> vserver export-policy rule show -policyname root_squash_client -instance
(vserver export-policy rule show)
Vserver: vs0
Policy Name: root_squash_client
Rule Index: 1
Access Protocol: nfs (only NFS is allowed, NFSv3 and v4)
Client Match Hostname, IP Address, Netgroup, or Domain: 10.10.100.25 (just this client)
RO Access Rule: sys,none (AUTH_SYS and AUTH_NONE are allowed)
RW Access Rule: sys,none (AUTH_SYS and AUTH_NONE are allowed)
User ID To Which Anonymous Users Are Mapped: 65534 (mapped to 65534)
Superuser Security Types: none (superuser (root) squashed to anon user)
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
ls -la
Apr 24  2013 .
Apr 24 11:24 ..
Apr 24 11:05 .snapshot
Apr 18 12:54 junction
Apr 24  2013 root_squash_client
Example 3: Root is squashed to the anon user using superuser for a specific set of clients using
sec=krb5 (Kerberos) and only NFSv4 and CIFS are allowed.
cluster::> vserver export-policy rule show -policyname root_squash_krb5 -instance
(vserver export-policy rule show)
Vserver: vs0
Policy Name: root_squash_krb5
Rule Index: 1
Access Protocol: nfs4,cifs (only NFSv4 and CIFS are allowed)
Client Match Hostname, IP Address, Netgroup, or Domain: 10.10.100.0/24 (just clients with an IP address of 10.10.100.X)
RO Access Rule: krb5
RW Access Rule: krb5
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: none
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
ls -la
root    106496 Apr 24  2013 .
root      4096 Apr 24 11:24 ..
daemon    4096 Apr 18 12:54 junction
nobody       0 Apr 24 11:50 root_squash_krb5
Note: The UID of 99 in this example occurs in NFSv4 when the user name cannot map into the NFSv4 domain. /var/log/messages confirms this:
Apr 23 10:54:23 nfsclient nfsidmap[1810]: nss_getpwnam: name 'pcuser' not found in domain 'nfsv4domain.netapp.com'
In the preceding examples, when the root user requests access to a mount, it maps to the anon UID. In
this case, the UID is 65534. This mapping prevents unwanted root access from specified clients to the
NFS share. Because sys is specified as the rw and ro access rules in the first two examples, only clients
using sec=sys will gain access. The third example shows a possible configuration using Kerberized NFS
authentication. Setting the access protocol to NFS allows only NFS access to the share (including NFSv3
and NFSv4). If multiprotocol access is desired, then the access protocol must be set to allow NFS and
CIFS. NFS access can be limited to only NFSv3 or NFSv4 here as well.
cluster::> vserver export-policy rule show -policyname root_allow_anon_squash -instance
(vserver export-policy rule show)
Vserver: vs0
Policy Name: root_allow_anon_squash
Rule Index: 1
Access Protocol: nfs (only NFS is allowed, NFSv3 and v4)
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0 (all clients)
RO Access Rule: sys,none (AUTH_SYS and AUTH_NONE allowed)
RW Access Rule: sys,none (AUTH_SYS and AUTH_NONE allowed)
User ID To Which Anonymous Users Are Mapped: 65534 (mapped to 65534)
Superuser Security Types: sys (superuser for AUTH_SYS only)
Honor SetUID Bits in SETATTR: true
mnt]# ls -lan
0 0 106496 Apr 24  2013 .
0 0   4096 Apr 24 11:24 ..
0 1   4096 Apr 18 12:54 junction
0 0      0 Apr 24 11:56 root_allow_anon_squash_nfsv3
Example 2: Root is allowed access as root using superuser for sec=krb5 only; anon access is
mapped to 65534; sec=sys and sec=krb5 are allowed, but only using NFSv4.
cluster::> vserver export-policy rule show -policyname root_allow_krb5_only -instance
(vserver export-policy rule show)
Vserver: vs0
Policy Name: root_allow_krb5_only
Rule Index: 1
Access Protocol: nfs4 (only NFSv4 is allowed)
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0 (all clients)
RO Access Rule: sys,krb5 (AUTH_SYS and AUTH_RPCGSS allowed)
RW Access Rule: sys,krb5 (AUTH_SYS and AUTH_RPCGSS allowed)
User ID To Which Anonymous Users Are Mapped: 65534 (mapped to 65534)
Superuser Security Types: krb5 (superuser via AUTH_RPCGSS only)
Honor SetUID Bits in SETATTR: true
ls -la
root    106496 Apr 24  2013 .
root      4096 Apr 24 11:24 ..
daemon    4096 Apr 18 12:54 junction
nobody       0 Apr 24  2013 root_allow_krb5_only_notkrb5
Note: Again, the UID of an unmapped user in NFSv4 is 99. This is controlled via /etc/idmapd.conf in Linux and /etc/default/nfs in Solaris.
[root@nfsclient /]# mount -t nfs4 -o sec=krb5 cluster:/nfsvol /mnt
[root@nfsclient /]# kinit
Password for root@KRB5DOMAIN.NETAPP.COM:
[root@nfsclient /]# cd /mnt
[root@nfsclient mnt]# touch root_allow_krb5_only_krb5mount
[root@nfsclient mnt]# ls -la
drwxrwxrwx.  3 root root   106496 Apr 24  2013 .
dr-xr-xr-x. 26 root root     4096 Apr 24 11:24 ..
drwxr-xr-x.  2 root daemon   4096 Apr 18 12:54 junction
-rw-r--r--.  1 root daemon      0 Apr 24  2013 root_allow_krb5_only_krb5mount
[root@nfsclient mnt]# ls -lan
drwxrwxrwx.  3 0 0 106496 Apr 24  2013 .
dr-xr-xr-x. 26 0 0   4096 Apr 24 11:24 ..
drwxr-xr-x.  2 0 1   4096 Apr 18 12:54 junction
-rw-r--r--.  1 0 1      0 Apr 24  2013 root_allow_krb5_only_krb5mount
Example 3: Root and all anonymous users are allowed access as root using anon=0, but only for
sec=sys and sec=krb5 over NFSv4.
cluster::> vserver export-policy rule show -policyname root_allow_anon0 -instance
(vserver export-policy rule show)
Vserver: vs0
Policy Name: root_allow_anon0
Rule Index: 1
Access Protocol: nfs4 (only NFSv4 is allowed)
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0 (all clients)
RO Access Rule: krb5,sys (AUTH_SYS and AUTH_RPCGSS allowed)
RW Access Rule: krb5,sys (AUTH_SYS and AUTH_RPCGSS allowed)
User ID To Which Anonymous Users Are Mapped: 0 (mapped to 0)
Superuser Security Types: none (superuser (root) squashed to anon user)
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
ls -la
root    106496 Apr 24  2013 .
root      4096 Apr 24 11:24 ..
daemon    4096 Apr 18 12:54 junction
daemon       0 Apr 24  2013 root_allow_anon0
mnt]# ls -lan
0 0 106496 Apr 24  2013 .
0 0   4096 Apr 24 11:24 ..
0 1   4096 Apr 18 12:54 junction
0 1      0 Apr 24  2013 root_allow_anon0
ls -la
root    106496 Apr 24  2013 .
root      4096 Apr 24 11:24 ..
daemon    4096 Apr 18 12:54 junction
daemon       0 Apr 24  2013 root_allow_anon0_krb5
mnt]# ls -lan
0 0 106496 Apr 24  2013 .
0 0   4096 Apr 24 11:24 ..
0 1   4096 Apr 18 12:54 junction
0 1      0 Apr 24  2013 root_allow_anon0_krb5
4.6
By default, when an SVM is created, the root volume is configured with 755 permissions and owner:group
of root (0): root (0). This means that:
The group and others permission levels are set to 5, which is Read & Execute.
When this is configured, everyone who accesses the SVM root volume can list and read junctions
mounted below the SVM root volume, which is always mounted to / as a junction-path. In addition, the
default export policy rule that is created when an SVM is configured using System Manager or vserver
setup commands permits user access to the SVM root.
Example of the default export policy rule created by vserver setup:
cluster::> export-policy rule show -vserver nfs_svm -policyname default -instance
(vserver export-policy rule show)
Vserver: nfs_svm
Policy Name: default
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: none
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
In the preceding export policy rule, all clients have any RO and RW access. Root is squashed to anon,
which is set to 65534.
For example, if an SVM has three data volumes, all would be mounted under / and could be listed with a
basic ls command by any user accessing the mount.
# mount | grep /mnt
10.63.3.68:/ on /mnt type nfs (rw,nfsvers=3,addr=10.63.3.68)
# cd /mnt
# ls
nfs4 ntfs unix
In some environments, this behavior might be undesirable, because storage administrators might want to
limit visibility to data volumes to specific groups of users. Although read and write access to the volumes
themselves can be limited on a per-data-volume basis using permissions and export policy rules, users
can still see other paths using the default policy rules and volume permissions.
To limit the ability of users to list SVM root volume contents (and subsequent data volume paths) while still allowing traversal of the junction paths for data access, the SVM root volume can be modified to allow only the root user to list folders in SVM root. To do this, change the UNIX permissions on
the SVM root volume to 0711 using the volume modify command:
cluster::> volume modify -vserver nfs_svm -volume rootvol -unix-permissions 0711
After this is done, root will still have Full Control using the 7 permissions, because it is the owner.
Group and others will get Execute permissions as per the 1 mode bit, which will only allow them to
traverse the paths using cd.
When a user who is not the root user attempts an ls, he or she will have access denied:
sh-4.1$ ls
ls: cannot open directory .: Permission denied
In many cases, NFS clients will log into their workstations as the root user. With the default export policy
rule created by System Manager and vserver setup, root access will be limited:
# id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0s0:c0.c1023
# ls -la
ls: cannot open directory .: Permission denied
This is because the export policy rule attribute superuser is set to none. If root access is desired by
certain clients, this can be controlled by adding export policy rules to the policy and specifying the host IP,
name, netgroup, or subnet in the clientmatch field. When creating this rule, list it ahead of any rule that
might override it, such as a clientmatch of 0.0.0.0/0, which is all hosts.
Example of adding an administrative host rule to a policy:
cluster::> export-policy rule create -vserver nfs_svm -policyname default -clientmatch
10.228.225.140 -rorule any -rwrule any -superuser any -ruleindex 1
cluster::> export-policy rule show -vserver nfs_svm -policyname default -ruleindex 1
(vserver export-policy rule show)
Vserver: nfs_svm
Policy Name: default
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 10.228.225.140
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
Now the client is able to see the directories as the root user:
# ifconfig | grep "inet addr"
inet addr:10.228.225.140 Bcast:10.228.225.255 Mask:255.255.255.0
inet addr:127.0.0.1 Mask:255.0.0.0
# id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0s0:c0.c1023
# ls
nfs4 ntfs unix
For more information about export policy rules and their effect on the root user, review the Root User
section of this document.
For more information about mode bits, see the following link: http://www.zzee.com/solutions/unixpermissions.shtml.
4.7
In some cases, storage administrators might want to limit all users of a data volume to mounting and
accessing only specific volumes or qtrees. Use cases include volume-based multitenancy, limiting access
to .snapshot directories, or more granular control over access to specific folders.
This can be done with either junctioned volumes or qtrees. The following diagram shows an example
of two volume-based multitenancy designs, one using volumes mounted under volumes and one using
qtrees. Each design limits read access for all users to only the volumes or qtrees under the main
data volume (/data) and allows only the owner to have full access.
Figure 5) Volume-based multitenancy using junctioned volumes.
Each method of locking down a data volume to users presents pros and cons. The following table
illustrates the pros and cons for each.
Table 3) Pros and cons for volume-based multitenancy based on design choice.
[Table comparing the pros and cons of using junctioned volumes and using qtrees for volume-based multitenancy.]

Object                 Export Policy
Vsroot = /             allow_readonly
Data volume = /data    allow_readonly
Qtree = /data/qtree    allow_access
In the preceding structure, a data volume is mounted under / and a qtree is mounted below the data
volume. The vsroot and data volumes have export policies assigned to allow readonly access. The qtree
will allow normal access upon mount.
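With this layout, clients for a given tenant mount the qtree path directly rather than / or /data. A minimal sketch, reusing the data LIF address from the earlier examples:
# mount -o nfsvers=3 10.63.3.68:/data/qtree /mnt/tenant
# mount | grep tenant
10.63.3.68:/data/qtree on /mnt/tenant type nfs (rw,nfsvers=3,addr=10.63.3.68)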
After the data volume is mounted, the client will be restricted to file-level permissions. Even though the
allow_access policy says that the client has read-write (rw) access, if the file-level permissions disallow
write access, then the file-level permissions override the export policy rule.
The following export policy rule examples show how to accomplish this. In addition to allowing only read
access, the rule also disallows the root user from being seen by the storage as root on the storage
objects where access will be limited. Thus, while root users are allowed to mount the storage objects,
the file-level permissions are set to disallow those users to access anything, because they will be
squashed to the anonymous UID set in the export policy rule. Squashing root is covered in detail in this
document.
Export policy rule example for volume-based multitenancy: read access on mounts only:
Vserver: SVM
Policy Name: allow_readonly
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: sys
RW Access Rule: never
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: none
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
Export policy rule example for volume-based multitenancy: root is root; read/write access:
Vserver: SVM
Policy Name: allow_access
Rule Index: 1
Access Protocol: nfs
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
File-Level Permissions
When mounting a typical NFS export, a mount occurs at either the vsroot (/) or a data volume (/data). As
such, file-level permissions would have to allow users to at least traverse the volume. If read or write
access were required, then additional mode bits would have to be granted to the volume. If access to
these volumes requires that all users be denied all access, then the mode bit could be set to no access (0
access), provided the mount point is at a level that does not require the user to traverse. Thus, the vsroot
volume (/) and data volume hosting the multitenant folders (/data) could both be set to 700 to allow only
the owner of the volume access to do anything in the directory. Multitenant clients could then access only
their specified volumes or qtrees based on export policy rules and file-level permissions.
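As a sketch, using the same volume modify command shown earlier (volume names are illustrative):
cluster::> volume modify -vserver nfs_svm -volume rootvol -unix-permissions 0700
cluster::> volume modify -vserver nfs_svm -volume data -unix-permissions 0700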
Currently the only way to hide snapshots for NFS clients is to set the volume-level option snapdir-access to false.
Root does not have access to volumes it should not have access to.
Root gets the proper access when mounting the proper export path.
Note:
4.8
In some cases, storage administrators may want to control which UID (such as root) some or all users
map to when coming in through NFS to a UNIX-security-style volume. If a volume has NTFS security
style, doing so is as simple as setting a default Windows user in the NFS server options. However, when
the volume is UNIX security style, no name mapping takes place when coming in from NFS clients. To
control this situation, you can create an export policy rule.
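For example, the following sketch maps all NFS users from a subnet to UID 0; the client match and UID are illustrative, and the command form matches the export-policy rule create command used earlier:
cluster::> export-policy rule create -vserver nfs_svm -policyname default -clientmatch 10.228.225.0/24 -rorule none -rwrule none -anon 0 -ruleindex 1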
Recall that export policy rules have the attributes listed in Table 5 in admin mode.
Attribute          What It Does
Rule index         Controls the order in which export policy rules are evaluated.
Access protocol    Controls the allowed access protocols. Options include any, nfs, nfs3, nfs4, cifs.
Client match       Controls which clients the rule applies to (host name, IP address, netgroup, subnet, or domain).
RO access          Controls which authentication types can access the export in a read-only capacity. Valid entries include any, none, never, krb5, ntlm, sys.
RW access          Controls which authentication types can access the export in a read-write capacity. Valid entries include any, none, never, krb5, ntlm, sys.
Anon UID           Controls the UID to which anonymous (and squashed root) users are mapped.
Superuser          Controls which authentication types can access the export with root access. Valid entries include any, none, krb5, ntlm, sys.
Honor SetUID bits  This parameter specifies whether set user ID (suid) and set group ID (sgid) access is enabled by the export rule. The default setting is true.
For authentication types that are allowed to access the export, the following are used.
Table 6) Supported authentication types for ro, rw, and superuser.
Authentication Type   What It Does
None                  Access is allowed, but only as the anonymous user.
Sys                   Access is allowed for clients authenticating with AUTH_SYS (UNIX) security.
Krb5                  Access is allowed for clients authenticating with Kerberos v5.
NTLM                  Access is allowed for clients authenticating with CIFS NTLM.
Any                   Access is allowed regardless of the incoming authentication type.
With these values, storage administrators can apply specific combinations to their export policy rules to
control access to clients on a granular level.
Example:
Vserver: nfs_svm
Policy Name: default
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 10.228.225.0/24
RO Access Rule: none
RW Access Rule: none
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: none
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true

Vserver: nfs_svm
Policy Name: default
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 10.228.225.0/24
RO Access Rule: none
RW Access Rule: none
User ID To Which Anonymous Users Are Mapped: 0
Superuser Security Types: none
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true

4.9 Umask
In NFS operations, permissions can be controlled through mode bits, which leverage numerical attributes
to determine file and folder access. These mode bits determine read, write, execute, and special
attributes. Numerically, they are represented as:
Read = 4
Write = 2
Execute = 1
For example, a mode of 7 is 4+2+1 (read/write/execute) and a mode of 5 is 4+1 (read/execute).
In a umask, each octal digit prevents (masks) the corresponding permissions:
Octal Value   Prevents
7             Read/write/execute
4             Read
2             Write
1             Execute
0             No permissions
For more information about UNIX permissions, visit the following link: http://www.zzee.com/solutions/unixpermissions.shtml.
Umask is functionality that allows an administrator to restrict the permissions of files and folders created
by a client. By default, the umask for most clients is set to 0022, which means that objects created from
that client have that umask applied. The umask is subtracted from the base permissions of the object. If a
volume has 0777 permissions and is mounted using NFS on a client with a umask of 0022, objects written
from the client to that volume have 0755 access (0777 minus 0022).
# umask
0022
# umask -S
u=rwx,g=rx,o=rx
However, many operating systems do not allow files to be created with execute permissions (files start
from a base of 0666 rather than 0777), although folders do receive the full permissions. Thus, files created
with a umask of 0022 end up with permissions of 0644 (0666 minus 0022).
The following is an example using RHEL 6.5:
# umask
0022
# cd /cdot
# mkdir umask_dir
# ls -la | grep umask_dir
drwxr-xr-x. 2 root root ... umask_dir
# touch umask_file
# ls -la | grep umask_file
-rw-r--r--. 1 root root ... umask_file
Vserver: nfs_svm
Policy Name: default
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 10.228.225.140
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true

Vserver: nfs_svm
Policy Name: default
Rule Index: 2
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: none
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
2 entries were displayed.
As per the example in the section Limiting Access to the SVM Root Volume, root would not be able to
list the contents of the SVM root based on the volume permissions (711) and the existing export policy
rules on any hosts other than 10.228.225.140.
# ifconfig | grep "inet addr"
inet addr:10.228.225.141 Bcast:10.228.225.255 Mask:255.255.255.0
inet addr:127.0.0.1 Mask:255.0.0.0
# id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0s0:c0.c1023
# mount | grep mnt
10.63.3.68:/ on /mnt type nfs (rw,nfsvers=3,addr=10.63.3.68)
# cd /mnt
# ls
ls: cannot open directory .: Permission denied
If the data volumes in the SVM also are set to this export policy, they will use the same rules and only the
client set to have root access will be able to log in as root.
If root access is desired to the data volumes, then a new export policy can be created and root access
can be specified for all hosts or a subset of hosts through subnet, netgroup, or multiple rules with
individual client IP addresses or host names.
The same concept applies to the other export policy rule attributes, such as RW.
For example, if the default export policy rule is changed to disallow write access to all clients except
10.228.225.140 and to allow superuser, then even root is disallowed write access to volumes with that
export policy applied:
cluster::> export-policy rule modify -vserver nfs_svm -policyname default -ruleindex 2 -rwrule
never -superuser any
cluster::> export-policy rule show -vserver nfs_svm -policyname default -instance
(vserver export-policy rule show)
Vserver: nfs_svm
Policy Name: default
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 10.228.225.140
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true

Vserver: nfs_svm
Policy Name: default
Rule Index: 2
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: never
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
2 entries were displayed.
When a new policy and rule are created and applied to the data volume, the same user is allowed to write
to the data volume mounted below the SVM root volume. This is the case despite the export policy rule at
the SVM root volume disallowing write access.
Example:
cluster::> export-policy rule create -vserver nfs_svm -policyname restricted -clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any
cluster::> volume modify -vserver nfs_svm -volume unix -policy restricted

Vserver: nfs_svm
Policy Name: restricted
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
However, the read-only attribute of the export policy rules on parent volumes needs to allow read access
for mounts to occur. Setting rorule to never, or not setting any export policy rule in the parent
volume's export policy (an empty policy), disallows mounts to volumes underneath that parent.
In the following example, the vsroot volume has an export policy that has rorule and rwrule set to
never, while the data volume has an export policy with a rule that is wide open:
cluster::> export-policy rule show -vserver nfs -policyname wideopen -instance
(vserver export-policy rule show)
Vserver: nfs
Policy Name: wideopen
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 0
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
cluster::> export-policy rule show -vserver nfs -policyname deny -instance
(vserver export-policy rule show)
Vserver: nfs
Policy Name: deny
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: never
RW Access Rule: never
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: sys
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true

cluster::> volume show
vserver volume  policy
------- ------- --------
nfs     rootvol deny
nfs     unix    wideopen
When the deny policy is changed to allow read-only access, mounting is allowed:
cluster ::> export-policy rule modify -vserver nfs -policyname deny -rorule any -ruleindex 1
# mount -o nfsvers=3 10.63.3.68:/unix /cdot
# mount | grep unix
10.63.3.68:/unix on /cdot type nfs (rw,nfsvers=3,addr=10.63.3.68)
As a result, storage administrators can have complete and granular control over what users can see and
access in file systems by using export policies, rules, and volume permissions.
Best Practice 9: Export Policy Rules: Parent Volumes (See Best Practice 10)
Parent volumes (such as vsroot) should always allow at least read access in the export policy rule.
Parent volumes should also allow traverse (execute) access in the UNIX permissions to enable mounts
and I/O access at the desired level.
Keep in mind the export policy rule limits when creating export policies and rules. A rule index of
999999999 is an absolute maximum, but NetApp does not recommend it. Use more sensible
numbers for the index. In the following examples, 1 and 99 are used.
If a rule earlier in the policy (a lower index, such as 1) allows access for a subnet, but a host in that
subnet is denied access through a rule later in the policy (a higher index, such as 99), that host is still
granted access, because the rule that allows access is read earlier in the policy.
Conversely, if a client is denied access through an export policy rule earlier in the policy and then allowed
access through a global export policy rule later in the policy (such as a 0.0.0.0/0 client match), then that
client is denied access.
In the following example, a client with the IP address of 10.228.225.140 (host name of centos64) has
been denied access to read a volume while all other clients are allowed access. However, the client rule
is below the all access rule, so mount and read are allowed.
Example:
cluster::> export-policy rule show -vserver NAS -policyname allow_all
             Policy          Rule   Access   Client                RO
Vserver      Name            Index  Protocol Match                 Rule
------------ --------------- ------ -------- --------------------- ---------
NAS          allow_all       1      any      0.0.0.0/0             any
NAS          allow_all       99     any      10.228.225.140        never
2 entries were displayed.
cluster::> vol show -vserver NAS
vserver volume
------- ------
NAS     unix

[root@centos64 /]# mount 10.63.21.9:/unix /mnt
[root@centos64 /]# cd /mnt
[root@centos64 mnt]# ls -la
total 12
drwxrwxrwx.  3 ...
dr-xr-xr-x. 46 ...
drwxrwx---.  2 ...
If those rules are flipped, the client will be denied access despite the rule allowing access to everyone
being in the policy. Rule index numbers can be modified with the export-policy rule setindex
command. In the following example, rule #1 has been changed to rule #99. Rule #99 gets moved to #98
by default.
cluster::> export-policy rule setindex -vserver NAS -policyname allow_all -ruleindex 1 -newruleindex 99
cluster::> export-policy rule show -vserver NAS -policyname allow_all
             Policy          Rule   Access   Client                RO
Vserver      Name            Index  Protocol Match                 Rule
------------ --------------- ------ -------- --------------------- ---------
NAS          allow_all       98     any      10.228.225.140        never
NAS          allow_all       99     any      0.0.0.0/0             any
2 entries were displayed.
cluster::> export-policy cache flush -vserver NAS -cache all
Warning: You are about to flush the "all (but showmount)" cache for Vserver "NAS" on node
"node2", which will result in increased traffic to the name servers. Do you want to proceed with
flushing the cache? {y|n}: y
[root@centos64 /]# mount 10.63.21.9:/unix /mnt
mount.nfs: access denied by server while mounting 10.63.21.9:/unix
Note:
Export-policy cache flush is a new command in clustered Data ONTAP 8.3. See Export
Policy Rule Caching for more information regarding this command.
It is important to consider the order of the export policy rules when determining the access that will and
will not be allowed for clients in clustered Data ONTAP.
Best Practice 11: Export Policy Rule Index Ordering (See Best Practice 12)
If you use multiple export policy rules, be sure that rules that deny or allow access to a broad range
of clients do not step on rules that deny or allow access to those same clients. Rule index ordering
factors in when rules are read; rules with lower index numbers are read first and take precedence over
rules with higher index numbers.
These options do not currently exist in clustered Data ONTAP. In addition, 7-Mode allowed exportfs
commands to be used to clear export caches. In clustered Data ONTAP, exportfs currently does not
exist, but caches are flushed each time an export policy rule is updated. The cache is stored at the NAS
layer and ages out every 5 minutes if no export rule changes are made. The management gateway in
clustered Data ONTAP caches host name to IP resolution (1-minute TTL) and resolved netgroups (15-minute TTL). Clustered Data ONTAP 8.3 and later introduced a command to manually flush the export
policy caches as well as other related caches.
Table 8 lists the different caches and their time to live (TTL).
Table 8) Caches and time to live (TTL).
Cache Name   Type of Information   TTL (Minutes)
Access
Name         Name to UID
ID           ID to name
Host         Host to IP
Netgroup     Netgroup to IP        15
Showmount    Export paths
Policy     Policy    Policy     Rule
Name       Owner     Owner Type Index Access
---------- --------- ---------- ----- ------
default    vs1_root  volume     1     read
default    vs1_root  volume     1     read
default    vs1_root  volume     1     read
data       flex_vol  volume     10    read
When running a showmount in clustered Data ONTAP, the NFS server would be an SVM IP. The SVM
has a vsroot volume mounted to /, which is the volume returned in the showmount. All other volumes are
mounted below that mount point. In the preceding example, / is shown as allowing everyone. This is the
export policy rule for / in the SVM being queried:
cluster::> vol show -vserver vs0 -volume vsroot -fields policy
(volume show)
vserver volume policy
------- ------ -------
vs0     vsroot default
cluster::> export-policy rule show -vserver vs0 -policyname default -instance
(vserver export-policy rule show)
Vserver: vs0
Policy Name: default
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
If the export policy rule is changed to allow just a host, the showmount -e output does not change:
cluster::> export-policy rule modify -vserver vs0 -policyname default -ruleindex 1 -clientmatch
10.61.179.164
(vserver export-policy rule modify)
cluster::> export-policy rule show -vserver vs0 -policyname default -instance
(vserver export-policy rule show)
Vserver: vs0
Policy Name: default
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 10.61.179.164
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
Thus, for clustered Data ONTAP, showmount is not really useful in some cases, especially for
troubleshooting access issues. To get similar functionality to showmount, leverage SSH or the Data
ONTAP SDK to extract the desired information. The fields to extract are:
Any desired fields from the export policy rule set in the policy assigned to the volume
5.1
Showmount leverages the MOUNT protocol in NFSv3 to issue an EXPORT query to the NFS server. If
the mount port is not listening or blocked by a firewall, or if NFSv3 is disabled on the NFS server,
showmount queries fail:
# showmount -e 10.63.21.9
mount clntudp_create: RPC: Program not registered
The following shows output from a packet trace of the showmount command being run against a data LIF
in clustered Data ONTAP 8.3:
16  1.337459  10.228.225.140  10.63.21.9      MOUNT  170
    Mount Service
        Program Version: 3
        V3 Procedure: EXPORT (5)
17  1.340234  10.63.21.9      10.228.225.140  MOUNT  202
    Mount Service
        Export List Entry: /unix ->
Note that the trace shows that the server returns /unix ->. However, this export path has a specific client
in the rule set:
cluster::> vol show -vserver NFS83 -junction-path /unix -fields policy
(volume show)
vserver volume policy
------- ------ --------
NFS83   unix   restrict

cluster::> export-policy rule show -vserver NFS83 -policyname restrict
             Policy          Rule   Access   Client                RO
Vserver      Name            Index  Protocol Match                 Rule
------------ --------------- ------ -------- --------------------- ---------
NFS83        restrict        1      any      10.228.225.141        any
In 7-Mode, if a client was specified in an export, the server would return that client:
88  1.754728  10.228.225.145  10.61.83.141    MOUNT  194
89  1.755175  10.61.83.141    10.228.225.145  MOUNT  198
    Export List Entry: /vol/unix -> 10.228.225.141
If client match is required in showmount functionality, the showmount utility in the toolchest provides that
functionality.
Best Practice 12: Showmount Permissions Considerations (See Best Practice 13)
To use showmount in clustered Data ONTAP, the parent volume (including vsroot, or /) needs to
allow read or traverse access to the client/user attempting to run showmount.
5.2
The support tool chest now contains a showmount plug-in for clustered Data ONTAP. This plug-in has
limited support and should be used only in situations in which showmount is required, such as with Oracle
OVM.
5.3
Clustered Data ONTAP 8.3 introduced support for showmount queries from clients. This functionality is
disabled by default. It can be enabled with the following command:
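A sketch of that command, assuming the NFS server option is named -showmount (verify the option name on your release):
cluster::> vserver nfs modify -vserver NFS83 -showmount enabled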
After this functionality is enabled, clients can query data LIFs for export paths. However, the clientmatch
(access from clients, netgroups, and so on) information is not available. Instead, each path reflects
everyone as having access, even if clients are specified in export policy rule sets.
Best Practice 13: Showmount Security Style Considerations (See Best Practice 14)
To use showmount in clustered Data ONTAP, the vsroot volume (/) needs to use UNIX security
style. NTFS security style is currently not supported. See bug 907608 for details.
Here is sample output of showmount in clustered Data ONTAP 8.3:
# showmount -e 10.63.21.9
Export list for 10.63.21.9:
/unix        (everyone)
/unix/unix1  (everyone)
/unix/unix2  (everyone)
/            (everyone)
Showmount Caching
When showmount is run from a client, it requests information from the NFS server on the cluster.
Because export lists can be large, the cluster maintains a cache of this information.
When a volume is unmounted from the cluster using the volume unmount command or from
OnCommand System Manager, the cache does not update, so the exported path remains in cache until it
expires or is flushed.
To flush the showmount cache:
cluster::> export-policy cache flush -vserver SVM -cache showmount
Note:
The cache will only flush on the node you are logged in to. For example, if you are logged in to
node1's management LIF, then the cache on node1 will flush. This means that only clients
connecting to data LIFs local to node1 will benefit from the cache flush. To flush the cache on
other nodes, log into the node management LIF on those nodes. The node that is flushing will be
displayed when running the command.
6 Name Services
In clustered Data ONTAP versions earlier than 8.2.x, name services (DNS, NIS, LDAP, and so on) were
all handled by the authentication process called secd, which is the security daemon. Configuration for
nsswitch.conf functionality was done under SVM options.
Note:
If using name services in clustered Data ONTAP, the recommended version is 8.2.3 or later.
In clustered Data ONTAP 8.3 and later, LDAP and NIS authentication is still handled by secd, but DNS is
moved to its own userspace process called mdnsd. Configuration of name-services functionality has been
moved to its own command set, called vserver services name-service.
cluster::vserver services name-service>
dns         ldap        netgroup    nis-domain  ns-switch   unix-group  unix-user
Note that in the preceding, support for granular control over passwd, group, netgroup, and so on has
been added. Doing so makes the ns-switch functionality in clustered Data ONTAP 8.3 and later more
comparable to actual nsswitch.conf files.
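For example, a sketch of the per-database configuration using the 8.3 command syntax (the SVM name and sources are illustrative):
cluster::> vserver services name-service ns-switch create -vserver nfs_svm -database passwd -sources files,ldap
cluster::> vserver services name-service ns-switch show -vserver nfs_svm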
In addition to ns-switch functionality, other new features have been added to name services:
getxxbyyy support
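The getxxbyyy commands allow name service lookups to be tested directly from the cluster. A sketch at advanced privilege (exact syntax can vary by release; the node, SVM, and user names are illustrative):
cluster::*> vserver services name-service getxxbyyy getpwbyname -node node1 -vserver nfs_svm -username ldapuser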
6.1
Large and complex name service environments can be challenged to deliver quick responses to file
servers such as NetApp FAS systems running clustered Data ONTAP. NetApp continues to enhance
name service algorithms to minimize the impact of external name service servers. However, in some
cases, environmental issues can affect name service resolution, which in turn can affect file service
authentication and mounting. The following recommendations can help reduce environmental issues. For
information about best practices for name services in clustered Data ONTAP, see TR-4379: Name
Services Best Practices.
7.1 Replay Cache
The replay cache in clustered Data ONTAP is crucial to preventing NFS requests from trying nonidempotent requests twice. This cache is stored at the data layer with the volumes. When this cache is
lost, CREATE operations can fail with EEXIST and REMOVE operations can fail with ENOENT. If a
locking mechanism is not in place, data can be at risk when the replay cache is lost. The following table
shows different scenarios in which replay cache is kept or lost in clustered Data ONTAP 8.2.x and later.
Table 9) Replay cache NDO behavior.
Operation            NFSv3   NFSv4.x
Volume move
Unplanned takeover
Planned takeover
7.2 File Locking
File locking mechanisms were created to prevent a file from being accessed for write operations by more
than one user or application at a time. NFS leverages file locking either using the NLM process in NFSv3
or by leasing and locking, which is built in to the NFSv4.x protocols. Not all applications leverage file
locking, however; for example, the application vi does not lock files. Instead, it uses a file swap method
to save changes to a file.
When an NFS client requests a lock, the client interacts with the clustered Data ONTAP system to save
the lock state. Where the lock state is stored depends on the NFS version being used. In NFSv3, the lock
state is stored at the data layer. In NFSv4.x, the lock states are stored in the NAS protocol stack.
Best Practice 14: NFSv3 and File Locking (See Best Practice 15)
Use NLM file locking when possible with NFSv3.
To view or remove file locks in an SVM, use the following commands in advanced mode:
cluster::> set advanced
cluster::*> vserver locks
break show
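For example, a sketch of viewing locks for a specific volume (the SVM and volume names are illustrative):
cluster::*> vserver locks show -vserver nfs_svm -volume unix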
When potentially disruptive operations occur, lock states do not transfer in some instances. As a result,
delays in NFS operations can occur as the locks are reclaimed by the clients and reestablished with their
new locations. The following table covers the scenarios in which locks are kept or lost in clustered Data
ONTAP 8.2.x and later.
Table 10) Lock state NDO behavior.
Operation            NFSv3   NFSv4.x
Volume move
Unplanned takeover
Planned takeover
7.3 NFSv4.1 Sessions
In clustered Data ONTAP, NFSv4.1 sessions are supported. With NFSv4.1 sessions, LIF migrations can
be disruptive to NFSv4.1 operations, but they are less disruptive than with NFSv4.0.
From RFC 5661:
After an event like a server restart, the client may have lost its
connections. The client assumes for the moment that the session has
not been lost. It reconnects, and if it specified connection
association enforcement when the session was created, it invokes
BIND_CONN_TO_SESSION using the session ID. Otherwise, it invokes
SEQUENCE. If BIND_CONN_TO_SESSION or SEQUENCE returns
2.
3.
4.
For more information about NFSv4.1 sessions, see the corresponding section in this document.
7.4
When a data LIF hosting NFSv4.x traffic is migrated in clustered Data ONTAP, existing NFSv4.x traffic
must be quiesced until a safe point in the process to move the LIF. After the NFS server is determined
safe to allow the migration, the LIF is then moved to the new location and lock states are reclaimed by
NFS clients. Lock state reclamation is controlled by the NFS option -v4-grace-seconds (45 seconds
by default). With NFSv4.1 sessions, this grace period is not needed, because the lock states are stored in
the NFSv4.1 session. Busier systems cause longer latency in LIF migrations, because the system has to
wait longer for the operations to quiesce and the LIF waits longer to migrate. However, disruptions occur
only during the lock reclamation process.
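The grace period is an NFS server option; a sketch of viewing and adjusting it follows (45 seconds is the stated default, and the SVM name is illustrative):
cluster::> vserver nfs show -vserver nfs_svm -fields v4-grace-seconds
cluster::> vserver nfs modify -vserver nfs_svm -v4-grace-seconds 45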
7.5 General Best Practices for NDO with NFS in Clustered Data ONTAP
Storage administrators have a lot of control over planned maintenance of their clusters, but not a lot of
control over unplanned events. As such, the best that can be done to avoid issues when experiencing
outages is to consider NDO when architecting a clustered Data ONTAP platform. This section covers only
general best practices and does not detail specific environmental considerations. For more information
about detailed best practices, see the list of technical reports in the Planned Outages section, next.
There are two types of outages:
Planned: upgrades, hardware replacements, planned reboots, and so on
Unplanned: storage failovers, network blips/changes, external server issues, power/environmental issues,
bugs
Planned Outages
With planned outages, clustered Data ONTAP has a number of mechanisms to help maintain uptime,
such as volume moves, LIF migrations, rolling upgrades, and so on. For more information about NDO
features and functionality, see the following technical reports:
TR-4075: DataMotion for Volumes in Clustered Data ONTAP Overview and Best Practices
TR-4100: Nondisruptive Operations and SMB File Shares for Clustered Data ONTAP
TR-4146: Aggregate Relocate Overview and Best Practices for Clustered Data ONTAP
TR-4277: Nondisruptively Replace a Complete Disk Shelf Stack with Clustered Data ONTAP
Unplanned Outages
Unplanned outages are considerably trickier to handle because of the nature of their being unplanned.
As such, for maximum NDO functionality with NFS and multiprotocol implementations, the following set of
NAS-specific best practices are worth consideration.
Best Practice 15: NDO Best Practices for NFS Environments (See Best Practice 16)
Make sure that every node in the cluster has a data LIF that can be routed to external name
services.
If using name service servers (DNS, LDAP, NIS, and so on), make sure that there are multiple
servers for redundancy and that those servers are on a fast connection and configured for use
with the SVM as a client.
Configure data LIFs properly as per TR-4182: Ethernet Storage Design Considerations and
Best Practices for Clustered Data ONTAP Configurations.
Use NFSv4.x (4.1 if possible) when appropriate to take advantage of stateful connections,
integrated locking, and session functionality.
Make sure that a DR copy of NAS data and configuration exists at a remote site through DP
NetApp SnapMirror and SVM peering. See TR-4015: SnapMirror Configuration and Best
Practices Guide for Clustered Data ONTAP for details.
Increased maximum auxiliary groups for AUTH_SYS and AUTH_GSS in 8.3 (1024 for both)
Increased default maximum TCP read and write size (from 32K to 64K)
[Table of NFS-related ports. Recoverable values: portmapper 111, nfs 2049, mountd 635, nlockmgr 4045, status 4046, rquotad 4049; the original table also lists port values 4045 through 4049.]
NetApp does not recommend changing this option unless directed by support. If this option is
changed with clients mounted to the NFS server, data corruption can take place.
FSID changes can be enabled and disabled for NFSv4 operations as well.
not identical. FSIDs of files are formulated using a combination of NetApp WAFL (Write Anywhere File
Layout) inode numbers, volume identifiers, and snapshot IDs. Because every snapshot has a different ID,
every snapshot copy of a file will have a different FSID in NFSv3, regardless of the setting of the option v3-fsid-change. The NFS RFC spec does not require that FSIDs for a file are identical across file
versions.
With NFSv4, however, the FSID of a file across versions will be identical if the option -v4-fsid-change
is enabled. That option ensures that the WAFL inode number is returned as the FSID of a file instead of an
FSID created using snapshot IDs. See bug 933937 for more information.
If your application requires that file versions maintain identical FSIDs, use NFSv4 and the -v4-fsid-change option.
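A sketch of enabling the option named above (the SVM name is illustrative):
cluster::> vserver nfs modify -vserver nfs_svm -v4-fsid-change enabled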
FSID Changes with Storage Virtual Machine Disaster Recovery (SVM DR)
Clustered Data ONTAP 8.3.1 introduced a new feature to enable disaster recovery for entire SVMs called
SVM DR. This feature is covered in TR-4015: SnapMirror Configuration and Best Practices Guide.
When SVM DR is used with NFS exports, the FSID of those exports changes, and clients have to
remount the exports on the destination system. Otherwise, the clients see stale file handle errors for NFS
operations on those mounts.
How It Works
The options to extend the group limitation work just the way that the manage-gids option for other NFS
servers works. Basically, rather than dumping the entire list of auxiliary GIDs a user belongs to, the option
does a lookup for the GID on the file or folder and returns that value instead.
From the man page for mountd:
-g
or
--manage-gids
Accept requests from the kernel to map user id numbers into lists of
group id numbers for use in access control. An NFS request will normally
(except when using Kerberos or other cryptographic authentication) contains
a user-id and a list of group-ids. Due to a limitation in the NFS
protocol, at most 16 groups ids can be listed. If you use the -g flag, then
the list of group ids received from the client will be replaced by a list of
group ids determined by an appropriate lookup on the server.
In 7-Mode, the maximum number of GIDs supported was 256. In clustered Data ONTAP 8.3, that
maximum is increased (and configurable) to 1,024 for both AUTH_SYS and AUTH_GSS.
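The behavior is controlled through NFS server options; the following sketch assumes the option names -auth-sys-extended-groups and -extended-groups-limit (verify against your release):
cluster::> vserver nfs modify -vserver NAS -auth-sys-extended-groups enabled -extended-groups-limit 1024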
When an access request is made, only 16 GIDs are passed in the RPC portion of the packet.
Any GID past the limit of 16 will be dropped by the protocol. With the extended GID option in clustered
Data ONTAP 8.3, when an NFS request comes in, the SecD process requests information about the
user's group membership by way of a new function called
secd_rpc_auth_user_id_to_unix_ext_creds.
A Detailed Look
This function uses a LibC library call to do a credential lookup from the name service (for example, LDAP)
before the cluster replies to the NFS request with access denied or allowed. When the credentials are
fetched from the name service, then SecD populates the NAS credential cache with the appropriate group
membership for that user up to the extended group limit. The cluster then replies to the NFS request and
allows or denies access based on what is in the credential cache and not what was in the RPC packet.
Because of this, latency to the name services from the cluster should be low to enable the credential
caches to always be accurate. Otherwise, access results could vary from expected behaviors.
The following example shows the results of the same NFS request as seen earlier. Note how 18 GIDs are
discovered, as opposed to the 16 in the RPC packet.
Example of NAS Credential Cache with Extended GIDs Enabled
::*> diag nblade credentials show -node node2 -vserver NAS -unix-user-name seventeengids
Getting credential handles.
1 handles found....
Getting cred 0 for user.
Global Virtual Server: 5
Cred Store Uniquifier: 1
Cifs SuperUser Table Generation: 0
Locked Ref Count: 0
Info Flags: 1
Alternative Key Count: 0
Additional Buffer Count: 0
Creation Time: 4853460910 ms
Time Since Last Refresh: 492530 ms
Windows Creds:
Flags: 0
Primary Group: S-0-0
Unix Creds:
Flags: 1
Domain ID: 0
Uid: 2000
Gid: 513
Additional Gids:
Gid 0: 513
Gid 1: 2001
Gid 2: 2002
Gid 3: 2003
Gid 4: 2004
Gid 5: 2005
Gid 6: 2006
Gid 7: 2007
Gid 8: 2008
Gid 9: 2009
Gid 10: 2010
Gid 11: 2011
Gid 12: 2012
Gid 13: 2013
Gid 14: 2014
Gid 15: 2015
Gid 16: 2016
Gid 17: 2017
Gid 18: 10005
For more information about name services best practices, see the section in this document covering that
subject. For more information about LDAP in clustered Data ONTAP, see TR-4073.
9.1
The following are some advantages to using NFSv4.x in your environment. However, it is important that
you treat every specific use case differently. NFSv4.x is not ideal for all workload types. Be sure to test for
desired functionality and performance before rolling out NFSv4.x en masse.
Firewall-friendly because NFSv4 uses only a single port (2049) for its operations
Internationalization
Compound operations
Support for 3DES for encryption in clustered Data ONTAP 8.2.x and earlier
No NFSv4 replication support (see RFC 7530, section 8.4.1 for details)
Parallel access to data through pNFS (does not apply for NFSv4.0)
NFSv3 has always used multiple processors for reads and writes. NFSv3 also uses multiple
processors for metadata operations.
Best Practice 16: Version Recommendations with NFSv4.x (See Best Practice 17)
For NFSv4.x workloads, be sure to upgrade the cluster to the latest patched GA version of clustered
Data ONTAP 8.2.x or 8.3 and upgrade NFS clients to the latest patched release of the kernel.
The following diagrams illustrate the effect that a multiprocessor can have on NFSv4.x operations.
Figure 9) NFSv4.x read and write ops: no multiprocessor.
9.2 NFSv4.0
NetApp Data ONTAP NFSv4.x implementation (clustered and 7-Mode) provides the following.
Write Order
The implementation provides the capability to write data blocks to shared storage in the same order as
they occur in the data buffer.
Synchronous Write Persistence
Upon return from a synchronous write call, Data ONTAP (clustered and 7-Mode) guarantees that all the
data has been written to durable, persistent storage.
Distributed File Locking
The implementation provides the capability to request and obtain an exclusive lock on the shared storage,
without assigning the locks to two servers simultaneously.
Unique Write Ownership
Data ONTAP (clustered and 7-Mode) guarantees that the file lock is the only server process that can write
to the file. After Data ONTAP transfers the lock to another server, pending writes queued by the previous
owner fail.
Name services
Firewall considerations
Client support
For an in-depth look at the NFSv4.x protocol, including information about NFSv4.2, see the SNIA
overview of NFSv4.
Note:
ID Domain Mapping
While customers prepare to migrate their existing setup and infrastructure from NFSv3 to NFSv4, some
environmental changes must be made before moving to NFSv4. One of them is "id domain mapping."
In clustered Data ONTAP 8.1, a new option called v4-id-numerics was added. With this option
enabled, even if the client does not have access to the name mappings, numeric IDs can be sent in the
user name and group name fields. The server accepts them and treats them as representing the same
user as would be represented by a v2/v3 UID or GID having the corresponding numeric value.
Essentially, this approach makes NFSv4.x behave more like NFSv3. This approach also removes the
security enhancement of forcing ID domain resolution for NFSv4.x name strings; whenever possible, keep
this option as the default of disabled. If a name mapping for the user is present, however, the name string
will be sent across the wire rather than the UID/GID. The intent of this option is to prevent the server from
sending nobody as a response to credential queries in NFS requests.
Note:
To access this command in versions earlier than clustered Data ONTAP 8.3, you must be in diag
mode. Commands related to diag mode should be used with caution.
Although it is possible to allow the NFSv4.x server to return numeric IDs for NFS requests, it is best
to make sure that user names have appropriate name mappings on the client and server so that the
security feature of NFSv4.x is leveraged. This is easiest to accomplish when using name service
servers such as LDAP to connect to both client and server.
Some production environments face the challenge of building new name service infrastructures, such as NIS or
LDAP, so that string-based name mapping is functional before moving to NFSv4. With the new
numeric_id option, setting up name services is not an absolute requirement. The numeric_id
feature must be supported and enabled on the server as well as on the client. With this option enabled,
the user and groups exchange UIDs/GIDs between the client and server just as with NFSv3. However, for
this option to be enabled and functional, NetApp recommends having a supported version of the client
and the server. For client versions that support numeric IDs with NFSv4, contact the OS vendor.
Note:
Note that -v4-id-numerics should be enabled only if the client supports it.
Configuration Step 1) Enabling numeric ID support for NFSv4 in clustered Data ONTAP.

Enable NFSv4.0:
cluster::> vserver nfs modify -vserver test_vs1 -access true -v4.0 enabled -tcp enabled

Verification: [the NFS server options for test_vs1 show NFS access true and NFSv4.0 enabled, with the NFSv4 ID domain still at its default of defaultv4iddomain.com and the remaining NFSv4 options disabled]

Set up NFSv4 user ID mapping.
Note:
On a clustered Data ONTAP system, the command to turn on the v4-id-numerics option
follows.
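A sketch of that command, using the option name shown in the verification output that follows (the SVM name is illustrative):
cluster::> vserver nfs modify -vserver testvs1 -v4-numeric-ids enabled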
Verification
cluster::> vserver nfs show -vserver testvs1 -fields v4-numeric-ids
Vserver  v4-numeric-ids
-------- --------------
testvs1  enabled
If the v4-id-numerics option is disabled, the server accepts only the user name/group
name of the form user@domain or group@domain.
The NFSv4 domain name is a pseudodomain name that both the client and storage
controller must agree upon before they can execute NFSv4 operations. The NFSv4 domain
name might or might not be equal to the NIS or DNS domain name, but it must be a string
that both the NFSv4 client and server understand.
This is a two-step process in which the Linux client and the clustered Data ONTAP system
are configured with the NFSv4 domain name.
On the clustered Data ONTAP system:
The default value of the NFS option -v4-id-domain is defaultv4iddomain.com.
cluster::> vserver nfs modify -vserver test_vs1 -v4-id-domain nfsv4domain.netapp.com
Verification
cluster::> vserver nfs show -vserver test_vs1 -fields v4-id-domain
Vserver v4-id-domain
-------- ------------------------------test_vs1 nfsv4domain.netapp.com
This section describes how the domain name can be changed on the client.
Solaris. Edit the /etc/default/nfs file and change NFSMAPID_DOMAIN to that set
for the server. Reboot the client for the change to take effect.
Linux. Make the necessary adjustments to /etc/idmapd.conf. Restart the idmapd
process to have the change take effect. Note: Restarting idmapd varies per client.
Rebooting the server is an option as well.
[root@nfsclient /]# vi /etc/idmapd.conf
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = nfsv4domain.netapp.com
[mapping]
Nobody-User = nobody
Nobody-Group = nobody
[Translation]
Method = nsswitch
Create a UNIX group with GID 1 and assign it to the SVM.
Note: Whenever a volume is created, it is associated with UID 0 and GID 1 by default. NFSv3 ignores this,
whereas NFSv4 is sensitive to the UID and GID mapping. If GID 1 was not previously created, follow these
steps to create one.
cluster::> vserver services unix-group create -vserver test_vs1 -name daemon -id 1
Verification:
Name                ID
------------------- ----------
daemon              1
root                0
2 entries were displayed.

Mount the client over NFSv4.
On the client:
Note: Linux clients must mount the file system from the NetApp storage with a -t nfs4
option. However, RHEL 6.0 and later mount NFSv4 by default. Solaris 10 clients mount the
file system over NFSv4 by default when NFSv4 is enabled on the NetApp storage
appliance. For mounting over NFSv3, vers=3 must be explicitly specified on the mounts.
Note: A volume can be mounted using NFSv3 and NFSv4.
Term          Definition
Lease         The time period in which Data ONTAP irrevocably grants a lock to a client.
Grace period  The time period in which clients attempt to reclaim their locking state from Data ONTAP during server recovery.
Lock          Refers to both record (byte-range) locks as well as file (share) locks unless specifically stated otherwise.
For more information about locking, see the section in this document on NFSv4.x locking. Because of this
new locking methodology, as well as the statefulness of the NFSv4.x protocol, storage failover operates
differently as compared to NFSv3. For more information, see the section in this document on
nondisruptive operations with NFS in clustered Data ONTAP.
Name Services
When deciding to use NFSv4.x, it is a best practice to centralize the NFSv4.x users in name services
such as LDAP or NIS. Doing so allows all clients and clustered Data ONTAP NFS servers to leverage the
same resources and guarantees that all names, UIDs, and GIDs will be consistent across the
implementation. For more information about name services, see TR-4073: Secure Unified Authentication
for Kerberos, LDAP, and NFSv4.x Information and TR-4379: Name Services Best Practices.
Firewall Considerations
NFSv3 required several ports to be opened for ancillary protocols such as NLM, NSM, and so on in
addition to port 2049. NFSv4.x requires only port 2049. If you wish to use NFSv3 and NFSv4.x in the
same environment, open all relevant NFS ports. These ports are referenced in this document.
Internationalization
If you intend to migrate to clustered Data ONTAP from a 7-Mode system and use NFSv4.x, use some
form of UTF-8 language support, such as C.UTF-8 (which is the default language of volumes in clustered
Data ONTAP). If the 7-Mode system does not already use a UTF-8 language, then it should be converted
before you transition to clustered Data ONTAP or when you intend to transition from NFSv3 to NFSv4.
The exact UTF-8 language specified will depend on the specific requirements of the native language to
ensure proper display of character sets.
Data ONTAP operating in 7-Mode allowed volumes that hosted NFSv4.x data to use C language types.
Clustered Data ONTAP does not do so, because it honors the RFC standard recommendation of UTF-8.
TR-4160: Secure Multi-tenancy Considerations covers language recommendations in clustered Data
ONTAP. When changing a volume's language, every file in the volume must be accessed after the
change to ensure that they all reflect the language change. Use a simple ls -lR to access a recursive
listing of files.
For more information on transitioning to clustered Data ONTAP, see TR-4052: Successfully Transitioning
to Clustered Data ONTAP.
For more information, consult the product documentation for your specific version of clustered Data
ONTAP.
Client Considerations
When you use NFSv4.x, clients are as important to consider as the NFS server. Follow the client
considerations below when implementing NFSv4.x. Other considerations might be necessary. Contact the
OS vendor for specific questions about NFSv4.x configuration.
NFSv4.x is supported.
The fstab file and NFS configuration files are configured properly. When mounting, the client will
negotiate the highest NFS version available with the NFS server. If NFSv4.x is not allowed by the
client or fstab specifies NFSv3, then NFSv4.x will not be used at mount.
The client either contains identical users and UID/GID (including case sensitivity) or uses the same
name service server as the NFS server/clustered Data ONTAP SVM.
If using name services on the client, the client is configured properly for name services
(nsswitch.conf, ldap.conf, sssd.conf, and so on) and the appropriate services are started, running,
and configured to start at boot.
Note:
TR-4073: Secure Unified Authentication covers some NFSv4.x and name service considerations
as they pertain to clients.
Following are two test cases in which the users test and mock-build create files without using ID
domain mapping, just by using UID/GID.
[root@localhost nfsv4]# su - test
<-- lets test a REAL user...
[test@localhost ~]$ id
uid=500(test) gid=500(test) groups=500(test)
[test@localhost ~]$ cd /mnt/nfsv4
[test@localhost nfsv4]$ ls -al
total 12
drwxrwxrwt  2 nobody bin  4096 Nov 11 20:20 .
drwxr-xr-x  5 root   root 4096 Nov  9 21:01 ..
[Additional entries in the original listing, created by the test and mock-build users, include files such as 1231 and mockbird.]
Because ID domain mapping is not used, the ID mapping falls back to classic UID/GID-style mapping,
eliminating the need for an NFSv4 ID domain. However, in large environments, NetApp recommends a
centralized name repository for NFSv4.x.
Configure name-mapping methodologies.

Configure LDAP. Create an LDAP client.

Configure NIS:
cluster::> vserver services nis-domain create -vserver test_vs1 -domain nisdom.netapp.com -active true -servers 10.10.10.110
Example:
cluster::> network connections active show
show            show-clients    show-lifs       show-protocols  show-services
Additionally, it is possible to view network connections in a LISTEN state with network connections
listening show.
Example:
cluster::> network connections listening show -node node2 -vserver NAS
Vserver Name     Interface Name:Local Port     Protocol/Service
---------------- ----------------------------- -----------------
Node: node2
NAS              data1:40001                   TCP/cifs-msrpc
NAS              data1:135                     TCP/cifs-msrpc
NAS              data1:137                     UDP/cifs-nam
NAS              data1:139                     TCP/cifs-srv
NAS              data1:445                     TCP/cifs-srv
NAS              data1:4049                    UDP/unknown
NAS              data1:2050                    TCP/fcache
NAS              data1:111                     TCP/port-map
NAS              data1:111                     UDP/port-map
NAS              data1:4046                    TCP/sm
NAS              data1:4046                    UDP/sm
NAS              data1:4045                    TCP/nlm-v4
NAS              data1:4045                    UDP/nlm-v4
NAS              data1:2049                    TCP/nfs
NAS              data1:2049                    UDP/nfs
NAS              data1:635                     TCP/mount
NAS              data1:635                     UDP/mount
NAS              data2:40001                   TCP/cifs-msrpc
NAS              data2:135                     TCP/cifs-msrpc
NAS              data2:137                     UDP/cifs-nam
NAS              data2:139                     TCP/cifs-srv
NAS              data2:445                     TCP/cifs-srv
NAS              data2:4049                    UDP/unknown
NAS              data2:2050                    TCP/fcache
NAS              data2:111                     TCP/port-map
NAS              data2:111                     UDP/port-map
NAS              data2:4046                    TCP/sm
NAS              data2:4046                    UDP/sm
NAS              data2:4045                    TCP/nlm-v4
NAS              data2:4045                    UDP/nlm-v4
NAS              data2:2049                    TCP/nfs
NAS              data2:2049                    UDP/nfs
NAS              data2:635                     TCP/mount
NAS              data2:635                     UDP/mount
34 entries were displayed.
Removal of the NFS limitation of 16 groups per user with AUTH_SYS security
ACLs bypass the need for GID resolution, which effectively removes the GID limit.
Currently this works only for Infinite Volumes and Unified security styles.
Permissions displayed to NFS clients for files that have Windows ACLs are "display" permissions, and the
permissions used for checking file access are those of the Windows ACL.
Note:
Modify the NFSv4 server to enable ACLs by enabling the v4.0-acl option:
cluster::> vserver nfs modify -vserver test_vs1 -v4.0-acl enabled

On a Linux client:
Note: After you enable ACLs on the server, the nfs4_setfacl and nfs4_getfacl
commands are required on the Linux client to set or get NFSv4 ACLs on a file or directory,
respectively. To avoid problems with earlier implementations, use RHEL 5.8 or RHEL 6.2
and later for using NFSv4 ACLs in clustered Data ONTAP. The following example illustrates
the use of the -e option to set the ACLs on the file or directory from the client. To learn
more about the types of ACEs that can be used, refer to the following links:
www.linuxcertif.com/man/1/nfs4_setfacl/145707/
http://linux.die.net/man/5/nfs4_acl
Verification
A client using NFSv4 ACLs can set and view ACLs for files and directories on the system. When a new
file or subdirectory is created in a directory that has an ACL, the new file or subdirectory inherits all ACEs
in the ACL that have been tagged with the appropriate inheritance flags. For access checking, CIFS users
are mapped to UNIX users. The mapped UNIX user and that user's group membership are checked
against the ACL.
If a file or directory has an ACL, that ACL is used to control access no matter which protocol (NFSv3,
NFSv4, or CIFS) is used to access the file or directory. The ACL is also used even if NFSv4 is no longer
enabled on the system.
Files and directories inherit ACEs from NFSv4 ACLs on parent directories (possibly with appropriate
modifications) as long as the ACEs have been tagged with the correct inheritance flags. This process can
be controlled using the following command:
cluster::> nfs server modify vserver vs0 -v4-acl-max-aces [number up to 1024]
In versions earlier than clustered Data ONTAP 8.2, the maximum ACE limit was 400. If reverting to a
version of Data ONTAP earlier than 8.2, files or directories with more than 400 ACEs will have their ACLs
dropped and the security will revert to mode-bit style.
When a file or directory is created as the result of an NFSv4 request, the ACL on the resulting file or
directory depends on whether the file creation request includes an ACL or only standard UNIX file access
permissions. The ACL also depends on whether the parent directory has an ACL.
If the request includes an ACL, that ACL is used.
If the request includes only standard UNIX file access permissions and the parent directory does not
have an ACL, the client file mode is used to set standard UNIX file access permissions.
If the request includes only standard UNIX file access permissions and the parent directory has a
non-inheritable ACL, a default ACL based on the mode bits passed into the request is set on the new
object.
If the request includes only standard UNIX file access permissions but the parent directory has an
ACL, the ACEs in the parent directory's ACL are inherited by the new file or directory as long as the
ACEs have been tagged with the appropriate inheritance flags.
Note:
ACL Formatting
NFSv4.x ACLs have specific formatting. The following is an ACE set on a file:
A::ldapuser@domain.netapp.com:rwatTnNcCy
The preceding follows the ACL format guidelines of:
type:flags:principal:permissions
A type of A means allow. The flags are not set in this case, because the principal is not a group and
does not include inheritance. Also, because the ACE is not an AUDIT entry, there is no need to set the
audit flags. For more information about NFSv4.x ACLs, see http://linux.die.net/man/5/nfs4_acl.
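For example, an ACE in this format can be added from a client with nfs4_setfacl (a sketch; the file path and principal are illustrative):
# nfs4_setfacl -a A::ldapuser@domain.netapp.com:rwatTnNcCy /mixed/file1
# nfs4_getfacl /mixed/file1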
Also, because of the vast difference between NTFS and UNIX-style ACLs, the approximation of
permissions might not be exact. For example, if a user has a granular permission provided only in NTFS
security semantics, then the NFS client cannot interpret that properly.
The default value for the option is disabled, which allows the approximation of interpreted NTFS ACLs
on NFS clients mounting NTFS objects.
To disable this functionality, modify the option to enabled.
Choose either NTFS- or UNIX-style security unless there is a specific recommendation from an
application vendor to use mixed mode.
ACL Behaviors
For any NT user, the user's SID is mapped to a UNIX ID and the NFSv4 ACL is then checked for
access for that UNIX ID. Regardless of which permissions are displayed, the actual permissions set
on the file take effect and are returned to the client.
If a file has an NT ACL and a UNIX client does a chmod, chgrp, or chown, the NT ACL is dropped.
In versions earlier than clustered Data ONTAP 8.1, run the following command on the node that owns the
data volume:
cluster::> node run node [nodename that owns data volume] fsecurity show /vol/volname
In clustered Data ONTAP 8.2 and later, use the following command:
cluster::> vserver security file-directory show -vserver vs0 -path /junction-path
Explicit DENY
NFSv4 permissions may include explicit DENY attributes for OWNER, GROUP, and EVERYONE. That is
because NFSv4 ACLs are default-deny, which means that if access is not explicitly granted by an ACE,
then it is denied.
Example:
sh-4.1$ nfs4_getfacl /mixed
A::ldapuser@domain.netapp.com:ratTnNcCy
A::OWNER@:rwaDxtTnNcCy
D::OWNER@:
A:g:GROUP@:rxtncy
D:g:GROUP@:waDTC
A::EVERYONE@:rxtncy
D::EVERYONE@:waDTC
DENY ACEs should be avoided whenever possible, because they can be confusing and complicated.
When DENY ACEs are set, users might be denied access when they expect to be granted access. This is
because the ordering of NFSv4 ACLs affects how they are evaluated.
The preceding set of ACEs is equivalent to 755 in mode bits. That means that the owner has full access, the
owning group has read and execute access, and everyone else has read and execute access.
However, even if permissions are adjusted to the 775 equivalent, access can be denied because of the
explicit DENY set on EVERYONE.
For example, the user ldapuser belongs to the group Domain Users.
sh-4.1$ id
uid=55(ldapuser) gid=513(Domain Users) groups=513(Domain Users),503(unixadmins)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
Permissions on the volume mixed are 775. The owner is root and the group is Domain Users:
[root@nfsclient /]# nfs4_getfacl /mixed
A::OWNER@:rwaDxtTnNcCy
D::OWNER@:
A:g:GROUP@:rwaDxtTnNcy
D:g:GROUP@:C
A::EVERYONE@:rxtncy
D::EVERYONE@:waDTC
[root@nfsclient /]# ls -la | grep mixed
drwxrwxr-x. 3 root Domain Users 4096 Apr 30 09:52 mixed
Because ldapuser is a member of Domain Users, it should have write access to the volume, and it does:
[root@nfsclient /]# su ldapuser
sh-4.1$ cd /mixed
sh-4.1$ ls -la
total 12
drwxrwxr-x. 3 root Domain Users 4096 Apr 30 09:52 .
dr-xr-xr-x. 28 root root 4096 Apr 29 15:24 ..
drwxrwxrwx. 6 root root 4096 Apr 30 08:00 .snapshot
sh-4.1$ touch newfile
sh-4.1$ nfs4_getfacl /mixed
sh-4.1$ ls -la
total 12
drwxrwxr-x. 3 root Domain Users 4096 Apr 30 09:56 .
dr-xr-xr-x. 28 root root 4096 Apr 29 15:24 ..
drwxrwxrwx. 6 root root 4096 Apr 30 08:00 .snapshot
-rw-r--r--. 1 ldapuser Domain Users 0 Apr 30 09:56 newfile
However, if the ACLs are reordered and the explicit DENY for EVERYONE is placed ahead of group, then
ldapuser is denied access to write to the same volume it just had access to write to:
[root@nfsclient /]# nfs4_getfacl /mixed
A::OWNER@:rwaDxtTnNcCy
D::OWNER@:
A::EVERYONE@:rxtncy
D::EVERYONE@:waDTC
A:g:GROUP@:rwaDxtTnNcy
[root@nfsclient /]# su ldapuser
sh-4.1$ cd /mixed
sh-4.1$ ls -la
total 12
drwxrwxr-x. 3 root Domain Users 4096 Apr 30 09:56 .
dr-xr-xr-x. 28 root root 4096 Apr 29 15:24 ..
drwxrwxrwx. 6 root root 4096 Apr 30 08:00 .snapshot
-rw-r--r--. 1 ldapuser Domain Users 0 Apr 30 09:56 newfile
Best Practice 19: Using DENY ACEs (See Best Practice 20)
Avoid explicit DENY ACEs whenever possible, for the reasons described above.

NetApp recommends the -v4-acl-preserve option (covered later in this section) in environments using NFSv3 and NFSv4 on the same NFS exports.
cluster::> vserver security file-directory show -vserver vs0 -path /unix

Vserver: vs0
File Path: /unix
Security Style: unix
Effective Style: unix
DOS Attributes: 10
DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
Unix User Id: 0
Unix Group Id: 1
Unix Mode Bits: 755
Unix Mode Bits in Text: rwxr-xr-x
ACLs: -
In the preceding example, the volume (/unix) has 755 permissions. That means that the owner has ALL
access, the owning group has READ/EXECUTE access, and everyone else has READ/EXECUTE
access.
Even though there are no NFSv4 ACLs in the preceding output, there are default values set that can be
viewed from the client:
[root@nfsclient /]# mount -t nfs4 krbsn:/unix /unix
[root@nfsclient /]# ls -la | grep unix
drwxr-xr-x. 2 root daemon 4096 Apr 30 11:24 unix
[root@nfsclient /]# nfs4_getfacl /unix
A::OWNER@:rwaDxtTnNcCy
A:g:GROUP@:rxtncy
A::EVERYONE@:rxtncy
The NFSv4 ACLs shown earlier reflect the same: the owner has ALL access, the owning group has
READ/EXECUTE access, and everyone else has READ/EXECUTE access. The default mode bits are
tied to the NFSv4 ACLs.
When mode bits are changed, the NFSv4 ACLs are also changed:
[root@nfsclient /]# chmod 775 /unix
[root@nfsclient /]# ls -la | grep unix
drwxrwxr-x. 2 root daemon 4096 Apr 30 11:24 unix
[root@nfsclient /]# nfs4_getfacl /unix
A::OWNER@:rwaDxtTnNcCy
A:g:GROUP@:rwaDxtTnNcy
A::EVERYONE@:rxtncy
When a user ACE is added to the ACL, the entry is reflected in the ACL on the appliance. In addition, the
entire ACL is now populated. Note that the ACL is in SID format.
[root@nfsclient /]# nfs4_setfacl -a A::ldapuser@nfsv4domain.netapp.com:ratTnNcCy /unix
[root@nfsclient /]# nfs4_getfacl /unix
A::ldapuser@nfsv4domain.netapp.com:ratTnNcCy
A::OWNER@:rwaDxtTnNcCy
A:g:GROUP@:rwaDxtTnNcy
A::EVERYONE@:rxtncy
cluster::> vserver security file-directory show -vserver vs0 -path /unix
Vserver: vs0
File Path: /unix
Security Style: unix
Effective Style: unix
DOS Attributes: 10
DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
Unix User Id: 0
Unix Group Id: 1
Unix Mode Bits: 775
Unix Mode Bits in Text: rwxrwxr-x
ACLs: NFSV4 Security Descriptor
      Control:0x8014
      DACL - ACEs
        ALLOW-S-1-8-55-0x16019d
        ALLOW-S-1-520-0-0x1601ff
        ALLOW-S-1-520-1-0x1201ff-IG
        ALLOW-S-1-520-2-0x1200a9
To see the translated ACLs, use fsecurity from the node shell on the node that owns the volume:
cluster::> node run -node node2 fsecurity show /vol/unix
[/vol/unix - Directory (inum 64)]
Security style: Unix
Effective style: Unix
DOS attributes: 0x0010 (----D---)
Unix security:
uid: 0
gid: 1
mode: 0775 (rwxrwxr-x)
NFSv4 security descriptor:
DACL:
Allow - uid: 55 - 0x0016019d
Allow - OWNER@ - 0x001601ff
Allow - GROUP@ - 0x001201ff
Allow - EVERYONE@ - 0x001200a9 (Read and Execute)
SACL:
No entries.
When a change is made to the mode bit when NFSv4 ACLs are present, the NFSv4 ACL that was just set
is wiped by default:
[root@nfsclient /]# chmod 755 /unix
[root@nfsclient /]# ls -la | grep unix
drwxr-xr-x. 2 root daemon 4096 Apr 30 11:24 unix
[root@nfsclient /]# nfs4_getfacl /unix
A::OWNER@:rwaDxtTnNcCy
A:g:GROUP@:rxtncy
A::EVERYONE@:rxtncy
To control this behavior in clustered Data ONTAP, use the following diag-level option:
cluster::> set diag
cluster::*> nfs server modify -vserver vs0 -v4-acl-preserve [enabled|disabled]
After the option is enabled, the ACL stays intact when mode bits are set.
[root@nfsclient /]# nfs4_setfacl -a A::ldapuser@nfsv4domain.netapp.com:ratTnNcCy /unix
[root@nfsclient /]# ls -la | grep unix
drwxr-xr-x. 2 root daemon 4096 Apr 30 11:24 unix
[root@nfsclient /]# nfs4_getfacl /unix
A::ldapuser@nfsv4domain.netapp.com:ratTnNcCy
A::OWNER@:rwaDxtTnNcCy
A:g:GROUP@:rxtncy
A::EVERYONE@:rxtncy
cluster::> vserver security file-directory show -vserver vs0 -path /unix
Vserver: vs0
File Path: /unix
Security Style: unix
Effective Style: unix
DOS Attributes: 10
DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
Unix User Id: 0
Unix Group Id: 1
Unix Mode Bits: 755
Unix Mode Bits in Text: rwxr-xr-x
ACLs: NFSV4 Security Descriptor
      Control:0x8014
      DACL - ACEs
        ALLOW-S-1-8-55-0x16019d
        ALLOW-S-1-520-0-0x1601ff
        ALLOW-S-1-520-1-0x1200a9-IG
        ALLOW-S-1-520-2-0x1200a9
Note that the ACL is still intact after mode bits are set:
[root@nfsclient /]# chmod 777 /unix
[root@nfsclient /]# ls -la | grep unix
drwxrwxrwx. 2 root daemon 4096 Apr 30 11:24 unix
[root@nfsclient /]# nfs4_getfacl /unix
A::ldapuser@win2k8.ngslabs.netapp.com:ratTnNcCy
A::OWNER@:rwaDxtTnNcCy
A:g:GROUP@:rwaDxtTnNcy
A::EVERYONE@:rwaDxtTnNcy
cluster::> vserver security file-directory show -vserver vs0 -path /unix
Vserver: vs0
File Path: /unix
Security Style: unix
Effective Style: unix
DOS Attributes: 10
DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
Unix User Id: 0
Unix Group Id: 1
Unix Mode Bits: 777
Unix Mode Bits in Text: rwxrwxrwx
ACLs: NFSV4 Security Descriptor
      Control:0x8014
      DACL - ACEs
        ALLOW-S-1-8-55-0x16019d
        ALLOW-S-1-520-0-0x1601ff
        ALLOW-S-1-520-1-0x1201ff-IG
        ALLOW-S-1-520-2-0x1201ff
NFSv4 Delegations
NFSv4 introduces the concept of delegations, which provide an aggressive cache that is different from
the ad hoc caching that NFSv3 provides. There are two forms of delegations: read and write.
Delegations are intended to provide cache correctness rather than to improve performance. For delegations to work,
a supported UNIX client is required along with the correct delegation options enabled on the NetApp
controller. These options are disabled by default.
When the server delegates a complete file or part of a file to the client, the client caches it
locally and avoids additional RPC calls to the server. In the case of read delegations, this reduces GETATTR
calls because there are fewer requests to the server to obtain the file's information. However,
delegations do not cache metadata. Reads can be delegated to numerous clients, but writes can be
delegated to only one client at a time. The server reserves the right to recall a delegation for any valid
reason. The server decides to delegate a file under two conditions: when there is a confirmed callback path
from the client (which the server uses to recall the delegation if needed) and when the client sends an
OPEN request for the file.
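As a sketch (vs0 is an illustrative SVM name), NFSv4.0 read and write delegations can be enabled and verified with commands along these lines:
cluster::> vserver nfs modify -vserver vs0 -v4.0-read-delegation enabled -v4.0-write-delegation enabled
cluster::> vserver nfs show -vserver vs0 -fields v4.0-read-delegation,v4.0-write-delegation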
Note:
Both the file read and write delegation options take effect as soon as they are changed. There is
no need to reboot or restart NFS.
NFSv4 Locking
For NFSv4 clients, Data ONTAP supports the NFSv4 file-locking mechanism, maintaining the state of all
file locks under a lease-based model. In accordance with RFC 3530, Data ONTAP "defines a single lease
period for all state held by an NFS client. If the client does not renew its lease within the defined period,
all state associated with the client's lease may be released by the server." The client can renew its lease
explicitly or implicitly by performing an operation, such as reading a file. Furthermore, Data ONTAP
defines a grace period, which is a period of special processing in which clients attempt to reclaim their
locking state during a server recovery.
Locks are issued by Data ONTAP to the clients on a lease basis. The server checks the lease on each
client every 30 seconds. In the case of a client reboot, the client can reclaim all the valid locks from the
server after it has restarted. If a server reboots, then upon restarting it does not issue any new locks to
the clients for a grace period of 45 seconds (tunable in clustered Data ONTAP to a maximum of 90
seconds). After that time the locks are issued to the requesting clients. The lease time of 30 seconds can
be tuned based on the application requirements.
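As an illustrative sketch (vs0 is a sample SVM name and the values shown are the defaults described above), the lease and grace periods can be adjusted per SVM:
cluster::> vserver nfs modify -vserver vs0 -v4-lease-seconds 30 -v4-grace-seconds 45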
Table: definitions of the lease and grace period terms used above.
NFSv4.x Referrals
Clustered Data ONTAP 8.1 introduced NFSv4.x referrals. A referral directs a client to another LIF in the
SVM. The NFSv4.x client uses this referral to direct its access over the referred path to the target LIF
from that point forward. Referrals are issued when there is a LIF in the SVM that resides on the cluster
node where the data volume resides. In other words, if a cluster node receives an NFSv4.x request for a
nonlocal volume, the cluster node is able to refer the client to the local path for that volume by means of
the LIF. Doing so allows clients faster access to the data using a direct path and avoids extra traffic on
the cluster network.
For example:
The data volume lives on node1:
cluster::> volume show -vserver vs0 -volume nfsvol -fields node
vserver volume node
------- ------ ------
vs0     nfsvol node1
The client makes a mount request to the data LIF on node2, at the IP address 10.61.92.37:
[root@nfsclient /]# mount -t nfs4 10.61.92.37:/nfsvol /mnt
But the cluster shows that the connection was actually established to node1, where the data volume
lives. No connection was made to node2:
cluster::> network connections active show -node node1 -service nfs*
                                 Vserver   Interface         Remote
CID        Ctx Name      Name:Local Port   Host:Port          Protocol/Service
---------- --- --------- ----------------- ------------------ ----------------
Node: node1
286571835  6   vs0       data:2049         10.61.179.164:763  TCP/nfs
cluster::> network connections active show -node node2 -service nfs*
There are no entries matching your query.
Because clients might become confused about which IP address they are actually connected to as per
the mount command, NetApp recommends using host names in mount operations.
Best Practice 20: Data LIF Locality (See Best Practice 21)
NetApp highly recommends that there be at least one data LIF per node per SVM so that a local
path is always available to data volumes. This process is covered in Data LIF Best Practices with
NAS Environments in this document.
If a volume moves to another aggregate on another node, the NFSv4.x clients must unmount and
remount the file system manually to be referred to the new location of the volume. Doing so provides a
direct data path for the client to reach the volume in its new location. If the manual mount/unmount
process is not followed, the client can still access the volume in its new location, but I/O requests would
then take a remote path. NFSv4.x referrals were introduced in RHEL as early as 5.1 (2.6.18-53), but
NetApp recommends using no kernel older than 2.6.25 with NFS referrals and no version earlier than
1.0.12 of nfs-utils.
If a volume is junctioned below other volumes, the referral uses the volume being mounted (not its parent
volumes) as the basis for the referral. For example, when mounting such a volume that lives on node2, the
referral returns the IP address of a LIF on node2, regardless of which IP address DNS returns for the
host name cluster.
In a mixed client environment, if any of the clients do not support referrals, then the -v4.0-referrals
option should not be enabled. If the option is enabled and clients that do not support referrals get a
referral from the server, that client will be unable to access the volume and will experience failures. See
RFC 3530 for more details on referrals.
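A minimal sketch of enabling and verifying NFSv4.x referrals on an SVM (vs0 is illustrative; note the mixed-client caveat above before enabling) follows:
cluster::> vserver nfs modify -vserver vs0 -v4.0-referrals enabled
cluster::> vserver nfs show -vserver vs0 -fields v4.0-referrals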
Refer to Table 29, NFSv4 configuration options in clustered Data ONTAP, for more information.
NFSv4.x stateless migration assumes the following:
- All clients accessing the NFSv4.x server on the SVM are stateless.
- All clients accessing the NFSv4.x server on the SVM support migrations.
- The NFSv4.x clients do not have state (locking, share reservations, or delegations) established on the NFS server.

NFS migration support can be useful in the following scenarios in clustered Data ONTAP:
- Volume moves
- LIF migration/failover

Table: comparison of referrals, stateless migration, and pNFS, covering at what point redirection takes place (at mount versus only on mount), whether all traffic or only metadata is redirected, typical use cases (such as automounters and Oracle dNFS), and drawbacks.
9.3 NFSv4.1
NFSv4.1 support began in clustered Data ONTAP 8.1. NFSv4.1 is considered a minor version of NFSv4.
Even though the NFSv4.1 RFC 5661 suggests that directory delegations and session trunking are
available, there is currently no client support, nor is there currently support in clustered Data ONTAP.
To mount a client using NFSv4.1, there must be client support for NFSv4.1. Check with the client vendor
for support for NFSv4.1. Mounting NFSv4.1 is done with the minorversion mount option.
Example:
# mount -o nfsvers=4,minorversion=1 10.63.3.68:/unix /unix
# mount | grep unix
10.63.3.68:/unix on /unix type nfs
(rw,nfsvers=4,minorversion=1,addr=10.63.3.68,clientaddr=10.228.225.140)
To enable NFSv4.1:
cluster::> vserver nfs modify -vserver test_vs1 -v4.0 enabled -v4.1 enabled
NetApp highly recommends using the latest patched general-availability release of the client OS to
leverage the advantages of any and all pNFS bug fixes.
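As a sketch (vs0 is an illustrative SVM name), pNFS is enabled on top of NFSv4.1 as follows:
cluster::> vserver nfs modify -vserver vs0 -v4.1 enabled -v4.1-pnfs enabled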
The device information is cached to the local node's NAS Volume Location Database for improved
performance.
To see pNFS devices in the cluster, use the following diag-level command:
cluster::> set diag
cluster::*> vserver nfs pnfs devices cache show
pNFS has three main components:
- Metadata server: responsible for maintaining metadata that informs the clients of the file locations.
- Data server: stores and serves the file data.
- Clients: obtain layout information from the metadata server and then perform I/O directly against the data servers.
These components leverage three different protocols. The control protocol is the way the metadata and
data servers stay in sync. The pNFS protocol is used between clients and the metadata server. The storage
protocol is used between the clients and the data servers; pNFS supports file-, block-, and object-based
storage protocols, but NetApp currently only supports file-based pNFS.
Figure 11) pNFS data workflow.
NFSv4.1 Delegations
In clustered Data ONTAP 8.2, support for NFSv4.1 delegations was added. NFSv4.1 delegations are very
similar to NFSv4.0 delegations, but are part of the v4.1 protocol rather than v4.0. The following is a table
that covers the new additions to NFSv4.1 and how they benefit an environment over NFSv4.0. These
additions are covered in detail in RFC 5661.
Table 14) NFSv4.1 delegation benefits. Among the additions: EXCHANGE_ID is used, and the
OPEN4_SHARE_ACCESS_WANT_DELEG_MASK family of flags (OPEN4_SHARE_ACCESS_WANT_NO_PREFERENCE,
OPEN4_SHARE_ACCESS_WANT_READ_DELEG, OPEN4_SHARE_ACCESS_WANT_WRITE_DELEG,
OPEN4_SHARE_ACCESS_WANT_ANY_DELEG, and OPEN4_SHARE_ACCESS_WANT_NO_DELEG) allows a client to
express its delegation preference on OPEN.
For information regarding pNFS with RHEL 6.4, see TR-4063: Parallel Network File System Configuration
and Best Practices for Clustered Data ONTAP.
NFSv4.1 Sessions
NFSv4.1 sessions have been available since clustered Data ONTAP 8.1. As per RFC 5661:
A session is a dynamically created, long-lived server object created by a client and used over time from
one or more transport connections. Its function is to maintain the server's state relative to the
connection(s) belonging to a client instance. This state is entirely independent of the connection itself,
and indeed the state exists whether or not the connection exists. A client may have one or more sessions
associated with it so that client-associated state may be accessed using any of the sessions associated
with that client's client ID, when connections are associated with those sessions. When no connections
are associated with any of a client ID's sessions for an extended time, such objects as locks, opens,
delegations, layouts, and so on, are subject to expiration. The session serves as an object representing a
means of access by a client to the associated client state on the server, independent of the physical
means of access to that state.
A single client may create multiple sessions. A single session MUST NOT serve multiple clients.
Best Practice 22: NFSv4.x Version Recommendation (See Best Practice 23)
Use NFSv4.1 with clustered Data ONTAP when possible. Performance, NDO, and features in
NFSv4.1 surpass those in NFSv4.0.
9.4
When specifying a mount, you can apply a variety of mount options to help resiliency and performance.
The following is a list of some of those options, as well as information to assist with setting these options.
Keep in mind that some application and/or OS vendors might have different recommendations for mount
option best practices. It is important to consult with the application and/or OS vendors so that the correct
options are used. For example, Oracle mount best practices are covered in TR-3633: Oracle Databases
on Data ONTAP. This technical report focuses on a general catch-all configuration.
Mount Options
The following is a list of typical mount options and suggestions on how to apply them with NetApp storage
using NFS (v3 and v4.x). In most cases, mount options are standardized. Mount options might vary
depending on the version and variety of Linux being used. Always consult the man pages of the Linux
kernel being used to verify that the mount options exist for that version.
Mount options are specified using the -o flag. Mount options such as noacl and nolock do not apply to
NFSv4 and NetApp does not recommend them.
If NFSv4 is enabled on the NetApp storage system, then newer clients will negotiate NFSv4 on their own
without mount options. Older clients will use NFSv3 unless specified. If NFSv4 is disabled on the NetApp
storage system, clients fall back to using NFSv3.
For business-critical NFS exports, NetApp recommends using hard mounts. NetApp strongly
discourages the use of soft mounts.
intr
intr allows NFS processes to be interrupted when a mount is specified as a hard mount. This policy is
deprecated in new clients such as RHEL 6.4 and is hardcoded to nointr. Kill -9 is the only way to
interrupt a process in newer kernels.
For business-critical NFS exports, NetApp recommends using intr with hard mounts in clients that
support it.
nfsvers
nfsvers does not apply to NFSv4 mounts. To specify NFSv4, use the -t option for type (for example, -t nfs4).
noexec
noexec prevents the execution of binaries on an NFS mount.
NetApp recommends use of this option only when advised by the application or client vendor.
nosuid
nosuid prevents the setting of set-user-identifier or set-group-identifier bits. Doing so prevents remote
users from gaining higher privileges by running a setuid program.
NetApp recommends use of this option for better security on NFS mounts.
port
port allows the specification of which port an NFS mount will leverage. By default, NFS uses port 2049
for communication with NetApp storage. If a different port is specified, firewall rules should be taken into
account, because communication can be blocked if an invalid port is specified.
NetApp does not recommend changing this value unless necessary.
In the case of automounter, NetApp recommends the following change in the auto.home, auto.misc, or
auto.* files:
-fstype=nfs4,rw,proto=tcp,port=2049
sec
sec specifies the type of security to utilize when authenticating an NFS connection.
sec=sys is the default setting, which uses local UNIX UIDs and GIDs by means of AUTH_SYS to
authenticate NFS operations.
sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users.
sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS operations
using secure checksums to prevent data tampering.
sec=krb5p uses Kerberos V5 for user authentication and integrity checking and encrypts NFS traffic to
prevent traffic sniffing. This is the most secure setting, but it also involves the most performance
overhead.
Data ONTAP 7-Mode supports all security varieties specified.
Clustered Data ONTAP supports only sys and krb5 currently.
NetApp recommends using sec only when clients have been configured to use the specified security
mode.
tcp or udp
tcp or udp is used to specify whether the mount will use TCP or UDP for transport.
NFSv4 only supports TCP, so this option does not apply to NFSv4.
NetApp recommends TCP for all mounts, regardless of version, provided the client supports mounting
using TCP.
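Putting these options together, a general-purpose mount of a business-critical NFSv3 export might look like the following sketch (the server name, export path, and mount point are illustrative; always defer to application and OS vendor recommendations):
# mount -t nfs -o hard,intr,nosuid,proto=tcp,vers=3,sec=sys server:/vol/export /mnt/export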
10 NFS Auditing
NFS auditing is new in clustered Data ONTAP 8.2. In 7-Mode, NFS auditing required CIFS to function
properly. That is no longer the case in clustered Data ONTAP. NFS auditing can now be set up
independently and does not require a CIFS license.
The following section covers the setup and use of NFS auditing.
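A minimal sketch of enabling auditing (the SVM name nfs, the destination path /unix, and the XML log format are illustrative values) looks like the following:
cluster::> vserver audit create -vserver nfs -destination /unix -format xml
cluster::> vserver audit enable -vserver nfs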
This command enables auditing for NFS and CIFS access on the junction path /unix for the SVM named
nfs.
After auditing is enabled on the clustered Data ONTAP system, the AUDIT ACEs should be created.
Best Practice 23: Audit ACE Recommendation (See Best Practice 24)
If using inheritable audit ACEs, be sure to create at least one inheritable allow or deny ACE on the
parent directory to avoid access issues. See bug 959337 for details.
ACE permissions include read, write, execute, append, and delete. For information about all of the ACE
permissions in NFSv4, see http://linux.die.net/man/5/nfs4_acl.
Each Linux client will use a different method of assigning NFSv4.x ACEs. In RHEL/CentOS/Fedora, the
commands nfs4_setfacl and nfs4_getfacl are used.
An AUDIT ACE will leverage flags to specify if auditing should be for successes, failures, or both. AUDIT
ACEs will use the ACE type of U.
Figure 12) Example of setting NFSv4 audit ACE.
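As an illustrative sketch (the principal and path are hypothetical), an audit ACE that logs both successful and failed read/write access might be set as follows:
[root@nfsclient /]# nfs4_setfacl -a U:SF:ldapuser@domain.netapp.com:rw /unix
[root@nfsclient /]# nfs4_getfacl /unix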
After the AUDIT ACE is applied and the user that is being audited attempts access, the events get logged
to an XML file on the volume.
NFS on Windows
To use NFS with clustered Data ONTAP systems earlier than version 8.2.3 and 8.3.1 on Windows
operating systems, server administrators can install third-party tools, such as the Hummingbird/OpenText
NFS Client. Red Hat's Cygwin emulates NFS but leverages the SMB protocol rather than NFS, which
requires a CIFS license. True Windows NFS is available natively only through Services for Network File
System or third-party applications such as Hummingbird/OpenText.
The way that Windows uses NLM is with nonmonitored lock calls. The following nonmonitored lock calls
are required for Windows NFS support:
NLM_SHARE
NLM_UNSHARE
NLM_NM_LOCK
These lock calls are currently not supported in versions of clustered Data ONTAP earlier than 8.3.1 or in
versions of clustered Data ONTAP earlier than 8.2.3. Bug 296324 covers this point. Check the NFS
Interoperability Matrix for updates.
Note: PCNFS, WebNFS, and HCLNFS (legacy Hummingbird NFS client) are not supported with
clustered Data ONTAP storage systems and there are no plans to include support for these
protocols.
Network Status Monitor (NSM) is not supported in Windows NFS. As such, volume moves and
storage failovers can cause disruptions that might not be seen on NFS clients that do support NSM.
If using Windows NFS on an SVM, the following options need to be set to disabled: -enable-ejukebox and
-v3-connection-drop.
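A sketch of disabling them follows (the exact value keywords, such as enabled/disabled versus true/false, can vary by release, so verify with the vserver nfs modify man page):
cluster::> vserver nfs modify -vserver SVM -enable-ejukebox false -v3-connection-drop disabled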
Note:
These options are enabled by default. Disabling them will not harm other NFS clients, but might
cause some unexpected behavior.
Windows NFS clients will not be able to properly see the used space and space available through the
df command.
Windows NFS is typically considered for:
- Applications that run on Windows and require NFS connectivity and/or Linux-style commands and functions (such as GETATTR, SETATTR, and so on)
- Cases in which there is a wish to leverage the NFS protocol rather than CIFS

Although Windows can be used to leverage NFS connectivity, it might make more sense to use CIFS and
the newer features of the latest SMB version that Windows provides for performance and NDO
functionality. Additionally, using Windows NFS with clustered Data ONTAP requires some considerations,
covered later.

One scenario in which NLM might not be required is files that are accessed and written to by only one user or application.
Note: There might be other scenarios in which NLM is not required. Contact the OS and/or application
vendor for recommendations.
The following is an example of a policy configured to allow only single-client access to an export using the
-clientmatch option in the export policy rule.
cluster::> export-policy rule show -vserver SVM -policyname default -ruleindex 1
  (vserver export-policy rule show)

Vserver: SVM
Policy Name: default
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 10.228.225.140
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
In the preceding example, access is granted only to the 10.228.225.140 client. All other clients will be
unable to mount using NFS. If control over CIFS access to the export is desired, then export policy rules
can be enabled for CIFS shares using the following command in advanced privilege:
cluster::> set advanced
cluster::*> cifs options modify -vserver SVM -is-exportpolicy-enabled true
For Apple client environments, there are two common approaches: create a user on the cluster or in a name
service server with UID 501 to authenticate all Apple users, or change the UID on the Apple OS for each
user who intends to use NFS on Apple.
Note:
When using the Finder to mount NFS, mount options cannot be specified.
Example:
In 8.1.x:
cluster::> set diag
cluster::*> diag nblade cifs credentials show -vserver vs0 -unix-user-name root
Getting credential handles.
1 handles found....
Getting cred 0 for user.
Global Virtual Server: 8
Cred Store Uniquifier: 23
Cifs SuperUser Table Generation: 0
Locked Ref Count: 0
Info Flags: 1
Alternative Key Count: 0
Additional Buffer Count: 0
Allocation Time: 0 ms
Hit Count: 0
Locked Count: 0
Windows Creds:
Flags: 0
Primary Group: S-0-0
Unix Creds:
Flags: 0
Domain ID: 0
Uid: 0
Gid: 1
Additional Gids:
SecD Caching
SecD is a user space application that runs on a per-node basis. The SecD application handles name
service lookups such as DNS, NIS, and LDAP, as well as credential queries, caching, and name
mapping. Because SecD is responsible for so many functions, caching plays an important role in its
operations. SecD contains two types of caches: LRU and DB style.
LRU-Style Caches
LRU caches are Least Recently Used cache types and age out individual entries at a specified timeout
value based on how long it has been since the entry was last accessed. LRU cache timeout values are
viewable and configurable using diag-level commands in the cluster.
In the following example, the sid-to-name cache (responsible for Windows SID to UNIX user name
caching) allows a default of 2,500 max entries, which stay in cache for 86,400 seconds:
cluster::> set diag
cluster::*> diag secd cache show-config -node node1 -cache-name sid-to-name
Current Entries: 0
Max Entries: 2500
Entry Lifetime: 86400
Caches can be manually flushed, but can only be flushed one at a time on a per-SVM basis:
cluster::> set diag
cluster::*> diag secd cache clear -node node1 -vserver vs0 -cache-name sid-to-name
DB-Style Caches
DB-style caches are caches that time out as a whole. These caches do not have maximum entries
configured and are rarer than LRU-style caches.
DB-style caches can be flushed only in their entirety, and both methods of doing so disrupt the
node. One way to flush is to reboot the node using storage failover/giveback. The other method is to
restart the SecD process using the following diag-level command:
cluster::> set diag
cluster::*> diag secd restart node node1
NetApp does not recommend adjusting SecD caches unless directed by NetApp Support.
By default, NTFS security-style volumes are set to 777 permissions, with a UID and a GID of 0, which
generally translates to the root user. NFS clients will see these volumes in NFS mounts with this security
setting, but users will not have full access to the mount. The access is determined by which Windows
user the NFS user is mapped to.
The cluster uses the following order of operations to determine the name mapping:
1. 1:1 implicit name mapping.
2. Name mapping rules.
   a. If no 1:1 name mapping exists, SecD checks for name mapping rules.
   b. Example: WINDOWS\john maps to UNIX user unixjohn.
3. Default Windows/UNIX user.
   a. If no 1:1 name mapping and no name mapping rule exist, SecD checks the NFS server for a default Windows user or the CIFS server for a default UNIX user.
   b. By default, pcuser is set as the default UNIX user in CIFS servers when created using System Manager 3.0 or vserver setup.
4. If none of the preceding mappings exist, the name mapping fails.
   a. In most cases in Windows, this failure manifests as the error "A device attached is not functioning."
   b. In NFS, a failed name mapping manifests as access or permission denied.
Name mapping and name switch sources will depend on the SVM configuration. See the File Access
and Protocols Management Guide for the specified version of clustered Data ONTAP for configuration
details.
Best Practice 24: Name Mapping Recommendation (See Best Practice 25)
It is a best practice to configure an identity management server such as LDAP with Active Directory
for large multiprotocol environments. See TR-4073: Secure Unified Authentication for more
information about LDAP.
Add CIFS license.
Note: None of the CIFS-related operations can be initiated without adding the CIFS license key.

Enable CIFS.
cluster::> vserver modify -vserver test_vs1 -allowed-protocols nfs,cifs
Verification: the vserver show output for test_vs1 lists nfs and cifs as allowed protocols (with fcp and iscsi disallowed) for the running Vserver.

Configure DNS server.
A DNS server must be created and configured properly to provide the name services to resolve the LIF names assigned to the network ports.
Verification:
Domains: domain.netapp.com
Name Servers: 172.17.32.100
Enable/Disable DNS: enabled
Timeout (secs): 2
Maximum Attempts: 1
Create CIFS server.
cluster::> cifs create -vserver test_vs1 -cifs-server test_vs1_cifs -domain
domain.netapp.com
Verification
cluster::> cifs server show
            Server          Domain/Workgroup Authentication
Vserver     Name            Name             Style
----------- --------------- ---------------- --------------
test_vs1    TEST_VS1_CIFS   DOMAIN           domain
Create CIFS share.
cluster::> cifs share create -vserver test_vs1 -share-name testshare1 -path
/testshare1
Verification
Vserver: test_vs1
Share: testshare1
CIFS Server NetBIOS Name: TEST_VS1_CIFS
Path: /testshare1
Share Properties: oplocks, browsable, changenotify
Share ACL: Everyone / Full Control
cluster::> unix-user create -vserver test_vs1 -user pcuser -id 65534 -primary-gid 65534 -full-name pcuser
Verification:
cluster::> unix-user show -vserver test_vs1
(vserver services unix-user show)
Vserver        Name                ID
-------------- ------------------- ----------
test_vs1       pcuser              65534
test_vs1       root                0
2 entries were displayed.
Before you attempt name mapping, verify that the default UNIX user is mapped to pcuser. By default, no
UNIX user is associated with the Vserver. Then attempt to map the CIFS share. For more information,
including how to create name mapping rules, see the File Access and Protocols Management Guide.
Local UNIX users and groups are defined by a user name and a UID/GID. Users and groups can be either
created manually or loaded from a URI. For information about the procedure to load from the URI, see the
File Access and Protocol Guide for the release of clustered Data ONTAP running on the system.
Note:
UID and GID numbers can use a range of 0 to 4,294,967,295 (the largest 32-bit unsigned integer
possible).
Using local users and groups can be beneficial in smaller environments with a handful of users, because
the cluster would not need to authenticate to an external source. This prevents latency for lookups, as
well as the chance of failed lookups due to failed connections to name servers.
If you use NFSv4.x, it makes sense to create a local group for wheel on the clustered Data ONTAP
SVM using GID 10 (or whichever GID wheel is used on your clients). Doing so helps prevent issues
with resolving the wheel group.
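For example, a local wheel group could be created on the SVM as follows (vs0 and GID 10 are illustrative; match the GID that wheel uses on your clients):
cluster::> unix-group create -vserver vs0 -name wheel -id 10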
For larger environments, NetApp recommends using a name server such as NIS or LDAP to service
UID/GID translation requests.
Best Practice 26: Primary GIDs (See Best Practice 27)
UNIX users will always have primary GIDs. When specifying a primary GID, whether with local
users or name services, be sure that the primary GID exists in the specified nm-switch and ns-switch
locations. Using primary GIDs that do not exist can cause authentication failures in clustered
Data ONTAP 8.2 and earlier.
In versions earlier than clustered Data ONTAP 8.3, there was no hard limit on local users and
groups. However, that does not mean that there is no actual limit. NetApp highly recommends not
exceeding the local UNIX user and group limits as defined in the following table when using
clustered Data ONTAP versions earlier than 8.3.
Note:
This limit is for local UNIX users and groups. Local CIFS users and groups (vserver cifs
users-and-groups) have an independent limit and are not affected by this limit.
Table 15) Limits on local users and groups in clustered Data ONTAP.
Local UNIX user limit in 8.3: 32,768 (default), 65,536 (max)
Local UNIX group and member limit in 8.3: 32,768 (default), 65,536 (max)
As previously mentioned, the local UNIX user and group limits are cluster-wide and affect clusters with
multiple SVMs. Thus, if a cluster has 4 SVMs, then the maximum number of users in each SVM must add
up to the maximum limit set on the cluster.
For example, if the first three SVMs already consume most of the cluster-wide limit, SVM4 would then
have only 23,516 local UNIX users available to be created.
Any attempted creation of any UNIX user or group beyond the limit would result in an error message.
Example:
cluster::> unix-group create -vserver NAS -name test -id 12345
Error: command failed: Failed to add "test" because the system limit of {limit number}
"local unix groups and members" has been reached.
The limits are controlled by the following commands in the advanced privilege level:
cluster::*> unix-user max-limit
modify show
Best Practice 28: Local UNIX Users and Group Limits (See Best Practice 1)
If a cluster requires more than the allowed limit of UNIX users and groups, an external name service
such as LDAP should be leveraged. Doing so bypasses the limits on the cluster and allows a
centralized mechanism for user and group lookup and management.
7-Mode Mapping
7-Mode name mapping entries translate to clustered Data ONTAP name mapping rules created with
-direction, -position, -pattern, and -replacement:
X => Y : direction Win-UNIX
X <= Y : direction UNIX-Win
X == Y : both directions (UNIX-Win and Win-UNIX), created as two rules with pattern/replacement X/Y and Y/X
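As a sketch (the SVM, domain, and user names are illustrative, and the backslash in the pattern typically must be escaped), an explicit Windows-to-UNIX rule equivalent to the 7-Mode X => Y form might be created as follows:
cluster::> vserver name-mapping create -vserver vs0 -direction win-unix -position 1 -pattern "DOMAIN\\\\john" -replacement "unixjohn"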
Note:
For further information about CIFS configuration and name mapping, refer to TR-4191: Best
Practices Guide for Clustered Data ONTAP 8.2.x and 8.3 Windows File Services.
Note:
1:1 name mappings do not require specific rules in clustered Data ONTAP (such as X == Y). This
implicit name mapping is done by default. Additionally, as of clustered Data ONTAP 8.2.1, trusted
domain name mapping is supported. For more information, see the File Access and Protocol
Guides.
Infinite Volumes use only unified security style. This style is not currently available for NetApp FlexVol
volumes.
For detailed information about Infinite Volumes, see TR-4037: Introduction to NetApp Infinite Volume and
TR-4178: Infinite Volume Deployment and Implementation Guide.
The available security styles are UNIX, NTFS, and mixed.

Limitations
These limitations apply to all objects in NetApp storage (files, directories, volumes, qtrees, and
LUNs).
UNIX and Windows clients can view ACLs and permissions regardless of the on-disk effective
security style; that is, regardless of the protocol previously used to set ownership or permissions.
UNIX and Windows clients can modify ACLs and permissions regardless of the on-disk effective
security style; that is, regardless of the protocol previously used to set ownership or permissions.
UNIX mode bits can be merged into an existing ACL regardless of the on-disk effective security style;
that is, regardless of whether the ACL is an NFSv4 ACL or an NTFS ACL.
ACEs in NTFS ACLs can represent UNIX or Windows principals (users or groups).
Current NFSv4 clients and servers support a single NFSv4 domain, and all principals must be
mapped into that NFSv4 domain. For this reason, NFSv4 ACLs set by NFSv4 clients contain only
NFSv4 principals.
The following figure illustrates the NFS well-known principals (OWNER@, GROUP@, and
EVERYONE@) in unified style (on the right), contrasted with mixed style (on the left).
Figure 14) Mixed-style (left) and unified-style (right) mode bit display on Windows.
The NFS well-known principals (OWNER@, GROUP@, and EVERYONE@) are defined in the NFSv4
specification. There is a significant difference between these principals in an ACL and the UNIX mode
classes (owner, owning group, and other). The NFS well-known principals are defined in Table 19.
Table 19) NFS well-known principal definitions.
Who         Description
OWNER@      The current owner of the file or directory.
GROUP@      The owning group of the file or directory.
EVERYONE@   Everyone, including the owner and the owning group (not the same as UNIX "other").
The UNIX mode classes are specific and exclusive. For example, permissions granted to other exclude
the owner and the owning group. Thus a mode mask of 007 grants rwx permission to everyone except
members of the owning group and the owner.
The well-known NFS principals are inclusive, and an ACL is processed in order to determine the sum of
the permissions granted to each principal. Thus an ACL granting FullControl to EVERYONE@ will result
in a mode mask of 777.
While recognizing that it is not possible to represent the entirety of an ACL in mode bits, the NFSv4.1
specification provided clarification to potential ambiguities in the original NFSv4 specification:
Interpreting EVERYONE@ as equivalent to UNIX other does not follow the intent of the
EVERYONE@ definition. The intent of EVERYONE@ is literally everyone, which includes the owner
and owning group.
A server that supports both mode and ACL must take care to synchronize the mode bits with
OWNER@, GROUP@, and EVERYONE@. This way, the client can see semantically equivalent
access permissions whether the client asks for the mode attributes or the ACL.
NTFS security style explicitly blocks attempts to change ownership or permissions using NFS.
NTFS security style always displays the most permissive mode permissions possible to NFS clients,
which are calculated by summing all the permissions granted across the ACL. Thus NFS clients often
display 777 regardless of the actual permissions on a file.
The ability of the superuser (root) or regular users to change file ownership can be controlled and
restricted using the -superuser and -chown-mode options, which are described in subsequent
sections of this document.
It is not currently possible to completely block permission changes using NFSv3, but the capability of
an NFSv3 client to change an ACL is limited. An NFSv3 client can only affect (add, remove, modify)
the NFS well-known principal ACEs (OWNER@, GROUP@, and EVERYONE@). It is not possible
for an NFSv3 client to add, remove, or modify any Windows ACE in the ACL.
There was an option in Data ONTAP 7-Mode to generate least permissive mode bits in NTFS security
style but that option is not available in clustered Data ONTAP.
Permissions are calculated as directed in the NFSv4.1 specification. The algorithm uses the NFS
well-known principal ACEs (OWNER@, GROUP@, and EVERYONE@), the Windows owner, and
the Windows Everyone group when calculating the UNIX mode. Thus an NTFS ACL that grants
FullControl to OWNER@ and Read+Execute to GROUP@ would generate a mode of 750.
The generated mode might be 000 if an ACL contains Windows group ACEs but no Windows
Everyone ACE and none of the NFS well-known principals.
If a mode of 000 is disconcerting or undesirable, the OWNER@ ACE can be included in an NTFS
ACL with little or no impact on access control on the file because the UNIX owner always has the
right to read and change permissions. Note that this unconditional right permitting the UNIX owner to
read and change permissions does not automatically result in an OWNER@ ACE being included in
an ACL.
Figure 15 illustrates an NTFS ACL in unified security style containing both NFS well-known principals and
Windows groups.
Figure 15) UNIX permission in an NTFS ACL in unified style.
ACLs and permissions can be viewed by UNIX and Windows clients regardless of the on-disk
effective security style; that is, regardless of the protocol previously used to set ownership or
permissions.
ACLs and permissions can be modified by UNIX and Windows clients regardless of the on-disk
effective security style; that is, regardless of the protocol previously used to set ownership or
permissions.
UNIX mode bits can be merged into an existing ACL regardless of the on-disk effective security style;
that is, regardless of whether the ACL is an NFSv4 ACL or an NTFS ACL.
ACEs in NTFS ACLs can represent UNIX or Windows principals (users or groups).
Current NFSv4 clients and servers support a single NFSv4 domain, and all principals must be
mapped into that NFSv4 domain. For this reason, NFSv4 ACLs set by NFSv4 clients contain only
NFSv4 principals.
To control the NFSv4 ACL preservation option, use the following command:
cluster::> set advanced
cluster::*> nfs server modify -vserver [SVM] -v4-acl-preserve enabled
In clustered Data ONTAP, it is possible to view the effective security style and ACLs of an object in
storage by using the vserver security file-directory command set. Currently, the command
does not autocomplete for SVMs with content repository enabled, so the SVM name must be entered
manually.
Example:
::> vserver security file-directory show -vserver infinite -path /infinitevolume/CIFS
Vserver: infinite
File Path: /infinitevolume/CIFS
Security Style: mixed
Effective Style: ntfs
DOS Attributes: 10
DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
Unix User Id: 500
Unix Group Id: 512
Unix Mode Bits: 777
Unix Mode Bits in Text: rwxrwxrwx
ACLs: NTFS Security Descriptor
      Control:0x8504
      Owner:DOMAIN\Administrator
      Group:DOMAIN\Domain Users
      DACL - ACEs
        ALLOW-S-1-520-0-0x1f01ff-OI|CI
        ALLOW-S-1-520-1-0x1201ff-OI|CI
        ALLOW-S-1-520-2-0x1201ff-OI|CI
        ALLOW-DOMAIN\unified1-0x1f01ff-OI|CI
        ALLOW-DOMAIN\Administrator-0x1f01ff-OI|CI
        ALLOW-DOMAIN\unifiedgroup-0x1f01ff-OI|CI
::> vserver security file-directory show -vserver infinite -path /infinitevolume/NFS

Vserver: infinite
File Path: /infinitevolume/NFS
Security Style: mixed
Effective Style: unix
DOS Attributes: 10
DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
Unix User Id: 100059
Unix Group Id: 10008
Unix Mode Bits: 777
Unix Mode Bits in Text: rwxrwxrwx
ACLs: NFSV4 Security Descriptor
      Control:0x8014
      DACL - ACEs
        ALLOW-S-1-8-10001-0x16019f
        ALLOW-S-1-520-0-0x1601ff
        ALLOW-S-1-520-1-0x1201ff-IG
        ALLOW-S-1-520-2-0x1201ff
In this example, the SVM named infinite contains an Infinite Volume with an effective UNIX security style
folder called NFS and an effective NTFS security style folder called CIFS. The effective style reflects the protocol
that last applied an ACL to the object and, although both folders indicate mixed security style, the behavior is
unified security style. Table 20 shows the main differences between the mixed and unified security styles.
Table 20) Mixed mode versus unified security style.
Note:
The effective style indicates the protocol most recently used to set the ACL in all security styles.
The difference in unified security style is that the effective style does not indicate ACL
management restrictions or limitations.
Example:
::> vserver security file-directory show -vserver infinite -path /infinitevolume/NFS

Vserver: infinite
File Path: /infinitevolume/NFS
Security Style: mixed
Effective Style: unix
DOS Attributes: 10
DOS Attributes in Text: ----D---
Expanded Dos Attributes: -
Unix User Id: 100059
Unix Group Id: 10008
Unix Mode Bits: 777
Unix Mode Bits in Text: rwxrwxrwx
ACLs: NFSV4 Security Descriptor
      Control:0x8014
      DACL - ACEs
        ALLOW-S-1-8-10001-0x16019f
        ALLOW-S-1-520-0-0x1601ff
        ALLOW-S-1-520-1-0x1201ff-IG
        ALLOW-S-1-520-2-0x1201ff
The other NFSv4 ACLs listed on the object are the default EVERYONE@, GROUP@, and OWNER@
ACLs.
cluster::*> diag secd authentication translate -node node1 -vserver infinite -sid S-1-520-0
OWNER@ (Well known group)
cluster::*> diag secd authentication translate -node node1 -vserver infinite -sid S-1-520-1
GROUP@ (Well known group)
cluster::*> diag secd authentication translate -node node1 -vserver infinite -sid S-1-520-2
EVERYONE@ (Well known group)
These default ACLs get set on every object and reflect the mode bit translation for NFSv3 backward
compatibility.
Example:
# ls -la | grep NFS
drwxrwxrwx. 2 unified1 unifiedgroup 4096 Nov  1 13:46 NFS
# nfs4_getfacl /infinitevol/NFS
A::test@domain.win2k8.netapp.com:rwatTnNcCy
A::OWNER@:rwaDxtTnNcCy
A:g:GROUP@:rwaDxtTnNcy
A::EVERYONE@:rwaDxtTnNcy
# chmod 755 NFS
# ls -la | grep NFS
drwxr-xr-x. 2 unified1 unifiedgroup 4096 Nov  1 13:46 NFS
# nfs4_getfacl /infinitevol/NFS
A::test@domain.win2k8.netapp.com:rwatTnNcCy
A::OWNER@:rwaDxtTnNcCy
A:g:GROUP@:rxtncy
A::EVERYONE@:rxtncy
To avoid this behavior, set the v4-id-domain option in the NFS server even if NFSv4.x is not being
used.
Example:
cluster::> nfs server modify -vserver infinite -v4-id-domain domain.win2k8.netapp.com
Return-generated returns default values for the attributes, which appear to the client as a file size of
0 and timestamps that are in the past. This is the default setting.
Wait causes the volume to return a RETRY error, which can cause some clients to appear to hang
because they retry the request indefinitely.
When an Infinite Volume is added, the following export policies exist on the SVM:
default
repos_namespace_export_policy
repos_restricted_export_policy
repos_root_readonly_export_policy
Vserver: IV
Policy Name: repos_namespace_export_policy
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 0
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
NTFS Unix Security Options: fail
Vserver NTFS Unix Security Options: -
Change Ownership Mode: restricted
Vserver Change Ownership Mode: -

Vserver: IV
Policy Name: repos_namespace_export_policy
Rule Index: 2
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: ::0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 0
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
NTFS Unix Security Options: fail
Vserver NTFS Unix Security Options: -
Change Ownership Mode: restricted
Vserver Change Ownership Mode: -

Vserver: IV
Policy Name: repos_root_readonly_export_policy
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: never
User ID To Which Anonymous Users Are Mapped: 0
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
NTFS Unix Security Options: fail
Vserver NTFS Unix Security Options: -
Change Ownership Mode: restricted
Vserver Change Ownership Mode: -

Vserver: IV
Policy Name: repos_root_readonly_export_policy
Rule Index: 2
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: ::0/0
RO Access Rule: any
RW Access Rule: never
User ID To Which Anonymous Users Are Mapped: 0
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
NTFS Unix Security Options: fail
Vserver NTFS Unix Security Options: -
Change Ownership Mode: restricted
Vserver Change Ownership Mode: -
The policies named default and repos_restricted_export_policy do not contain any rules by
default.
For information about how these rules affect access, see section 3.4, Translation of NFS Export Policy
Rules from 7-Mode to Clustered Data ONTAP.
The chown-mode option is restricted by default and only available at advanced privilege.
cluster::> set advanced
cluster::*> vserver nfs modify -vserver [SVM] -chown-mode <restricted|unrestricted|use_export_policy>
In UNIX or mixed security style, this option applies only to NFSv4 ACLs. This option is not relevant in
NTFS security style because NFS permission change operations are blocked.
When v4-acl-preserve is enabled, it is not possible to affect (add, modify, or remove) Windows ACEs
using NFSv3. A chmod command can manipulate the NFS well-known principal ACEs (OWNER@,
GROUP@, and EVERYONE@), but it cannot manipulate any other ACEs in the ACL.
When v4-acl-preserve is disabled, a chmod command replaces an existing NTFS or NFSv4 ACL with the
mode bits specified by the command.
The v4-acl-preserve option is enabled by default and only available at advanced privilege.
cluster::> set advanced
cluster::*> vserver nfs modify -vserver [SVM] -v4-acl-preserve <enabled|disabled>
SecD Troubleshooting
SecD provides a number of diag-level commands to troubleshoot authentication and permissions issues.
The following information shows examples of commands to use for various scenarios. All commands are
at the diagnostic level (denoted by * in the CLI prompt). Exercise caution while at the diagnostic level.
Check name mapping functionality:
cluster::*> diag secd name-mapping show -node node1 -vserver vs0 -direction unix-win -name
ldapuser
ldapuser maps to WIN2K8\ldapuser
Check user name credentials and group membership as SecD sees them:
cluster::*> diag secd authentication show-creds -node node1 -vserver vs0 -unix-user-name ldapuser
-list-name true -list-id true
UNIX UID: 55 (ldapuser) <> Windows User: S-1-5-21-2216667725-3041544054-3684732124-1123
(DOMAIN\ldapuser (Domain User))
GID: 513 (Domain Users)
Supplementary GIDs:
503 (unixadmins)
Windows Membership:
S-1-5-21-2216667725-3041544054-3684732124-513
DOMAIN\Domain Users (Domain group)
S-1-5-21-2216667725-3041544054-3684732124-1108
DOMAIN\unixadmins (Domain group)
S-1-5-32-545
BUILTIN\Users (Alias)
User is also a member of Everyone, Authenticated Users, and Network Users
Privileges (0x80):
Note:
Restarting SecD is not necessary in most cases and should be done only as a last resort.
Restarting SecD is not needed to set log tracing. It is used only to clear all caches at once, fix
configuration replay issues, or clear a hung process.
Cannot Mount
The following section covers common errors and scenarios in which an NFS mount fails to a clustered
Data ONTAP system. The section also covers how to resolve the issue.
Note:
The 7-Mode option nfs.mountd.trace is currently not available in clustered Data ONTAP.
Common errors, what to check, and how to resolve them span both the NFS client and the NFS server, including:
- Is SecD running on the node?
- Is the export policy clientmatch incorrect, or is the RO access rule set to an incorrect value?
- TCP/UDP settings on the NFS server, the network/firewall path, and the SVM configuration.
- The mount syntax used on the client.
- If volume permission changes have been made, has the volume been remounted?
- Is something already mounted to that mount point?
- For "Operation not permitted" errors, check both the NFS client and the NFS server, including whether superuser is set properly in the export policy.
For information regarding mount issues when using NFS Kerberos, see TR-4073: Secure Unified Authentication.
Additional errors, what to check, and how to resolve them include:
- Ownership or permission errors: on the NFS server, is chown allowed by anyone other than root (see the -chown-mode option)?
- "Not a directory" errors when traversing the Snapshot directory: check the NFS client.
- Other client-side failures: check the /var/log/messages file on the NFS client as well as the NFS server configuration.
The following command provides information from each individual volume about NFS workload and
latency.
cluster::*> statistics show-periodic -node node1 -object volume -instance vs2_nfs4_data3
The following command identifies the type of protocol in use and the details of the RPC calls. These are
available in advanced privilege.
cluster::*> statistics oncrpc show-rpc-calls -node node1 -protocol tcp
Node: node1
Transport Protocol: tcp
Bad Procedure Calls: 0
Bad Length Calls: 0
Bad Header Calls: 8          0/s:16s
Bad Calls: 8                 0/s:16s
Bad Program Calls: 0
Total Calls: 116491426       58/s:16s
Per-client statistics are also available to identify which client IP addresses are generating what NFS traffic
in clustered Data ONTAP. These are available in advanced privilege.
cluster::> set diag
cluster::*> statistics settings modify -client-stats enabled
Warning: System performance may be significantly impacted. Are you sure?
Do you want to continue? {y|n}: y
cluster::*> statistics show -object client
In clustered Data ONTAP, use the locks show command to list all the locks assigned to files residing in
a specific volume under an SVM.
cluster::*> vserver locks show
The locks break command can be used to remove a lock on a particular file.
cluster::*> locks break -vserver vs2 -volume vs2_nfs4_data2 -lif vs2_nfs4_data1 -path
/nfs4_ds2/app/grid/product/11.2.0/dbhome_1/oc4j/j2ee/home/persistence/jms.state
Perfstat8 is also available for clustered Data ONTAP for use in performance collection. Each version
of Perfstat improves data collection for clusters.
"Admin" and "diag" user access is needed to run the perfstat command.
The following command illustrates how to capture a perfstat for a clustered Data ONTAP cluster. The
cluster management IP should always be used. Perfstat will discern the nodes in the cluster and collect
data for each node. In this example, the cluster management IP is 172.17.37.200 for a 4-node cluster.
This perfstat collects 24 iterations with a sleep time of 300 seconds between iterations. More examples
are available from the Perfstat8 tool download page:
https://support.netapp.com/NOW/download/tools/perfstat/perfstat8.shtml
A valid NetApp Support account is required for access to the perfstat8 tool.
[root@linux]# ./perfstat8 --verbose -i 24,300 172.17.37.200
NFS per-client statistics do not exist under statistics in 8.2; they exist only under statisticsv1.
In 8.2.1, per-client statistics work properly with the regular statistics commands.
Regular statistics commands can implement multiple counters. These will be separated by a pipe
symbol rather than comma-separated, as seen in previous versions of clustered Data ONTAP.
Example:
cluster::> statistics show -object zapi|aggregate
Note:
Currently there is no way to sort top clients in per-client statistics. Newer releases of clustered
Data ONTAP will introduce new performance improvements and bug fixes so that statistics-v1 will
no longer be necessary.
For more information regarding performance in clustered Data ONTAP, see TR-4211: NetApp Storage
Performance Primer for Clustered Data ONTAP 8.2.
The output of these statistics will show masks for specific virtual machines. The masks are described
later.
Mask values are assigned per virtual machine application (for example, None, ESX/ESXi, and Citrix Xen).
If more than one virtual machine application is being used, then the masks are added together to
determine which ones are in use. For example, if ESX/ESXi and Red Hat KVM are in use, then the masks
would be 1 + 4 = 5.
To collect these statistics, the sample must be started and stopped using the statistics start
command. The following is an example of what those statistics look like in diagnostic privilege.
Example:
cluster::> set diag
cluster ::*> statistics start -object wafl
Statistics collection is being started for Sample-id: sample_454
cluster ::*> statistics stop
Statistics collection is being stopped for Sample-id: sample_454
cluster::*> statistics show -object wafl -counter wafl_nfs_application_mask
Object: wafl
Instance: wafl
Start-time: 2/2/2015 10:08:09
End-time: 2/2/2015 10:08:28
Elapsed-time: 19s
Node: node1
Counter                          Value
-------------------------------- --------------------------------
wafl_nfs_application_mask        2
Object: wafl
Instance: wafl
Start-time: 2/2/2015 10:08:09
End-time: 2/2/2015 10:08:28
Elapsed-time: 19s
Node: node2
Counter                          Value
-------------------------------- --------------------------------
wafl_nfs_application_mask        2
2 entries were displayed.
Example:
cluster::*> statistics show -object wafl -counter wafl_nfs_oracle_wcount
Object: wafl
Instance: wafl
Start-time: 2/2/2015 10:08:09
End-time: 2/2/2015 10:08:28
Elapsed-time: 19s
Node: node1
Counter                          Value
-------------------------------- --------------------------------
wafl_nfs_oracle_wcount           168
Object: wafl
Instance: wafl
Start-time: 2/2/2015 10:08:09
End-time: 2/2/2015 10:08:28
Elapsed-time: 19s
Node: node2
Counter                          Value
-------------------------------- --------------------------------
wafl_nfs_oracle_wcount           188
2 entries were displayed.
Appendix
NFS Server Option List in Clustered Data ONTAP
Table 26) NFS options in clustered Data ONTAP.
NFS Option                                          Version                            Privilege Level   Definition
NFSv3 (-v3)                                         All                                Admin             Enable/disable NFSv3.
NFSv4 (-v4.0)                                       8.1 and later                      Admin             Enable/disable NFSv4.
UDP (-udp)                                          All                                Admin             Enable/disable UDP.
TCP (-tcp)                                          All                                Admin             Enable/disable TCP.
Spin Authentication (-spinauth)                     All                                Admin
Rquota Enable (-rquota)                             8.1 and later                      Admin
pNFS Support (-v4.1-pnfs)                           8.1 and later                      Admin
NFSv2 (-v2)                                         8.0 and 8.1.x
NFSv3 FSID Change (-v3-fsid-change)                 All
NFSv4.1 Implementation ID Domain                    8.1 and later
  (-v4.1-implementation-domain)
NFSv4.1 Implementation ID Name
  (-v4.1-implementation-name)
NFSv4.1 Implementation ID Date
  (-v4.1-implementation-date)
Showmount Enabled (-showmount)                      8.3 and later
AUTH_SYS Extended Groups                            8.3 and later
  (-auth-sys-extended-groups)
AUTH_SYS and RPCSEC_GSS Auxiliary Groups Limit      8.3 and later
  (-extended-groups-limit)
Name Service Lookup Protocol                        8.2.3 and later; 8.3.1 and later   Admin
  (-name-service-lookup-protocol)
Skip Root Owner Write Permission Check              8.3.1 and later
  (-skip-root-owner-write-permcheck)
Netgroup DNS Domain Search                          8.3.1 and later
  (-netgroup-dns-domain-search)
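The options in Table 26 are viewed and set per SVM with the vserver nfs show and vserver nfs modify
commands. The following is an illustrative sketch that assumes an SVM named vs0:
cluster::> vserver nfs show -vserver vs0 -fields v3,v4.0,tcp,udp
cluster::> vserver nfs modify -vserver vs0 -v4.0 enabled -udp disabled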
Export Policy Rule Option List in Clustered Data ONTAP
Export Policy Rule Option                    Privilege Level   What It Does
Policy Name (-policyname)                    Admin             Name of the export policy that contains the rule.
Rule Index (-ruleindex)                      Admin             Position of the rule within the export policy; rules
                                                               are evaluated in index order.
Access Protocol (-protocol)                  Admin             Access protocol or protocols to which the rule
                                                               applies (for example, nfs, nfs3, nfs4).
Client Match (-clientmatch)                  Admin             Client specification that the rule matches (host
                                                               name, IP address, subnet, or netgroup).
Allow Device Creation (-allow-dev)                             Whether creation of device files is allowed through
                                                               this rule.
NTFS Unix Security Options                   Advanced          How UNIX-style security operations from NFS clients
  (-ntfs-unix-security-ops)                                    are handled on NTFS security-style volumes for this
                                                               rule.
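These attributes are supplied when an export policy rule is created. The following is an illustrative
example (assuming an SVM named vs0, an export policy named default, and a 192.168.1.0/24 client
subnet) that grants NFS read/write access:
cluster::> vserver export-policy rule create -vserver vs0 -policyname default -ruleindex 1 -protocol nfs -clientmatch 192.168.1.0/24 -rorule sys -rwrule sys -superuser sys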
7-Mode Option                        How to Apply                           Remark
nfs.response.trace
nfs.rpcsec.ctx.high
nfs.rpcsec.ctx.idle
nfs.tcp.enable
nfs.udp.xfersize
nfs.v3.enable
nfs.v4.enable
nfs.v4.read_delegation
nfs.v4.write_delegation
nfs.tcp.xfersize
nfs.v4.acl.enable
nfs.v4.reply_drop
nfs.v4.id.domain
locking.grace_lease_seconds
nfs.v4.snapshot.active.fsid.enable   vserver nfs modify -vserver vs0        This affects the behavior of the fsid used
                                     -v4-fsid-change                        for the .snapshot directory and entities in
                                                                            the .snapshot directory. The default
                                                                            behavior is that they use a different fsid
                                                                            than the active copy of the files in the
                                                                            file system. When this option is enabled,
                                                                            the fsid is identical to that for files in
                                                                            the active file system. "Off" by default.
kerberos.file_keytab.principal
kerberos.file_keytab.realm
nfs.kerberos.enable (on/off)         vserver services kerberos-realm        In this case, the keytab file must be added
                                     modify -kdc-vendor Other               to the clustered Data ONTAP configuration:
References
TR-3580: NFSv4 Enhancements and Best Practices Guide: Data ONTAP Implementation
TR-4182: Ethernet Storage Best Practices for Clustered Data ONTAP Configurations
TR-4191: Best Practices Guide for Clustered Data ONTAP 8.2.x and 8.3 Windows File Services
TR-4211: NetApp Storage Performance Primer for Clustered Data ONTAP 8.2
RFC 5661: Network File System (NFS) Version 4 Minor Version 1 Protocol
Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact
product and feature versions described in this document are supported for your specific environment. The
NetApp IMT defines the product components and versions that can be used to construct configurations
that are supported by NetApp. Specific results depend on each customer's installation in accordance with
published specifications.
Copyright Information
Copyright © 1994–2016 NetApp, Inc. All rights reserved. Printed in the U.S. No part of this document
covered by copyright may be reproduced in any form or by any means (graphic, electronic, or
mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system)
without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY
DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein, except as
expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license
under any patent rights, trademark rights, or any other intellectual property rights of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents, or
pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to
restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software
clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark Information
NetApp, the NetApp logo, Go Further, Faster, AltaVault, ASUP, AutoSupport, Campaign Express, Cloud
ONTAP, Clustered Data ONTAP, Customer Fitness, Data ONTAP, DataMotion, Fitness, Flash Accel,
Flash Cache, Flash Pool, FlashRay, FlexArray, FlexCache, FlexClone, FlexPod, FlexScale, FlexShare,
FlexVol, FPolicy, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster, MultiStore, NetApp
Insight, OnCommand, ONTAP, ONTAPI, RAID DP, RAID-TEC, SANtricity, SecureShare, Simplicity,
Simulate ONTAP, SnapCenter, Snap Creator, SnapCopy, SnapDrive, SnapIntegrator, SnapLock,
SnapManager, SnapMirror, SnapMover, SnapProtect, SnapRestore, Snapshot, SnapValidator,
SnapVault, StorageGRID, Tech OnTap, Unbound Cloud, WAFL, and other names are trademarks or
registered trademarks of NetApp Inc., in the United States and/or other countries. All other brands or
products are trademarks or registered trademarks of their respective holders and should be treated as
such. A current list of NetApp trademarks is available on the web at
http://www.netapp.com/us/legal/netapptmlist.aspx. TR-4067-0216