NFS Version 4 Working Group                                   S. Shepler
INTERNET-DRAFT                                              B. Callaghan
Document: draft-ietf-nfsv4-02.txt                              M. Eisler
                                                             D. Robinson
                                                              R. Thurlow
                                                        Sun Microsystems
                                                               D. Noveck
                                                       Network Appliance
                                                                C. Beame
                                              Hummingbird Communications
                                                            October 1999

                              NFS version 4

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Abstract

   NFS version 4 is a distributed file system protocol which owes its
   heritage to NFS versions 2 [RFC1094] and 3 [RFC1813].  Unlike
   earlier versions, NFS version 4 supports traditional file access
   while integrating support for file locking and the mount protocol.
   In addition, support for strong security (and its negotiation),
   compound operations, and internationalization has been added.  Of
   course, attention has been applied to making NFS version 4 operate
   well in an Internet environment.

Copyright

   Copyright (C) The Internet Society (1999).  All Rights Reserved.

Key Words

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.

Table of Contents

   1.        Introduction
   2.        RPC and Security Flavor
   2.1.      Ports and Transports
   2.2.      Security Flavors
   2.2.1.    Security mechanisms for NFS version 4
   2.2.1.1.  Kerberos V5 as security triple
   2.2.1.2.  [Additional security mechanism to be specified]
   2.3.      Security Negotiation
   2.3.1.    Security Error
   2.3.2.    SECINFO
   3.        File handles
   3.1.      Obtaining the First File Handle
   3.1.1.    Root File Handle
   3.1.2.    Public File Handle
   3.2.      File Handle Types
   3.2.1.    General Properties of a File Handle
   3.2.2.    Persistent File Handle
   3.2.3.    Volatile File Handle
   3.2.4.    One Method of Constructing a Volatile File Handle
   3.3.      Client Recovery from File Handle Expiration
   4.        Basic Data Types
   5.        File Attributes
   5.1.      Mandatory Attributes
   5.2.      Recommended Attributes
   5.3.      Named Attributes
   5.4.      Mandatory Attributes - Definitions
   5.5.      Recommended Attributes - Definitions
   5.6.      Interpreting owner and owner_group
   5.7.      Access Control Lists
   5.7.1.    ACE type
   5.7.2.    ACE flag
   5.7.3.    ACE Access Mask
   5.7.4.    ACE who
   6.        Filesystem Migration and Replication
   6.1.      Replication
   6.2.      Migration
   6.3.      Interpretation of the fs_locations Attribute
   6.4.      Filehandle Recovery for Migration or Replication
   7.        NFS Server Namespace
   7.1.      Server Exports
   7.2.      Browsing Exports
   7.3.      Server Pseudo File-System
   7.4.      Multiple Roots
   7.5.      Filehandle Volatility
   7.6.      Exported Root
   7.7.      Mount Point Crossing
   7.8.      Security Policy and Namespace Presentation
   7.9.      Summary
   8.        File Locking
   8.1.      Definitions
   8.2.      Locking
   8.2.1.    Client ID
   8.2.2.    nfs_lockowner and stateid Definition
   8.2.3.    Use of the stateid
   8.2.4.    Sequencing of Lock Requests
   8.3.      Blocking Locks
   8.4.      Lease Renewal
   8.5.      Crash Recovery
   8.5.1.    Client Failure and Recovery
   8.5.2.    Server Failure and Recovery
   8.5.3.    Network Partitions and Recovery
   8.6.      Server Revocation of Locks
   8.7.      Share Reservations
   8.8.      OPEN/CLOSE Procedures
   9.        Client-Side Caching
   9.1.      Performance Challenges for Client-Side Caching
   9.2.      Proxy Caching
   9.3.      Delegation and Callbacks
   9.3.1.    Delegation Recovery
   9.4.      Data Caching
   9.4.1.    Data Caching and OPENs
   9.4.2.    Data Caching and File Locking
   9.4.3.    Data Caching and Mandatory File Locking
   9.4.4.    Data Caching and File Identity
   9.5.      Open Delegation
   9.5.1.    Open Delegation and Data Caching
   9.5.2.    Open Delegation and File Locks
   9.5.3.    Recall of Open Delegation
   9.5.4.    Delegation Revocation
   9.6.      Data Caching and Revocation
   9.6.1.    Revocation Recovery for Write Open Delegation
   9.7.      Attribute Caching
   9.8.      Name Caching
   9.9.      Directory Caching
   10.       Defined Error Numbers
   11.       NFS Version 4 Requests
   11.1.     Compound Procedure
   11.2.     Evaluation of a Compound Request
   12.       NFS Version 4 Procedures
   12.1.     Procedure 0: NULL - No Operation
   12.2.     Procedure 1: COMPOUND - Compound Operations
   12.2.1.   Operation 2: ACCESS - Check Access Rights
   12.2.2.   Operation 3: CLOSE - Close File
   12.2.3.   Operation 4: COMMIT - Commit Cached Data
   12.2.4.   Operation 5: CREATE - Create a Non-Regular File Object
   12.2.5.   Operation 6: DELEGPURGE - Purge Delegations Awaiting
             Recovery
   12.2.6.   Operation 7: DELEGRETURN - Return Delegation
   12.2.7.   Operation 8: GETATTR - Get Attributes
   12.2.8.   Operation 9: GETFH - Get Current Filehandle
   12.2.9.   Operation 10: LINK - Create Link to a File
   12.2.10.  Operation 11: LOCK - Create Lock
   12.2.11.  Operation 12: LOCKT - Test For Lock
   12.2.12.  Operation 13: LOCKU - Unlock File
   12.2.13.  Operation 14: LOOKUP - Lookup Filename
   12.2.14.  Operation 15: LOOKUPP - Lookup Parent Directory
   12.2.15.  Operation 16: NVERIFY - Verify Difference in Attributes
   12.2.16.  Operation 17: OPEN - Open a Regular File
   12.2.17.  Operation 18: OPENATTR - Open Named Attribute Directory
   12.2.18.  Operation 19: PUTFH - Set Current Filehandle
   12.2.19.  Operation 20: PUTPUBFH - Set Public Filehandle
   12.2.20.  Operation 21: PUTROOTFH - Set Root Filehandle
   12.2.21.  Operation 22: READ - Read from File
   12.2.22.  Operation 23: READDIR - Read Directory
   12.2.23.  Operation 24: READLINK - Read Symbolic Link
   12.2.24.  Operation 25: REMOVE - Remove Filesystem Object
   12.2.25.  Operation 26: RENAME - Rename Directory Entry
   12.2.26.  Operation 27: RENEW - Renew a Lease
   12.2.27.  Operation 28: RESTOREFH - Restore Saved Filehandle
   12.2.28.  Operation 29: SAVEFH - Save Current Filehandle
   12.2.29.  Operation 30: SECINFO - Obtain Available Security
   12.2.30.  Operation 31: SETATTR - Set Attributes
   12.2.31.  Operation 32: SETCLIENTID - Negotiated Clientid
   12.2.32.  Operation 33: VERIFY - Verify Same Attributes
   12.2.33.  Operation 34: WRITE - Write to File
   13.       NFS Version 4 Callback Procedures
   13.1.     Procedure 0: CB_NULL - No Operation
   13.2.     Procedure 1: CB_COMPOUND - Compound Operations
   13.2.1.   Procedure 2: CB_GETATTR - Get Attributes
   13.2.2.   Procedure 3: CB_RECALL - Recall an Open Delegation
   14.       Locking notes
   14.1.     Short and long leases
   14.2.     Clocks and leases
   14.3.     Locks and lease times
   14.4.     Locking of directories and other meta-files
   14.5.     Proxy servers and leases
   14.6.     Locking and the new latency
   15.       Internationalization
   15.1.     Universal Versus Local Character Sets
   15.2.     Overview of Universal Character Set Standards
   15.3.     Difficulties with UCS-4, UCS-2, Unicode
   15.4.     UTF-8 and its solutions
   16.       Security Considerations
   17.       NFS Version 4 RPC definition file
   18.       Bibliography
   19.       Authors and Contributors
   19.1.     Editor's Address
   19.2.     Authors' Addresses
   20.       Full Copyright Statement

1. Introduction

   NFS version 4 is a further revision of the NFS protocol defined
   already by versions 2 [RFC1094] and 3 [RFC1813].  It retains the
   essential characteristics of previous versions: design for easy
   recovery; independence from transport protocols, operating systems,
   and filesystems; simplicity; and good performance.  The NFS version
   4 revision has the following goals:

   o  Improved access and good performance on the Internet.

      The protocol is designed to transit firewalls easily, perform
      well where latency is high and bandwidth is low, and scale to
      very large numbers of clients per server.

   o  Strong security with negotiation built into the protocol.

      The protocol builds on the work of the ONCRPC working group in
      supporting the RPCSEC_GSS protocol.  Additionally, NFS version 4
      provides a mechanism to allow clients and servers to negotiate
      security and requires clients and servers to support a minimal
      set of security schemes.

   o  Good cross-platform interoperability.

      The protocol features a filesystem model that provides a useful,
      common set of features that does not unduly favor one filesystem
      or operating system over another.

   o  Designed for protocol extensions.

      The protocol is designed to accept standard extensions that do
      not compromise backward compatibility.

2. RPC and Security Flavor

   The NFS version 4 protocol is a Remote Procedure Call (RPC)
   application that uses RPC version 2 and the corresponding eXternal
   Data Representation (XDR) as defined in [RFC1831] and [RFC1832].
   The RPCSEC_GSS security flavor as defined in [RFC2203] MUST be used
   as the mechanism to deliver stronger security to NFS version 4.

2.1. Ports and Transports

   Historically, NFS version 2 and version 3 servers have resided on
   UDP/TCP port 2049.  Port 2049 is an IANA-registered port number for
   NFS and therefore will continue to be used for NFS version 4.
   Using the well-known port for NFS services means the NFS client
   will not need to use the RPC binding protocols as described in
   [RFC1833]; this will allow NFS to transit firewalls.
   The NFS server SHOULD offer its RPC service via TCP as the primary
   transport.  The server SHOULD also offer its RPC service via UDP.
   The NFS client SHOULD also have a preference for TCP usage but may
   supply a mechanism to override TCP in favor of UDP as the RPC
   transport.

2.2. Security Flavors

   Traditional RPC implementations have included AUTH_NONE, AUTH_SYS,
   AUTH_DH, and AUTH_KRB4 as security flavors.  With [RFC2203] an
   additional security flavor, RPCSEC_GSS, has been introduced which
   uses the functionality of GSS-API [RFC2078].  This allows for the
   use of varying security mechanisms by the RPC layer without the
   additional implementation overhead of adding RPC security flavors.
   For NFS version 4, the RPCSEC_GSS security flavor MUST be used to
   enable the mandatory security mechanism.  The flavors AUTH_NONE,
   AUTH_SYS, and AUTH_DH MAY be implemented as well.

2.2.1. Security mechanisms for NFS version 4

   The use of RPCSEC_GSS requires selection of: mechanism, quality of
   protection, and service (authentication, integrity, privacy).  The
   remainder of this document will refer to these three parameters of
   RPCSEC_GSS security as the security triple.

2.2.1.1. Kerberos V5 as security triple

   The Kerberos V5 GSS-API mechanism as described in [RFC1964] MUST be
   implemented and MUST provide the following security triples.  The
   table columns are:

      1 == number of pseudo flavor
      2 == name of pseudo flavor
      3 == mechanism's OID
      4 == mechanism's algorithm(s)
      5 == RPCSEC_GSS service

   1      2      3                     4               5
   -----------------------------------------------------------------------
   390003 krb5   1.2.840.113554.1.2.2  DES MAC MD5     rpc_gss_svc_none
   390004 krb5i  1.2.840.113554.1.2.2  DES MAC MD5     rpc_gss_svc_integrity
   390005 krb5p  1.2.840.113554.1.2.2  DES MAC MD5     rpc_gss_svc_privacy
                                       for integrity,
                                       and 56 bit DES
                                       for privacy.

   Note that the pseudo flavor is presented here as a mapping aid to
   the implementor.  Because this NFS protocol includes a method to
   negotiate security and it understands the GSS-API mechanism, the
   pseudo flavor is not needed.  The pseudo flavor is needed for NFS
   version 3 since the security negotiation there is done via the
   MOUNT protocol.

   For a discussion of NFS' use of RPCSEC_GSS and Kerberos V5, please
   see [RFC2623].

2.2.1.2. [Additional security mechanism to be specified]

   Another GSS-API mechanism will need to be specified here along with
   the corresponding security triple(s).

2.3. Security Negotiation

   With the NFS version 4 server potentially offering multiple
   security mechanisms, the client needs a way to determine or
   negotiate which mechanism is to be used for its communication with
   the server.  The NFS server may have multiple points within its
   file system name space that are available for use by NFS clients.
   In turn, the NFS server may be configured such that each of these
   entry points may have different or multiple security mechanisms in
   use.

   The security negotiation between client and server must be done
   with a secure channel to eliminate the possibility of a third party
   intercepting the negotiation sequence and forcing the client and
   server to choose a lower level of security than required or
   desired.

2.3.1. Security Error

   Based on the assumption that each NFS version 4 client and server
   must support a minimum set of security (i.e., Kerberos V5 under
   RPCSEC_GSS), the NFS client will start its communication with the
   server with one of the minimal security triples.  During
   communication with the server, the client may receive an NFS error
   of NFS4ERR_WRONGSEC.  This error allows the server to notify the
   client that the security triple currently being used is not
   appropriate for access to the server's file system resources.  The
   client is then responsible for determining what security triples
   are available at the server and choosing one which is appropriate
   for the client.

2.3.2. SECINFO

   The new procedure SECINFO (see the SECINFO procedure definition)
   will allow the client to determine, on a per-filehandle basis, what
   security triple is to be used for server access.  In general, the
   client will not have to use the SECINFO procedure except during
   initial communication with the server or when the client crosses
   policy boundaries at the server.  It could happen that the server's
   policies change during the client's interaction, therefore forcing
   the client to negotiate a new security triple.
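   As a non-normative sketch of this negotiation, using the compound
   notation that appears in later examples (argument details are
   elided; "fh" stands for a hypothetical filehandle within the
   protected portion of the name space):

      PUTFH fh
      READ 0,1024          ; fails with NFS4ERR_WRONGSEC
      ...
      PUTFH fh
      SECINFO              ; obtain the security triples usable with fh

   The client would then establish a new RPCSEC_GSS context with one
   of the returned triples (e.g. krb5i) and reissue the original
   request using that triple.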
3. File handles

   The file handle in the NFS protocol is a per-server unique
   identifier for a file system object.  The contents of the file
   handle are opaque to the client.  Therefore, the server is
   responsible for translating the file handle to an internal
   representation of the file system object.  Since the file handle is
   the client's reference to an object and the client may cache this
   reference, the server should not reuse a file handle for another
   file system object.  If the server needs to reuse a file handle
   value, the time elapsed before reuse SHOULD be large enough that it
   is likely the client no longer has a cached copy of the reused file
   handle value.

3.1. Obtaining the First File Handle

   The procedures of the NFS protocol are defined in terms of one or
   more file handles.  Therefore, the client needs a file handle to
   initiate communication with the server.  With NFS version 2
   [RFC1094] and NFS version 3 [RFC1813], there exists an ancillary
   protocol to obtain this first file handle.  The MOUNT protocol, RPC
   program number 100005, provides the mechanism of translating a
   string-based file system path name to a file handle which can then
   be used by the NFS protocols.

   The MOUNT protocol has deficiencies in the area of security and use
   via firewalls.  This is one reason that the public file handle was
   introduced [RFC2054] [RFC2055].  With the use of the public file
   handle in combination with the LOOKUP procedure in NFS version 2
   and 3, it has been demonstrated that the MOUNT protocol is
   unnecessary for viable interaction between NFS client and server.
   Therefore, NFS version 4 will not use an ancillary protocol for
   translation from string-based path names to a file handle.  Two
   special file handles will be used as starting points for the NFS
   client.

3.1.1. Root File Handle

   The first of the special file handles is the ROOT file handle.  The
   ROOT file handle is the "conceptual" root of the file system name
   space at the NFS server.  The client starts with the ROOT file
   handle by employing the PUTROOTFH procedure.  The PUTROOTFH
   procedure instructs the server to set the "current" file handle to
   the ROOT of the server's file tree.  Once this PUTROOTFH procedure
   is used, the client can then traverse the entirety of the server's
   file tree with the LOOKUP procedure, as sketched below.  A complete
   discussion of the server name space is in section 7, "NFS Server
   Namespace".
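   For illustration only (the directory names "a" and "b" are
   hypothetical; the operations are defined in section 12), a client
   holding no filehandle could bootstrap with a compound request such
   as:

      PUTROOTFH            ; current filehandle <- root of file tree
      LOOKUP "a"           ; traverse the server's name space
      LOOKUP "b"
      GETFH                ; return the filehandle for /a/b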
3.1.2. Public File Handle

   The second special file handle is the PUBLIC file handle.  Unlike
   the ROOT file handle, the PUBLIC file handle may be bound to an
   arbitrary file system object at the server.  The server is
   responsible for this binding.  It may be that the PUBLIC file
   handle and the ROOT file handle refer to the same file system
   object.  However, it is up to the administrative software at the
   server and the policies of the server administrator to define the
   binding of the PUBLIC file handle and server file system object.
   The client may not make any assumptions about this binding.

3.2. File Handle Types

   In NFS version 2 and 3, there was one type of file handle with a
   single set of semantics.  NFS version 4 introduces a new type of
   file handle in an attempt to accommodate certain server
   environments.  The first type of file handle is 'persistent'.  The
   semantics of a persistent file handle are the same as the file
   handles of NFS version 2 and 3.  The second, new type of file
   handle is the 'volatile' file handle.

   The volatile file handle type is being introduced to address server
   functionality or implementation issues which prevent correct or
   feasible implementation of a persistent file handle.  Some server
   environments do not provide a file system level invariant that can
   be used to construct a persistent file handle.  The underlying
   server file system may not provide the invariant, or the server's
   file system APIs may not provide access to the needed invariant.
   Volatile file handles may ease the implementation of server
   functionality such as hierarchical storage management or file
   system reorganization or migration.  However, the volatile file
   handle increases the implementation burden for the client; this
   increased burden is deemed acceptable based on the overall gains
   achieved by the protocol.

   Since the client will have different paths of logic to handle
   persistent and volatile file handles, a file attribute is defined
   which may be used by the client to determine the file handle types
   being returned by the server.

3.2.1. General Properties of a File Handle

   The file handle contains all the information the server needs to
   distinguish an individual file.  To the client, the file handle is
   opaque.  The client stores file handles for use in a later request
   and can compare two file handles from the same server for equality
   by doing a byte-by-byte comparison, but MUST NOT otherwise
   interpret the contents of file handles.  If two file handles from
   the same server are equal, they MUST refer to the same file, but if
   they are not equal, no conclusions can be drawn.  Servers SHOULD
   try to maintain a one-to-one correspondence between file handles
   and files, but this is not required.  Clients MUST use file handle
   comparisons only to improve performance, not for correct behavior.

   As an example, in the case that two different path names when
   traversed at the server terminate at the same file system object,
   the server SHOULD return the same file handle for each path.  This
   can occur if a hard link is used to create two file names which
   refer to the same underlying file object and associated data.  For
   example, if paths /a/b/c and /a/d/c refer to the same file, the
   server SHOULD return the same file handle for both path name
   traversals.
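   For illustration only, in C (a hypothetical client-side
   representation; the protocol defines the filehandle simply as
   opaque <128>):

      #include <string.h>

      struct nfs4_fh {
          unsigned int  len;          /* up to 128 octets */
          unsigned char data[128];    /* opaque to the client */
      };

      /*
       * Byte-by-byte equality.  Equal handles from the same server
       * MUST refer to the same file; unequal handles permit no
       * conclusion, so this is usable only as a performance hint.
       */
      int
      fh_equal(const struct nfs4_fh *a, const struct nfs4_fh *b)
      {
          return a->len == b->len &&
                 memcmp(a->data, b->data, a->len) == 0;
      }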
   Once the server creates the file handle for a file system object,
   the server MUST return the same file handle for the object for the
   lifetime of the object.  If the server restarts or reboots, or the
   filesystem is migrated, the NFS server must honor and present the
   same file handle value as it did in the server's previous
   instantiation.

   The persistent file handle will become stale or invalid when the
   file system object is removed.  When the server is presented with a
   persistent file handle that refers to a deleted object, it MUST
   return an error of NFS4ERR_STALE.  A file handle may also become
   stale when the file system containing the object is no longer
   available.  The file system may become unavailable if it exists on
   removable media and the media is no longer available at the server,
   if the file system as a whole has been destroyed, or if the file
   system has simply been removed from the server's name space (i.e.
   unmounted in a Unix environment).

3.2.3. Volatile File Handle

   A volatile file handle does not share the same longevity
   characteristics as the persistent file handle.  The server may
   determine that a volatile file handle is no longer valid at many
   different points in time.  If the server can definitively determine
   that a volatile file handle refers to an object that has been
   removed, the server should return NFS4ERR_STALE to the client (as
   is the case for persistent file handles).  In all other cases where
   the server determines that a volatile file handle can no longer be
   used, it should return an error of NFS4ERR_FHEXPIRED.

   The following table shows the most common points at which a
   volatile file handle may expire.  The table represents the view
   from the client's perspective and as such provides columns for when
   the file may be open or closed by the client.

      Server Provides Persistent or Volatile File Handle

                                           File Open       File Closed
      __________________________________________________________________
      Restart of Server (note 4)           P / V           P / V
      Filesystem Migration (note 5)        P / V           P / V
      SHARE/LOCK recovery                  P / V           N/A (note 1)
      Client RENAMEs object                P / V           P / V
      Client RENAMEs path to object        P / V           P / V
      Other client RENAMEs object          P / V           P / V
      Other client RENAMEs path to object  P / V           P / V
      Client REMOVEs object                P / V (note 2)  P / V
      Other client REMOVEs object          P / V           N/A (note 3)

   Note 1:  If the file is not open, persistence of the file handle is
   not applicable for the recovery of SHARE/LOCK state.

   Note 2:  With NFS version 2 and 3, when the client removes a file
   it has open, it follows the convention of RENAMEing the file to
   '.nfsXXXX' until the file is closed; at that point the REMOVE is
   done at the server.  If this same model is used for version 4, this
   entry will be 'N/A'.

   Note 3:  If the file is not open by the client, the client should
   not expect any cached file handle to be valid.

   Note 4:  The restart of the NFS server signifies when the operating
   system or NFS software is (re)started.  This also includes High
   Availability configurations where a separate operating system
   instantiation acquires ownership of the file system resources and
   network resources (i.e. disks and IP addresses).

   Note 5:  Filesystem migration may occur in response to an
   unresponsive server or when the current server indicates that a
   filesystem has moved by returning NFS4ERR_MOVED.  In both cases,
   the attribute fs_locations designates the new server location for
   the filesystem.
3.2.4. One Method of Constructing a Volatile File Handle

   As mentioned, in some instances a file handle is stale (no longer
   valid, perhaps because the file was removed from the server), or it
   is expired (the underlying file is valid but, since the file handle
   is volatile, it may have expired).  Thus the server needs to be
   able to return NFS4ERR_STALE in the former case and
   NFS4ERR_FHEXPIRED in the latter case.  This can be done by careful
   construction of the volatile file handle.  One possible
   implementation follows.

   A volatile file handle, while opaque to the client, could contain:

      [volatile bit = 1 | server boot time | slot | generation number]

   o  slot is an index into the server's volatile file handle table

   o  generation number is the generation number for the table
      entry/slot

   If the boot time recorded in the file handle is earlier than the
   current server boot time, return NFS4ERR_FHEXPIRED.  If the slot is
   out of range, return NFS4ERR_BADHANDLE.  If the generation number
   does not match, return NFS4ERR_BADHANDLE.  When the server reboots,
   the table is lost (it is volatile).  If the volatile bit is 0, the
   handle is a persistent file handle with a different structure
   following the bit.
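   A server-side sketch of these checks in C (non-normative; the type
   names are hypothetical, the error values are elided, and decoding
   of the opaque handle bytes into the fields is not shown):

      enum nfs4_status {          /* illustrative subset of section 10 */
          NFS4_OK = 0,
          NFS4ERR_BADHANDLE,      /* numeric values elided here */
          NFS4ERR_FHEXPIRED
      };

      struct fh_table_entry {
          unsigned int generation;  /* bumped each time slot is reused */
          /* ... reference to the underlying file system object ... */
      };

      struct fh_table {
          unsigned int           nslots;
          struct fh_table_entry *entry;
      };

      /* Fields decoded from the opaque volatile filehandle above. */
      struct volatile_fh {
          unsigned int boot_time;
          unsigned int slot;
          unsigned int generation;
      };

      enum nfs4_status
      check_volatile_fh(const struct volatile_fh *fh,
                        unsigned int server_boot_time,
                        const struct fh_table *tbl)
      {
          if (fh->boot_time < server_boot_time)
              return NFS4ERR_FHEXPIRED;   /* table lost at reboot */
          if (fh->slot >= tbl->nslots)
              return NFS4ERR_BADHANDLE;   /* slot out of range */
          if (fh->generation != tbl->entry[fh->slot].generation)
              return NFS4ERR_BADHANDLE;   /* slot has been reused */
          return NFS4_OK;
      }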
3.3. Client Recovery from File Handle Expiration

   With the introduction of the volatile file handle, the client must
   take on additional responsibility so that it may prepare itself to
   recover from the expiration of a volatile file handle.  If the
   server returns persistent file handles, the client does not need
   these additional steps.

   For volatile file handles, most commonly the client will need to
   store the component names leading up to and including the file
   system object in question.  With these names, the client should be
   able to recover by finding a file handle in the name space that is
   still available or by starting at the root of the server's file
   system name space.

   If the expired file handle refers to an object that has been
   removed from the file system, obviously the client will not be able
   to recover from the expired file handle.

   It is also possible that the expired file handle refers to a file
   that has been renamed.  If the file was renamed by another client,
   again it is possible that the original client will not be able to
   recover.  However, in the case that the client itself is renaming
   the file and the file is open, it is possible that the client may
   be able to recover.  The client can determine the new path name
   based on the processing of the rename request.  The client can then
   regenerate the new file handle based on the new path name.  The
   client could also use the compound operation mechanism to construct
   a set of operations like:

      RENAME A B
      LOOKUP B
      GETFH

4. Basic Data Types

   Arguments and results from operations will be described in terms of
   basic XDR types defined in [RFC1832].  The following data types
   will be defined in terms of basic XDR types:

   filehandle:  opaque <128>

      An NFS version 4 filehandle.  A filehandle with zero length is
      recognized as a "public" filehandle.

   utf8string:  opaque <>

      A counted array of octets that contains a UTF-8 string.  Note:
      Section 15, Internationalization, covers the rationale for using
      UTF-8.

   bitmap:  uint32 <>

      A counted array of 32 bit integers used to contain bit values.
      The position of the integer in the array that contains bit n can
      be computed from the expression (n / 32), and its bit within
      that integer is (n mod 32).

                   0            1
       +-----------+-----------+-----------+--
       |   count   |  31 .. 0  | 63 .. 32  |
       +-----------+-----------+-----------+--
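      For example, in C (non-normative sketch; the caller must ensure
      that the word containing bit n lies within the counted array,
      i.e. that count > n / 32):

         #include <stdint.h>

         /* Set bit n in a counted array of 32-bit words. */
         void
         bitmap_set(uint32_t *words, unsigned int n)
         {
             words[n / 32] |= (uint32_t)1 << (n % 32);
         }

         /* Test bit n. */
         int
         bitmap_isset(const uint32_t *words, unsigned int n)
         {
             return (words[n / 32] >> (n % 32)) & 1;
         }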
   createverf:  opaque <8>

      Verifier used for exclusive create semantics.

   nfstime4:

      struct nfstime4 {
              int64_t         seconds;
              uint32_t        nseconds;
      }

      The nfstime4 structure gives the number of seconds and
      nanoseconds since midnight or 0 hour January 1, 1970 Coordinated
      Universal Time (UTC).  Values greater than zero for the seconds
      field denote dates after the 0 hour January 1, 1970.  Values
      less than zero for the seconds field denote dates before the 0
      hour January 1, 1970.  In both cases, the nseconds field is to
      be added to the seconds field for the final time representation.
      For example, if the time to be represented is one-half second
      before 0 hour January 1, 1970, the seconds field would have a
      value of negative one (-1) and the nseconds field would have a
      value of one-half second (500000000).  Values greater than
      999,999,999 for nseconds are considered invalid.  A sketch of
      this convention appears at the end of this section.

      This data type is used to pass time and date information.  A
      server converts to and from local time when processing time
      values, preserving as much accuracy as possible.  If the
      precision of timestamps stored for a file system object is less
      than defined, loss of precision can occur.  An adjunct time
      maintenance protocol is recommended to reduce client and server
      time skew.

   specdata4:

      struct specdata4 {
              uint32_t        specdata1;
              uint32_t        specdata2;
      }

      This data type represents additional information for the device
      file types NF4CHR and NF4BLK.
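   The sign convention for nfstime4 can be illustrated with a
   non-normative C sketch that converts a signed count of nanoseconds
   since the epoch; note how one-half second before the epoch becomes
   { -1, 500000000 }:

      #include <stdint.h>

      struct nfstime4 {
          int64_t  seconds;
          uint32_t nseconds;
      };

      struct nfstime4
      to_nfstime4(int64_t ns_since_epoch)
      {
          struct nfstime4 t;
          int64_t s  = ns_since_epoch / 1000000000;
          int64_t ns = ns_since_epoch % 1000000000;

          if (ns < 0) {          /* C99 division truncates toward 0 */
              s  -= 1;
              ns += 1000000000;  /* keep nseconds in [0, 999999999] */
          }
          t.seconds  = s;
          t.nseconds = (uint32_t)ns;
          return t;
      }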
5. File Attributes

   To meet the NFS version 4 requirements of extensibility and
   increased interoperability with non-Unix platforms, attributes must
   be handled in a more flexible manner.  The NFS version 3 fattr3
   structure contains a fixed list of attributes that not all clients
   and servers are able to support or care about, that cannot be
   extended as new needs arise, and that provides no way to indicate
   non-support.  With NFS version 4, the client will be able to ask
   what attributes the server supports and will be able to request
   only those attributes in which it is interested.

   To this end, attributes will be divided into three groups:
   mandatory, recommended, and named.  Both mandatory and recommended
   attributes are supported in the NFS version 4 protocol by a
   specific and well-defined encoding and are identified by number.
   They are requested by setting a bit in the bit vector sent in the
   GETATTR request; the server response includes a bit vector to list
   what attributes were returned in the response.  New mandatory or
   recommended attributes may be added to the NFS protocol between
   revisions by publishing a standards-track RFC which allocates a new
   attribute number value and defines the encoding for the attribute.

   Named attributes are accessed by the new OPENATTR operation, which
   accesses a hidden directory of attributes associated with a
   filesystem object.  OPENATTR takes a filehandle for the object and
   returns the filehandle for the attribute hierarchy.  The attribute
   hierarchy is a directory object accessible by LOOKUP or READDIR and
   contains files whose names represent the named attributes and whose
   data bytes are the value of the attribute.  For example:

      LOOKUP    "foo"        ; look up file
      GETATTR   attrbits
      OPENATTR               ; access foo's named attributes
      LOOKUP    "x11icon"    ; look up specific attribute
      READ      0,4096       ; read stream of bytes

   Named attributes are intended primarily for data needed by
   applications rather than by an NFS client implementation per se;
   NFS implementors are strongly encouraged to define their new
   attributes as recommended attributes by bringing them to the
   working group.

   The set of attributes which are classified as mandatory is
   deliberately small, since servers must do whatever it takes to
   support them.  The recommended attributes may be unsupported,
   though a server should support as many as it can.  Attributes are
   deemed mandatory if the data is both needed by a large number of
   clients and is not otherwise reasonably computable by the client
   when support is not provided on the server.

5.1. Mandatory Attributes

   These MUST be supported by every NFS version 4 client and server in
   order to ensure a minimum level of interoperability.  The server
   must store and return these attributes, and the client must be able
   to function with an attribute set limited to these attributes,
   though some operations may be impaired or limited in some ways in
   this case.  A client may ask for any of these attributes to be
   returned by setting a bit in the GETATTR request, and the server
   must return their value.

5.2. Recommended Attributes

   These attributes are understood well enough to warrant support in
   the NFS version 4 protocol, though they may not be supported on all
   clients and servers.  A client may ask for any of these attributes
   to be returned by setting a bit in the GETATTR request but must be
   able to deal with not receiving them.  A client may ask for the set
   of attributes the server supports and should not request attributes
   the server does not support.  A server should be tolerant of
   requests for unsupported attributes and simply not return them,
   rather than considering the request an error.  It is expected that
   servers will support all attributes they comfortably can and only
   fail to support attributes which are difficult to support in their
   operating environments.  A server should provide an attribute only
   when it does not have to "tell lies" to the client.  For example, a
   file modification time should be either an accurate time or should
   not be supported by the server.  This will not always be
   comfortable to clients, but the client has a better ability than
   the server to fabricate or construct an attribute or to do without
   it.

   Most attributes from the NFS version 3 FSINFO, FSSTAT, and PATHCONF
   procedures have been added as recommended attributes, so that
   filesystem information may be collected via the filehandle of any
   object in the filesystem.  This renders those procedures
   unnecessary in NFS version 4.
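   As a non-normative sketch in the compound notation used above (the
   parenthesized names stand for bits set in the request's attribute
   bit vector; "fh" is a hypothetical filehandle), a client might
   first discover the supported set and then confine later requests to
   it:

      PUTFH fh
      GETATTR (supp_attr)    ; which attributes are supported here?
      ...
      PUTFH fh
      GETATTR (object_type, change, object_size)
                             ; later requests limited to that set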
5.3. Named Attributes

   These attributes are not supported by direct encoding in the NFS
   version 4 protocol.  They are accessed by string names rather than
   numbers and correspond to an uninterpreted stream of bytes which is
   stored with the filesystem object.  The namespace for these
   attributes may be accessed by using the OPENATTR operation to get a
   filehandle for a virtual "attribute directory" and using READDIR
   and LOOKUP operations on this filehandle.  Named attributes may
   then be examined or changed by normal READ, WRITE, and CREATE
   operations on the filehandles returned from READDIR and LOOKUP.

   Named attributes may themselves have attributes; for example, a
   security label may have access control information in its own
   right.

   It is recommended that servers support arbitrary named attributes.
   A client should not depend on the ability to store any named
   attributes in the server's filesystem.  If a server does support
   named attributes, a client which is also able to handle them should
   be able to copy a file's data and meta-data with complete
   transparency from one location to another; this would imply that
   there should be no attribute names which will be considered illegal
   by the server.

   Names of attributes will not be controlled by a standards body.
   However, vendors and application writers are encouraged to register
   attribute names and the interpretation and semantics of the stream
   of bytes via an informational RFC so that vendors may interoperate
   where common interests exist.

5.4. Mandatory Attributes - Definitions

   Name               #    Data Type    Access
   ___________________________________________

   supp_attr          0    bitmap       READ
      The bit vector which would retrieve all mandatory and
      recommended attributes which may be requested for this object.
      The client must ask for this attribute in order to know which
      attributes it may correctly request.

   object_type        1    nfs4_ftype   READ
      The type of the object (file, directory, symlink).  The client
      cannot handle the object correctly without its type.

   persistent_fh      2    boolean      READ
      Is the filehandle for this object persistent?  The server should
      know if the filehandles being provided are persistent or not.
      If the server is not able to make this determination, then it
      can choose volatile or non-persistent.

   change             3    uint64       READ
      A value created by the server that the client can use to
      determine if file data, directory contents, or attributes have
      been modified.  The server can simply return the file mtime in
      this field, though if a more precise value exists it can be
      substituted, for instance, a sequence number.  Necessary for any
      useful caching; likely to be available.

   object_size        4    uint64       R/W
      The size of the object in bytes.  Could be very expensive to
      derive; likely to be available.

   link_support       5    boolean      READ
      Does the object's filesystem support hard links?  The server can
      easily determine if links are supported.

   symlink_support    6    boolean      READ
      Does the object's filesystem support symbolic links?  The server
      can easily determine if links are supported.

   named_attr         7    boolean      READ
      Does this object have named attributes?

   fsid               8    fsid4        READ
      Unique filesystem identifier for the filesystem holding this
      object.  fsid contains major and minor components, each of which
      are uint64.

   unique_handles     9    boolean      READ
      Are two distinct filehandles guaranteed to refer to two
      different file system objects?

   lease_time         10   uint32       READ
      Duration of leases at the server in seconds.

   rdattr_error       11   enum         READ
      Error returned from getattr during readdir.
5.5. Recommended Attributes - Definitions

   Name               #    Data Type      Access
   _____________________________________________

   ACL                12   nfsace4<>      R/W
      The access control list for the object.  [The nature and format
      of ACLs is still to be determined.]

   aclsupport         13   uint32         READ
      Indicates what ACLs are supported on the current filesystem.

   archive            14   boolean        R/W
      Whether or not this file has been archived since the time of
      last modification (deprecated in favor of time_backup).

   cansettime         15   boolean        READ
      Whether or not this object's filesystem can fill in the times on
      a SETATTR request without an explicit time.

   case_insensitive   16   boolean        READ
      Are filename comparisons on this filesystem case insensitive?

   case_preserving    17   boolean        READ
      Is filename case on this filesystem preserved?

   chown_restricted   18   boolean        READ
      Will a request to change ownership be honored?

   filehandle         19   nfs4_fh        READ
      The filehandle of this object (primarily for readdir requests).

   fileid             20   uint64         READ
      A number uniquely identifying the file within the filesystem.

   files_avail        21   uint64         READ
      File slots available to this user on the filesystem containing
      this object - this should be the smallest relevant limit.

   files_free         22   uint64         READ
      Free file slots on the filesystem containing this object - this
      should be the smallest relevant limit.

   files_total        23   uint64         READ
      Total file slots on the filesystem containing this object.

   fs_locations       24   fs_locations   READ
      Locations where this filesystem may be found.  If the server
      returns NFS4ERR_MOVED as an error, this attribute must be
      supported.

   hidden             25   boolean        R/W
      Is the file considered hidden?

   homogeneous        26   boolean        READ
      Whether or not this object's filesystem is homogeneous, i.e.
      whether pathconf is the same for all filesystem objects.

   maxfilesize        27   uint64         READ
      Maximum supported file size for the filesystem of this object.

   maxlink            28   uint32         READ
      Maximum number of links for this object.

   maxname            29   uint32         READ
      Maximum filename size supported for this object.

   maxread            30   uint64         READ
      Maximum read size supported for this object.

   maxwrite           31   uint64         READ
      Maximum write size supported for this object.  This attribute
      SHOULD be supported if the file is writable.  Lack of this
      attribute can lead to the client either wasting bandwidth or not
      receiving the best performance.

   mime_type          32   utf8<>         R/W
      MIME body type/subtype of this object.

   mode               33   uint32         R/W
      Unix-style permission bits for this object (deprecated in favor
      of ACLs).

   no_trunc           34   boolean        READ
      If a name longer than name_max is used, will an error be
      returned or will the name be truncated?

   numlinks           35   uint32         READ
      Number of links to this object.

   owner              36   utf8<>         R/W
      The string name of the owner of this object.

   owner_group        37   utf8<>         R/W
      The string name of the group ownership of this object.

   quota_hard         38   uint64         READ
      Number of bytes of disk space beyond which the server will
      decline to allocate new space.

   quota_soft         39   uint64         READ
      Number of bytes of disk space at which the client may choose to
      warn the user about limited space.

   quota_used         40   uint64         READ
      Number of bytes of disk space occupied by the owner of this
      object on this filesystem.

   rawdev             41   specdata4      READ
      Raw device identifier.

   space_avail        42   uint64         READ
      Disk space in bytes available to this user on the filesystem
      containing this object - this should be the smallest relevant
      limit.

   space_free         43   uint64         READ
      Free disk space in bytes on the filesystem containing this
      object - this should be the smallest relevant limit.

   space_total        44   uint64         READ
      Total disk space in bytes on the filesystem containing this
      object.

   space_used         45   uint64         READ
      Number of filesystem bytes allocated to this object.

   system             46   boolean        R/W
      Whether or not this file is a system file.

   time_access        47   nfstime4       R/W
      The time of last access to the object.

   time_backup        48   nfstime4       R/W
      The time of last backup of the object.

   time_create        49   nfstime4       R/W
      The time of creation of the object.  This attribute does not
      have any relation to the traditional Unix file attribute
      'ctime'.

   time_delta         50   nfstime4       READ
      Smallest useful server time granularity.

   time_metadata      51   nfstime4       R/W
      The time of last meta-data modification of the object.

   time_modify        52   nfstime4       R/W
      The time since the epoch of last modification to the object.

   version            53   utf8<>         R/W
      Version number of this document.

   volatility         54   nfstime4       READ
      Approximate time until the next expected change on this
      filesystem, as a measure of volatility.

5.6. Interpreting owner and owner_group

   The recommended attributes "owner" and "owner_group" are
   represented in terms of a UTF-8 string.  To avoid a representation
   that is tied to a particular underlying implementation at the
   client or server, the use of the UTF-8 string has been chosen.
   Note that section 6.1 of [RFC2624] provides additional rationale.
   It is expected that the client and server will have their own local
   representation of owner and owner_group that is used for local
   storage or presentation to the end user.  Therefore, it is expected
   that when these attributes are transferred between the client and
   server, the local representation is translated to a syntax of the
   form "user@dns_domain".  This will allow a client and server that
   do not use the same local representation the ability to translate
   to a common syntax that can be interpreted by both.

   The translation is not specified as part of the protocol.  This
   allows various solutions to be employed.  For example, a local
   translation table may be consulted that maps between a numeric id
   and the user@dns_domain syntax.  A name service may also be used to
   accomplish the translation.  The 'dns_domain' portion of the owner
   string is meant to be a DNS domain name, for example,
   user@ietf.org.

   In the case where there is no translation available to the client
   or server, the attribute value must be constructed without the '@'.
   Therefore, the absence of the '@' from the owner or owner_group
   attribute signifies that no translation was available, and the
   receiver of the attribute should not attach any special meaning to
   the attribute value.  Even though the attribute value cannot be
   translated, it may still be useful.  In the case of a client, the
   attribute string may be used for local display of ownership.

5.7. Access Control Lists

   The NFS ACL attribute is an array of access control entries (ACEs).
   There are various access control entry types.  The server is able
   to communicate which ACE types are supported by returning the
   appropriate value within the aclsupport attribute.  The types of
   ACEs are defined as follows:

   Type     Description
   _____________________________________________________

   ALLOW    Explicitly grants the access defined in acemask4 to the
            file or directory.

   DENY     Explicitly denies the access defined in acemask4 to the
            file or directory.

   AUDIT    LOG (system dependent) any access attempt to a file or
            directory which uses an access method which is a subset of
            acemask4.

   ALARM    Generate a system ALARM (system dependent) when any access
            attempt is made to a file or directory which uses an
            access method which is a subset of acemask4.

   The NFS ACE attribute is defined as follows:

      struct nfsace4 {
              acetype4        type;
              aceflag4        flag;
              acemask4        access_mask;
              utf8string      who;
      };

   Each nfsace4 entry is assumed to be processed in order by the
   server.  The first Access Control Entry is used where both the
   "who" and the "access_mask" match the requester and the type of
   access desired.  Any later Access Control Entries which also match
   are ignored.  A sketch of this evaluation follows.
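   A non-normative C sketch of the first-match rule (the "who"
   matching helper, the constant naming the ALLOW type, and the
   default-deny fallback are assumptions of this sketch; the subset
   test is one plausible reading of "match"):

      #include <stdint.h>

      typedef uint32_t acetype4;
      typedef uint32_t aceflag4;
      typedef uint32_t acemask4;

      /* Hypothetical constant for the ALLOW type described above. */
      #define ACE4_ACCESS_ALLOWED_ACE_TYPE 0

      struct nfsace4 {
          acetype4    type;
          aceflag4    flag;
          acemask4    access_mask;
          const char *who;      /* utf8string, simplified */
      };

      /* Matching "who" against the requester (owner, group, special
         identifiers) is implementation specific; assumed helper. */
      extern int who_matches(const struct nfsace4 *ace,
                             const char *requester);

      /* Returns 1 if access is granted, 0 if denied. */
      int
      acl_grants(const struct nfsace4 *acl, unsigned int naces,
                 const char *requester, acemask4 desired)
      {
          unsigned int i;

          for (i = 0; i < naces; i++) {
              if (!who_matches(&acl[i], requester))
                  continue;
              /* The desired access must be a subset of the mask. */
              if ((acl[i].access_mask & desired) != desired)
                  continue;
              /* First matching entry decides; later matches are
                 ignored. */
              return acl[i].type == ACE4_ACCESS_ALLOWED_ACE_TYPE;
          }
          return 0;   /* no matching entry: deny (assumed default) */
      }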
5.7.1. ACE type

   The semantics of the 'type' field follow the descriptions provided
   above.

5.7.2. ACE flag

   The 'flag' field contains values based on the following
   descriptions.

   ACE4_FILE_INHERIT_ACE

      Can be placed on a directory and indicates that this ACE should
      be added to each new non-directory file created.

   ACE4_DIRECTORY_INHERIT_ACE

      Can be placed on a directory and indicates that this ACE should
      be added to each new directory created.

   ACE4_INHERIT_ONLY_ACE

      Can be placed on a directory but does not apply to the directory
      itself, only to newly created files/directories as specified by
      the above two flags.

   ACE4_NO_PROPAGATE_INHERIT_ACE

      Can be placed on a directory.  Normally when a new directory is
      created and an ACE exists on the parent directory which is
      marked ACE4_DIRECTORY_INHERIT_ACE, two ACEs are placed on the
      new directory: one for the directory itself and one which is an
      inheritable ACE for newly created directories.  This flag tells
      the O/S not to place an ACE on the newly created directory which
      is inheritable by subdirectories of the created directory (see
      the sketch following this list).

   ACE4_SUCCESSFUL_ACCESS_ACE_FLAG
   ACE4_FAILED_ACCESS_ACE_FLAG

      Both indicate, for AUDIT and ALARM, in which case to log the
      event.  On every ACCESS or OPEN call which occurs on a file or
      directory which has an ACL that is of type
      ACE4_SYSTEM_AUDIT_ACE_TYPE or ACE4_SYSTEM_ALARM_ACE_TYPE, the
      attempted access is compared to the acemask4 of these ACLs.  If
      the access is a subset of acemask4 and the identifiers match, an
      AUDIT trail or an ALARM is generated.  By default this happens
      regardless of the success or failure of the ACCESS or OPEN call.
      The flag ACE4_SUCCESSFUL_ACCESS_ACE_FLAG only produces the AUDIT
      or ALARM if the ACCESS or OPEN call succeeds.  The flag
      ACE4_FAILED_ACCESS_ACE_FLAG only produces the ALARM or AUDIT if
      the ACCESS or OPEN call fails.

   ACE4_IDENTIFIER_GROUP

      Indicates that the "who" refers to a GROUP as defined under
      Unix.
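   A non-normative sketch of how a server might apply the inheritance
   flags when a new object is created in a directory, reusing the
   nfsace4 and aceflag4 types from the previous sketch (the flag bit
   values are illustrative, and only the basic cases described above
   are handled):

      /* Illustrative bit values for the flags described above. */
      #define ACE4_FILE_INHERIT_ACE          0x00000001
      #define ACE4_DIRECTORY_INHERIT_ACE     0x00000002
      #define ACE4_NO_PROPAGATE_INHERIT_ACE  0x00000004
      #define ACE4_INHERIT_ONLY_ACE          0x00000008

      #define ACE4_INHERIT_FLAGS (ACE4_FILE_INHERIT_ACE |          \
                                  ACE4_DIRECTORY_INHERIT_ACE |     \
                                  ACE4_NO_PROPAGATE_INHERIT_ACE |  \
                                  ACE4_INHERIT_ONLY_ACE)

      /* Build the initial ACL of a new object from its parent
         directory's ACL; "out" must have room for 2 * naces
         entries.  Returns the number of entries produced. */
      unsigned int
      inherit_acl(const struct nfsace4 *parent, unsigned int naces,
                  int new_is_dir, struct nfsace4 *out)
      {
          unsigned int i, n = 0;

          for (i = 0; i < naces; i++) {
              aceflag4 f = parent[i].flag;

              if (new_is_dir && (f & ACE4_DIRECTORY_INHERIT_ACE)) {
                  /* One ACE effective on the new directory itself. */
                  out[n] = parent[i];
                  out[n].flag &= ~ACE4_INHERIT_FLAGS;
                  n++;
                  /* And, unless propagation is cut off, one that
                     stays inheritable by its subdirectories. */
                  if (!(f & ACE4_NO_PROPAGATE_INHERIT_ACE)) {
                      out[n] = parent[i];
                      out[n].flag |= ACE4_INHERIT_ONLY_ACE;
                      n++;
                  }
              } else if (!new_is_dir && (f & ACE4_FILE_INHERIT_ACE)) {
                  /* Effective copy on the new non-directory file. */
                  out[n] = parent[i];
                  out[n].flag &= ~ACE4_INHERIT_FLAGS;
                  n++;
              }
          }
          return n;
      }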
5.7.3. ACE Access Mask

   The access_mask field contains values based on the following:

   Access                     Description
   ____________________________________________________________________
   READ_DATA                  Permission to read the data of the file
   LIST_DIRECTORY             Permission to list the contents of a
                              directory
   WRITE_DATA                 Permission to modify the file's data
   ADD_FILE                   Permission to add a new file to a
                              directory
   APPEND_DATA                Permission to append data to a file
   ADD_SUBDIRECTORY           Permission to create a subdirectory in a
                              directory
   READ_STREAMS               Permission to read the additional streams
                              of a file
   WRITE_STREAMS              Permission to write the additional
                              streams of a file
   EXECUTE                    Permission to execute a file
   DELETE_CHILD               Permission to delete a file or directory
                              within a directory
   READ_ATTRIBUTES            The ability to read basic attributes
                              (non-ACLs) of a file
   WRITE_ATTRIBUTES           Permission to change basic attributes
                              (non-ACLs) of a file
   READ_CONTROL               ?
   READ_EXTENDED_ATTRIBUTES   ?
   WRITE_EXTENDED_ATTRIBUTES  ?
   DELETE                     Permission to delete the file, if file
                              based
   READ_ACL                   Permission to read the ACL
   WRITE_ACL                  Permission to write the ACL
   WRITE_OWNER                Permission to change the owner
   SYNCHRONIZE                Allow the forcing of mutual exclusion on
                              the file

5.7.4. ACE who

   There are several special identifiers ("who") which need to be
   understood universally.  Some of these identifiers cannot be
   understood when an NFS client accesses the server but have meaning
   when a local process accesses the file.  The ability to display and
   modify these permissions is permitted over NFS.

   Who              Description
   _______________________________________________________________
   "OWNER"          The owner of the file.
   "GROUP"          The group associated with the file.
   "EVERYONE"       The world.
   "INTERACTIVE"    Accessed from an interactive terminal.
   "NETWORK"        Accessed via the network.
   "DIALUP"         Accessed as a dialup user to the server.
   "BATCH"          Accessed from a batch job.
   "ANONYMOUS"      Accessed without any authentication.
   "AUTHENTICATED"  Any authenticated user (opposite of ANONYMOUS).
   "SERVICE"        Access from a system service.

   To avoid conflict, these special identifiers should be of the form
   "xxxx@".  For example: ANONYMOUS@.

6. Filesystem Migration and Replication

   With the use of the recommended attribute "fs_locations", the NFS
   version 4 server has a method of providing filesystem migration or
   replication services.  For the purposes of migration and
   replication, a filesystem is defined as all files that share a
   given fsid (major and minor values are the same).

   The fs_locations attribute provides a list of filesystem locations.
   These locations are specified by providing the server name (either
   DNS domain or IP address) and the path name representing the root
   of the filesystem.  Depending on the type of service being
   provided, the list will provide new or alternate locations for the
   filesystem.  The client will use this information to redirect its
   requests to the new server.

6.1. Replication

   It is expected that filesystem replication will be used in the case
   of read-only data.  Typically, the filesystem will be replicated
   amongst two or more servers.  The fs_locations attribute will
   provide the list of these locations to the client.  On first access
   of the filesystem, the client should obtain the value of the
   fs_locations attribute.  If, in the future, the client finds the
   server unresponsive, the client may attempt to use another server
   specified by fs_locations.
   If applicable, the client must take the appropriate steps to
   recover valid filehandles from the new server.  This is described
   in more detail in the following sections.

6.2. Migration

   Filesystem migration is used to move a filesystem from one server
   to another.  Migration is typically used for a filesystem that is
   writable and has a single copy.  The expected use of migration is
   for load balancing or general resource reallocation.  The protocol
   does not specify how the filesystem will be moved between servers.
   This server-to-server transfer mechanism is left to the server
   implementor.  However, the method used to communicate the migration
   event between client and server is specified here.

   Once the servers participating in the migration have completed the
   move of the filesystem, the error NFS4ERR_MOVED will be returned
   for subsequent requests received by the original server.  The
   NFS4ERR_MOVED error is returned for all operations except GETATTR.
   Upon receiving the NFS4ERR_MOVED error, the client will obtain the
   value of the fs_locations attribute.  The client will then use the
   contents of the attribute to redirect its requests to the specified
   server.  To facilitate the use of GETATTR, operations such as PUTFH
   must also be accepted by the server for the migrated filesystem's
   filehandles.  Note that if the server returns NFS4ERR_MOVED, the
   server MUST support the fs_locations attribute.

   If the client requests more attributes than just fs_locations, the
   server may return fs_locations only.  This is to be expected since
   the server has migrated the filesystem and may not have a method of
   obtaining additional attribute data.

   The server implementor needs to be careful in developing a
   migration solution.  The server must consider all of the state
   information clients may have outstanding at the server.  This
   includes but is not limited to locking/share state, delegation
   state, and asynchronous file writes which are represented by WRITE
   and COMMIT verifiers.  The server should strive to minimize the
   impact on its clients during and after the migration process.

6.3. Interpretation of the fs_locations Attribute

   The fs_locations attribute is structured in the following way:

      struct fs_location {
              utf8string      server<>;
              pathname4       rootpath;
      };

      struct fs_locations {
              pathname4       fs_root;
              fs_location     locations<>;
      };

   The fs_location struct is used to represent the location of a
   filesystem by providing a server name and the path to the root of
   the filesystem.  For a multi-homed server or a set of servers that
   use the same rootpath, an array of server names may be provided.
   An entry in the server array is a UTF-8 string and represents one
   of a traditional DNS host name, IPv4 address, or IPv6 address.  It
   is not a requirement that all servers that share the same rootpath
   be listed in one fs_location struct.  The array of server names is
   provided for convenience.  Servers that share the same rootpath may
   also be listed in separate fs_location entries in the fs_locations
   attribute.

   The fs_locations struct and attribute then contains an array of
   locations.  Since the namespace of each server may be constructed
   differently, the "fs_root" field is provided.  The path represented
   by fs_root represents the location of the filesystem in the
   server's namespace.  Therefore, the fs_root path is only associated
   with the server from which the fs_locations attribute was obtained.
   The fs_root path is meant to aid the client in locating the
   filesystem at the various servers listed.

   As an example, consider a replicated filesystem located at two
   servers (servA and servB).  At servA the filesystem is located at
   path "/a/b/c".  At servB the filesystem is located at path
   "/x/y/z".  In this example the client accesses the filesystem first
   at servA with a multi-component lookup path of "/a/b/c/d".  Since
   the client used a multi-component lookup to obtain the filehandle
   at "/a/b/c/d", it is unaware that the filesystem's root is located
   in servA's namespace at "/a/b/c".  When the client switches to
   servB, it will need to determine that the directory it first
   referenced at servA is now represented by the path "/x/y/z/d" on
   servB.  To facilitate this, the fs_locations attribute provided by
   servA would have an fs_root value of "/a/b/c" and two entries in
   fs_location: one for itself (servA) and one for servB with a path
   of "/x/y/z".  With this information, the client is able to
   substitute "/x/y/z" for the "/a/b/c" at the beginning of its access
   path and construct "/x/y/z/d" to use for the new server.
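   The substitution in this example can be sketched in C
   (non-normative; the helper name and error handling are
   illustrative, and fs_root is assumed to be a prefix of the path in
   the old server's name space):

      #include <stdio.h>
      #include <string.h>

      /* Replace the old server's fs_root prefix with the new
         server's rootpath.  Returns 0 on success. */
      int
      translate_path(const char *path,         /* e.g. "/a/b/c/d" */
                     const char *old_fs_root,  /* e.g. "/a/b/c"   */
                     const char *new_rootpath, /* e.g. "/x/y/z"   */
                     char *out, size_t outlen)
      {
          size_t rootlen = strlen(old_fs_root);

          if (strncmp(path, old_fs_root, rootlen) != 0)
              return -1;                 /* not within this filesystem */
          if ((size_t)snprintf(out, outlen, "%s%s",
                               new_rootpath, path + rootlen) >= outlen)
              return -1;                 /* output buffer too small */
          return 0;
      }

      /* translate_path("/a/b/c/d", "/a/b/c", "/x/y/z", buf, len)
         yields "/x/y/z/d", as in the servA/servB example above. */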
   The fs_root path is meant to aid the client in locating the
   filesystem at the various servers listed.  As an example, suppose
   there is a replicated filesystem located at two servers (servA and
   servB).  At servA the filesystem is located at path "/a/b/c".  At
   servB the filesystem is located at path "/x/y/z".  In this example
   the client accesses the filesystem first at servA with a multi-
   component lookup path of "/a/b/c/d".  Since the client used a
   multi-component lookup to obtain the filehandle at "/a/b/c/d", it is
   unaware that the filesystem's root is located in servA's namespace
   at "/a/b/c".  When the client switches to servB, it will need to
   determine that the directory it first referenced at servA is now
   represented by the path "/x/y/z/d" on servB.  To facilitate this,
   the fs_locations attribute provided by servA would have an fs_root
   value of "/a/b/c" and two entries in the locations array.  One entry
   will be for itself (servA) and the other will be for servB with a
   path of "/x/y/z".  With this information, the client is able to
   substitute "/x/y/z" for the "/a/b/c" at the beginning of its access
   path and construct "/x/y/z/d" to use for the new server.

6.4.  Filehandle Recovery for Migration or Replication

   Filehandles for filesystems that are replicated or migrated have the
   same semantics as for filesystems that are not replicated or
   migrated.  For example, if a filesystem has persistent filehandles
   and it is migrated to another server, the filehandle values for the
   filesystem will be valid at the new server.  The same is true for a
   filesystem which is made up of volatile filehandles.  In fact, in
   this case the client should expect that the new server will return
   NFS4ERR_FHEXPIRED when old filehandles are presented; the client
   will need to recover the filehandles appropriately.

7.  NFS Server Namespace

7.1.  Server Exports

   On a UNIX server the name-space describes all the files reachable by
   pathnames under the root directory "/".  On a Windows NT server the
   name-space constitutes all the files on disks named by mapped disk
   letters.  NFS server administrators rarely make the entire server's
   file-system name-space available to NFS clients.  Typically, pieces
   of the name-space are made available via an "export" feature.  In
   previous versions of NFS, the root file-handle for each export is
   obtained through the MOUNT protocol; the client sends a string that
   identifies the exported portion of the name-space and the server
   returns the root file-handle for it.  The MOUNT protocol supports an
   EXPORTS procedure that will enumerate the server's exports.

7.2.  Browsing Exports

   The NFS version 4 protocol provides a root file-handle that clients
   can use to obtain file-handles for these exports via a multi-
   component LOOKUP.  A common user experience is to use a graphical
   user interface (perhaps a file "Open" dialog window) to find a file
   via progressive browsing through a directory tree.  The client must
   be able to move from one export to another export via single-
   component, progressive LOOKUP operations.

   This style of browsing is not well supported by the NFS version 2
   and 3 protocols.  The client expects all LOOKUP operations to remain
   within a single server file-system, i.e. the device attribute will
   not change.  This prevents a client from taking name-space paths
   that span exports.

   An automounter on the client can obtain a snapshot of the server's
   name-space using the EXPORTS procedure of the MOUNT protocol.
   If it understands the server's pathname syntax, it can create an
   image of the server's name-space on the client.  The parts of the
   name-space that are not exported by the server are filled in with a
   "pseudo file-system" that allows the user to browse from one mounted
   file-system to another.  There is a drawback to this representation
   of the server's name-space on the client: it is static.  If the
   server administrator adds a new export the client will be unaware of
   it.

7.3.  Server Pseudo File-System

   NFS version 4 servers avoid this name-space inconsistency by
   presenting all the exports within the framework of a single server
   name-space.  An NFS version 4 client uses LOOKUP and READDIR
   operations to browse seamlessly from one export to another.
   Portions of the server name-space that are not exported are bridged
   via a "pseudo file-system" that provides a view of exported
   directories only.  A pseudo file-system has a unique fsid and
   behaves like a normal, read-only file-system.

   Based on the construction of the server's name-space, it is possible
   that multiple pseudo file-systems may exist.  For example,

           /a              pseudo file-system
           /a/b            real file-system
           /a/b/c          pseudo file-system
           /a/b/c/d        real file-system

   NOTE: [[Need to discuss the ramifications of multiple pseudo
   file-systems.]]

7.4.  Multiple Roots

   DOS, Windows 95, 98 and NT are sometimes described as having
   "multiple roots".  File-systems are commonly represented as disk
   letters.  MacOS represents file-systems as top-level names.  NFS
   version 4 servers for these platforms can construct a pseudo file-
   system above these root names so that disk letters or volume names
   are simply directory names in the pseudo-root.

7.5.  Filehandle Volatility

   The nature of the server's pseudo file-system is that it is a
   logical representation of file-system(s) available from the server.
   Therefore, the pseudo file-system is most likely constructed
   dynamically when the NFS version 4 server is first instantiated.  It
   is expected that the pseudo file-system may not have an on-disk
   counterpart from which persistent filehandles could be constructed.
   Even though it is preferable that the server provide persistent
   filehandles for the pseudo file-system, the NFS client should expect
   that pseudo file-system file-handles are volatile.  This can be
   confirmed by checking the associated "persistent_fh" attribute for
   the filehandles in question.  If the filehandles are volatile, the
   NFS client must be prepared to recover a filehandle value (i.e. with
   a v4 multi-component LOOKUP) when receiving an error of
   NFS4ERR_FHEXPIRED.
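   The following is a hedged sketch of that recovery step; the types,
   error value, and helper functions are hypothetical stand-ins for a
   client's RPC machinery and are not defined by this protocol.

   #define NFS4_OK            0
   #define NFS4ERR_FHEXPIRED  10014  /* value assumed for illustration */

   struct nfs_fh { unsigned char data[128]; unsigned int len; };

   /* Hypothetical client RPC entry points. */
   extern int nfs4_getattr(struct nfs_fh *fh);
   extern int nfs4_lookup_path(const char *path, struct nfs_fh *fh);

   /*
    * Issue an operation; if the volatile filehandle has expired,
    * recover it with a multi-component LOOKUP of the pathname the
    * client has cached for the object, and retry once.
    */
   static int
   getattr_with_recovery(struct nfs_fh *fh, const char *cached_path)
   {
           int status = nfs4_getattr(fh);

           if (status == NFS4ERR_FHEXPIRED) {
                   status = nfs4_lookup_path(cached_path, fh);
                   if (status == NFS4_OK)
                           status = nfs4_getattr(fh);
           }
           return status;
   }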
7.6.  Exported Root

   If the server's root file-system is exported, it might be easy to
   conclude that a pseudo file-system is not needed.  This would be
   wrong.  Assume the following file-systems on a server:

           /       disk1  (exported)
           /a      disk2  (not exported)
           /a/b    disk3  (exported)

   Because disk2 is not exported, disk3 cannot be reached with simple
   LOOKUPs.  The server must bridge the gap with a pseudo file-system.

7.7.  Mount Point Crossing

   The server file-system environment may be constructed in such a way
   that one file-system contains a directory which is 'covered' or
   mounted upon by a second file-system.  For example:

           /a/b            (file system 1)
           /a/b/c/d        (file system 2)

   The pseudo file-system for this server may be constructed to look
   like:

           /               (place holder/not exported)
           /a/b            (file system 1)
           /a/b/c/d        (file system 2)

   It is the server's responsibility to present a pseudo file-system
   that is complete to the client.  If the client sends a lookup
   request for the path "/a/b/c/d", the server's response is the
   filehandle of the file system "/a/b/c/d".  In previous versions of
   NFS, the server would respond with the directory "/a/b/c/d" within
   the file-system "/a/b".

   The NFS client will be able to determine if it crosses a server
   mount point by a change in the value of the "fsid" attribute.

7.8.  Security Policy and Namespace Presentation

   The application of the server's security policy needs to be
   carefully considered by the implementor.  One may choose to limit
   the viewability of portions of the pseudo file-system based on the
   server's perception of the client's ability to authenticate itself
   properly.  However, with the support of multiple security mechanisms
   and the ability to negotiate the appropriate use of these
   mechanisms, the server is unable to properly determine if a client
   will be able to authenticate itself.  If, based on its policies, the
   server chooses to limit the contents of the pseudo file-system, the
   server may effectively hide file-systems from a client that may
   otherwise have legitimate access.

7.9.  Summary

   NFS version 4 provides LOOKUP and READDIR operations for browsing of
   NFS file-systems.  These operations are also used to browse server
   exports.  A v4 server supports export browsing by including exported
   directories in a pseudo file-system.  A browsing client can cross
   seamlessly between a pseudo file-system and a real, exported file-
   system.  Clients must support volatile filehandles and recognize
   mount point crossing of server file-systems.

8.  File Locking

   Integrating locking into NFS necessarily causes it to be stateful;
   with the invasive nature of "share" file locks, it becomes
   substantially more dependent on state than the traditional
   combination of NFS and NLM [XNFS].  There are three components to
   making this state manageable:

   o  Clear division between client and server

   o  Ability to reliably detect inconsistency in state between client
      and server

   o  Simple and robust recovery mechanisms

   In this model, the server owns the state information.  The client
   communicates its view of this state to the server as needed.  The
   client is also able to detect inconsistent state before modifying a
   file.

   To support Windows "share" locks, it is necessary to atomically open
   or create files.  Having a separate share/unshare operation will not
   allow correct implementation of the Windows OpenFile API.  In order
   to correctly implement share semantics, the existing mechanisms used
   when a file is opened or created (LOOKUP, CREATE, ACCESS) need to be
   replaced.  NFS V4 will have an OPEN procedure that subsumes the
   functionality of LOOKUP, CREATE, and ACCESS.  However, because many
   operations require a file handle, the traditional LOOKUP is
   preserved to map a file name to a file handle without establishing
   state on the server.  The policy of granting access or modifying
   files is managed by the server based on the client's state.  It is
   believed that these mechanisms can implement policy ranging from
   advisory only locking to full mandatory locking.
   While ACCESS is just a subset of OPEN, the ACCESS procedure is
   maintained as a lighter weight mechanism.

8.1.  Definitions

   Lock        The term "lock" will be used to refer to both record
               (byte-range) locks as well as file (share) locks unless
               specifically stated otherwise.

   Client      Throughout this proposal the term "client" is used to
               indicate the entity that maintains a set of locks on
               behalf of one or more applications.  The client is
               responsible for crash recovery of those locks it
               manages.  Multiple clients may share the same transport
               and multiple clients may exist on the same network node.

   Clientid    A 64-bit quantity returned by a server that uniquely
               corresponds to a client-supplied Verifier and ID.

   Lease       An interval of time defined by the server for which the
               client is irrevocably granted a lock.  At the end of a
               lease period the lock may be revoked if the lease has
               not been extended.  The lock must be revoked if a
               conflicting lock has been granted after the lease
               interval.  All leases granted by a server have the same
               fixed interval.

   Stateid     A 64-bit quantity returned by a server that uniquely
               defines the locking state granted by the server for a
               specific lock owner for a specific file.  A stateid
               composed of all bits 0 or all bits 1 has special meaning
               and is reserved.

   Verifier    A 32-bit quantity generated by the client that the
               server can use to determine if the client has restarted
               and lost all previous lock state.

8.2.  Locking

   It is assumed that manipulating a lock is rare when compared to I/O
   operations.  It is also assumed that crashes and network partitions
   are relatively rare.  Therefore it is important that I/O operations
   have a lightweight mechanism to indicate if they possess a held
   lock.  A lock request contains the heavyweight information required
   to establish a lock and uniquely define the lock owner.  The
   following sections describe the transition from the heavyweight
   information to the eventual stateid used for most client and server
   locking and lease interactions.

8.2.1.  Client ID

   For each LOCK request, the client must identify itself to the
   server.  This is done in such a way as to allow for correct lock
   identification and crash recovery.  Client identification is
   accomplished with two values.

   o  A verifier that is used to detect client reboots.

   o  A variable length opaque array to uniquely define a client.  For
      an operating system this may be a fully qualified host name or IP
      address, and for a user level NFS client it may additionally
      contain a process id or other unique sequence.

   The data structure for the Client ID would then appear as:

           struct nfs_client_id {
                   opaque  verifier[4];
                   opaque  id<>;
           };
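   As a hedged sketch of one plausible construction consistent with the
   description above (the field sizes and helper name are assumptions
   of this sketch, and a real client would use its boot time rather
   than the current time):

   #include <stdio.h>
   #include <time.h>
   #include <unistd.h>

   struct client_id_buf {
           unsigned char verifier[4];
           char          id[256];    /* XDR variable-length opaque */
           unsigned int  id_len;
   };

   static void
   make_client_id(struct client_id_buf *cid)
   {
           /* The verifier changes on every reboot; boot time is one
            * possible source. */
           unsigned long boot = (unsigned long)time(NULL);
           char host[128];

           cid->verifier[0] = (boot >> 24) & 0xff;
           cid->verifier[1] = (boot >> 16) & 0xff;
           cid->verifier[2] = (boot >>  8) & 0xff;
           cid->verifier[3] =  boot        & 0xff;

           /* Derive the id from a host identity; a user level client
            * may additionally include a process id or other unique
            * sequence. */
           gethostname(host, sizeof(host));
           host[sizeof(host) - 1] = '\0';
           cid->id_len = (unsigned int)snprintf(cid->id, sizeof(cid->id),
                                                "%s/pid=%ld", host,
                                                (long)getpid());
   }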
   It is possible, through the misconfiguration of a client or the
   existence of a rogue client, that two clients end up using the same
   nfs_client_id.  This situation is avoided by "negotiating" the
   nfs_client_id between client and server with the use of the
   SETCLIENTID procedure.  The following describes the two scenarios of
   negotiation.

   1  Client has never connected to the server

      In this case the client generates an nfs_client_id and, unless
      another client has the same nfs_client_id.id field, the server
      accepts the request.  The server also records the principal (or
      principal to uid mapping) from the credential in the RPC request
      that contains the nfs_client_id negotiation request.

      Two clients might still use the same nfs_client_id.id due to,
      perhaps, configuration error (say a High Availability
      configuration where the nfs_client_id.id is derived from the
      ethernet controller address and both systems have the same
      address).  In this case, the result is a switched union that
      returns, in addition to NFS4ERR_CLID_INUSE, the network address
      (the rpcbind netid and universal address) of the client that is
      using the id.

   2  Client is re-connecting to the server after a client reboot

      In this case, the client still generates an nfs_client_id but the
      nfs_client_id.id field will be the same as the nfs_client_id.id
      generated prior to reboot.  If the server finds that the
      principal/uid is equal to that previously "registered" for the
      nfs_client_id.id, then locks associated with the old
      nfs_client_id are immediately released.  If the principal/uid is
      not equal, then this is a rogue client and the request is
      returned in error.  For more discussion of crash recovery
      semantics, see the section on "Crash Recovery".

   In both cases, upon success, NFS4_OK is returned.  To help reduce
   the amount of data transferred on OPEN and LOCK, the server will
   also return a unique 64-bit clientid value that is a shorthand
   reference to the nfs_client_id values presented by the client.  From
   this point forward, the client can use the clientid to refer to
   itself.

8.2.2.  nfs_lockowner and stateid Definition

   When requesting a lock, the client must present to the server the
   clientid and an identifier for the owner of the requested lock.
   These two fields are referred to as the nfs_lockowner and the
   definitions of those fields are:

   o  A clientid returned by the server as part of the client's use of
      the SETCLIENTID procedure.

   o  A variable length opaque array used to uniquely define the owner
      of a lock managed by the client.  This may be a thread id,
      process id, or other unique value.

   When the server grants the lock, it responds with a unique 64-bit
   stateid.  The stateid is used as a shorthand reference to the
   nfs_lockowner, since the server will be maintaining the
   correspondence between them.

8.2.3.  Use of the stateid

   All I/O requests contain a stateid.  If the nfs_lockowner performs
   I/O on a range of bytes within a locked range, the stateid returned
   by the server must be used to indicate that the appropriate lock
   (record or share) is held.  If no state is established by the
   client, either record lock or share lock, a stateid of all bits 0 is
   used.

   If no conflicting locks are held on the file, the server may grant
   the I/O request.  If a conflict with an explicit lock occurs, the
   request is failed (NFS4ERR_LOCKED).  This allows "mandatory locking"
   to be implemented.

   A stateid of all bits 1 allows read requests to bypass locking
   checks at the server.  However, a write request with a stateid of
   all bits 1 does not bypass file locking requirements.  An explicit
   lock may not be granted while an I/O operation with conflicting
   implicit locking is being performed.

   The byte range of a lock is indivisible.  A range may be locked,
   unlocked, or changed between read and write but may not have
   subranges unlocked or changed between read and write.  This is the
   semantics provided by Win32 but only a subset of the semantics
   provided by Unix.  It is expected that Unix clients can more easily
   simulate modifying subranges than Win32 servers can add this
   feature.
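   To summarize the stateid conventions described in this subsection, a
   minimal sketch of a client choosing the stateid for an I/O request
   follows.  The lock lookup helper is hypothetical; the reserved
   values are simply the all-zeros and all-ones bit patterns described
   above.

   typedef struct { unsigned char bits[8]; } stateid4;  /* 64 bits */

   static const stateid4 stateid_none = {
           { 0, 0, 0, 0, 0, 0, 0, 0 } };               /* no lock state */
   static const stateid4 stateid_bypass = {
           { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff } };
                                  /* READ may bypass locking checks */

   /* Hypothetical: find the stateid for a lock held by this
    * nfs_lockowner covering the byte range of the request. */
   extern const stateid4 *find_held_stateid(void);

   static stateid4
   io_stateid(void)
   {
           const stateid4 *held = find_held_stateid();

           /* Use the server-returned stateid when a lock is held;
            * otherwise all bits 0. */
           return (held != NULL) ? *held : stateid_none;
   }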
8.2.4.  Sequencing of Lock Requests

   Locking is different from most NFS operations as it requires "at-
   most-one" semantics that are not provided by ONC RPC.  In the face
   of retransmission or reordering, lock or unlock requests must have a
   well defined and consistent behavior.  To accomplish this, each lock
   request contains a sequence number that is a monotonically
   increasing integer.  Different nfs_lockowners have different
   sequences.  The server maintains the last sequence number (L)
   received and the response that was returned.  If a request with a
   previous sequence number (r < L) is received, it is silently
   ignored, as its response must have been received before the last
   request (L) was sent.  If a duplicate of the last request (r == L)
   is received, the stored response is returned.  If a request beyond
   the next sequence (r == L + 2) is received, it is silently ignored.
   Sequences are reinitialized whenever the client verifier changes.
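   A minimal sketch of the server side of these rules, assuming a
   hypothetical per-nfs_lockowner reply cache:

   struct seq_state {
           unsigned int  last;         /* L: last sequence received   */
           void         *saved_reply;  /* response returned for L     */
   };

   enum seq_action { SEQ_PROCESS, SEQ_REPLAY, SEQ_IGNORE };

   static enum seq_action
   check_sequence(const struct seq_state *s, unsigned int r)
   {
           if (r == s->last)           /* duplicate of last request   */
                   return SEQ_REPLAY;  /* return the stored response  */
           if (r == s->last + 1)       /* the expected next request   */
                   return SEQ_PROCESS;
           return SEQ_IGNORE;          /* r < L, or beyond the next   */
   }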
8.3.  Blocking Locks

   Some clients require the support of blocking locks.  The current
   proposal lacks a call-back mechanism, similar to NLM, to notify a
   client when the lock has been granted.  Clients have no choice but
   to continually poll for the lock, which presents a fairness problem.
   Two new lock types are added, READW and WRITEW, used to indicate to
   the server that the client is requesting a blocking lock.  The
   server should maintain an ordered list of pending blocking locks.
   When the conflicting lock is released, the server may wait the lease
   period for the first client to re-request the lock.  After the lease
   period expires, the next waiting client request is allowed the lock.
   Clients are required to poll at an interval sufficiently small that
   it is likely to acquire the lock in a timely manner.  The server is
   not required to maintain a list of pending blocked locks as it is
   used to increase fairness and not for correct operation.  Because of
   the unordered nature of crash recovery, storing of lock state to
   stable storage would be required to guarantee ordered granting of
   blocking locks.

8.4.  Lease Renewal

   The purpose of a lease is to allow a server to remove stale locks
   that are held by a client that has crashed or is otherwise
   unreachable.  It is not a mechanism for cache consistency, and lease
   renewals may not be denied if the lease interval has not expired.

   Any I/O request that has been made with a valid stateid is a
   positive indication that the client is still alive and locks are
   being maintained.  This becomes an implicit renewal of the lease.
   In the case no I/O has been performed within the lease interval, a
   lease can be renewed by having the client issue a zero length READ.
   Because the nfs_lockowner contains a unique client value, any
   stateid for a client will renew all leases for locks held with the
   same client field.  This will allow very low overhead lease renewal
   that scales extremely well.  In the typical case, no extra RPC calls
   are needed, and in the worst case one RPC is required every lease
   period regardless of the number of locks held by the client.

8.5.  Crash Recovery

   The important requirement in crash recovery is that both the client
   and the server know when the other has failed.  Additionally, it is
   required that a client sees a consistent view of data across server
   reboots.  All I/O operations that may have been queued within the
   client or network buffers must wait until the client has
   successfully recovered the locks protecting the I/O operations.

8.5.1.  Client Failure and Recovery

   In the event that a client fails, the server may recover the
   client's locks when the associated leases have expired.  Conflicting
   locks from another client may only be granted after this lease
   expiration.  If the client is able to restart or reinitialize within
   the lease period, the client may be forced to wait the remainder of
   the lease period before obtaining new locks.

   To minimize client delay upon restart, lock requests contain a
   verifier field in the lock_owner.  This verifier is part of the
   initial SETCLIENTID call made by the client.  Since the verifier
   will be changed by the client upon each initialization, the server
   can compare a new verifier to the verifier associated with currently
   held locks and determine that they do not match.  This signifies the
   client's new instantiation and loss of locking state.  As a result,
   the server is free to release all locks held which are associated
   with the old verifier.

   For secure environments, a change in the verifier must only cause
   the release of locks associated with the authenticated requester.
   This is required to prevent a rogue entity from freeing otherwise
   valid locks.  Note that the verifier must have the same uniqueness
   properties as the COMMIT verifier.

8.5.2.  Server Failure and Recovery

   If the server fails and loses locking state, the server must wait
   the lease period before granting any new locks or allowing any I/O.
   An I/O request during the grace period with a stale stateid will
   fail with NFS4ERR_GRACE.  To recover the lock and associated state,
   the client will reissue the lock request with reclaim set to TRUE.
   Upon receiving a successful reply and associated stateid, the client
   may reissue the I/O request with the new stateid.

   Any time a client receives an NFS4ERR_GRACE error, the client must
   assume that all locking state associated with the server returning
   the error has been lost.  The client should start recovering all
   outstanding locks upon receiving NFS4ERR_GRACE.  If the server
   receives a lock request during its grace period that does not have
   reclaim set to TRUE, the server must return NFS4ERR_GRACE.  This
   error return will trigger the client to recover all of its locking
   state by reclaiming locks.

   A lock request outside the server's grace period with reclaim set to
   TRUE can only succeed if the server can guarantee that no
   conflicting lock or I/O request has been granted since reboot.

8.5.3.  Network Partitions and Recovery

   If the duration of a network partition is greater than the lease
   period provided by the server, the server will not have received a
   lease renewal from the client.  If this occurs, the server may free
   all locks held for the client.  As a result, all stateids held by
   the client will become invalid.  Once the client is able to reach
   the server after such a network partition, all I/O submitted by the
   client with the now invalid stateids will fail with the server
   returning the error NFS4ERR_EXPIRED.  Once this error is received,
   the client will suitably notify the application that held the lock.

   As a courtesy to the client, or as an optimization, the server may
   continue to hold locks on behalf of a client for which recent
   communication has extended beyond the lease period.  If the server
   receives a lock or I/O request that conflicts with one of these
   courtesy locks, the server must free the courtesy lock and grant the
   new request.
   In the event of a network partition with a duration extending beyond
   the expiration of a client's leases, the server MUST employ a method
   of recording this fact in its stable storage.  Conflicting lock
   requests from another client may be serviced after the lease
   expiration.  There are various scenarios involving server failure
   after such an event that require the storage of these lease
   expirations or network partitions.  One scenario is as follows:

      A client holds a lock at the server and encounters a network
      partition and is unable to renew the associated lease.  A second
      client obtains a conflicting lock and then frees the lock.  After
      the unlock request by the second client, the server reboots or
      reinitializes.  Once the server recovers, the network partition
      heals and the original client attempts to reclaim the original
      lock.

   In this scenario and without any state information, the server will
   allow the reclaim and the client will be in an inconsistent state,
   because the server or the client has no knowledge of the conflicting
   lock.

   The server may choose to store this lease expiration or network
   partitioning state in a way that will only identify the client as a
   whole.  Note that this may potentially lead to lock reclaims being
   denied unnecessarily because of a mix of conflicting and non-
   conflicting locks.  The server may also choose to store information
   about each lock that has an expired lease with an associated
   conflicting lock.  The choice of the amount and type of state
   information that is stored is left to the implementor.  In any case,
   the server must have enough state information to enable correct
   recovery from multiple partitions and multiple server failures.

8.6.  Server Revocation of Locks

   At any point, the server can revoke locks held by a client and the
   client must be prepared for this event.  When the client detects
   that its locks have been or may have been revoked, the client is
   responsible for validating the state information between itself and
   the server.  Validating locking state for the client means that it
   must verify or reclaim state for each lock currently held.

   The first instance of lock revocation is upon server reboot or re-
   initialization.  In this instance the client will receive an error
   of NFS4ERR_GRACE and the client will proceed with normal crash
   recovery as described in the previous section.

   The second lock revocation event can occur as a result of
   administrative intervention within the lease period.  While this is
   considered a rare event, it is possible that the server's
   administrator has decided to release or revoke a particular lock
   held by the client.  As a result of revocation, the client will
   receive an error of NFS4ERR_EXPIRED, and the error is received
   within the lease period for the lock.  In this instance the client
   may assume that only the lock_owner's locks have been lost.  The
   client notifies the lock holder appropriately.  The client may not
   assume the lease period has been renewed as a result of the failed
   operation.

   The third lock revocation event is the inability to renew the lease
   period.  While this is considered a rare or unusual event, the
   client must be prepared to recover.  Both the server and client will
   be able to detect the failure to renew the lease and are capable of
   recovering without data corruption.  For the server, it tracks the
   last renewal event serviced for the client and knows when the lease
   will expire.
   Similarly, the client must track operations which will renew the
   lease period and is able to determine lease period expiration.  When
   the client determines that the lease period has expired, the client
   must mark all locks held for the associated lease as "unvalidated".
   This means the client has been unable to re-establish or confirm the
   appropriate lock state with the server.  As described in the
   previous section on crash recovery, there are scenarios in which the
   server may grant conflicting locks after the lease period has
   expired for a client.  Once the lease period has expired, the client
   must validate each lock it has held to ensure that a conflicting
   lock has not been granted.  The client may accomplish this task by
   issuing an I/O request, either a pending I/O or a zero length read.
   If the response to the request is successful, the client has
   validated the lock and re-established the appropriate state between
   itself and the server.  If the I/O request is not successful, the
   lock was revoked by the server and the client must notify the owner.

8.7.  Share Reservations

   A share reservation is a mechanism to control access to a file.  It
   is a separate and independent mechanism from record locking.  When a
   client opens a file, it issues an OPEN request to the server
   specifying the type of access required (READ, WRITE, or BOTH) and
   the type of access to deny others (deny NONE, READ, WRITE, or BOTH).
   If the OPEN fails, the client will fail the application's open
   request.

   Pseudo-code definition of the semantics:

           if ((request.access & file_state.deny) ||
               (request.deny & file_state.access))
                   return (NFS4ERR_DENIED)
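   Expanding that pseudo-code into a compilable sketch, including the
   state update a server would perform when the OPEN succeeds (the bit
   values and names here are illustrative assumptions, not protocol
   constants):

   #define ACCESS_READ   0x1
   #define ACCESS_WRITE  0x2

   #define NFS4_OK          0
   #define NFS4ERR_DENIED   10010  /* value assumed for illustration */

   struct open_args  { unsigned int access, deny; };
   struct file_state { unsigned int access, deny; };
                               /* union over all current opens */

   static int
   try_open(struct file_state *fs, const struct open_args *req)
   {
           if ((req->access & fs->deny) || (req->deny & fs->access))
                   return NFS4ERR_DENIED;

           fs->access |= req->access;  /* what this open uses        */
           fs->deny   |= req->deny;    /* what it denies to others   */
           return NFS4_OK;
   }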
8.8.  OPEN/CLOSE Procedures

   To provide correct share semantics, a client MUST use the OPEN
   procedure to obtain the initial file handle and indicate the desired
   access and what, if any, access to deny.  Even if the client intends
   to use a stateid of all 0's or all 1's, it must still obtain the
   filehandle for the regular file with the OPEN procedure.  For
   clients that do not have a deny mode built into their open API, deny
   equal to NONE should be used.

   The OPEN procedure with the CREATE flag also subsumes the CREATE
   procedure for regular files as used in previous versions of NFS,
   allowing a create with a share to be done atomically.  NOTE: [[Will
   expand on create semantics here.]]

   The CLOSE procedure removes all share locks held by the lock_owner
   on that file.  If record locks are held, they should be explicitly
   unlocked.  Some servers may not support the CLOSE of a file that
   still has record locks held; if so, CLOSE will fail and return an
   error.

   The LOOKUP procedure is preserved and will return a file handle
   without establishing any lock state on the server.  Without a valid
   stateid, the server will assume the client has the least access.
   For example, a file opened with deny READ/WRITE cannot be accessed
   using a file handle obtained through LOOKUP.

9.  Client-Side Caching

   Client-side caching of data, of file attributes, and of file names
   is essential to providing good performance in NFS.  Providing
   distributed cache-coherence is a difficult problem, and previous
   versions of NFS have not attempted it.  Instead, several client
   implementation techniques have been used to reduce the problems that
   lack of coherence poses for users.  These techniques have not been
   clearly defined by earlier specifications, and it is often unclear
   what is valid or invalid client behavior.

   NFS version 4 uses many techniques similar to those that have been
   used in previous versions of NFS.  It does not provide distributed
   cache coherence, but it defines a more limited set of caching
   guarantees to allow locks and share reservations to be used without
   destructive interference from client-side caching.

   In addition, version 4 introduces a delegation mechanism which
   allows many decisions normally made by the server to be made locally
   by clients.  This provides efficient support of the common cases
   where sharing is infrequent or where sharing is read-only.

9.1.  Performance Challenges for Client-Side Caching

   Caching techniques used in previous versions of NFS have been
   successful in providing good performance.  However, several
   scalability challenges can arise when those techniques are used with
   very large numbers of clients, particularly when those clients are
   geographically distributed, increasing the latency for cache
   revalidation requests.  When latencies are large, repeated cache
   validation requests at open time, which NFS-v2 and NFS-v3 clients
   typically do, can have serious performance drawbacks.

   A common case is one in which a file is only accessed by a single
   client.  Sharing is infrequent.  In this case, repeated reference to
   the server to find that no conflicts exist is expensive.  A better
   option is to allow a client repeatedly opening a file to do so
   without reference to the server, until potentially conflicting
   operations from another client actually occur.

   A similar situation arises in connection with file locking.  Sending
   file lock and unlock requests to the server as well as the I/O
   requests necessary to make data caching consistent with the locking
   semantics (see the section "Data Caching and File Locking") can
   severely limit performance.  When locking is used to provide
   protection against infrequent conflicts, a large penalty will be
   paid, which may discourage the use of locking.

   In NFS Version 4, more aggressive caching strategies are designed:

   o  To be compatible with a large range of server semantics.

   o  To provide the same caching benefits as previous versions of NFS
      when unable to provide the more aggressive model.

   o  To organize the requirements for aggressive caching so that a
      large portion of the benefit can be obtained even when not all of
      the requirements can be met.

   The appropriate requirements for the server are discussed in later
   sections in which specific forms of caching are dealt with (see the
   section "Open Delegation").

   NOTE: [[This discussion of proxy caching assumes that a proxy server
   appears to the (real) server as an ordinary client.  Should there be
   a proposal for non-transparent proxy server support (Mike Eisler's
   proxy model 2), this can be modified.]]

9.2.  Proxy Caching

   Proxy caching is a useful technique to reduce latency and avoid
   server overload when a large number of geographically distributed
   clients share data.  The proxy cache allows many requests to be
   satisfied by a local server, reducing bandwidth and latencies
   associated with accessing the primary server.
   If NFS version 4 were to limit itself to the caching approaches used
   in NFS v2 and NFS v3, a large number of the requests which a proxy
   server would receive would result in corresponding requests to the
   distant server:

   o  All OPEN and CLOSE requests

   o  WRITE requests necessary to flush out dirty data before all file
      close operations

   o  All LOCK and UNLOCK requests

   o  READ and WRITE requests which must go to the server because locks
      are held or being released

   o  All directory modification requests (e.g. CREATE, REMOVE, etc.)

   o  All SETATTR requests

   o  Many other requests because of cache entry staleness

   Maintaining distributed caches allowing authoritative decisions to
   be made locally is difficult, in the general case.  However, there
   are many situations in which access patterns allow such decisions to
   be delegated opportunistically to particular clients (such as proxy
   servers), avoiding a great deal of unnecessary communication.  This
   is of particular importance when scaling to very large numbers of
   clients.

9.3.  Delegation and Callbacks

   Recallable delegation of server responsibilities for a file to a
   client (which may include proxy servers) improves performance by
   avoiding repeated requests to the server in the absence of inter-
   client conflict.  A server recalls delegated responsibilities, using
   a callback rpc from the server to the client, when another client
   engages in sharing of a delegated file.

   A delegation is passed from the server to the client, specifying the
   object for which the delegation is being done and the type of
   delegation.  There are different types of delegations, but each
   contains a stateid to be used to represent the delegation when
   performing operations that depend on the delegation.  This stateid
   is similar to those associated with locks and share reservations but
   differs in that the stateid for a delegation is associated with a
   clientid and may be used on behalf of all the nfs_lockowners for the
   given client.  A delegation is made to the client as a whole and not
   to any specific process within it.

   Because callback rpc's may not work in all environments (due to
   firewalls, for example), correct operation does not depend on them.
   Preliminary testing of callback functionality by means of a CB_NULL
   request determines whether callbacks can be supported.  The CB_NULL
   request checks the continuity of the callback path.  A server makes
   a preliminary assessment of callback availability to a given client
   and avoids delegating responsibilities until it has determined that
   callbacks are supported.  Because client requests for delegation are
   always conditional upon the absence of conflicting access, clients
   can not assume that a request for delegation will be granted, and
   they must always be prepared for denial.

   Once granted, a delegation behaves in most ways like a lock.  There
   is an associated lease that is subject to renewal together with all
   of the other leases held by that client.

   Unlike locks, a request to a delegated file from a second client
   will cause the server to recall a delegation through a callback.  On
   recall, the client holding the delegation must flush modified state
   (such as modified data) to the server and return the delegation.
   The conflicting request will not be responded to until the recall is
   complete, either by the return of the delegation or by the server
   timing out the recall and revoking the delegation.
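   A hedged sketch of the client's side of a recall, with hypothetical
   helpers standing in for the client's write-back and delegation-
   return machinery:

   struct delegation;   /* client's record of a delegation */

   /* Hypothetical: flush modified data and state covered by the
    * delegation (WRITEs plus COMMIT), and return the delegation to
    * the server. */
   extern int flush_modified_state(struct delegation *d);
   extern int return_delegation(struct delegation *d);

   static int
   handle_recall(struct delegation *d)
   {
           /* Modified state must reach the server before the
            * delegation is returned; otherwise the server will
            * eventually time out the recall and revoke. */
           int status = flush_modified_state(d);

           if (status != 0)
                   return status;
           return return_delegation(d);
   }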
   Following recall, the server has the information necessary to grant
   or deny the second client's request.

   Since recalling a delegation may involve the flushing of substantial
   state to the server, the server should allow a time to complete the
   recall substantially longer than for a typical single RPC.  The
   server may also extend the time allowed if it can determine that
   state is being diligently flushed by the client.  However, the time
   to complete the recall should not be unbounded.

   For example, when responsibility to mediate opens on a given file is
   delegated to a client (see the section "Open Delegation"), the
   server will not know what opens are in effect on the client and thus
   will be unable to determine whether the access and deny state for
   the file allows any particular open until the delegation has been
   returned.

   Client failure or a network partition can result in failure to
   respond to a recall callback.  The server will revoke the
   delegation, rendering any modified state still on the client
   useless.

9.3.1.  Delegation Recovery

   There are three situations that delegation recovery must deal with:

   o  Client reboot

   o  Server reboot

   o  Network partition (full or callback-only)

   In the event of a client reboot, the failure to renew leases will
   result in the revocation of record locks and share reservations.
   Delegations, however, may be treated a bit differently.  Because
   data associated with some delegations may be written to stable
   storage on the client, and because a delegation held by a proxy
   server may be further delegated to its client in turn, whereupon the
   proxy server may reboot, there will be situations in which
   delegations will need to be re-established after a client (which
   includes a proxy server) reboots.

   To accommodate such situations, the server may, after leases expire,
   force requests that conflict with existing delegations to wait for a
   longer period of time.  This is consistent with the fact that
   recall, including the time necessary to flush modified state to the
   server and return the delegation, may take significant time.  This
   longer interval would allow clients which reboot to consult stable
   storage and request the reclamation of delegations which have not
   been timed out, using this longer interval.  For open delegations,
   such delegations are reclaimed using OPEN with a claim type of
   CLAIM_DELEGATE_PREV.  (See the sections on "Data Caching and
   Revocation" and "Procedure 17: OPEN" for discussion of open
   delegation and the details of OPEN respectively.)

   When the server reboots, delegations are reclaimed (using OPEN with
   CLAIM_DELEGATE_PREV) in a similar fashion to record locks and share
   reservations.  However, there is a slight semantic difference.
   Normally, when the server decides that a delegation should not be
   granted, it performs the requested action (e.g. OPEN) without
   granting any delegation.  When this happens as part of reclaim, the
   server grants the delegation but marks it specially so that the
   client treats the delegation as having been granted but recalled by
   the server, so that it then has the duty to write all modified state
   to the server and then return the delegation.  This handling of
   delegation reclaim reconciles three principles of NFS Version 4:

   o  That upon reclaim, a client faithfully reporting resources
      assigned to it by an earlier server instance must be granted
      those resources.
   o  That the server has untrammeled authority to determine whether
      delegations are to be granted and, once granted, whether they are
      to be continued.

   o  That the use of callbacks is not to be depended upon until the
      client has proved its ability to receive them.

   When a network partition occurs, delegations, like locks and share
   reservations, will be subject to freeing when the lease renewal
   period expires, although the server will normally extend the period
   in which conflicting requests are held off in the case of
   delegations.  Eventually, however, the occurrence of a conflicting
   request from another client will cause revocation of the delegation.
   A blockage of the callback path (e.g. by a later network
   configuration change) will have the same effect.  A recall request
   will fail and revocation of the delegation will result.

   A client normally finds out about revocation of a delegation when it
   uses a stateid associated with a delegation and receives the error
   NFS4ERR_EXPIRED.  It also may find out about delegation revocation
   after a client reboot when it attempts to reclaim a delegation and
   receives that same error.  Note that in the case of a revoked write
   open delegation, there are issues because data may have been
   modified by the client whose delegation is revoked and separately by
   other clients.  See the section "Revocation Recovery for Write Open
   Delegation" for a discussion of such issues.  Note also that when
   delegations are revoked, information about the revoked delegation
   will be written by the server to stable storage (as described in
   section 7.5) to deal with the case in which a server reboots after
   revoking a delegation but before the client holding the revoked
   delegation finds out about the revocation.

9.4.  Data Caching

   When programs share access to a set of files, they need to be
   implemented so as to take account of the possibility of conflicting
   access by another program.  This is true whether the programs in
   question are on different hosts or reside on the same host.  Share
   reservations and record locks are the facilities that NFS v4
   provides to allow programs to co-ordinate access by providing mutual
   exclusion facilities.  NFS v4 data caching must be implemented so
   that it does not vitiate the assumptions that those using these
   facilities depend on.

9.4.1.  Data Caching and OPENs

   In order to avoid invalidating the sharing assumptions that
   applications rely on, NFS v4 clients should not provide cached data
   to applications or modify it on behalf of an application when it
   would not be valid to obtain or modify that same data via a READ or
   WRITE rpc.

   Further, in the absence of open delegation (see the section "Open
   Delegation"), two additional rules apply.  These rules are obeyed in
   practice by many NFS v2 and NFS v3 clients.

   o  The first rule is that cached data present on a client must be
      revalidated after doing an OPEN, to make sure that the data for
      the file in question is still validly reflected in the client's
      cache.  This must be done at least when a client open includes
      DENY=WRITE or BOTH, terminating a period in which other clients
      may have had the opportunity to open the file with WRITE access.
      Clients may choose to do the revalidation more often (i.e. on
      opens specifying DENY=NONE) to parallel NFS v3 practice for the
      benefit of users assuming this degree of cache revalidation.
   o  The second rule, complementary to the first, is that modified
      data must be flushed to the server before closing a file opened
      for write.  If this rule is not adhered to, the revalidation done
      after client OPENs cannot achieve its purpose.  This data must be
      committed to stable storage before the CLOSE is done, since
      retransmission of the data after a server reboot might not be
      possible once the file is closed.

9.4.2.  Data Caching and File Locking

   When users do not use share reservations to exclude inconsistent
   access, but use file locking instead, there is an analogous set of
   constraints that apply to client side data caching.  These rules are
   effective only if file locking is used in a way which is congruent
   with the actual IO operations being done, as opposed to being used
   in a purely conventional way.  For example, it is possible to
   manipulate a 2MB file by dividing the file into two 1MB regions and
   using a lock for write on byte 0 of the file to represent the right
   to do IO to the first region and a lock for write on byte 1 of the
   file to represent the right to do IO on the second region.  As long
   as all applications manipulating the file obey this convention, they
   will work on a local file system, but they may not work on NFS v4
   unless clients refrain from data caching.

   The first rule is that when a client locks a region, it must
   revalidate its data cache if it has any cached data in the region
   newly locked, and invalidate it if the change attribute shows that
   the file may have been written since that data was obtained.  (A
   client might choose to invalidate all of the non-modified cached
   data that it has, but invalidating all of the data in the newly
   locked region is what is necessary for correct operation.)

   The second rule is that before releasing a write lock for a region,
   all modified data for that region must be flushed to the server
   (although not necessarily to disk).

   Note that flushing data to the server and the invalidation of cached
   data must reflect the actual byte ranges locked or unlocked.
   Rounding these up or down to reflect client cache block boundaries
   will cause problems if not carefully done.  For example, writing a
   modified block when only half of that block is within an area being
   unlocked may cause invalid modification to the region outside the
   unlocked area, which may be part of a region locked by another
   client.  Clients can avoid this situation by synchronously
   performing portions of write operations that overlap that portion
   (initial or final) that is not a full block.  Similarly,
   invalidating a locked area which is not an integral number of full
   buffer blocks would require the client to read one or two partial
   blocks from the server if the revalidation procedure shows that the
   data which the client possesses may not be valid.

   Writes required to flush data before unlocking must be done to
   stable storage, either by doing synchronous writes or a COMMIT as
   part of the flush operation.  This is so because retransmission of
   the modified data after a server reboot might conflict with a lock
   held by another client.

   Clients may choose to accommodate programs using record locking in
   non-standard ways (e.g. using a record lock as a global semaphore)
   by flushing to the server more data upon an UNLOCK than is covered
   by the locked range, possibly including modified data in other
   files.
   Any client doing so must ensure that, for any file in which all
   written data lies within properly locked areas, no data is written
   to the server outside a locked area.

9.4.3.  Data Caching and Mandatory File Locking

   Client side data caching needs to respect mandatory file locking
   when it is in effect.  The presence of mandatory file locking for a
   given file is indicated in the result flags for an OPEN.  When there
   is a read or write for a file for which mandatory locking is in
   effect, the client must check if it holds an appropriate lock for
   the range of bytes being read or written.  If it does, it may
   satisfy the request using client side caching, just as for any other
   read or write.  If such a lock is not held, the read or write cannot
   be satisfied by caching but must be sent to the server.  When a
   request partially overlaps a locked area, the request should be
   broken up into multiple pieces with each region (locked or not)
   treated appropriately.

9.4.4.  Data Caching and File Identity

   When clients cache data, the data needs to be organized according to
   the file system object to which the data belongs.  For NFS v3
   clients, the typical practice has been to assume (for this purpose)
   that distinct handles represent distinct filesystem objects (even
   though in some unusual cases this has not been the case) and that
   the data cache may be maintained on this basis.

   In NFS v4, we have the prospect (due to pathname based handles) of
   more significant deviations from a one-filehandle-per-object model.
   This requires some method by which clients may reliably determine
   whether two filehandles designate the same object.  If they were to
   simply assume that all distinct filehandles denoted distinct objects
   and proceeded to do data caching on that basis, caching
   inconsistencies would arise between the distinct client side objects
   which mapped to the same server side object.  While it is true that
   such inconsistencies would be similar to those typically seen by
   programs running on multiple clients (apart from this issue), these
   inconsistencies would not be expected by NFS v3 clients not sharing
   files with any other client.  The appearance of such inconsistencies
   would be a definite problem inhibiting transition from NFS v3 to NFS
   v4 and so must be avoided.

   The following procedure allows an NFS v4 client to determine (for
   the purposes of data caching) whether two distinct filehandles
   denote the same server side object:

   o  If GETATTRs directed to the two handles in question return
      different values of fsid.major or fsid.minor, then they are
      distinct objects.

   o  If a GETATTR for any file on the fsid (major and minor) to which
      the two handles belong shows that unique_handles is TRUE, then
      the two objects are distinct.

   o  If GETATTR directed to the two handles does not return the fileid
      attribute for one or both of the handles, then it cannot be
      determined whether the two objects are the same, and so
      operations which depend on that knowledge (e.g. client side data
      caching) cannot be done reliably.

   o  If the two GETATTRs return different values for the fileid
      attribute, then they are distinct objects.

   o  Otherwise they are the same object.
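   A compact sketch of this procedure follows (attribute retrieval is
   assumed to have been done already; a negative fileid below stands
   for "the fileid attribute was not returned"):

   struct obj_attrs {
           long long fsid_major, fsid_minor;
           long long fileid;           /* < 0 if not returned */
           int       unique_handles;   /* TRUE for the fsid   */
   };

   enum identity { OBJ_DISTINCT, OBJ_SAME, OBJ_UNKNOWN };

   static enum identity
   same_object(const struct obj_attrs *a, const struct obj_attrs *b)
   {
           if (a->fsid_major != b->fsid_major ||
               a->fsid_minor != b->fsid_minor)
                   return OBJ_DISTINCT;
           if (a->unique_handles)
                   return OBJ_DISTINCT;  /* one handle per object */
           if (a->fileid < 0 || b->fileid < 0)
                   return OBJ_UNKNOWN;   /* caching not reliable  */
           if (a->fileid != b->fileid)
                   return OBJ_DISTINCT;
           return OBJ_SAME;
   }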
9.5.  Open Delegation

   When a file is being opened, the server may delegate further
   handling of opens and closes for that file to the opening client.
   Any such delegation is recallable, since the circumstances that
   occasioned it are subject to change.  In particular, the server may
   receive a conflicting OPEN from another client, which obliges it to
   recall the delegation before deciding whether the OPEN may be
   granted.  Granting a delegation request is up to the server, and it
   may deny all such requests.  The following is a typical set of
   conditions that servers might use in deciding whether an open should
   be delegated:

   o  The client must be able to respond to callbacks (as evidenced by
      responding to previous CB_NULL requests).

   o  The client must not have failed to respond properly to previous
      recalls.

   o  There must be no current open conflicting with the requested
      delegation.

   o  There should be no current delegation that conflicts with the
      delegation being requested.

   o  The probability of future conflicting open requests should be low
      based on the recent history of the file.

   o  There must be no server specific semantics of OPEN/CLOSE that
      would make the required handling incompatible with the prescribed
      handling that the delegated client would apply (see below).

   There are two types of open delegations, read and write.  A read
   open delegation allows a client to handle, on its own, requests to
   open a file for reading that do not deny read access to others.
   Multiple read open delegations may be outstanding simultaneously and
   do not conflict.  A write open delegation allows the client to
   handle, on its own, all opens.  Only one write open delegation may
   exist for a given file at a given time, and it is inconsistent with
   any read open delegations.

   When a client has a read open delegation, it may not make any
   changes to the contents or attributes of the file, but it is assured
   that no other client may do so.  When a client has a write open
   delegation, it may modify the file data as it wishes, secure in the
   knowledge that no other client is accessing the file's data.  The
   client holding a write delegation may only affect file attributes
   which are intimately connected with the file data: length,
   modify_time, change.

   When a client has an open delegation, it does not send OPENs or
   CLOSEs to the server but updates the appropriate status internally.
   For a read open delegation, opens that cannot be handled locally
   (opens for write or that deny read access) must go to the server.

   When an open delegation is requested and granted, the response to
   the OPEN contains an open delegation structure which specifies the
   type of delegation (read or write), space limitation information to
   control flushing of data on close (write open delegation only; see
   the section "Open Delegation and Data Caching"), an nfsace4
   specifying read and write permissions, and a stateid to represent
   the delegation when doing IO.  This stateid is separate and distinct
   from the stateid for the OPEN proper.  The OPEN stateid, unlike the
   delegation stateid, is associated with a particular nfs_lockowner
   and will continue to be valid after the delegation is recalled, if
   the file remains open.

   When an internal request (or a request by one of a proxy server's
   clients) is made to open a file when open delegation is in effect,
   it will be accepted or rejected solely on the basis of the following
   conditions.  Any requirement for other checks to be made by the
   delegate should result in open delegation being denied so that the
   checks can be made by the server itself.
   o  The access and deny bits for the request and the file, as
      described in the section "Share Reservations".

   o  The read and write permissions, as determined below.

   The nfsace4 passed with the delegation can be used to avoid frequent
   ACCESS calls.  The permission check should be as follows:

   o  If the nfsace4 indicates that the open may be done, then it
      should be granted without reference to the server.

   o  If the nfsace4 indicates that the open may not be done, then an
      ACCESS request must be made to the server to obtain the
      definitive answer.

   The server may thus return an nfsace4 that is more restrictive than
   the actual ACL of the file, including one that specifies denial of
   all access.  Note that some common practices, like mapping root to
   nobody, may make it incorrect to send the actual ACL of the file in
   some cases.

   The use of delegation together with various other forms of caching
   creates the possibility that no server authentication will ever be
   performed on a given user, since all of that user's requests might
   be satisfied locally.  Where the client is depending on the server
   for authentication, it should make sure that some authentication
   (via an ACCESS call) happens for each user, even if an ACCESS call
   would not otherwise be required.  The server may enforce frequent
   authentication by returning an nfsace4 denying all access with every
   open delegation.

9.5.1.  Open Delegation and Data Caching

   Open delegation allows much of the message overhead associated with
   opening and closing files to be eliminated.  This is also the case
   for a proxy server to which an open delegation was made but which
   did not pass the delegation on.  In either case, an open when an
   open delegation is in effect does not require that a validation
   message be sent to the server.  The continued endurance of the read
   open delegation provides a guarantee that no open for write, and
   thus no write, has occurred.  Similarly, when closing a file opened
   for write, if a write open delegation is in effect, the data written
   does not have to be flushed to the server until the open delegation
   is recalled.  The continued endurance of the open delegation
   provides a guarantee that no open, and thus no read or write, has
   been done by another client.

   For the purposes of open delegation, IO done without an OPEN (via
   special stateid's consisting of all zero bits or all one bits) is
   treated as the functional equivalent of a corresponding type of
   open.  Thus, such READs or WRITEs done by another client will
   provoke recall of a write open delegation, and any such WRITE will
   provoke recall of a read open delegation.

   In order to maintain the current semantics, in which the non-
   availability of storage to hold a file written by an NFS client is
   guaranteed to be determined at or before the associated close
   operation, the avoidance by the client of the requirement to flush
   data to the server on close is limited to cases in which the client
   and server together can determine in advance that the required space
   will be available.  The server specifies one of a number of limiting
   conditions, either a limit on the size of the file or a limit on the
   number of modified blocks using a blocksize supplied by the server.
   Based on implementation experience, changes in the form of these
   conditions may be made or new types of limiting conditions defined.
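   One hypothetical encoding of such limiting conditions, together with
   the corresponding client-side check, is sketched below; the type and
   field names are assumptions of this sketch, not defined by this
   document:

   enum limit_by { LIMIT_SIZE, LIMIT_BLOCKS };

   struct space_limit {
           enum limit_by kind;
           union {
                   unsigned long long filesize;  /* max file size */
                   struct {
                           unsigned int num_blocks; /* max modified  */
                           unsigned int block_size; /* server's size */
                   } mod_blocks;
           } u;
   };

   /* May this much modified state remain unflushed at close? */
   static int
   within_limit(const struct space_limit *l,
                unsigned long long filesize,
                unsigned int modified_blocks)
   {
           if (l->kind == LIMIT_SIZE)
                   return filesize <= l->u.filesize;
           return modified_blocks <= l->u.mod_blocks.num_blocks;
   }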
Whatever the form of condition used, it is up to the server to ensure that no set of writes meeting the specified condition, however arranged, will encounter a lack of disk space when the modified data is allowed to remain on the client, unflushed to the server, past the point of close. The server must make sure that the maximum possible amount of storage is reserved, so that all outstanding delegations together meet that condition, and it must recall delegations appropriately to maintain that invariant.

When a server implements quotas, it should also be careful that it does not invalidate its quota invariants when granting a write open delegation. When a user is near a quota limit, this may result in write open delegations being granted with very restrictive space limitation conditions, or with conditions that always force modified data to be flushed to the server on close.

When authentication considerations make flushing of modified data to the server after the close problematic (after the last close, the user may have logged off and unexpired local credentials may not exist), the client may need to take special care to ensure that local unexpired credentials will in fact be available, either by tracking the expiration time of credentials and flushing data well in advance of their expiration, or by making private copies of credentials to assure their availability when needed.

9.5.2. Open Delegation and File Locks

When a client holds a write open delegation, lock operations, including those required by mandatory file locking, are performed locally, since the delegation implies that there can be no conflicting locks. On a similar basis, all of the revalidations that would normally be associated with obtaining locks, and the flushing of data that would attend the releasing of write locks, need not be done.

9.5.3. Recall of Open Delegation

The following events necessitate recall of an open delegation:

o  A potentially conflicting OPEN request (or IO done with a "special" stateid).

o  A SETATTR issued by another client.

o  A REMOVE request for the file in question.

o  A RENAME request for the file in question as either source or target of the RENAME.

NOTE: [[The following are necessary unless the spec is cleaned up to disallow LOCKs and IO operations without a corresponding OPEN.]]

o  A LOCK request by another client.

o  An IO operation done with a "special" stateid by another client.

Whether a RENAME of a directory in the path leading to the file results in recall of an open delegation depends on the semantics of the server file system. If that filesystem denies such RENAMEs when a file is open, the recall must be performed to determine whether the file in question is, in fact, open.

In addition to the situations above, the server may choose to recall open delegations at any time if resource constraints make it advisable to do so. Clients should always be prepared for the possibility of recall.

Special handling is needed for a GETATTR which occurs while a write open delegation is in effect. In this case, the client holding the delegation needs to be interrogated, using a CB_GETATTR callback, if the GETATTR attribute bits include any of the attributes that a write open delegate may modify (length, modify time, change).

When a client receives a recall for an open delegation, it needs to update state on the server before returning the delegation.
These same updates must be done whenever a client chooses to return a delegation voluntarily. The following items of state need to be dealt with:

o  If the open file associated with the OPEN that delivered the delegation to the client is no longer open, a CLOSE must be sent to the server, if this has not been done previously.

o  If there are other opens extant for that file, OPEN operations must be done to update the server and obtain the stateids to be used subsequently, given that the delegation stateid will no longer be valid. Such OPENs are done using a claim type of CLAIM_DELEGATE_CUR, so that the delegation stateid can be presented to the server to establish the client's right to perform this OPEN. (See the section "Procedure 17: OPEN" for details.)

o  If there are locks which have been granted locally (write open delegation case only), these need to be sent to the server.

In the case of a write open delegation, if the file in question is not open for write at the time of recall, then any modified data for the file needs to be flushed to the server, as it would have been flushed when the file was closed had the write open delegation not been in effect. The possibility of truncation on the client means that the following needs to be done:

o  If a file truncation has been done on the client (as part of an OPEN UNCHECKED, for example) and has not yet been propagated to the server, it must be propagated as part of the recall, before any new modified data is written to the server.

o  Any modified data for the file needs to be flushed to the server.

In the case of a write open delegation, file locking imposes some additional requirements. To precisely maintain the associated invariant, it is required to flush any modified data in any region for which a write lock was released while the write open delegation was in effect. However, because the write open delegation implies that there is no other locking by other clients, a simpler implementation is to flush all modified data for the file (as described just above) if any write lock has been released while the write open delegation was in effect.

9.5.4. Delegation Revocation

When a delegation is revoked, if there are associated opens on the client, the processes holding these opens need to be notified, normally by returning errors whenever an IO operation or a close is attempted on that open file.

When an open delegation is revoked and no opens are present on the client, no error needs to be reported, unless there is modified data present on the client. In this case, the user will have to be notified, since there may not be an active application to receive an error status. (See the section "Revocation Recovery for Write Open Delegation" for more details.)

9.6. Data Caching and Revocation

When locks (including delegations) are revoked, the assumptions upon which successful caching depends are no longer guaranteed. Therefore the client, in addition to notifying the owner of a record lock or share reservation and the processes holding opens for the delegation, needs to remove all data for the file from its cache. In the case of modified data, it must be removed from the client's cache without being written to the server.
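One possible shape for the client-side purge on revocation is sketched below in C. The cache and open-state structures are hypothetical; the point is only that dirty pages are discarded rather than written back, and that the affected opens are marked so that later IO or close returns an error.

   #include <stddef.h>
   #include <stdbool.h>

   /* Hypothetical client-side structures; not part of the protocol. */
   struct cached_page { struct cached_page *next; bool dirty; };

   struct open_state {
       struct open_state *next;
       bool revoked;            /* future IO/close on this open fails */
   };

   struct file_cache {
       struct cached_page *pages;
       struct open_state  *opens;
   };

   /*
    * On revocation: every cached page is dropped -- dirty pages
    * included, since modified data must NOT be written back -- and
    * each open is marked so that subsequent IO or close fails.
    */
   static void
   revoke_file(struct file_cache *fc)
   {
       struct cached_page *p = fc->pages;
       while (p != NULL) {
           struct cached_page *next = p->next;
           /* a real client would free(p) here, dirty or clean */
           p = next;
       }
       fc->pages = NULL;

       for (struct open_state *o = fc->opens; o != NULL; o = o->next)
           o->revoked = true;
   }

   int main(void)
   {
       struct file_cache fc = { NULL, NULL };
       revoke_file(&fc);
       return 0;
   }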
Notification to the lock owner will in many cases consist of simply returning an error on the next (and all subsequent) IO requests to the open file, or on the close. Where the client API makes such notification impossible (because errors for certain operations may not be returned), more drastic action, such as signals or process termination, may be appropriate, since an invariant on which an application depends may have been violated. Depending on how errors are typically treated on the client operating system, further levels of notification, including logging, console messages, and GUI pop-ups, may be in order.

9.6.1. Revocation Recovery for Write Open Delegation

Revocation recovery for a write open delegation poses the special issue that there may be modified data in the client cache while the file is not open. In this situation, any client which does not flush modified data to the server on each close must make sure that the user receives appropriate notification of the failure. Since such situations may require human action to correct problems, notification schemes in which the appropriate user or administrator is notified may be necessary. Logging and console messages are typical examples.

If there is modified data on the client, it must not be flushed normally to the server. A client may attempt to provide a copy of the file data, as modified during the delegation, under a different name, to ease recovery. Unless the client can determine that the file has not been modified by any other client, this technique is limited to situations in which the client has a complete cached copy of the file in question. Use of such a technique may be limited to files under a certain size, or may only be used when sufficient disk space is guaranteed to be available within the target file system and when the client has sufficient buffering resources to keep the cached copy available until it is properly stored to the target file system.

9.7. Attribute Caching

First note that when attributes are discussed here, extended or named attributes are not included. Individual named attributes are analogous to files, and caching of their data needs to be handled just as data caching is for ordinary files. Similarly, LOOKUP results from an OPENATTR directory are to be cached on the same basis as any other pathnames, and similarly for directory contents.

Clients may cache file attributes obtained from the server and use them to avoid subsequent GETATTR requests. Such caching is write-through, in that modification of file attributes is always done by means of requests to the server and should not be done locally and cached. The exception is modifications to attributes that are intimately connected with data caching. Thus, extending a file by writing data to the local data cache is reflected immediately in the length as seen on the client, without this change being immediately reflected on the server. Normally such changes are not propagated directly to the server, but when the modified data is flushed to the server, analogous attribute changes are made on the server. When open delegation is in effect, the modified attributes may be returned to the server in the response to a CB_RECALL call.

The result of local caching of attributes is that the attribute caches maintained on individual clients will not be coherent. Changes made in one order on the server may be seen in a different order on one client and in a third order on a different client.
Given that typical file APIs do not provide means to atomically modify or interrogate attributes for multiple files at the same time, the undesirable effects of these incoherencies have proved manageable, provided the following rules, derived from the practice of NFSv3 implementations, are followed:

o  All attributes for a given file (per-fsid attributes excepted) are cached as a unit, so that no non-serializability can arise within the context of a single file.

o  A bound is maintained on how long a client cache entry can be kept without being refreshed from the server.

o  When performing an operation that changes attributes on the server, including directory operations that do so indirectly, updated attributes are fetched as part of the associated RPC, using a GETATTR following the operation in question, with the results of the GETATTR used to update the client's attribute cache.

Note that if the full set of attributes to be cached is requested by READDIR, the results can be cached by the client on the same basis as attributes obtained via GETATTR.

A client may validate its cached version of attributes for a file by fetching only the change attribute and assuming that, if the change attribute has the same value as it did when the attributes were cached, then no attributes have changed, with the possible exception of access_time.

9.8. Name Caching

The results of LOOKUP and READDIR operations may be cached to avoid the cost of subsequent LOOKUP operations. Just as in the case of attribute caching, inconsistencies may arise among the various client caches. To mitigate the effects of these inconsistencies, given the context of typical file APIs, the following rules should be adhered to:

o  The results of unsuccessful LOOKUPs should not be cached, unless they are specifically reverified at the point of use.

o  A bound is maintained on how long a client name cache entry can be kept without verifying that the entry in question has not been made invalid by a directory change operation performed by another client.

When a client is not making changes to a directory for which there exist name cache entries, it needs to periodically fetch attributes for that directory to make sure that it is not changing. After determining that no change has occurred, the expiration time for the associated name cache entries may be updated to be the current time plus the name cache staleness bound.

When a client is making changes to a given directory, it needs to determine whether there have been changes made to the directory by other clients. It does this using the change attribute, as reported before and after the directory operation in the associated wcc4_info returned on that operation. When the server is able to report these values atomically with respect to the directory operation, which the server indicates in the wcc4_info, comparison of the pre-operation change value with the change value in the client's cache determines whether there has been a change by another client, necessitating a purge of the name cache associated with the directory. If there has been no such change, the name cache can be updated on the client to reflect the directory operation, and the associated timeout extended. The post-operation change value needs to be saved as the basis for future wcc4_info comparisons.
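The pre/post comparison above reduces to a few lines of logic. The C sketch below assumes a hypothetical in-memory mirror of the wcc4_info change data and of the client's per-directory name cache; it is not drawn from the protocol XDR.

   #include <stdbool.h>
   #include <stdint.h>

   /* Hypothetical mirror of the pre/post change data in wcc4_info. */
   struct wcc_info {
       bool     atomic;           /* pre/post reported atomically? */
       uint64_t change_before;
       uint64_t change_after;
   };

   struct dir_name_cache {
       uint64_t cached_change;    /* change value the cache is based on */
       /* ... name entries elided ... */
   };

   /* Stub callbacks standing in for real cache maintenance. */
   static void purge_names(struct dir_name_cache *dc)    { (void)dc; }
   static void apply_local_op(struct dir_name_cache *dc) { (void)dc; }

   /*
    * After this client performs a directory operation: update the name
    * cache in place if no other client intervened, else purge it.
    */
   static void
   dir_op_done(struct dir_name_cache *dc, const struct wcc_info *wcc)
   {
       if (wcc->atomic && wcc->change_before == dc->cached_change)
           apply_local_op(dc);   /* no interfering change: update */
       else
           purge_names(dc);      /* another client may have changed
                                    the directory: purge */
       dc->cached_change = wcc->change_after;  /* basis for next check */
   }

   int main(void)
   {
       struct dir_name_cache dc = { 7 };
       struct wcc_info wcc = { true, 7, 8 };
       dir_op_done(&dc, &wcc);
       return dc.cached_change == 8 ? 0 : 1;
   }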
Name caching requires that the client revalidate cached data by comparing the current change attribute of a directory with the value it had when the name was cached. This requires that any changes in the contents of a directory be visible as a changed value of the change attribute for the directory. Proper use of wcc4_info, when a client makes a change to a directory, requires that the reporting of the pre-operation and post-operation change attribute values be in fact atomic with the actual directory change. When the server cannot reliably report before and after values atomically with respect to the directory operation, the server indicates that in the wcc4_info, and the client should not assume that other clients have not changed the directory.

9.9. Directory Caching

The results of READDIR operations may be used to avoid subsequent READDIR operations. Just as in the cases of attribute and name caching, this may result in inconsistencies among the various client caches. To mitigate the effects of these inconsistencies, given the context of typical file APIs, the following rules should be adhered to:

o  Cached READDIR information for a directory which is not obtained in a single READDIR operation must always be a consistent snapshot of the directory contents, as evidenced by a GETATTR before the first and after the last of the READDIRs that contribute.

o  A bound is maintained on the amount of time that a directory cache entry may be kept on the client without revalidation.

The revalidation technique parallels that discussed in the case of name caching. When the client is not changing the directory in question, checking that the directory has not changed (by using GETATTR to obtain the change attribute) is adequate to extend the lifetime of the cache entry. When a client is modifying the directory, it needs to use the wcc4_info data to determine whether there are other clients modifying the directory, allowing it to update the directory cache to reflect its own changes if it is the only client making modifications.

Directory caching requires that the client revalidate cached data by comparing the current change attribute of a directory with the value it had when the directory data was cached. This requires that any changes in the contents of a directory be visible as a changed value of the change attribute for the directory. Proper use of wcc4_info, when a client makes a change to a directory, requires that the reporting of the pre-operation and post-operation change attribute values be in fact atomic with the actual directory change. When the server cannot reliably report before and after values atomically with respect to the directory operation, the server indicates that in the wcc4_info, and the client should not assume that other clients have not changed the directory.

10. Defined Error Numbers

NFS error numbers are assigned to failed operations within a compound request. A compound request contains a number of NFS operations that have their results encoded in sequence in a compound reply. The results of successful operations will consist of an NFS4_OK status followed by the encoded results of the operation. If an NFS operation fails, an error status will be entered in the reply and the compound request will be terminated. A description of each defined error follows:

NFS4_OK               Indicates the operation completed successfully.
NFS4ERR_PERM          Not owner. The operation was not allowed because the caller is either not a privileged user (root) or not the owner of the target of the operation.

NFS4ERR_NOENT         No such file or directory. The file or directory name specified does not exist.

NFS4ERR_IO            I/O error. A hard error (for example, a disk error) occurred while processing the requested operation.

NFS4ERR_NXIO          I/O error. No such device or address.

NFS4ERR_ACCES         Permission denied. The caller does not have the correct permission to perform the requested operation. Contrast this with NFS4ERR_PERM, which restricts itself to owner or privileged user permission failures.

NFS4ERR_EXIST         File exists. The file specified already exists.

NFS4ERR_XDEV          Attempt to do a cross-device hard link.

NFS4ERR_NODEV         No such device.

NFS4ERR_NOTDIR        Not a directory. The caller specified a non-directory in a directory operation.

NFS4ERR_ISDIR         Is a directory. The caller specified a directory in a non-directory operation.

NFS4ERR_INVAL         Invalid argument, or unsupported argument for an operation. Two examples are attempting a READLINK on an object other than a symbolic link, or attempting to SETATTR a time field on a server that does not support this operation.

NFS4ERR_FBIG          File too large. The operation would have caused a file to grow beyond the server's limit.

NFS4ERR_NOSPC         No space left on device. The operation would have caused the server's file system to exceed its limit.

NFS4ERR_ROFS          Read-only file system. A modifying operation was attempted on a read-only file system.

NFS4ERR_MLINK         Too many hard links.

NFS4ERR_NAMETOOLONG   The filename in an operation was too long.

NFS4ERR_NOTEMPTY      An attempt was made to remove a directory that was not empty.

NFS4ERR_DQUOT         Resource (quota) hard limit exceeded. The user's resource limit on the server has been exceeded.

NFS4ERR_STALE         Invalid file handle. The file handle given in the arguments was invalid. The file referred to by that file handle no longer exists or access to it has been revoked.

NFS4ERR_BADHANDLE     Illegal NFS file handle. The file handle failed internal consistency checks.

NFS4ERR_NOT_SYNC      An update synchronization mismatch was detected during a SETATTR operation.

NFS4ERR_BAD_COOKIE    READDIR cookie is stale.

NFS4ERR_NOTSUPP       Operation is not supported.

NFS4ERR_TOOSMALL      Buffer or request is too small.

NFS4ERR_SERVERFAULT   An error occurred on the server which does not map to any of the legal NFS version 4 protocol error values. The client should translate this into an appropriate error. UNIX clients may choose to translate this to EIO.

NFS4ERR_BADTYPE       An attempt was made to create an object of a type not supported by the server.

NFS4ERR_JUKEBOX       The server initiated the request, but was not able to complete it in a timely fashion. The client should wait and then try the request with a new RPC transaction ID. For example, this error should be returned from a server that supports hierarchical storage and receives a request to process a file that has been migrated. In this case, the server should start the migration process and respond to the client with this error.

NFS4ERR_SAME          Returned by the NVERIFY operation to signify that the attributes compared were the same as those provided in the client's request, i.e. no attributes have changed.

NFS4ERR_DENIED        An attempt to lock a file is denied.
Since this may be a temporary condition, the client is encouraged to retry the lock request (with exponential backoff of the timeout) until the lock is accepted.

NFS4ERR_EXPIRED       A lease being used in the current procedure has expired.

NFS4ERR_LOCKED        A read or write operation was attempted on a locked file.

NFS4ERR_GRACE         The server is in its recovery or grace period, which should match the lease period of the server.

NFS4ERR_FHEXPIRED     The file handle provided is volatile and has expired at the server. The client should attempt to recover the new file handle by traversing the server's file system name space. The file handle may have expired because the server has restarted, the file system object has been removed, or the file handle has been flushed from the server's internal mappings. NOTE: This error definition will need to be crisp and match the section describing volatile file handles.

NFS4ERR_SHARE_DENIED  An attempt to OPEN a file with a share reservation has failed because of a share conflict.

NFS4ERR_WRONGSEC      The security mechanism being used by the client for the procedure does not match the server's security policy. The client should change the security mechanism being used and retry the operation.

NFS4ERR_CLID_INUSE    The SETCLIENTID procedure has found that a client id is already in use by another client.

NFS4ERR_RESOURCE      In the processing of the COMPOUND procedure, the server may exhaust available resources and be unable to continue processing operations within the COMPOUND procedure. This error will be returned from the server in those instances of resource exhaustion related to the processing of the COMPOUND procedure.

NFS4ERR_MOVED         The filesystem which contains the current filehandle object has been relocated or migrated to another server. The client may obtain the new filesystem location by obtaining the "fs_locations" attribute for the current filehandle. For further discussion, refer to the section "Filesystem Migration or Relocation".

NFS4ERR_NOFILEHANDLE  The logical current file handle value has not been set properly. This may be a result of a malformed COMPOUND operation (i.e. no PUTFH or PUTROOTFH before an operation that requires the current file handle be set).

11. NFS Version 4 Requests

For the NFS program, version 4, there are two traditional RPC procedures: NULL and COMPOUND. All other operations for NFS version 4 are defined in normal XDR/RPC syntax and semantics, except that these operations are encapsulated within the COMPOUND request. This requires that the client combine one or more NFS version 4 operations into a single request.

The NFS4_CALLBACK program is used to provide server-to-client signaling and is constructed in a fashion similar to the NFS program. The procedures CB_NULL and CB_COMPOUND are defined in the same way as NULL and COMPOUND are within the NFS program. The CB_COMPOUND request also encapsulates the remaining operations of the NFS4_CALLBACK program.

11.1. Compound Procedure

Compound requests provide the opportunity for better performance on high latency networks.
The client can avoid the cumulative latency of multiple RPCs by combining multiple dependent operations into a single compound request. A compound operation may also provide for protocol simplification by allowing the client to combine basic procedures into a single request that is customized for the client's environment.

The basic structure of the COMPOUND procedure is:

   +-----------+-----------+-----------+--
   | op + args | op + args | op + args |
   +-----------+-----------+-----------+--

and the reply looks like this:

   +----------------+----------------+----------------+--
   | code + results | code + results | code + results |
   +----------------+----------------+----------------+--

where "code" is an indication of the success or failure of the operation, and includes the opcode itself.

11.2. Evaluation of a Compound Request

The server will process the COMPOUND procedure by evaluating each of the operations within the COMPOUND request in order. Each component operation consists of a 32 bit operation code, followed by an argument whose length is determined by the type of operation. The results of each operation are encoded in sequence into a reply buffer. The results of each operation are preceded by the opcode and a status code (normally zero). If an operation results in a non-zero status code, the status will be encoded, evaluation of the compound sequence will halt, and the reply will be returned.

There are no atomicity requirements for the operations contained within the COMPOUND procedure. The operations being evaluated as part of a COMPOUND request may be evaluated simultaneously with other COMPOUND requests that the server receives. It is the client's responsibility to recover from any partially completed compound request.

Each operation assumes a "current" filehandle that is available as part of the execution context of the compound request. Operations may set, change, or return this filehandle.

12. NFS Version 4 Procedures

12.1. Procedure 0: NULL - No Operation

SYNOPSIS

ARGUMENT

   void;

RESULT

   void;

DESCRIPTION

Standard ONC RPC NULL procedure. Void argument, void response.

ERRORS

   None.

12.2. Procedure 1: COMPOUND - Compound Operations

SYNOPSIS

   compoundargs -> compoundres

ARGUMENT

   union opunion switch (unsigned opcode) {
           case ...: ...;
   };

   struct op {
           opunion ops;
   };

   struct COMPOUND4args {
           utf8string  tag;
           op          oplist<>;
   };

RESULT

   struct COMPOUND4res {
           nfsstat4    status;
           utf8string  tag;
           resultdata  data<>;
   };

DESCRIPTION

The COMPOUND procedure is used to combine one or more of the NFS procedures into a single RPC request. The main NFS RPC program has two main procedures: NULL and COMPOUND. All other procedures use the COMPOUND procedure as a wrapper.

In the processing of the COMPOUND procedure, the server may find that it does not have the available resources to execute any or all of the procedures within the COMPOUND sequence. In this case, the error NFS4ERR_RESOURCE will be returned for the particular procedure within the COMPOUND operation where the resource exhaustion occurred. This assumes that all previous procedures within the COMPOUND sequence have been evaluated successfully.
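The evaluation rule just described amounts to a stop-on-first-error loop. The C sketch below illustrates it; the operation and result types and the eval_op dispatcher are hypothetical stand-ins for a real server's decode and dispatch machinery.

   #include <stdint.h>

   enum { NFS4_OK = 0 };

   struct op     { uint32_t opcode; /* decoded arguments elided */ };
   struct result { uint32_t opcode; uint32_t status; /* data elided */ };

   /* Hypothetical dispatcher: evaluates one operation. */
   static uint32_t eval_op(const struct op *op) { (void)op; return NFS4_OK; }

   /*
    * Evaluate operations in order, encoding opcode + status for each
    * one evaluated; stop at the first non-NFS4_OK status.  Operations
    * after the failing one are never evaluated.
    */
   static uint32_t
   eval_compound(const struct op *ops, int nops,
                 struct result *res, int *nres)
   {
       uint32_t status = NFS4_OK;
       int i;

       for (i = 0; i < nops && status == NFS4_OK; i++) {
           status = eval_op(&ops[i]);
           res[i].opcode = ops[i].opcode;
           res[i].status = status;    /* encoded even on failure */
       }
       *nres = i;
       return status;                 /* overall COMPOUND status */
   }

   int main(void)
   {
       struct op ops[2] = { { 22 }, { 25 } };  /* opcodes illustrative */
       struct result res[2];
       int n;
       return eval_compound(ops, 2, res, &n) == NFS4_OK && n == 2 ? 0 : 1;
   }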
IMPLEMENTATION

The COMPOUND procedure is used to combine individual procedures into a single RPC request. The server interprets each of the procedures in turn. If a procedure is executed by the server and the status of that procedure is NFS4_OK, then the next procedure in the COMPOUND procedure is executed. The server continues this process until there are no more procedures to be executed or until one of the procedures has a status value other than NFS4_OK.

Note that the definition of the "tag" in both the request and response is left to the implementor. It may be used to summarize the content of the compound request for the benefit of packet sniffers and engineers debugging implementations.

ERRORS

   NFS4ERR_RESOURCE

12.2.1. Operation 2: ACCESS - Check Access Rights

SYNOPSIS

   (cfh), accessreq -> supported, accessrights

ARGUMENT

   const ACCESS4_READ    = 0x0001;
   const ACCESS4_LOOKUP  = 0x0002;
   const ACCESS4_MODIFY  = 0x0004;
   const ACCESS4_EXTEND  = 0x0008;
   const ACCESS4_DELETE  = 0x0010;
   const ACCESS4_EXECUTE = 0x0020;

   struct ACCESS4args {
           /* CURRENT_FH: object */
           uint32_t access;
   };

RESULT

   struct ACCESS4resok {
           uint32_t supported;
           uint32_t access;
   };

   union ACCESS4res switch (nfsstat4 status) {
   case NFS4_OK:
           ACCESS4resok resok;
   default:
           void;
   };

DESCRIPTION

ACCESS determines the access rights that a user, as identified by the credentials in the request, has with respect to a file system object. The client encodes the set of access rights that are to be checked in a bit mask. The server checks the permissions encoded in the bit mask. If a status of NFS4_OK is returned, two bit masks are included in the response. The first represents the access rights that the server can verify reliably for the user. The second represents the access rights available to the user for the filehandle provided.

The results of this procedure are necessarily advisory in nature. That is, a return status of NFS4_OK and the appropriate bit set in the bit mask do not imply that such access will be allowed to the file system object in the future, as access rights can be revoked by the server at any time.

The following access permissions may be requested:

   ACCESS4_READ     (bit 1) Read data from a file or read a directory.

   ACCESS4_LOOKUP   (bit 2) Look up a name in a directory (no meaning for non-directory objects).

   ACCESS4_MODIFY   (bit 3) Rewrite existing file data or modify existing directory entries.

   ACCESS4_EXTEND   (bit 4) Write new data or add directory entries.

   ACCESS4_DELETE   (bit 5) Delete an existing directory entry.

   ACCESS4_EXECUTE  (bit 6) Execute a file (no meaning for a directory).

IMPLEMENTATION

In general, it is not sufficient for the client to attempt to deduce access permissions by inspecting the uid, gid, and mode fields in the file attributes, since the server may perform uid or gid mapping or enforce additional access control restrictions. It is also possible that the NFS version 4 protocol server may not be in the same ID space as the NFS version 4 protocol client. In these cases (and perhaps others), the NFS version 4 protocol client can not reliably perform an access check with only current file attributes.

In the NFS version 2 protocol, the only reliable way to determine whether an operation was allowed was to try it and see if it succeeded or failed.
Using the ACCESS procedure in the NFS version 4 protocol, the client can ask the server to indicate whether or not one or more classes of operations are permitted. The ACCESS operation is provided to allow clients to check before doing a series of operations. This is useful in operating systems (such as UNIX) where permission checking is done only when a directory is opened. This procedure is also invoked by the NFS client's access procedure (possibly called through access(2)). The intent is to make the behavior of opening a remote file more consistent with the behavior of opening a local file. For NFS version 4, the use of the ACCESS procedure when opening a regular file is deprecated in favor of using OPEN.

The information returned by the server in response to an ACCESS call is not permanent. It was correct at the exact time that the server performed the checks, but not necessarily afterwards. The server can revoke access permission at any time.

The NFS version 4 protocol client should use the effective credentials of the user to build the authentication information in the ACCESS request used to determine access rights. It is the effective user and group credentials that are used in subsequent read and write operations.

Many implementations do not directly support the ACCESS4_DELETE permission. Operating systems like UNIX will ignore the ACCESS4_DELETE bit if it is set on an access request for a non-directory object. In these systems, delete permission on a file is determined by the access permissions on the directory in which the file resides, instead of being determined by the permissions of the file itself. Therefore, the mask returned enumerating which access rights can be determined will have the ACCESS4_DELETE value set to 0. This indicates to the client that the server was unable to check that particular access right. The ACCESS4_DELETE bit in the access mask returned will then be ignored by the client.

ERRORS

   NFS4ERR_IO
   NFS4ERR_ACCES
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE
   NFS4ERR_BADHANDLE
   NFS4ERR_FHEXPIRED
   NFS4ERR_WRONGSEC
   NFS4ERR_MOVED

12.2.2. Operation 3: CLOSE - Close File

SYNOPSIS

   (cfh), stateid -> stateid

ARGUMENT

   struct CLOSE4args {
           stateid4 stateid;
   };

RESULT

   union CLOSE4res switch (nfsstat4 status) {
   case NFS4_OK:
           stateid4 stateid;
   default:
           void;
   };

DESCRIPTION

The CLOSE procedure notifies the server that all share reservations corresponding to the client-supplied stateid should be released.

IMPLEMENTATION

Share reservations for the matching stateid will be released on successful completion of the CLOSE procedure.

ERRORS

   NFS4ERR_INVAL
   NFS4ERR_STALE
   NFS4ERR_BADHANDLE
   NFS4ERR_SERVERFAULT
   NFS4ERR_EXPIRED
   NFS4ERR_GRACE
   NFS4ERR_FHEXPIRED
   NFS4ERR_MOVED
12.2.3. Operation 4: COMMIT - Commit Cached Data

SYNOPSIS

   (cfh), offset, count -> verifier

ARGUMENT

   struct COMMIT4args {
           /* CURRENT_FH: file */
           offset4 offset;
           count4  count;
   };

RESULT

   struct COMMIT4resok {
           writeverf4 verf;
   };

   union COMMIT4res switch (nfsstat4 status) {
   case NFS4_OK:
           COMMIT4resok resok4;
   default:
           void;
   };

DESCRIPTION

The COMMIT procedure forces, or flushes, to stable storage data that was previously written with a WRITE operation which had the stable field set to UNSTABLE4. The offset provided by the client represents the position within the file at which the flush is to begin. An offset value of 0 (zero) means to flush data starting at the beginning of the file. The count provided by the client is the number of bytes of data to flush. If the count is 0 (zero), a flush from the offset to the end of the file is done.

The server returns a write verifier upon successful completion of the COMMIT. The write verifier is used by the client to determine if the server has restarted or rebooted between the initial WRITE(s) and the COMMIT. The client does this by comparing the write verifier returned from the initial writes with the verifier returned by the COMMIT procedure. The server must vary the value of the write verifier at each server event that may lead to a loss of uncommitted data. Most commonly this occurs when the server is rebooted; however, other events at the server may result in uncommitted data loss as well.

IMPLEMENTATION

The COMMIT procedure is similar in operation and semantics to the POSIX fsync(2) system call that synchronizes a file's state with the disk (file data and metadata are flushed to disk or stable storage). COMMIT performs the same operation for a client: it flushes any unsynchronized data and metadata on the server to the server's disk or stable storage for the specified file. Like fsync(2), it may be that there is some modified data, or no modified data at all, to synchronize. The data may have been synchronized by the server's normal periodic buffer synchronization activity. COMMIT should return NFS4_OK unless there has been an unexpected error.

COMMIT differs from fsync(2) in that it is possible for the client to flush a range of the file (most likely triggered by a buffer-reclamation scheme on the client before the file has been completely written).

The server implementation of COMMIT is reasonably simple. If the server receives a full file COMMIT request, that is, one starting at offset 0 with count 0, it should do the equivalent of fsync()'ing the file. Otherwise, it should arrange to have the cached data in the range specified by offset and count flushed to stable storage. In both cases, any metadata associated with the file must be flushed to stable storage before returning. It is not an error for there to be nothing to flush on the server; this means that the data and metadata that needed to be flushed have already been flushed, or were lost during the last server failure.

The client implementation of COMMIT is a little more complex. There are two reasons for wanting to commit a client buffer to stable storage. The first is that the client wants to reuse a buffer. In this case, the offset and count of the buffer are sent to the server in the COMMIT request. The server then flushes any cached data based on the offset and count, and flushes any metadata associated with the file. It then returns the status of the flush and the write verifier. The other reason for the client to generate a COMMIT is for a full file flush, such as may be done at close. In this case, the client would gather all of the buffers for this file that contain uncommitted data, do the COMMIT operation with an offset of 0 and count of 0, and then free all of those buffers. Any other dirty buffers would be sent to the server in the normal fashion.

After a buffer is written by the client with the stable parameter set to UNSTABLE4, the buffer must be considered modified by the client until the buffer has either been flushed via a COMMIT operation or written via a WRITE operation with the stable parameter set to FILE_SYNC4 or DATA_SYNC4. This is done to prevent the buffer from being freed and reused before the data can be flushed to stable storage on the server.

When a response comes back from either a WRITE or a COMMIT operation and it contains a write verifier that is different from that previously returned by the server, the client will need to retransmit all of the buffers containing uncommitted cached data to the server. How this is to be done is up to the implementor. If there is only one buffer of interest, then it should probably be sent back over in a WRITE request with the appropriate stable parameter. If there is more than one buffer, it might be worthwhile to retransmit all of the buffers in WRITE requests with the stable parameter set to UNSTABLE4 and then retransmit the COMMIT operation to flush all of the data on the server to stable storage. The timing of these retransmissions is left to the implementor.
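The verifier comparison and retransmission logic described above might look like the C sketch below. The buffer list and the transport helpers are hypothetical, and the eight-byte verifier size is an assumption made only to keep the example concrete.

   #include <stdbool.h>
   #include <stdint.h>
   #include <string.h>

   #define VERF_SIZE 8    /* assumed size of a write verifier */

   struct buf {
       struct buf *next;
       bool uncommitted;   /* written UNSTABLE4, not yet committed */
   };

   /* Hypothetical transport helpers; real clients issue RPCs here. */
   static void rewrite_unstable(struct buf *b) { (void)b; }
   static void send_commit(void) {}

   /*
    * On a WRITE or COMMIT reply: if the verifier changed, the server
    * may have lost uncommitted data, so retransmit every uncommitted
    * buffer and COMMIT again.
    */
   static void
   check_verifier(struct buf *bufs, const uint8_t new_verf[VERF_SIZE],
                  uint8_t last_verf[VERF_SIZE])
   {
       if (memcmp(new_verf, last_verf, VERF_SIZE) == 0)
           return;                            /* no loss detected */

       for (struct buf *b = bufs; b != NULL; b = b->next)
           if (b->uncommitted)
               rewrite_unstable(b);           /* WRITE ... UNSTABLE4 */
       send_commit();                         /* then COMMIT again */

       memcpy(last_verf, new_verf, VERF_SIZE);
   }

   int main(void)
   {
       uint8_t last[VERF_SIZE] = { 0 }, now[VERF_SIZE] = { 1 };
       check_verifier(NULL, now, last);
       return memcmp(last, now, VERF_SIZE) == 0 ? 0 : 1;
   }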
The above description applies to page-cache-based systems as well as buffer-cache-based systems. In those systems, the virtual memory system will need to be modified instead of the buffer cache.

ERRORS

   NFS4ERR_IO
   NFS4ERR_LOCKED
   NFS4ERR_SERVERFAULT
   NFS4ERR_MOVED

12.2.4. Operation 5: CREATE - Create a Non-Regular File Object

SYNOPSIS

   (cfh), name, type, how -> (cfh), change_info

ARGUMENT

   struct CREATE4args {
           /* CURRENT_FH: directory for creation */
           component4   objname;
           fattr4_type  type;
           createhow4   createhow;
   };

RESULT

   struct change_info4 {
           bool           atomic;
           fattr4_change  before;
           fattr4_change  after;
   };

   struct CREATE4resok {
           change_info4 cinfo;
   };

   union CREATE4res switch (nfsstat4 status) {
   case NFS4_OK:
           CREATE4resok resok4;
   default:
           void;
   };

DESCRIPTION

The CREATE procedure creates a non-regular file object in a directory with a given name. The OPEN procedure MUST be used to create a regular file. (The need for exclusive create semantics for non-regular files has yet to be decided upon, and decisions about the storage location of the verifier will need to be made as well.)

The type field determines the type of object to be created: directory, symlink, etc.

The createhow union may have a value of UNCHECKED, GUARDED, or EXCLUSIVE. UNCHECKED means that the object should be created without checking for the existence of a duplicate object in the same directory. In this case, attrbits and attrvals describe the initial attributes for the file object. GUARDED specifies that the server should check for the presence of a duplicate object before performing the create, and should fail the request with NFS4ERR_EXIST if a duplicate object exists. If the object does not exist, the request is performed as described for UNCHECKED.
EXCLUSIVE specifies that the server is to follow exclusive creation semantics, using the verifier to ensure exclusive creation of the target. No attributes may be provided in this case, since the server may use the target object's meta-data to store the verifier.

For the directory in which the new file object was created, the server returns change_info4 information in cinfo. With the atomic field of the change_info4 struct, the server will indicate whether the before and after change attributes were obtained atomically with respect to the file object creation.

The current filehandle is replaced by that of the new object.

IMPLEMENTATION

The CREATE procedure carries support for EXCLUSIVE create forward from NFS version 3. As in NFS version 3, this mechanism provides reliable exclusive creation. Exclusive create is invoked when the how parameter is EXCLUSIVE. In this case, the client provides a verifier that can reasonably be expected to be unique. A combination of a client identifier, perhaps the client network address, and a unique number generated by the client, perhaps the RPC transaction identifier, may be appropriate.

If the object does not exist, the server creates the object and stores the verifier in stable storage. For file systems that do not provide a mechanism for the storage of arbitrary file attributes, the server may use one or more elements of the object meta-data to store the verifier. The verifier must be stored in stable storage to prevent erroneous failure on retransmission of the request. It is assumed that an exclusive create is being performed because exclusive semantics are critical to the application. Because of the expected usage, exclusive CREATE does not rely solely on the normally volatile duplicate request cache for storage of the verifier. The duplicate request cache in volatile storage does not survive a crash and may actually flush on a long network partition, opening failure windows. In the UNIX local file system environment, the expected storage location for the verifier on creation is the meta-data (time stamps) of the object. For this reason, an exclusive object create may not include initial attributes, because the server would have nowhere to store the verifier.

If the server can not support these exclusive create semantics, possibly because of the requirement to commit the verifier to stable storage, it should fail the CREATE request with the error NFS4ERR_NOTSUPP.

During an exclusive CREATE request, if the object already exists, the server reconstructs the object's verifier and compares it with the verifier in the request. If they match, the server treats the request as a success. The request is presumed to be a duplicate of an earlier, successful request for which the reply was lost and which the server's duplicate request cache mechanism did not detect. If the verifiers do not match, the request is rejected with the status NFS4ERR_EXIST.

Once the client has performed a successful exclusive create, it must issue a SETATTR to set the correct object attributes. Until it does so, it should not rely upon any of the object attributes, since the server implementation may need to overload object meta-data to store the verifier.

Use of the GUARDED attribute does not provide exactly-once semantics. In particular, if a reply is lost and the server does not detect the retransmission of the request, the procedure can fail with NFS4ERR_EXIST, even though the create was performed successfully.
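A server-side sketch of the verifier comparison described above follows, in C. The one-slot "file system", the metadata layout, and the verifier size are toy assumptions made only to keep the example self-contained.

   #include <stdbool.h>
   #include <stdint.h>
   #include <string.h>

   #define CREATEVERF_SIZE 8   /* assumed size of a create verifier */

   struct object_meta { uint8_t verf[CREATEVERF_SIZE]; };

   enum create_status { CREATED, DUPLICATE_OK, ERR_EXIST };

   /* Toy one-object "file system" standing in for real storage. */
   static struct object_meta store;
   static bool object_exists = false;

   static bool lookup_object(struct object_meta **meta)
   {
       *meta = &store;
       return object_exists;
   }

   static struct object_meta *create_object(void)
   {
       object_exists = true;
       return &store;
   }

   /*
    * EXCLUSIVE create: on a name hit, compare the stored verifier with
    * the request's.  A match means this is a retransmission of a
    * create that already succeeded, not a conflict.
    */
   static enum create_status
   exclusive_create(const uint8_t verf[CREATEVERF_SIZE])
   {
       struct object_meta *meta;

       if (lookup_object(&meta))
           return memcmp(meta->verf, verf, CREATEVERF_SIZE) == 0
               ? DUPLICATE_OK    /* duplicate of a lost reply */
               : ERR_EXIST;      /* genuine NFS4ERR_EXIST case */

       meta = create_object();
       /* the verifier must reach stable storage before replying */
       memcpy(meta->verf, verf, CREATEVERF_SIZE);
       return CREATED;
   }

   int main(void)
   {
       uint8_t v[CREATEVERF_SIZE] = { 42 };
       return exclusive_create(v) == CREATED &&
              exclusive_create(v) == DUPLICATE_OK ? 0 : 1;
   }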
Note:

1. An initial set of attributes that must be set, and a set of attributes that can optionally be set, need to be determined on a per-filetype basis. For instance, if the filetype is NF4BLK, then the device attributes must be set.

2. Consider treating the symbolic link path as an "attribute". There would be no need for a READLINK op if this is so. Similarly, a filehandle could be defined as an attribute for LINK.

ERRORS

   NFS4ERR_IO
   NFS4ERR_ACCES
   NFS4ERR_EXIST
   NFS4ERR_NOTDIR
   NFS4ERR_INVAL
   NFS4ERR_NOSPC
   NFS4ERR_ROFS
   NFS4ERR_NAMETOOLONG
   NFS4ERR_DQUOT
   NFS4ERR_NOTSUPP
   NFS4ERR_SERVERFAULT
   NFS4ERR_FHEXPIRED
   NFS4ERR_WRONGSEC
   NFS4ERR_MOVED

12.2.5. Operation 6: DELEGPURGE - Purge Delegations Awaiting Recovery

SYNOPSIS

   clientid ->

ARGUMENT

   struct DELEGPURGE4args {
           clientid4 clientid;
   };

RESULT

   struct DELEGPURGE4res {
           nfsstat4 status;
   };

DESCRIPTION

Purges all of the delegations awaiting recovery for a given client. This is useful for clients which do not commit delegation information to stable storage, to indicate that conflicting requests need not be held up while awaiting recovery of delegation information.

This operation should also be used by clients which do keep delegation information on stable storage, after all needed delegation recovery has been done. Using DELEGPURGE will prevent any delegations which were made by the server, but were not received by the client and committed to stable storage, from holding up other clients making conflicting requests.

ERRORS

12.2.6. Operation 7: DELEGRETURN - Return Delegation

SYNOPSIS

   stateid ->

ARGUMENT

   struct DELEGRETURN4args {
           stateid4 stateid;
   };

RESULT

   struct DELEGRETURN4res {
           nfsstat4 status;
   };

DESCRIPTION

Returns the delegation represented by the given stateid.

ERRORS

12.2.7. Operation 8: GETATTR - Get Attributes

SYNOPSIS

   (cfh), attrbits -> attrbits, attrvals

ARGUMENT

   struct GETATTR4args {
           /* CURRENT_FH: directory or file */
           bitmap4 attr_request;
   };

RESULT

   struct GETATTR4resok {
           fattr4 obj_attributes;
   };

   union GETATTR4res switch (nfsstat4 status) {
   case NFS4_OK:
           GETATTR4resok resok4;
   default:
           void;
   };

DESCRIPTION

The GETATTR procedure obtains attributes from the server. The client sets a bit in the bitmap argument for each attribute value that it would like the server to return. The server returns an attribute bitmap indicating the attribute values that it was able to return, followed by the attribute values ordered lowest attribute number first.

The server must return a value for each attribute that the client requests, if the attribute is supported by the server. If the server does not support an attribute, or cannot approximate a useful value, then it must not return the attribute value and must not set the attribute bit in the result bitmap. The server must return an error if it supports an attribute but cannot obtain its value; in that case, no attribute values will be returned.

All servers must support attribute 0 (zero), which is a bitmap of all supported attributes for the filesystem object.
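The bitmap handling described above is simple to illustrate. The C sketch below uses a toy single-word bitmap; the real bitmap4 is a variable-length array of 32-bit words, and the attribute numbers beyond attribute 0 are illustrative only.

   #include <stdint.h>
   #include <stdio.h>

   typedef uint32_t bitmap_t;   /* toy one-word stand-in for bitmap4 */

   #define ATTR_SUPPORTED 0     /* attribute 0: supported-attr bitmap */
   #define ATTR_CHANGE    3     /* illustrative attribute number */

   static bitmap_t set_attr(bitmap_t bm, unsigned attr)
   {
       return bm | (1u << attr);
   }

   /*
    * Values in a GETATTR reply are encoded lowest attribute number
    * first, so a decoder walks the returned bitmap in ascending order.
    */
   static void decode_attrs(bitmap_t returned)
   {
       for (unsigned attr = 0; attr < 32; attr++)
           if (returned & (1u << attr))
               printf("decode value for attribute %u next\n", attr);
   }

   int main(void)
   {
       bitmap_t req = 0;
       req = set_attr(req, ATTR_SUPPORTED);
       req = set_attr(req, ATTR_CHANGE);
       /* the server clears bits for attributes it cannot return */
       decode_attrs(req);
       return 0;
   }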
IMPLEMENTATION

ERRORS

   NFS4ERR_IO
   NFS4ERR_ACCES
   NFS4ERR_INVAL
   NFS4ERR_STALE
   NFS4ERR_BADHANDLE
   NFS4ERR_SERVERFAULT
   NFS4ERR_JUKEBOX
   NFS4ERR_FHEXPIRED
   NFS4ERR_MOVED

12.2.8. Operation 9: GETFH - Get Current Filehandle

SYNOPSIS

   (cfh) -> filehandle

ARGUMENT

   /* CURRENT_FH: */
   void;

RESULT

   struct GETFH4resok {
           nfs4_fh object;
   };

   union GETFH4res switch (nfsstat4 status) {
   case NFS4_OK:
           GETFH4resok resok4;
   default:
           void;
   };

DESCRIPTION

Returns the current filehandle. Operations that change the current filehandle, like LOOKUP or CREATE, do not automatically return the new filehandle as a result. For instance, if a client needs to look up a directory entry and obtain its filehandle, then the following request is needed:

   1: PUTFH  (directory filehandle)
   2: LOOKUP (entry name)
   3: GETFH

IMPLEMENTATION

ERRORS

   NFS4ERR_BADHANDLE
   NFS4ERR_FHEXPIRED
   NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE
   NFS4ERR_WRONGSEC

12.2.9. Operation 10: LINK - Create Link to a File

SYNOPSIS

   (cfh), directory, newname -> (cfh), change_info

ARGUMENT

   struct LINK4args {
           /* CURRENT_FH: file */
           nfs4_fh    dir;
           component4 newname;
   };

RESULT

   struct LINK4resok {
           change_info4 cinfo;
   };

   union LINK4res switch (nfsstat4 status) {
   case NFS4_OK:
           LINK4resok resok4;
   default:
           void;
   };

DESCRIPTION

The LINK procedure creates an additional name, newname, in the directory dir for the file represented by the current filehandle. The current file handle and the directory must reside within the same file system on the server. For the directory, the server returns change_info4 information in cinfo. With the atomic field of the change_info4 struct, the server will indicate whether the before and after change attributes were obtained atomically with respect to the link creation.

IMPLEMENTATION

Changes to any property of the hard-linked files are reflected in all of the linked files. When a hard link is made to a file, the attributes for the file should have an nlink value that is one greater than the value before the LINK.

The comments under RENAME regarding the object and target residing on the same file system apply here as well. The comments regarding the target name apply as well.

ERRORS

   NFS4ERR_IO
   NFS4ERR_ACCES
   NFS4ERR_EXIST
   NFS4ERR_XDEV
   NFS4ERR_NOTDIR
   NFS4ERR_INVAL
   NFS4ERR_NOSPC
   NFS4ERR_ROFS
   NFS4ERR_MLINK
   NFS4ERR_NAMETOOLONG
   NFS4ERR_DQUOT
   NFS4ERR_STALE
   NFS4ERR_BADHANDLE
   NFS4ERR_NOTSUPP
   NFS4ERR_SERVERFAULT
   NFS4ERR_FHEXPIRED
   NFS4ERR_MOVED
12.2.10. Operation 11: LOCK - Create Lock

SYNOPSIS

   (cfh) type, seqid, reclaim, owner, offset, length -> stateid, access

ARGUMENT

   enum nfs4_lock_type {
           READ_LT   = 1,
           WRITE_LT  = 2,
           READW_LT  = 3,    /* blocking read */
           WRITEW_LT = 4     /* blocking write */
   };

   struct LOCK4args {
           /* CURRENT_FH: file */
           nfs4_lock_type type;
           seqid4         seqid;
           bool           reclaim;
           stateid4       stateid;
           offset4        offset;
           length4        length;
   };

RESULT

   struct lockres {
           stateid4 stateid;
           int32_t  access;
   };

   union LOCK4res switch (nfsstat4 status) {
   case NFS4_OK:
           lockres result;
   default:
           void;
   };

DESCRIPTION

The LOCK procedure requests a record lock for the byte range specified by the offset and length parameters. The lock type is also specified, and must be one of the nfs4_lock_type values. If this is a reclaim request, the reclaim parameter will be TRUE.

IMPLEMENTATION

The File Locking section contains a full description of this and the other file locking procedures.

ERRORS

   NFS4ERR_ACCES
   NFS4ERR_ISDIR
   NFS4ERR_INVAL
   NFS4ERR_STALE
   NFS4ERR_BADHANDLE
   NFS4ERR_SERVERFAULT
   NFS4ERR_GRACE
   NFS4ERR_FHEXPIRED
   NFS4ERR_MOVED

12.2.11. Operation 12: LOCKT - Test For Lock

SYNOPSIS

   (cfh) type, seqid, reclaim, owner, offset, length -> {void, NFS4ERR_DENIED -> owner}

ARGUMENT

   struct LOCKT4args {
           /* CURRENT_FH: file */
           nfs4_lock_type type;
           seqid4         seqid;
           bool           reclaim;
           nfs_lockowner  owner;
           offset4        offset;
           length4        length;
   };

RESULT

   union LOCKT4res switch (nfsstat4 status) {
   case NFS4ERR_DENIED:
           nfs_lockowner owner;
   case NFS4_OK:
           void;
   default:
           void;
   };

DESCRIPTION

The LOCKT procedure tests for the existence of the lock specified in the arguments. The owner of any conflicting lock is returned in the event one is currently held; if no lock is held, nothing other than NFS4_OK is returned.

IMPLEMENTATION

The File Locking section contains a full description of this and the other file locking procedures.

ERRORS

   NFS4ERR_ACCES
   NFS4ERR_ISDIR
   NFS4ERR_INVAL
   NFS4ERR_STALE
   NFS4ERR_BADHANDLE
   NFS4ERR_SERVERFAULT
   NFS4ERR_DENIED
   NFS4ERR_GRACE
   NFS4ERR_FHEXPIRED
   NFS4ERR_MOVED

12.2.12. Operation 13: LOCKU - Unlock File

SYNOPSIS

   (cfh) type, seqid, reclaim, owner, offset, length -> stateid

ARGUMENT

   struct LOCKU4args {
           /* CURRENT_FH: file */
           nfs4_lock_type type;
           seqid4         seqid;
           bool           reclaim;
           nfs_lockowner  owner;
           offset4        offset;
           length4        length;
   };

RESULT

   union LOCKU4res switch (nfsstat4 status) {
   case NFS4_OK:
           stateid4 stateid_ok;
   default:
           stateid4 stateid_oth;
   };

DESCRIPTION

The LOCKU procedure unlocks the record lock specified by the parameters.

IMPLEMENTATION

The File Locking section contains a full description of this and the other file locking procedures.

ERRORS

   NFS4ERR_ACCES
   NFS4ERR_ISDIR
   NFS4ERR_INVAL
   NFS4ERR_STALE
   NFS4ERR_BADHANDLE
   NFS4ERR_SERVERFAULT
   NFS4ERR_GRACE
   NFS4ERR_FHEXPIRED
   NFS4ERR_MOVED

12.2.13. Operation 14: LOOKUP - Lookup Filename

SYNOPSIS

   (cfh), filenames -> (cfh)

ARGUMENT

   struct LOOKUP4args {
           /* CURRENT_FH: directory */
           pathname4 path;
   };

RESULT

   struct LOOKUP4res {
           /* CURRENT_FH: object */
           nfsstat4 status;
   };

DESCRIPTION

The current filehandle is assumed to refer to a directory.
LOOKUP evaluates the pathname contained in the array of names and obtains a new current filehandle from the final name. All but the final name in the list must be names of directories.

If the pathname cannot be evaluated, either because a component does not exist or because the client does not have permission to evaluate a component of the path, then an error will be returned and the current filehandle will be unchanged.

IMPLEMENTATION

If the client prefers a partial evaluation of the path, then a sequence of LOOKUP operations can be substituted, e.g.:

   1. PUTFH  (directory filehandle)
   2. LOOKUP "pub" "foo" "bar"
   3. GETFH

or

   1. PUTFH  (directory filehandle)
   2. LOOKUP "pub"
   3. GETFH
   4. LOOKUP "foo"
   5. GETFH
   6. LOOKUP "bar"
   7. GETFH

NFS version 4 servers depart from the semantics of previous NFS versions in allowing LOOKUP requests to cross mountpoints on the server. The client can detect a mountpoint crossing by comparing the fsid attribute of the directory with the fsid attribute of the directory looked up. If the fsids are different, then the new directory is a server mountpoint. UNIX clients that detect a mountpoint crossing will need to mount the server's filesystem.

Servers that limit NFS access to "shares" or "exported" filesystems should provide a pseudo-filesystem into which the exported filesystems can be integrated, so that clients can browse the server's namespace. The client's view of a pseudo filesystem will be limited to paths that lead to exported filesystems.

Note: previous versions of the protocol assigned special semantics to the names "." and "..". NFS version 4 assigns no special semantics to these names. The LOOKUPP operator must be used to look up a parent directory.

Note that this procedure does not follow symbolic links. The client is responsible for all parsing of filenames, including filenames that are modified by symbolic links encountered during the lookup process.

ERRORS

   NFS4ERR_NOENT
   NFS4ERR_IO
   NFS4ERR_ACCES
   NFS4ERR_NOTDIR
   NFS4ERR_INVAL
   NFS4ERR_NAMETOOLONG
   NFS4ERR_STALE
   NFS4ERR_SERVERFAULT
   NFS4ERR_FHEXPIRED
   NFS4ERR_MOVED

12.2.14. Operation 15: LOOKUPP - Lookup Parent Directory

SYNOPSIS

   (cfh) -> (cfh)

ARGUMENT

   /* CURRENT_FH: object */
   void;

RESULT

   struct LOOKUPP4res {
           /* CURRENT_FH: directory */
           nfsstat4 status;
   };

DESCRIPTION

The current filehandle is assumed to refer to a directory. LOOKUPP assigns the filehandle for its parent directory to be the current filehandle. If there is no parent directory, the error NFS4ERR_NOENT must be returned. Therefore, NFS4ERR_NOENT will be returned by the server when the current filehandle is at the root or top of the server's file tree.

IMPLEMENTATION

As with LOOKUP, LOOKUPP will also cross mountpoints.

ERRORS

   NFS4ERR_NOENT
   NFS4ERR_IO
   NFS4ERR_ACCES
   NFS4ERR_INVAL
   NFS4ERR_STALE
   NFS4ERR_SERVERFAULT
   NFS4ERR_FHEXPIRED
   NFS4ERR_MOVED
12.2.15.  Operation 16: NVERIFY - Verify Difference in Attributes

   SYNOPSIS

     (cfh), attrbits, attrvals -> -

   ARGUMENT

     struct NVERIFY4args {
             /* CURRENT_FH: object */
             bitmap4         attr_request;
             fattr4          obj_attributes;
     };

   RESULT

     struct NVERIFY4res {
             nfsstat4        status;
     };

   DESCRIPTION

     This operation is used to prefix a sequence of operations to be
     performed if one or more attributes have changed on some
     filesystem object.  If all the attributes match then the error
     NFS4ERR_SAME must be returned.

   IMPLEMENTATION

     This operation is useful as a cache validation operator.  If the
     object to which the attributes belong has changed then the
     following operations may obtain new data associated with that
     object.  For instance, to check if a file has been changed and
     obtain new data if it has:

       1. PUTFH   (public)
       2. LOOKUP  "pub" "foo" "bar"
       3. NVERIFY attrbits attrs
       4. READ    0 32767

Expires: April 2000 [Page 114]
Draft Protocol Specification NFS version 4 October 1999

   ERRORS

     NFS4ERR_IO
     NFS4ERR_ACCES
     NFS4ERR_STALE
     NFS4ERR_BADHANDLE
     NFS4ERR_SERVERFAULT
     NFS4ERR_FHEXPIRED
     NFS4ERR_SAME
     NFS4ERR_MOVED

Expires: April 2000 [Page 115]
Draft Protocol Specification NFS version 4 October 1999

12.2.16.  Operation 17: OPEN - Open a Regular File

   SYNOPSIS

     (cfh), claim, openhow, owner, seqid, access, deny ->
           (cfh), stateid, rflags, access, delegation

   ARGUMENT

     struct OPEN4args {
             open_claim4     claim;
             openflag        openhow;
             nfs_lockowner   owner;
             seqid4          seqid;
             int32_t         access;
             int32_t         deny;
     };

     enum createmode4 {
             UNCHECKED       = 0,
             GUARDED         = 1,
             EXCLUSIVE       = 2
     };

     union createhow4 switch (createmode4 mode) {
      case UNCHECKED:
      case GUARDED:
              fattr4          createattrs;
      case EXCLUSIVE:
              createverf4     verf;
     };

     enum opentype4 {
             OPEN4_NOCREATE  = 0,
             OPEN4_CREATE    = 1
     };

     union openflag switch (opentype4 opentype) {
      case OPEN4_CREATE:
              createhow4      how;
      default:
              void;
     };

     /*
      * Access and Deny constants for open argument
      */

Expires: April 2000 [Page 116]
Draft Protocol Specification NFS version 4 October 1999

     const OPEN4_ACCESS_READ  = 0x0001;
     const OPEN4_ACCESS_WRITE = 0x0002;
     const OPEN4_ACCESS_BOTH  = 0x0003;

     const OPEN4_DENY_NONE    = 0x0000;
     const OPEN4_DENY_READ    = 0x0001;
     const OPEN4_DENY_WRITE   = 0x0002;
     const OPEN4_DENY_BOTH    = 0x0003;

     enum open_delegation_type4 {
             OPEN_DELEGATE_NONE      = 0,
             OPEN_DELEGATE_READ      = 1,
             OPEN_DELEGATE_WRITE     = 2
     };

     enum open_claim_type4 {
             CLAIM_NULL              = 0,
             CLAIM_PREVIOUS          = 1,
             CLAIM_DELEGATE_CUR      = 2,
             CLAIM_DELEGATE_PREV     = 3
     };

     struct open_claim_delegate_cur {
             pathname4       file;
             stateid4        delegate_stateid;
     };

     union open_claim4 switch (open_claim_type4 claim) {
      /*
       * No special rights to file.  Ordinary OPEN of the specified
       * file.
       */
      case CLAIM_NULL:
             /* CURRENT_FH: directory */
             pathname4       file;

      /*
       * Right to the file established by an open previous to server
       * reboot.  File identified by filehandle obtained at that time
       * rather than by name.
       */
      case CLAIM_PREVIOUS:
             /* CURRENT_FH: file being reclaimed */
             int32_t         delegate_type;

      /*
       * Right to file based on a delegation granted by the server.
       * File is specified by name.
       */

Expires: April 2000 [Page 117]
Draft Protocol Specification NFS version 4 October 1999

      case CLAIM_DELEGATE_CUR:
             /* CURRENT_FH: directory */
             open_claim_delegate_cur delegate_cur_info;

      /*
       * Right to file based on a delegation granted to a previous
       * boot instance of the client.  File is specified by name.
       */
      case CLAIM_DELEGATE_PREV:
             /* CURRENT_FH: directory */
             pathname4       file_delegate_prev;
     };

   RESULT

     /*
      * Result flags
      */
     /* Mandatory locking is in effect for this file. */
     const OPEN4_RESULT_MLOCK = 0x0001;

     struct open_read_delegation4 {
             stateid4        stateid;     /* Stateid for delegation */
             bool            recall;      /* Pre-recalled flag for
                                             delegations obtained by
                                             reclaim
                                             (CLAIM_PREVIOUS) */
             nfsace4         permissions; /* Defines users who don't
                                             need an ACCESS call to
                                             open for read */
     };

     struct open_write_delegation4 {
             stateid4        stateid;     /* Stateid for delegation */
             bool            recall;      /* Pre-recalled flag for
                                             delegations obtained by
                                             reclaim
                                             (CLAIM_PREVIOUS) */
             nfs_space_limit4 space_limit; /* Defines condition that
                                             the client must check to
                                             determine whether the
                                             file needs to be flushed
                                             to the server on close. */
             nfsace4         permissions; /* Defines users who don't

Expires: April 2000 [Page 118]
Draft Protocol Specification NFS version 4 October 1999

                                             need an ACCESS call as
                                             part of a delegated
                                             open. */
     };

     union open_delegation4
      switch (open_delegation_type4 delegation_type) {
      case OPEN_DELEGATE_NONE:
              void;
      case OPEN_DELEGATE_READ:
              open_read_delegation4 read;
      case OPEN_DELEGATE_WRITE:
              open_write_delegation4 write;
     };

     struct OPEN4resok {
             stateid4        stateid;     /* Stateid for open */
             uint32_t        rflags;      /* Result flags */
             int32_t         access;      /* Access granted */
             open_delegation4 delegation; /* Info on any open
                                             delegation */
     };

     union OPEN4res switch (nfsstat4 status) {
      case NFS4_OK:
             /* CURRENT_FH: opened file */
             OPEN4resok      result;
      default:
             void;
     };

   DESCRIPTION

     The OPEN procedure creates and/or opens a regular file in a
     directory with the provided name.  If the file does not exist at
     the server and creation is desired, specification of the method of
     creation is provided by the openhow parameter.  The client has the
     choice of three creation methods: UNCHECKED, GUARDED, or
     EXCLUSIVE.

     UNCHECKED means that the file should be created without checking
     for the existence of a duplicate object in the same directory.
     For this type of create, createattrs specifies the initial set of
     attributes for the file (NOTE: need to define exactly which
     attributes should be set and, if the file exists, whether the
     attributes should be modified).  If GUARDED is specified, the
     server checks for the presence of a duplicate object

Expires: April 2000 [Page 119]
Draft Protocol Specification NFS version 4 October 1999

     by name before performing the create.  If a duplicate exists, an
     error of NFS4ERR_EXIST is returned as the status.  If the object
     does not exist, the request is performed as described for
     UNCHECKED.

     EXCLUSIVE specifies that the server is to follow exclusive
     creation semantics, using the verifier to ensure exclusive
     creation of the target.  The server should check for the presence
     of a duplicate object by name.  If the object does not exist, the
     server creates the object and stores the verifier with the object.
     If the object does exist and the stored verifier matches the
     client-provided verifier, the server uses the existing object as
     the newly created object.  If the stored verifier does not match,
     then an error of NFS4ERR_EXIST is returned.  No attributes may be
     provided in this case, since the server may use an attribute of
     the target object to store the verifier.  (NOTE: does a specific
     attribute need to be specified for storage of the verifier?)

     Upon successful creation, the current filehandle is replaced by
     that of the new object.

     The OPEN procedure provides for DOS SHARE capability with the use
     of the access and deny fields of the OPEN arguments.  The client
     specifies at OPEN the required access and deny modes.  For clients
     that do not directly support SHAREs (e.g. Unix), the expected deny
     value is DENY_NONE.
     In the case that there is an existing SHARE reservation that
     conflicts with the OPEN request, the server returns the error
     NFS4ERR_DENIED.  For a complete SHARE request, the client must
     provide values for the owner and seqid fields of the OPEN
     argument.  For additional discussion of SHARE semantics see the
     section on 'Share Reservations'.

     In the case that the client is recovering state from a server
     failure, the reclaim field of the OPEN argument is used to signify
     that the request is meant to reclaim state previously held.

     The "claim" field of the OPEN argument is used to specify the file
     to be opened and the state information which the client claims to
     possess.  There are four basic claim types which cover the various
     situations for an OPEN.  They are as follows:

     CLAIM_NULL
                    For the client, this is a new OPEN request and
                    there is no previous state associated with the
                    file for the client.

Expires: April 2000 [Page 120]
Draft Protocol Specification NFS version 4 October 1999

     CLAIM_PREVIOUS
                    The client is claiming basic OPEN state for a file
                    that was held previous to a server reboot.
                    Generally used when a server is returning
                    persistent file handles; the client may not have
                    the file name to reclaim the OPEN.

     CLAIM_DELEGATE_CUR
                    The client is claiming a delegation for OPEN as
                    granted by the server.  Generally this is done as
                    part of recalling a delegation.

     CLAIM_DELEGATE_PREV
                    The client is claiming a delegation granted to a
                    previous client instance; used after the client
                    reboots.

     For OPEN requests whose claim type is other than CLAIM_PREVIOUS
     (i.e. requests other than those devoted to reclaiming opens after
     a server reboot) that reach the server during its grace or lease
     expiration period, the server returns an error of NFS4ERR_GRACE.

     For any OPEN request, the server may return an open delegation,
     which allows further opens and closes to be handled locally on the
     client as described in the section Open Delegation.  Note that
     whether a delegation is granted is up to the server to decide.
     The client should never assume that delegation will or will not be
     granted in a particular instance.  It should always be prepared
     for either case.  A partial exception is the reclaim
     (CLAIM_PREVIOUS) case, in which a delegation type is claimed.  In
     this case, delegation will always be granted, although the server
     may specify an immediate recall in the delegation structure.

   IMPLEMENTATION

     The OPEN procedure contains support for EXCLUSIVE create.  The
     mechanism is similar to the support in NFS version 3 [RFC1813].
     As in NFS version 3, this mechanism provides reliable exclusive
     creation.  Exclusive create is invoked when the how parameter is
     EXCLUSIVE.  In this case, the client provides a verifier that can
     reasonably be expected to be unique.  A combination of a client
     identifier, perhaps the client network address, and a unique
     number generated by the client, perhaps the RPC transaction
     identifier, may be appropriate.

     If the object does not exist, the server creates the object and

Expires: April 2000 [Page 121]
Draft Protocol Specification NFS version 4 October 1999

     stores the verifier in stable storage.  For file systems that do
     not provide a mechanism for the storage of arbitrary file
     attributes, the server may use one or more elements of the object
     meta-data to store the verifier.  The verifier must be stored in
     stable storage to prevent erroneous failure on retransmission of
     the request.
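     As an illustration only, the following C fragment sketches one way
     a client might construct such a verifier, combining its network
     address with a per-request unique number.  The helper name and the
     8-octet verifier size are assumptions of this sketch, not
     requirements of the protocol.

       #include <stdint.h>
       #include <string.h>

       typedef uint8_t createverf4[8];   /* assumed verifier size */

       /*
        * Build a verifier that is highly likely to be unique: the
        * client's IPv4 address plus a number unique per request
        * (e.g. the RPC transaction identifier).
        */
       void
       make_exclusive_verifier(uint32_t client_ip, uint32_t unique,
                               createverf4 verf)
       {
           memcpy(&verf[0], &client_ip, sizeof(client_ip));
           memcpy(&verf[4], &unique, sizeof(unique));
       }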
     It is assumed that an exclusive create is being performed because
     exclusive semantics are critical to the application.  Because of
     the expected usage, exclusive CREATE does not rely solely on the
     normally volatile duplicate request cache for storage of the
     verifier.  The duplicate request cache in volatile storage does
     not survive a crash and may actually flush on a long network
     partition, opening failure windows.  In the UNIX local file system
     environment, the expected storage location for the verifier on
     creation is the meta-data (time stamps) of the object.  For this
     reason, an exclusive object create may not include initial
     attributes because the server would have nowhere to store the
     verifier.

     If the server cannot support these exclusive create semantics,
     possibly because of the requirement to commit the verifier to
     stable storage, it should fail the OPEN request with the error,
     NFS4ERR_NOTSUPP.

     During an exclusive CREATE request, if the object already exists,
     the server reconstructs the object's verifier and compares it with
     the verifier in the request.  If they match, the server treats the
     request as a success.  The request is presumed to be a duplicate
     of an earlier, successful request for which the reply was lost and
     that the server duplicate request cache mechanism did not detect.
     If the verifiers do not match, the request is rejected with the
     status, NFS4ERR_EXIST.

     Once the client has performed a successful exclusive create, it
     must issue a SETATTR to set the correct object attributes.  Until
     it does so, it should not rely upon any of the object attributes,
     since the server implementation may need to overload object meta-
     data to store the verifier.  The subsequent SETATTR must not occur
     in the same COMPOUND request as the OPEN.  This separation will
     guarantee that the exclusive create mechanism will continue to
     function properly in the face of retransmission of the request.

     Use of the GUARDED attribute does not provide exactly-once
     semantics.  In particular, if a reply is lost and the server does
     not detect the retransmission of the request, the procedure can
     fail with NFS4ERR_EXIST, even though the create was performed
     successfully.

     For SHARE reservations, the client must specify a value for access

Expires: April 2000 [Page 122]
Draft Protocol Specification NFS version 4 October 1999

     that is one of READ, WRITE, or BOTH.  For deny, the client must
     specify one of NONE, READ, WRITE, or BOTH.  If the client fails to
     do this, the server must return NFS4ERR_INVAL.

   ERRORS

     NFS4ERR_IO
     NFS4ERR_ACCES
     NFS4ERR_EXIST
     NFS4ERR_NOTDIR
     NFS4ERR_NOSPC
     NFS4ERR_ROFS
     NFS4ERR_NAMETOOLONG
     NFS4ERR_DQUOT
     NFS4ERR_NOTSUPP
     NFS4ERR_SERVERFAULT
     NFS4ERR_SHARE_DENIED
     NFS4ERR_GRACE
     NFS4ERR_MOVED

Expires: April 2000 [Page 123]
Draft Protocol Specification NFS version 4 October 1999

12.2.17.  Operation 18: OPENATTR - Open Named Attribute Directory

   SYNOPSIS

     (cfh) -> (cfh)

   ARGUMENT

     /* CURRENT_FH: file or directory */
     void;

   RESULT

     struct OPENATTR4res {
             /* CURRENT_FH: named attr directory */
             nfsstat4        status;
     };

   DESCRIPTION

     The OPENATTR procedure is used to obtain the filehandle of the
     named attribute directory associated with the current filehandle.
     The result of the OPENATTR will be a filehandle of type
     NF4ATTRDIR.  From this filehandle, READDIR and LOOKUP procedures
     can be used to obtain filehandles for the various named attributes
     associated with the original file system object.  Filehandles
     returned within the named attribute directory will have a type of
     NF4NAMEDATTR.
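     For example, a client could read the value of a named attribute
     with a sequence such as the following (in the style of the earlier
     examples; the attribute name is illustrative):

       1. PUTFH    (file filehandle)
       2. OPENATTR
       3. LOOKUP   "icon"
       4. READ     0 4096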
   IMPLEMENTATION

     If the server does not support named attributes for the current
     filehandle, an error of NFS4ERR_NOTSUPP will be returned to the
     client.

   ERRORS

     NFS4ERR_NOENT
     NFS4ERR_IO

Expires: April 2000 [Page 124]
Draft Protocol Specification NFS version 4 October 1999

     NFS4ERR_ACCES
     NFS4ERR_INVAL
     NFS4ERR_STALE
     NFS4ERR_BADHANDLE
     NFS4ERR_NOTSUPP
     NFS4ERR_SERVERFAULT
     NFS4ERR_JUKEBOX
     NFS4ERR_FHEXPIRED
     NFS4ERR_WRONGSEC
     NFS4ERR_MOVED

Expires: April 2000 [Page 125]
Draft Protocol Specification NFS version 4 October 1999

12.2.18.  Operation 19: PUTFH - Set Current Filehandle

   SYNOPSIS

     filehandle -> (cfh)

   ARGUMENT

     struct PUTFH4args {
             nfs4_fh         object;
     };

   RESULT

     struct PUTFH4res {
             /* CURRENT_FH: */
             nfsstat4        status;
     };

   DESCRIPTION

     Replaces the current filehandle with the filehandle provided as an
     argument.

   IMPLEMENTATION

     Commonly used as the first operator in any NFS request to set the
     context for following operations.

   ERRORS

     NFS4ERR_BADHANDLE
     NFS4ERR_FHEXPIRED
     NFS4ERR_MOVED
     NFS4ERR_SERVERFAULT

Expires: April 2000 [Page 126]
Draft Protocol Specification NFS version 4 October 1999

     NFS4ERR_STALE
     NFS4ERR_WRONGSEC

Expires: April 2000 [Page 127]
Draft Protocol Specification NFS version 4 October 1999

12.2.19.  Operation 20: PUTPUBFH - Set Public Filehandle

   SYNOPSIS

     - -> (cfh)

   ARGUMENT

     void;

   RESULT

     struct PUTPUBFH4res {
             /* CURRENT_FH: public fh */
             nfsstat4        status;
     };

   DESCRIPTION

     Replaces the current filehandle with the filehandle that
     represents the public filehandle of the server's namespace.  This
     filehandle may be different from the "root" filehandle which may
     be associated with some other directory on the server.

   IMPLEMENTATION

     Commonly used as the first operator in any NFS request to set the
     context for following operations.

   ERRORS

     NFS4ERR_SERVERFAULT
     NFS4ERR_WRONGSEC

Expires: April 2000 [Page 128]
Draft Protocol Specification NFS version 4 October 1999

12.2.20.  Operation 21: PUTROOTFH - Set Root Filehandle

   SYNOPSIS

     - -> (cfh)

   ARGUMENT

     void;

   RESULT

     struct PUTROOTFH4res {
             /* CURRENT_FH: root fh */
             nfsstat4        status;
     };

   DESCRIPTION

     Replaces the current filehandle with the filehandle that
     represents the root of the server's namespace.  From this
     filehandle a LOOKUP operation can locate any other filehandle on
     the server.  This filehandle may be different from the "public"
     filehandle which may be associated with some other directory on
     the server.

   IMPLEMENTATION

     Commonly used as the first operator in any NFS request to set the
     context for following operations.

   ERRORS

     NFS4ERR_SERVERFAULT
     NFS4ERR_WRONGSEC

Expires: April 2000 [Page 129]
Draft Protocol Specification NFS version 4 October 1999

12.2.21.  Operation 22: READ - Read from File

   SYNOPSIS

     (cfh), offset, count, stateid -> eof, data

   ARGUMENT

     struct READ4args {
             /* CURRENT_FH: file */
             stateid4        stateid;
             offset4         offset;
             count4          count;
     };

   RESULT

     struct READ4resok {
             bool            eof;
             opaque          data<>;
     };

     union READ4res switch (nfsstat4 status) {
      case NFS4_OK:
              READ4resok      resok4;
      default:
              void;
     };

   DESCRIPTION

     The READ procedure reads data from the regular file identified by
     the current filehandle.

     The client provides an offset of where the READ is to start and a
     count of how many bytes are to be read.  An offset of 0 (zero)
     means to read data starting at the beginning of the file.  If
     offset is greater than or equal to the size of the file, the
     status, NFS4_OK, is returned with a data length set to 0 (zero)
     and eof set to TRUE.  The READ is subject to access permissions
     checking.
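     Because the server may return fewer bytes than requested, a client
     typically issues READ requests in a loop.  The following C sketch
     assumes a hypothetical nfs4_read() wrapper that performs the READ
     operation, returning the number of bytes read (or -1 on error) and
     setting *eof from the result:

       #include <stdbool.h>
       #include <stdint.h>

       /* Hypothetical wrapper around the READ operation. */
       extern long nfs4_read(uint64_t offset, uint32_t count,
                             char *buf, bool *eof);

       /* Read up to 'count' bytes at 'offset', retrying short reads. */
       long
       read_fully(uint64_t offset, uint32_t count, char *buf)
       {
           long total = 0;
           bool eof = false;

           while (count > 0 && !eof) {
               long n = nfs4_read(offset, count, buf + total, &eof);
               if (n < 0)
                   return -1;          /* READ failed */
               if (n == 0 && !eof)
                   break;              /* defensive: avoid looping */
               total += n;
               offset += n;
               count -= (uint32_t)n;
           }
           return total;
       }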
Expires: April 2000 [Page 130]
Draft Protocol Specification NFS version 4 October 1999

     If the client specifies a count value of 0 (zero), the READ
     succeeds and returns 0 (zero) bytes of data, again subject to
     access permissions checking.  The server may choose to return
     fewer bytes than specified by the client.  The client needs to
     check for this condition and handle it appropriately.

     The stateid value for a READ request represents a value returned
     from a previous record lock or share reservation request.  It is
     used by the server to verify that the associated lock is still
     valid and to update lease timeouts for the client.

     If the read ended at the end-of-file (formally, in a correctly
     formed READ request, if offset + count is equal to the size of the
     file), eof is returned as TRUE; otherwise it is FALSE.  A
     successful READ of an empty file will always return eof as TRUE.

   IMPLEMENTATION

     It is possible for the server to return fewer than count bytes of
     data.  If the server returns less than the count requested and eof
     set to FALSE, the client should issue another READ to get the
     remaining data.  A server may return less data than requested
     under several circumstances.  The file may have been truncated by
     another client or perhaps on the server itself, changing the file
     size from what the requesting client believes to be the case.
     This would reduce the actual amount of data available to the
     client.  The server may also back off the transfer size, reducing
     the amount of data returned by the read request.  Server resource
     exhaustion may also occur, necessitating a smaller read return.

     If the file is locked, the server will return an NFS4ERR_LOCKED
     error.  Since the lock may be of short duration, the client may
     choose to retransmit the READ request (with exponential backoff)
     until the operation succeeds.

   ERRORS

     NFS4ERR_IO
     NFS4ERR_NXIO
     NFS4ERR_ACCES
     NFS4ERR_INVAL
     NFS4ERR_STALE

Expires: April 2000 [Page 131]
Draft Protocol Specification NFS version 4 October 1999

     NFS4ERR_BADHANDLE
     NFS4ERR_SERVERFAULT
     NFS4ERR_DENIED
     NFS4ERR_JUKEBOX
     NFS4ERR_EXPIRED
     NFS4ERR_LOCKED
     NFS4ERR_GRACE
     NFS4ERR_FHEXPIRED
     NFS4ERR_WRONGSEC
     NFS4ERR_MOVED

Expires: April 2000 [Page 132]
Draft Protocol Specification NFS version 4 October 1999

12.2.22.  Operation 23: READDIR - Read Directory

   SYNOPSIS

     (cfh), cookie, dircount, maxcount, attrbits ->
           { cookie, filename, attrbits, attributes }

   ARGUMENT

     struct READDIR4args {
             /* CURRENT_FH: directory */
             nfs_cookie4     cookie;
             count4          dircount;
             count4          maxcount;
             bitmap4         attr_request;
     };

   RESULT

     struct entry4 {
             nfs_cookie4     cookie;
             component4      name;
             fattr4          attrs;
             entry4          *nextentry;
     };

     struct dirlist4 {
             entry4          *entries;
             bool            eof;
     };

     struct READDIR4resok {
             dirlist4        reply;
     };

     union READDIR4res switch (nfsstat4 status) {
      case NFS4_OK:
              READDIR4resok   resok4;
      default:
              void;
     };

Expires: April 2000 [Page 133]
Draft Protocol Specification NFS version 4 October 1999

   DESCRIPTION

     The READDIR procedure retrieves a variable number of entries from
     a file system directory and returns complete information about
     each entry along with information to allow the client to request
     additional directory entries in a subsequent READDIR.

     The arguments contain a cookie value that represents where the
     READDIR should start within the directory.  A value of 0 (zero)
     for the cookie is used to start reading at the beginning of the
     directory.  For subsequent READDIR requests, the client specifies
     a cookie value that is provided by the server on a previous
     READDIR request.
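     The following C sketch shows how the cookie is carried between
     READDIR calls.  The nfs4_readdir() wrapper is hypothetical: it
     issues one READDIR starting at *cookie, processes the entries
     returned, leaves *cookie at the last entry's cookie value, and
     sets *eof from the result:

       #include <stdbool.h>
       #include <stdint.h>

       /* Hypothetical wrapper: one READDIR round trip. */
       extern int nfs4_readdir(uint64_t *cookie, bool *eof);

       /* Enumerate a whole directory, one READDIR at a time. */
       int
       read_whole_directory(void)
       {
           uint64_t cookie = 0;    /* 0 = start of directory */
           bool eof = false;

           while (!eof) {
               if (nfs4_readdir(&cookie, &eof) != 0)
                   return -1;      /* READDIR failed */
               /* 'cookie' now marks where the next call resumes. */
           }
           return 0;
       }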
The dircount portion of the argument is the maximum number of bytes of directory information that should be returned. This value does not include the size of attributes or filehandle values that may be returned in the result. The maxcount value of the argument specifies the maximum number of bytes for the result. This maximum size represents all of the data being returned and includes the XDR overhead. The server may return less data. Finally, attrbits represents the list of attributes the client wants returned for each directory entry supplied by the server. On successful return, the server's response will provide a list of directory entries. Each of these entries contains the name of the directory entry, a cookie value for that entry, and the associated attributes as requested. The cookie value is only meaningful to the server and is used as a "bookmark" for the directory entry. As mentioned, this cookie is used by the client for subsequent READDIR operations so that it may continue reading a directory. The cookie is similar in concept to a READ offset but should not be interpreted as such by the client. Ideally, the cookie value should not change if the directory is modified. In some cases, the server may encounter an error while obtaining the attributes for a directory entry. Instead of returning an error for the entire READDIR operation, the server can instead return the attribute 'fattr4_rdattr_error'. This way the server is able to communicate the failure to the client and not fail the entire operation in the instance of what might be a transient failure. Obviously, the client must request the fattr4_rdattr_error attribute for this method to work properly. If Expires: April 2000 [Page 134] Draft Protocol Specification NFS version 4 October 1999 the client does not request the attribute, the server has no choice but to return failure for the entire READDIR operation. IMPLEMENTATION Issues that need to be understood for this procedure include increased cache flushing activity on the client (as new file handles are returned with names which are entered into caches) and over-the-wire overhead versus expected subsequent LOOKUP and GETATTR elimination. The dircount and maxcount fields are included as an optimization. Consider a READDIR call on a UNIX operating system implementation for 1048 bytes; the reply does not contain many entries because of the overhead due to attributes and file handles. An alternative is to issue a READDIR call for 8192 bytes and then only use the first 1048 bytes of directory information. However, the server doesn't know that all that is needed is 1048 bytes of directory information (as would be returned by READDIR). It sees the 8192 byte request and issues a VOP_READDIR for 8192 bytes. It then steps through all of those directory entries, obtaining attributes and file handles for each entry. When it encodes the result, the server only encodes until it gets 8192 bytes of results which include the attributes and file handles. Thus, it has done a larger VOP_READDIR and many more attribute fetches than it needed to. The ratio of the directory entry size to the size of the attributes plus the size of the file handle is usually at least 8 to 1. The server has done much more work than it needed to. The solution to this problem is for the client to provide two counts to the server. The first is the number of bytes of directory information that the client really wants, dircount. 
     The second is the maximum number of bytes in the result, including
     the attributes and file handles, maxcount.  Thus, the server will
     issue a VOP_READDIR for only the number of bytes that the client
     really wants to get, not an inflated number.  This should help to
     reduce the size of VOP_READDIR requests on the server, thus
     reducing the amount of work done there, and to reduce the number
     of VOP_LOOKUP, VOP_GETATTR, and other calls done by the server to
     construct attributes and file handles.

   ERRORS

     NFS4ERR_IO
     NFS4ERR_ACCES

Expires: April 2000 [Page 135]
Draft Protocol Specification NFS version 4 October 1999

     NFS4ERR_NOTDIR
     NFS4ERR_INVAL
     NFS4ERR_STALE
     NFS4ERR_BADHANDLE
     NFS4ERR_BAD_COOKIE
     NFS4ERR_TOOSMALL
     NFS4ERR_NOTSUPP
     NFS4ERR_SERVERFAULT
     NFS4ERR_JUKEBOX
     NFS4ERR_FHEXPIRED
     NFS4ERR_WRONGSEC
     NFS4ERR_MOVED

Expires: April 2000 [Page 136]
Draft Protocol Specification NFS version 4 October 1999

12.2.23.  Operation 24: READLINK - Read Symbolic Link

   SYNOPSIS

     (cfh) -> linktext

   ARGUMENT

     /* CURRENT_FH: symlink */
     void;

   RESULT

     struct READLINK4resok {
             linktext4       link;
     };

     union READLINK4res switch (nfsstat4 status) {
      case NFS4_OK:
              READLINK4resok  resok4;
      default:
              void;
     };

   DESCRIPTION

     READLINK reads the data associated with a symbolic link.  The data
     is a UTF-8 string that is opaque to the server.  That is, whether
     created by an NFS client or created locally on the server, the
     data in a symbolic link is not interpreted when created, but is
     simply stored.

   IMPLEMENTATION

     A symbolic link is nominally a pointer to another file.  The data
     is not necessarily interpreted by the server, just stored in the
     file.  It is possible for a client implementation to store a path
     name that is not meaningful to the server operating system in a
     symbolic link.  A READLINK operation returns the data to the
     client for interpretation.  If different implementations want to
     share access to symbolic links, then they must agree on the

Expires: April 2000 [Page 137]
Draft Protocol Specification NFS version 4 October 1999

     interpretation of the data in the symbolic link.

     The READLINK operation is only allowed on objects of type, NF4LNK.
     The server should return the error, NFS4ERR_INVAL, if the object
     is not of type, NF4LNK.

   ERRORS

     NFS4ERR_IO
     NFS4ERR_INVAL
     NFS4ERR_ACCES
     NFS4ERR_STALE
     NFS4ERR_BADHANDLE
     NFS4ERR_NOTSUPP
     NFS4ERR_SERVERFAULT
     NFS4ERR_JUKEBOX
     NFS4ERR_FHEXPIRED
     NFS4ERR_WRONGSEC
     NFS4ERR_MOVED

Expires: April 2000 [Page 138]
Draft Protocol Specification NFS version 4 October 1999

12.2.24.  Operation 25: REMOVE - Remove Filesystem Object

   SYNOPSIS

     (cfh), filename -> change_info

   ARGUMENT

     struct REMOVE4args {
             /* CURRENT_FH: directory */
             component4      target;
     };

   RESULT

     struct REMOVE4resok {
             change_info4    cinfo;
     };

     union REMOVE4res switch (nfsstat4 status) {
      case NFS4_OK:
              REMOVE4resok    resok4;
      default:
              void;
     };

   DESCRIPTION

     The REMOVE procedure removes (deletes) a directory entry named by
     filename from the directory corresponding to the current
     filehandle.  If the entry in the directory was the last reference
     to the corresponding file system object, the object may be
     destroyed.

     For the directory where the filename was removed, the server
     returns change_info4 information in cinfo.  With the atomic field
     of the change_info4 struct, the server will indicate if the before
     and after change attributes were obtained atomically with respect
     to the removal.

Expires: April 2000 [Page 139]
Draft Protocol Specification NFS version 4 October 1999

   IMPLEMENTATION

     NFS versions 2 and 3 required a different operator, RMDIR, for
     directory removal.
     NFS version 4 REMOVE can be used to delete any directory entry
     independent of its filetype.

     The concept of last reference is server specific.  However, if the
     nlink field in the previous attributes of the object had the value
     1, the client should not rely on referring to the object via a
     file handle.  Likewise, the client should not rely on the
     resources (disk space, directory entry, and so on) formerly
     associated with the object becoming immediately available.  Thus,
     if a client needs to be able to continue to access a file after
     using REMOVE to remove it, the client should take steps to make
     sure that the file will still be accessible.  The usual mechanism
     is to use RENAME to rename the file from its old name to a new,
     hidden name.

   ERRORS

     NFS4ERR_NOENT
     NFS4ERR_IO
     NFS4ERR_ACCES
     NFS4ERR_NOTDIR
     NFS4ERR_ROFS
     NFS4ERR_NAMETOOLONG
     NFS4ERR_NOTEMPTY
     NFS4ERR_STALE
     NFS4ERR_BADHANDLE
     NFS4ERR_NOTSUPP
     NFS4ERR_SERVERFAULT
     NFS4ERR_FHEXPIRED
     NFS4ERR_WRONGSEC
     NFS4ERR_MOVED

Expires: April 2000 [Page 140]
Draft Protocol Specification NFS version 4 October 1999

12.2.25.  Operation 26: RENAME - Rename Directory Entry

   SYNOPSIS

     (cfh), oldname, newdir, newname ->
           source_change_info, target_change_info

   ARGUMENT

     struct RENAME4args {
             /* CURRENT_FH: source directory */
             component4      oldname;
             nfs4_fh         newdir;
             component4      newname;
     };

   RESULT

     struct RENAME4resok {
             change_info4    source_cinfo;
             change_info4    target_cinfo;
     };

     union RENAME4res switch (nfsstat4 status) {
      case NFS4_OK:
              RENAME4resok    resok4;
      default:
              void;
     };

   DESCRIPTION

     RENAME renames the object identified by oldname in the directory
     corresponding to the current filehandle to newname in directory
     newdir.  The operation is required to be atomic to the client.
     Source and target directories must reside on the same file system
     on the server.

     If the directory, newdir, already contains an entry with the name,
     newname, the source object must be compatible with the target:
     either both are non-directories or both are directories and the
     target must be empty.  If compatible, the existing target is
     removed

Expires: April 2000 [Page 141]
Draft Protocol Specification NFS version 4 October 1999

     before the rename occurs.  If they are not compatible or if the
     target is a directory but not empty, the server should return the
     error, NFS4ERR_EXIST.

     If oldname and newname both refer to the same file (they might be
     hard links of each other), then RENAME should perform no action
     and return success.

     For both directories involved in the RENAME, the server returns
     change_info4 information.  With the atomic field of the
     change_info4 struct, the server will indicate if the before and
     after change attributes were obtained atomically with respect to
     the rename.

   IMPLEMENTATION

     The RENAME operation must be atomic to the client.  The statement
     "source and target directories must reside on the same file system
     on the server" means that the fsid fields in the attributes for
     the directories are the same.  If they reside on different file
     systems, the error, NFS4ERR_XDEV, is returned.  Even though the
     operation is atomic, the status, NFS4ERR_MLINK, may be returned if
     the server used an "unlink/link/unlink" sequence internally.

     A file handle may or may not become stale on a rename.  However,
     server implementors are strongly encouraged to attempt to keep
     file handles from becoming stale in this fashion.

     On some servers, the filenames, "." and "..", are illegal as
     either oldname or newname.  In addition, neither oldname nor
     newname can be an alias for the source directory.
     These servers will return the error, NFS4ERR_INVAL, in these
     cases.

   ERRORS

     NFS4ERR_NOENT
     NFS4ERR_IO
     NFS4ERR_ACCES
     NFS4ERR_EXIST
     NFS4ERR_XDEV

Expires: April 2000 [Page 142]
Draft Protocol Specification NFS version 4 October 1999

     NFS4ERR_NOTDIR
     NFS4ERR_ISDIR
     NFS4ERR_INVAL
     NFS4ERR_NOSPC
     NFS4ERR_ROFS
     NFS4ERR_MLINK
     NFS4ERR_NAMETOOLONG
     NFS4ERR_NOTEMPTY
     NFS4ERR_DQUOT
     NFS4ERR_STALE
     NFS4ERR_BADHANDLE
     NFS4ERR_NOTSUPP
     NFS4ERR_SERVERFAULT
     NFS4ERR_FHEXPIRED
     NFS4ERR_WRONGSEC
     NFS4ERR_MOVED

Expires: April 2000 [Page 143]
Draft Protocol Specification NFS version 4 October 1999

12.2.26.  Operation 27: RENEW - Renew a Lease

   SYNOPSIS

     stateid -> ()

   ARGUMENT

     struct RENEW4args {
             stateid4        stateid;
     };

   RESULT

     struct RENEW4res {
             nfsstat4        status;
     };

   DESCRIPTION

     The RENEW procedure is used by the client to renew leases which it
     currently holds at a server.  In processing the RENEW request, the
     server renews all leases associated with the client.  The
     associated leases are determined by the client id provided via the
     SETCLIENTID procedure.

   IMPLEMENTATION

   ERRORS

     NFS4ERR_SERVERFAULT
     NFS4ERR_EXPIRED
     NFS4ERR_GRACE
     NFS4ERR_WRONGSEC
     NFS4ERR_MOVED

Expires: April 2000 [Page 144]
Draft Protocol Specification NFS version 4 October 1999

12.2.27.  Operation 28: RESTOREFH - Restore Saved Filehandle

   SYNOPSIS

     (sfh) -> (cfh)

   ARGUMENT

     /* SAVED_FH: */
     void;

   RESULT

     struct RESTOREFH4res {
             /* CURRENT_FH: value of saved fh */
             nfsstat4        status;
     };

   DESCRIPTION

     Set the current filehandle to the value in the saved filehandle.
     If there is no saved filehandle then return the error
     NFS4ERR_NOFILEHANDLE.

   IMPLEMENTATION

     Procedures like OPEN and LOOKUP use the current filehandle to
     represent a directory and replace it with a new filehandle.
     Assuming the previous filehandle was saved with a SAVEFH operator,
     the previous filehandle can be restored as the current filehandle.
     This is commonly used to obtain post-operation attributes for the
     directory, e.g.

       1. PUTFH    (directory filehandle)
       2. SAVEFH
       3. GETATTR  attrbits  (pre-op dir attrs)
       4. CREATE   optbits "foo" attrs
       5. GETATTR  attrbits  (file attributes)
       6. RESTOREFH
       7. GETATTR  attrbits  (post-op dir attrs)

Expires: April 2000 [Page 145]
Draft Protocol Specification NFS version 4 October 1999

   ERRORS

     NFS4ERR_BADHANDLE
     NFS4ERR_FHEXPIRED
     NFS4ERR_MOVED
     NFS4ERR_NOFILEHANDLE
     NFS4ERR_SERVERFAULT
     NFS4ERR_STALE
     NFS4ERR_WRONGSEC

Expires: April 2000 [Page 146]
Draft Protocol Specification NFS version 4 October 1999

12.2.28.  Operation 29: SAVEFH - Save Current Filehandle

   SYNOPSIS

     (cfh) -> (sfh)

   ARGUMENT

     /* CURRENT_FH: */
     void;

   RESULT

     struct SAVEFH4res {
             /* SAVED_FH: value of current fh */
             nfsstat4        status;
     };

   DESCRIPTION

     Save the current filehandle.  If a previous filehandle was saved
     then it is no longer accessible.  The saved filehandle can be
     restored as the current filehandle with the RESTOREFH operator.

   IMPLEMENTATION

   ERRORS

     NFS4ERR_BADHANDLE
     NFS4ERR_FHEXPIRED
     NFS4ERR_MOVED
     NFS4ERR_NOFILEHANDLE
     NFS4ERR_SERVERFAULT
     NFS4ERR_STALE

Expires: April 2000 [Page 147]
Draft Protocol Specification NFS version 4 October 1999

     NFS4ERR_WRONGSEC

Expires: April 2000 [Page 148]
Draft Protocol Specification NFS version 4 October 1999

12.2.29.  Operation 30: SECINFO - Obtain Available Security

   SYNOPSIS

     (cfh), filename -> { secinfo }

   ARGUMENT

     struct SECINFO4args {
             /* CURRENT_FH: */
             component4      name;
     };

   RESULT

     struct rpcsec_gss_info {
             sec_oid4        oid;
             qop4            qop;
             rpc_gss_svc_t   service;
     };

     struct secinfo4 {
             unsigned int    flavor;
             opaque          flavor_info<>;  /* null for AUTH_SYS and
                                                AUTH_NONE; contains
                                                rpcsec_gss_info for
                                                RPCSEC_GSS. */
     };
     struct SECINFO4resok {
             secinfo4        reply<>;
     };

     union SECINFO4res switch (nfsstat4 status) {
      case NFS4_OK:
              SECINFO4resok   resok4;
      default:
              void;
     };

   DESCRIPTION

Expires: April 2000 [Page 149]
Draft Protocol Specification NFS version 4 October 1999

     The SECINFO procedure is used by the client to obtain a list of
     valid RPC authentication flavors for a specific file handle, file
     name pair.  The result will contain an array which represents the
     security mechanisms available.  The array entries are represented
     by the secinfo4 structure.  The field 'flavor' will contain a
     value of AUTH_NONE, AUTH_SYS (as defined in [RFC1831]), or
     RPCSEC_GSS (as defined in [RFC2203]).

     For the flavors AUTH_NONE and AUTH_SYS, no additional security
     information is returned.  For a return value of RPCSEC_GSS, a
     security triple is returned that contains the mechanism object id
     (as defined in [RFC2078]), the quality of protection (as defined
     in [RFC2078]) and the service type (as defined in [RFC2203]).  It
     is possible for SECINFO to return multiple entries with flavor
     equal to RPCSEC_GSS with different security triple values.

   IMPLEMENTATION

     The SECINFO procedure is expected to be used by the NFS client
     when the error value of NFS4ERR_WRONGSEC is returned from another
     NFS procedure.  This signifies to the client that the server's
     security policy is different from what the client is currently
     using.  At this point, the client is expected to obtain a list of
     possible security flavors and choose what best suits its policies.

   ERRORS

     NFS4ERR_SERVERFAULT
     NFS4ERR_MOVED

Expires: April 2000 [Page 150]
Draft Protocol Specification NFS version 4 October 1999

12.2.30.  Operation 31: SETATTR - Set Attributes

   SYNOPSIS

     (cfh), attrbits, attrvals -> -

   ARGUMENT

     struct SETATTR4args {
             /* CURRENT_FH: target object */
             stateid4        stateid;
             fattr4          obj_attributes;
     };

   RESULT

     struct SETATTR4res {
             nfsstat4        status;
     };

   DESCRIPTION

     The SETATTR procedure changes one or more of the attributes of a
     file system object.  The new attributes are specified with a
     bitmap and the attributes that follow the bitmap in bit order.

     The stateid is necessary for SETATTRs that change the size of the
     file (modify the attribute object_size).  This stateid represents
     a record lock, share reservation, or delegation which must be
     valid for the SETATTR to modify the file data.

   IMPLEMENTATION

     The file size attribute is used to request changes to the size of
     a file.  A value of 0 (zero) causes the file to be truncated, a
     value less than the current size of the file causes data from the
     new size to the end of the file to be discarded, and a size
     greater than the current size of the file causes logically zeroed
     data bytes to be added to the end of the file.  Servers are free
     to implement this using holes or actual zero data bytes.  Clients
     should not make any

Expires: April 2000 [Page 151]
Draft Protocol Specification NFS version 4 October 1999

     assumptions regarding a server's implementation of this feature,
     beyond that the bytes returned will be zeroed.  Servers must
     support extending the file size via SETATTR.

     SETATTR is not guaranteed to be atomic.  A failed SETATTR may
     partially change a file's attributes.

     Changing the size of a file with SETATTR indirectly changes the
     time_modify.  A client must account for this as size changes can
     result in data deletion.

     If server and client times differ, programs that compare client
     time to file times can break.  A time maintenance protocol should
     be used to limit client/server time skew.
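     For example, a file could be truncated and the resulting
     attributes fetched within a single compound request (illustrative,
     in the style of the earlier examples):

       1. PUTFH    (file filehandle)
       2. SETATTR  stateid {object_size = 0}
       3. GETATTR  attrbits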
     If the server cannot successfully set all the attributes, it must
     return an NFS4ERR_INVAL error.  If the server can only support 32
     bit offsets and sizes, a SETATTR request to set the size of a file
     to larger than can be represented in 32 bits will be rejected with
     this same error.

   ERRORS

     NFS4ERR_PERM
     NFS4ERR_IO
     NFS4ERR_ACCES
     NFS4ERR_INVAL
     NFS4ERR_FBIG
     NFS4ERR_NOSPC
     NFS4ERR_ROFS
     NFS4ERR_DQUOT
     NFS4ERR_STALE
     NFS4ERR_BADHANDLE
     NFS4ERR_NOTSUPP
     NFS4ERR_SERVERFAULT

Expires: April 2000 [Page 152]
Draft Protocol Specification NFS version 4 October 1999

     NFS4ERR_JUKEBOX
     NFS4ERR_DENIED
     NFS4ERR_GRACE
     NFS4ERR_FHEXPIRED
     NFS4ERR_WRONGSEC
     NFS4ERR_MOVED

Expires: April 2000 [Page 153]
Draft Protocol Specification NFS version 4 October 1999

12.2.31.  Operation 32: SETCLIENTID - Negotiate Clientid

   SYNOPSIS

     verifier, client -> clientid

   ARGUMENT

     struct cid {
             opaque          verifier[4];
             opaque          id<>;
     };

     union nfs_client_id switch (clientid4 clientid) {
      case 0:
              cid             ident;
      default:
              void;
     };

     struct SETCLIENTID4args {
             seqid4          seqid;
             nfs_client_id   client;
     };

   RESULT

     union SETCLIENTID4res switch (nfsstat4 status) {
      case NFS4_OK:
              clientid4       clientid;
      default:
              void;
     };

   DESCRIPTION

     The SETCLIENTID procedure introduces the ability of the client to
     notify the server of its intention to use a particular client
     identifier and verifier pair.  Upon successful completion the
     server will return a clientid which is used in subsequent file
     locking requests.

Expires: April 2000 [Page 154]
Draft Protocol Specification NFS version 4 October 1999

   IMPLEMENTATION

     The server takes the verifier and client identification supplied
     and searches for a match of the client identification.  If no
     match is found, the server saves the principal/uid information
     along with the verifier and client identification and returns a
     unique clientid that is used as a shorthand reference to the
     supplied information.

     If the server finds matching client identification and a
     corresponding match in principal/uid, the server releases all
     locking state for the client and returns a new clientid.

   ERRORS

     NFS4ERR_INVAL
     NFS4ERR_SERVERFAULT
     NFS4ERR_CLID_INUSE

Expires: April 2000 [Page 155]
Draft Protocol Specification NFS version 4 October 1999

12.2.32.  Operation 33: VERIFY - Verify Same Attributes

   SYNOPSIS

     (cfh), attrbits, attrvals -> -

   ARGUMENT

     struct VERIFY4args {
             /* CURRENT_FH: object */
             bitmap4         attr_request;
             fattr4          obj_attributes;
     };

   RESULT

     struct VERIFY4res {
             nfsstat4        status;
     };

   DESCRIPTION

     The VERIFY procedure is used to verify that attributes have a
     value assumed by the client before proceeding with following
     operations in the compound request.  For instance, a VERIFY can be
     used to make sure that the file size has not changed for an
     append-mode write:

       1. PUTFH  0x0123456
       2. VERIFY attrbits attrs
       3. WRITE  450328 4096

     If the attributes are not as expected, then the request fails and
     the data is not appended to the file.

   IMPLEMENTATION

   ERRORS

Expires: April 2000 [Page 156]
Draft Protocol Specification NFS version 4 October 1999

     NFS4ERR_ACCES
     NFS4ERR_INVAL
     NFS4ERR_STALE
     NFS4ERR_BADHANDLE
     NFS4ERR_NOTSUPP
     NFS4ERR_SERVERFAULT
     NFS4ERR_JUKEBOX
     NFS4ERR_FHEXPIRED
     NFS4ERR_MOVED

Expires: April 2000 [Page 157]
Draft Protocol Specification NFS version 4 October 1999

12.2.33.
Operation 34: WRITE - Write to File SYNOPSIS (cfh), offset, count, stability, stateid, data -> count, committed, verifier ARGUMENT enum stable_how4 { UNSTABLE4 = 0, DATA_SYNC4 = 1, FILE_SYNC4 = 2 }; struct WRITE4args { /* CURRENT_FH: file */ stateid4 stateid; offset4 offset; count4 count; stable_how4 stable; opaque data<>; }; RESULT struct WRITE4resok { count4 count; stable_how4 committed; writeverf4 verf; }; union WRITE4res switch (nfsstat4 status) { case NFS4_OK: WRITE4resok resok4; default: void; }; DESCRIPTION The WRITE procedure is used to write data to a regular file. The Expires: April 2000 [Page 158] Draft Protocol Specification NFS version 4 October 1999 target file is specified by the current filehandle. The offset specifies the offset where the data should be written. An offset of 0 (zero) specifies that the write should start at the beginning of the file. The count represents the number of bytes of data that are to be written. If the count is 0 (zero), the WRITE will succeed and return a count of 0 (zero) subject to permissions checking. The server may choose to write fewer bytes than requested by the client. Part of the write request is a specification of how the write is to be performed. The client specifies with the stable parameter the method of how the data is to be processed by the server. If stable is FILE_SYNC, the server must commit the data written plus all file system metadata to stable storage before returning results. This corresponds to the NFS version 2 protocol semantics. Any other behavior constitutes a protocol violation. If stable is DATA_SYNC, then the server must commit all of the data to stable storage and enough of the metadata to retrieve the data before returning. The server implementor is free to implement DATA_SYNC in the same fashion as FILE_SYNC, but with a possible performance drop. If stable is UNSTABLE, the server is free to commit any part of the data and the metadata to stable storage, including all or none, before returning a reply to the client. There is no guarantee whether or when any uncommitted data will subsequently be committed to stable storage. The only guarantees made by the server are that it will not destroy any data without changing the value of verf and that it will not commit the data and metadata at a level less than that requested by the client. The stateid returned from a previous record lock or share reservation request is provided as part of the argument. The stateid is used by the server to verify that the associated lock is still valid and to update lease timeouts for the client. Upon successful completion, the following results are returned. The count result is the number of bytes of data written to the file. The server may write fewer bytes than requested. If so, the actual number of bytes written starting at location, offset, is returned. The server also returns an indication of the level of commitment of the data and metadata via committed. If the server committed all data and metadata to stable storage, committed should be set to FILE_SYNC. If the level of commitment was at least as strong as DATA_SYNC, then committed should be set to DATA_SYNC. Otherwise, committed must be returned as UNSTABLE. If stable was FILE_SYNC, then committed must also be FILE_SYNC: anything else constitutes a protocol violation. If stable was DATA_SYNC, then committed may be Expires: April 2000 [Page 159] Draft Protocol Specification NFS version 4 October 1999 FILE_SYNC or DATA_SYNC: anything else constitutes a protocol violation. 
     FILE_SYNC or DATA_SYNC: anything else constitutes a protocol
     violation.  If stable was UNSTABLE, then committed may be either
     FILE_SYNC, DATA_SYNC, or UNSTABLE.

     The final portion of the result is the write verifier, verf.  The
     write verifier is a cookie that the client can use to determine
     whether the server has changed state between a call to WRITE and a
     subsequent call to either WRITE or COMMIT.  This cookie must be
     consistent during a single instance of the NFS version 4 protocol
     service and must be unique between instances of the NFS version 4
     protocol server, where uncommitted data may be lost.

     If a client writes data to the server with the stable argument set
     to UNSTABLE and the reply yields a committed response of DATA_SYNC
     or UNSTABLE, the client will follow up some time in the future
     with a COMMIT operation to synchronize outstanding asynchronous
     data and metadata with the server's stable storage, barring client
     error.  It is possible that, due to client crash or other error, a
     subsequent COMMIT will not be received by the server.

   IMPLEMENTATION

     It is possible for the server to write fewer than count bytes of
     data.  In this case, the server should not return an error unless
     no data was written at all.  If the server writes less than count
     bytes, the client should issue another WRITE to write the
     remaining data.

     It is assumed that the act of writing data to a file will cause
     the time_modify of the file to be updated.  However, the
     time_modify of the file should not be changed unless the contents
     of the file are changed.  Thus, a WRITE request with count set to
     0 should not cause the time_modify of the file to be updated.

     The definition of stable storage has been historically a point of
     contention.  The following expected properties of stable storage
     may help in resolving design issues in the implementation.  Stable
     storage is persistent storage that survives:

       1. Repeated power failures.
       2. Hardware failures (of any board, power supply, etc.).
       3. Repeated software crashes, including reboot cycle.

     This definition does not address failure of the stable storage
     module itself.

Expires: April 2000 [Page 160]
Draft Protocol Specification NFS version 4 October 1999

     The verifier is defined to allow a client to detect different
     instances of an NFS version 4 protocol server over which cached,
     uncommitted data may be lost.  In the most likely case, the
     verifier allows the client to detect server reboots.  This
     information is required so that the client can safely determine
     whether the server could have lost cached data.  If the server
     fails unexpectedly and the client has uncommitted data from
     previous WRITE requests (done with the stable argument set to
     UNSTABLE and in which the result committed was returned as
     UNSTABLE as well) it may not have flushed cached data to stable
     storage.  The burden of recovery is on the client and the client
     will need to retransmit the data to the server.

     A suggested verifier would be to use the time that the server was
     booted or the time the server was last started (if restarting the
     server without a reboot results in lost buffers).

     The committed field in the results allows the client to do more
     effective caching.  If the server is committing all WRITE requests
     to stable storage, then it should return with committed set to
     FILE_SYNC, regardless of the value of the stable field in the
     arguments.  A server that uses an NVRAM accelerator may choose to
     implement this policy.  The client can use this to increase the
     effectiveness of the cache by discarding cached data that has
     already been committed on the server.
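     The verifier comparison itself is simple; the following C sketch
     (with a hypothetical resend routine) shows the check a client
     would make after a COMMIT or a subsequent WRITE returns a
     verifier:

       #include <string.h>

       typedef unsigned char writeverf4[8];   /* write verifier */

       extern void resend_uncommitted_writes(void);  /* hypothetical */

       /*
        * Compare the verifier saved from earlier UNSTABLE writes with
        * the one just returned; a mismatch means the server instance
        * changed and uncommitted data may have been lost.
        */
       void
       check_write_verifier(const writeverf4 saved, const writeverf4 now)
       {
           if (memcmp(saved, now, sizeof(writeverf4)) != 0)
               resend_uncommitted_writes();
       }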
     Some implementations may return NFS4ERR_NOSPC instead of
     NFS4ERR_DQUOT when a user's quota is exceeded.

   ERRORS

     NFS4ERR_IO
     NFS4ERR_ACCES
     NFS4ERR_INVAL
     NFS4ERR_FBIG
     NFS4ERR_NOSPC
     NFS4ERR_ROFS
     NFS4ERR_DQUOT
     NFS4ERR_STALE

Expires: April 2000 [Page 161]
Draft Protocol Specification NFS version 4 October 1999

     NFS4ERR_BADHANDLE
     NFS4ERR_SERVERFAULT
     NFS4ERR_JUKEBOX
     NFS4ERR_LOCKED
     NFS4ERR_GRACE
     NFS4ERR_FHEXPIRED
     NFS4ERR_WRONGSEC
     NFS4ERR_MOVED

Expires: April 2000 [Page 162]
Draft Protocol Specification NFS version 4 October 1999

13.  NFS Version 4 Callback Procedures

   The procedures used for callbacks are defined in the following
   sections.  In the interest of clarity, the terms "client" and
   "server" refer to NFS clients and servers, despite the fact that for
   an individual callback RPC, the sense of these terms would be
   precisely the opposite.

13.1.  Procedure 0: CB_NULL - No Operation

   SYNOPSIS

   ARGUMENT

     void;

   RESULT

     void;

   DESCRIPTION

     Standard ONCRPC NULL procedure.  Void argument, void response.

   ERRORS

     None.

Expires: April 2000 [Page 163]
Draft Protocol Specification NFS version 4 October 1999

13.2.  Procedure 1: CB_COMPOUND - Compound Operations

   SYNOPSIS

     compoundargs -> compoundres

   ARGUMENT

     union cb_opunion switch (unsigned opcode) {
      case : ;
      ...
     };

     struct cb_op {
             cb_opunion      ops;
     };

     struct CB_COMPOUND4args {
             utf8string      tag;
             cb_op           oplist<>;
     };

   RESULT

     union cb_resultdata switch (unsigned resop) {
      case : ;
      ...
     };

     struct CB_COMPOUND4res {
             nfsstat4        status;
             utf8string      tag;
             cb_resultdata   data<>;
     };

Expires: April 2000 [Page 164]
Draft Protocol Specification NFS version 4 October 1999

   DESCRIPTION

     The CB_COMPOUND procedure is used to combine one or more of the
     callback procedures into a single RPC request.  The callback RPC
     program has two main procedures: CB_NULL and CB_COMPOUND.  All
     other procedures use the CB_COMPOUND procedure as a wrapper.

     In the processing of the CB_COMPOUND procedure, the server may
     find that it does not have the available resources to execute any
     or all of the procedures within the CB_COMPOUND sequence.  In this
     case, the error NFS4ERR_RESOURCE will be returned for the
     particular procedure within the CB_COMPOUND operation where the
     resource exhaustion occurred.  This assumes that all previous
     procedures within the CB_COMPOUND sequence have been evaluated
     successfully.

   IMPLEMENTATION

     The CB_COMPOUND procedure is used to combine individual procedures
     into a single RPC request.  The server interprets each of the
     procedures in turn.  If a procedure is executed by the server and
     the status of that procedure is NFS4_OK, then the next procedure
     in the CB_COMPOUND procedure is executed.  The server continues
     this process until there are no more procedures to be executed or
     one of the procedures has a status value other than NFS4_OK.

   ERRORS

     NFS4ERR_RESOURCE

Expires: April 2000 [Page 165]
Draft Protocol Specification NFS version 4 October 1999

13.2.1.  Procedure 2: CB_GETATTR - Get Attributes

   SYNOPSIS

     fh, attrbits -> attrbits, attrvals

   ARGUMENT

     struct CB_GETATTR4args {
             nfs_fh4         fh;
             bitmap4         attr_request;
     };

   RESULT

     struct CB_GETATTR4resok {
             fattr4          obj_attributes;
     };

     union CB_GETATTR4res switch (nfsstat4 status) {
      case NFS4_OK:
              CB_GETATTR4resok resok4;
      default:
              void;
     };

   DESCRIPTION

     CB_GETATTR is used to obtain the attributes modified by an open
     delegate to allow the server to respond to GETATTR requests for a
     file which is the subject of an open delegation.
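     A minimal sketch of the server-side use, with hypothetical helper
     routines: when a GETATTR arrives for a file under a write
     delegation, the server calls back to the delegation holder before
     replying so that the attributes reflect changes made under the
     delegation:

       #include <stdbool.h>

       struct file;   /* server's file object (placeholder) */

       extern bool file_is_write_delegated(const struct file *f);
       extern void issue_cb_getattr(struct file *f);  /* CB_GETATTR */
       extern void reply_getattr(const struct file *f);

       void
       serve_getattr(struct file *f)
       {
           if (file_is_write_delegated(f))
               issue_cb_getattr(f);   /* fetch change, time_modify,
                                         object_size from the client */
           reply_getattr(f);
       }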
   IMPLEMENTATION

     The client returns attrbits and the associated attribute values
     only for attributes that it may change (change, time_modify,
     object_size).  It may further limit the response to attributes
     that it has in fact changed during the scope of the delegation.

Expires: April 2000 [Page 166]
Draft Protocol Specification NFS version 4 October 1999

   ERRORS

Expires: April 2000 [Page 167]
Draft Protocol Specification NFS version 4 October 1999

13.2.2.  Procedure 3: CB_RECALL - Recall an Open Delegation

   SYNOPSIS

     stateid, truncate, fh

   ARGUMENT

     struct CB_RECALL4args {
             stateid4        stateid;
             bool            truncate;
             nfs_fh4         fh;
     };

   RESULT

     struct CB_RECALL4res {
             nfsstat4        status;
     };

   DESCRIPTION

     CB_RECALL is used to begin the process of recalling an open
     delegation and returning it to the server.

     The truncate flag is used to optimize recall for a file which is
     about to be truncated to zero.  When it is set, the client is
     freed of the obligation to propagate modified data for the file to
     the server, since this data is irrelevant.

   IMPLEMENTATION

     The client should reply to the callback immediately.  Replying
     does not complete the recall.  The recall is not complete until
     the delegation is returned using a DELEGRETURN.

   ERRORS

Expires: April 2000 [Page 168]
Draft Protocol Specification NFS version 4 October 1999

Expires: April 2000 [Page 169]
Draft Protocol Specification NFS version 4 October 1999

14.  Locking notes

14.1.  Short and long leases

   The usual lease trade-offs apply: short leases are good for fast
   server recovery at a cost of increased RENEW or READ (with zero
   length) requests.  Longer leases are certainly kinder and gentler to
   large internet servers trying to handle huge numbers of clients.
   The rate of RENEW requests drops in inverse proportion to the lease
   time.  The disadvantages of long leases are slower server recovery
   after a crash (the server must wait for leases to expire and for the
   grace period to pass before granting new lock requests) and
   increased file contention (if a client fails to transmit an unlock
   request, then the server must wait for lease expiration before
   granting new locks).

   Long leases are usable if the server is able to store lease state in
   non-volatile memory.  Upon recovery, the server can reconstruct the
   lease state from its non-volatile memory and continue operation with
   its clients; long leases are therefore not an issue.

14.2.  Clocks and leases

   To avoid the need for synchronized clocks, lease times are granted
   by the server as a time delta, though there is a requirement that
   the client and server clocks do not drift excessively over the
   duration of the lock.  There is also the issue of propagation delay
   across the network, which could easily be several hundred
   milliseconds across the Internet, as well as the possibility that
   requests will be lost and need to be retransmitted.

   To take propagation delay into account, the client should subtract
   it from lease times, e.g. if the client estimates the one-way
   propagation delay as 200 msec, then it can assume that the lease is
   already 200 msec old when it gets it.  In addition, it will take
   another 200 msec to get a response back to the server.  So the
   client must send a lock renewal or write data back to the server 400
   msec before the lease would expire.

   The client could measure propagation delay with reasonable accuracy
   by measuring the round-trip time for lock extensions, assuming that
   there is not much server processing overhead in an extension.
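   To make the arithmetic concrete, a client might compute its renewal
   deadline as in this C sketch (the function name is illustrative;
   times are in milliseconds):

       #include <stdint.h>

       /*
        * When must a renewal be sent?  The lease is assumed to be
        * 'delay_ms' old on arrival, and the renewal itself needs
        * 'delay_ms' to reach the server, hence the factor of two.
        */
       uint64_t
       renew_deadline(uint64_t now_ms, uint64_t lease_ms,
                      uint64_t delay_ms)
       {
           return now_ms + lease_ms - 2 * delay_ms;
       }

   With the 200 msec estimate above, the renewal is scheduled 400 msec
   before the nominal lease expiry.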
14.3.  Locks and lease times

   Lock requests do not contain desired lease times.

Expires: April 2000 [Page 170]
Draft Protocol Specification NFS version 4 October 1999

   The server allocates leases with no information from the client.
   The assumption here is that the client really has no idea of just
   how long the lock will be required.  If a scenario can be found
   where a hint from the client as to the maximum lease time desired
   would be useful, then this feature could be added to lock requests.

14.4.  Locking of directories and other meta-files

   A question: should directories and/or other file-system objects like
   symbolic links be lockable?  Clients will want to cache whole
   directories.  It would be nice to have consistent directory caches,
   but it would require that any client creating a new file get a write
   lock on the directory and be prepared to handle lock denial.  Is the
   weak cache consistency that we currently have for directories
   acceptable?  I think perhaps it is, given the expense of doing full
   consistency on an Internet scale.

14.5.  Proxy servers and leases

   There is some interest in having NFS V4 support caching proxies.
   Support for proxy caching is a requirement if servers are to handle
   large numbers of clients, clients that may have little or no ability
   to cache on their own.  How could proxy servers use lease-based
   locking?

14.6.  Locking and the new latency

   If a client wants to update a file then it will have to wait until
   the leases on read locks have expired.  If the leases are of the
   order of 60 seconds or several minutes then the client (and end-
   user) may be blocked for a while.  This is unfamiliar for current
   NFS users who are not bothered by mandatory locking, but it could be
   an issue if we decide we like the caching benefits.  A similar
   problem exists for clients that wish to read a file that is write
   locked.  The read-lock case is likely to be more common if read-
   locking is used to protect cached data on the client.

Expires: April 2000 [Page 171]
Draft Protocol Specification NFS version 4 October 1999

15.  Internationalization

   The primary area in which NFS must deal with internationalization,
   or i18n, is file names and other strings as used within the
   protocol.  NFS' choice of string representation must allow
   reasonable name/string access to clients which use various
   languages.  The UTF-8 encoding allows for this type of access, and
   this choice is explained in the following.

15.1.  Universal Versus Local Character Sets

   [RFC1345] describes a table of 16 bit characters for many different
   languages (the bit encodings match Unicode, though of course RFC1345
   is somewhat out of date with respect to current Unicode
   assignments).  Each character from each language has a unique 16 bit
   value in the 16 bit character set.  Thus this table can be thought
   of as a universal character set.  [RFC1345] then talks about
   groupings of subsets of the entire 16 bit character set into
   "Charset Tables".  For example one might take all the Greek
   characters from the 16 bit table (which are consecutively
   allocated), and normalize their offsets to a table that fits in 7
   bits.  Thus we find that "lower case alpha" is in the same position
   as "upper case a" in the US-ASCII table, and "upper case alpha" is
   in the same position as "lower case a" in the US-ASCII table.

   These normalized subset character sets can be thought of as "local
   character sets", suitable for an operating system locale.

   Local character sets are not suitable for the NFS protocol.
   Consider someone who creates a file with a name in a Swedish
   character set.
   If someone else later goes to access the file with their locale
   set to the Swedish language, then there are no problems.  But if
   someone in, say, the US-ASCII locale goes to access the file, the
   file name will look very different, because the Swedish characters
   in the 7 bit table will now be represented as US-ASCII characters
   on the display.  It would be preferable to give the US-ASCII user
   a way to display the file name using Swedish glyphs.  In order to
   do that, the NFS protocol would have to include the locale with
   the file name on each operation to create a file.

   But then consider the situation when we have a path name on the
   server such as:

      /component-1/component-2/component-3

   Each component could have been created with a different locale.
   If one issues CREATE with a multi-component path name, and if some
   of the leading components already exist, what is to be done with
   the existing components?  Is the current locale attribute replaced
   with the user's current one?  These types of situations quickly
   become too complex when there is an alternate solution.

   If NFS version 4 used a universal 16 bit or 32 bit character set
   (or an encoding of a 16 bit or 32 bit character set into octets),
   then the server and client need not care if the locale of the user
   accessing the file is different than the locale of the user who
   created the file.  The unique 16 bit or 32 bit encoding of the
   character allows for determination of what language the character
   is from and also how to display that character on the client.  The
   server need not know what locales are used.

15.2.  Overview of Universal Character Set Standards

   The previous section makes a case for using a universal character
   set in NFS version 4.  This section makes the case for using UTF-8
   as the specific universal character set for NFS version 4.

   [RFC2279] discusses UTF-* (UTF-8 and other UTF-XXX encodings),
   Unicode, and UCS-*.  There are two standards bodies managing
   universal code sets:

      o ISO/IEC, which has the standard 10646-1

      o Unicode, which has the Unicode standard

   Both standards bodies have pledged to track each other's
   assignments of character codes.  The following is a brief analysis
   of the various standards.

   UCS      Universal Character Set.  This is ISO/IEC 10646-1: "a
            multi-octet character set called the Universal Character
            Set (UCS), which encompasses most of the world's writing
            systems."

   UCS-2    A two octet per character encoding that addresses the
            first 2^16 characters of UCS.  Currently there are no UCS
            characters beyond that range.

   UCS-4    A four octet per character encoding that permits the
            encoding of up to 2^31 characters.

   UTF      UCS transformation format.

   UTF-1    Of historical interest only; it has been removed from
            10646-1.

   UTF-7    Encodes the entire "repertoire" of UCS "characters using
            only octets with the higher order bit clear".  [RFC2152]
            describes UTF-7.  UTF-7 accomplishes this by reserving
            one of the 7 bit US-ASCII characters as a "shift"
            character to indicate non-US-ASCII characters.

   UTF-8    Unlike UTF-7, uses all 8 bits of the octets.  US-ASCII
            characters are encoded unchanged.  Any octet with the
            high bit cleared can only mean a US-ASCII character.  The
            high bit set means that a UCS character is being encoded.

   UTF-16   Encodes UCS-4 characters into UCS-2 characters using a
            reserved range in UCS-2.
   Unicode  Unicode and UCS-2 are the same; [RFC2279] states:

            Up to the present time, changes in Unicode and amendments
            to ISO/IEC 10646 have tracked each other, so that the
            character repertoires and code point assignments have
            remained in sync.  The relevant standardization
            committees have committed to maintain this very useful
            synchronism.

15.3.  Difficulties with UCS-4, UCS-2, Unicode

   Adapting existing applications and file systems to multi-octet
   schemes like UCS and Unicode can be difficult.  A significant
   amount of code has been written to process streams of bytes.  Also
   there are many existing stored objects described with 7 bit or 8
   bit characters.  Doubling or quadrupling the bandwidth and storage
   requirements seems like an expensive way to accomplish i18n.

   UCS-2 and Unicode are "only" 16 bits long.  That might seem to be
   enough, but according to [Unicode1], 38,887 Unicode characters are
   already assigned.  And according to [Unicode2] there are still
   more languages that need to be added.

15.4.  UTF-8 and its solutions

   UTF-8 solves problems for NFS that exist with the use of UCS and
   Unicode.  UTF-8 will encode 16 bit and 32 bit characters in a way
   that will be compact for most users.  The encoding table from
   UCS-4 to UTF-8, as copied from [RFC2279]:

   UCS-4 range (hex.)          UTF-8 octet sequence (binary)

   0000 0000-0000 007F   0xxxxxxx
   0000 0080-0000 07FF   110xxxxx 10xxxxxx
   0000 0800-0000 FFFF   1110xxxx 10xxxxxx 10xxxxxx
   0001 0000-001F FFFF   11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
   0020 0000-03FF FFFF   111110xx 10xxxxxx 10xxxxxx 10xxxxxx
                         10xxxxxx
   0400 0000-7FFF FFFF   1111110x 10xxxxxx 10xxxxxx 10xxxxxx
                         10xxxxxx 10xxxxxx

   See [RFC2279] for the precise encoding and decoding rules.  Note
   that because of UTF-16, an algorithm mapping Unicode/UCS-2 to
   UTF-8 needs to account for the reserved range between D800 and
   DFFF.

   Note that 16 bit UCS or Unicode characters require no more than 3
   octets to encode into UTF-8.

   Interestingly, UTF-8 has room to handle characters larger than 31
   bits, because the leading octet of the form 1111111x is not
   defined.  If needed, ISO could either use that octet to indicate a
   sequence of an encoded 8 octet character, or perhaps use 11111110
   to permit the next octet to indicate an even more expandable
   character set.  So using UTF-8 to represent character encodings
   means never having to run out of room.
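   To make the table concrete, the following is an informal sketch of
   an encoder that follows it.  It is illustrative only; [RFC2279] is
   the normative reference, and the function name is hypothetical.

     /* Illustrative UCS-4 to UTF-8 encoder; not normative. */
     #include <stddef.h>
     #include <stdint.h>

     /* Writes the UTF-8 encoding of c into out (at least 6 octets);
      * returns the number of octets written, or 0 for characters in
      * the range reserved by UTF-16 (D800-DFFF). */
     static size_t
     utf8_encode(uint32_t c, unsigned char *out)
     {
             if (c >= 0xD800 && c <= 0xDFFF)
                     return 0;                /* reserved for UTF-16 */
             if (c <= 0x7F) {                 /* 0xxxxxxx */
                     out[0] = (unsigned char)c;
                     return 1;
             }
             if (c <= 0x7FF) {                /* 110xxxxx 10xxxxxx */
                     out[0] = (unsigned char)(0xC0 | (c >> 6));
                     out[1] = (unsigned char)(0x80 | (c & 0x3F));
                     return 2;
             }
             if (c <= 0xFFFF) {               /* 1110xxxx + 2 */
                     out[0] = (unsigned char)(0xE0 | (c >> 12));
                     out[1] = (unsigned char)(0x80 | ((c >> 6) & 0x3F));
                     out[2] = (unsigned char)(0x80 | (c & 0x3F));
                     return 3;
             }
             if (c <= 0x1FFFFF) {             /* 11110xxx + 3 */
                     out[0] = (unsigned char)(0xF0 | (c >> 18));
                     out[1] = (unsigned char)(0x80 | ((c >> 12) & 0x3F));
                     out[2] = (unsigned char)(0x80 | ((c >> 6) & 0x3F));
                     out[3] = (unsigned char)(0x80 | (c & 0x3F));
                     return 4;
             }
             if (c <= 0x3FFFFFF) {            /* 111110xx + 4 */
                     out[0] = (unsigned char)(0xF8 | (c >> 24));
                     out[1] = (unsigned char)(0x80 | ((c >> 18) & 0x3F));
                     out[2] = (unsigned char)(0x80 | ((c >> 12) & 0x3F));
                     out[3] = (unsigned char)(0x80 | ((c >> 6) & 0x3F));
                     out[4] = (unsigned char)(0x80 | (c & 0x3F));
                     return 5;
             }
             if (c <= 0x7FFFFFFF) {           /* 1111110x + 5 */
                     out[0] = (unsigned char)(0xFC | (c >> 30));
                     out[1] = (unsigned char)(0x80 | ((c >> 24) & 0x3F));
                     out[2] = (unsigned char)(0x80 | ((c >> 18) & 0x3F));
                     out[3] = (unsigned char)(0x80 | ((c >> 12) & 0x3F));
                     out[4] = (unsigned char)(0x80 | ((c >> 6) & 0x3F));
                     out[5] = (unsigned char)(0x80 | (c & 0x3F));
                     return 6;
             }
             return 0;                        /* beyond 31 bits */
     }

   As the text notes, any 16 bit UCS or Unicode character falls into
   one of the first three cases and so needs at most 3 octets.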
16.  Security Considerations

   The major security feature to consider is the authentication of
   the user making the request of the NFS service.  Consideration
   should also be given to the integrity and privacy of this NFS
   request.  These specific issues are discussed as part of the
   section on "RPC and Security Flavor".  As this document
   progresses, issues such as denial of service and other typical
   security concerns will be addressed here along with those issues
   specific to the NFS service.

17.  NFS Version 4 RPC definition file

   /*
    *  Copyright (C) The Internet Society (1998,1999).
    *  All Rights Reserved.
    */

   /*
    *      nfs4_prot.x
    *
    */

   %#pragma ident  "%W% %D%"

   /*
    * Basic typedefs for RFC 1832 data type definitions
    */
   typedef unsigned hyper  uint64_t;
   typedef hyper           int64_t;
   typedef unsigned int    uint32_t;
   typedef int             int32_t;

   /*
    * Sizes
    */
   const NFS4_FHSIZE               = 128;
   const NFS4_CREATEVERFSIZE       = 8;

   /*
    * File types
    */
   enum nfs_ftype4 {
           NF4REG          = 1,    /* Regular File */
           NF4DIR          = 2,    /* Directory */
           NF4BLK          = 3,    /* Special File - block device */
           NF4CHR          = 4,    /* Special File - character device */
           NF4LNK          = 5,    /* Symbolic Link */
           NF4SOCK         = 6,    /* Special File - socket */
           NF4FIFO         = 7,    /* Special File - fifo */
           NF4ATTRDIR      = 8,    /* Attribute Directory */
           NF4NAMEDATTR    = 9     /* Named Attribute */
   };

   /*
    * Error status
    */
   enum nfsstat4 {
           NFS4_OK                 = 0,
           NFS4ERR_PERM            = 1,
           NFS4ERR_NOENT           = 2,
           NFS4ERR_IO              = 5,
           NFS4ERR_NXIO            = 6,
           NFS4ERR_ACCES           = 13,
           NFS4ERR_EXIST           = 17,
           NFS4ERR_XDEV            = 18,
           NFS4ERR_NODEV           = 19,
           NFS4ERR_NOTDIR          = 20,
           NFS4ERR_ISDIR           = 21,
           NFS4ERR_INVAL           = 22,
           NFS4ERR_FBIG            = 27,
           NFS4ERR_NOSPC           = 28,
           NFS4ERR_ROFS            = 30,
           NFS4ERR_MLINK           = 31,
           NFS4ERR_NAMETOOLONG     = 63,
           NFS4ERR_NOTEMPTY        = 66,
           NFS4ERR_DQUOT           = 69,
           NFS4ERR_STALE           = 70,
           NFS4ERR_BADHANDLE       = 10001,
           NFS4ERR_NOT_SYNC        = 10002,
           NFS4ERR_BAD_COOKIE      = 10003,
           NFS4ERR_NOTSUPP         = 10004,
           NFS4ERR_TOOSMALL        = 10005,
           NFS4ERR_SERVERFAULT     = 10006,
           NFS4ERR_BADTYPE         = 10007,
           NFS4ERR_JUKEBOX         = 10008,
           NFS4ERR_SAME            = 10009, /* nverify says attrs same */
           NFS4ERR_DENIED          = 10010, /* lock unavailable */
           NFS4ERR_EXPIRED         = 10011, /* lock lease expired */
           NFS4ERR_LOCKED          = 10012, /* I/O failed due to lock */
           NFS4ERR_GRACE           = 10013, /* in grace period */
           NFS4ERR_FHEXPIRED       = 10014, /* file handle expired */
           NFS4ERR_SHARE_DENIED    = 10015, /* share reserve denied */
           NFS4ERR_WRONGSEC        = 10016, /* wrong security flavor */
           NFS4ERR_CLID_INUSE      = 10017, /* clientid in use */
           NFS4ERR_RESOURCE        = 10018, /* resource exhaustion */
           NFS4ERR_MOVED           = 10019, /* filesystem relocated */
           NFS4ERR_NOFILEHANDLE    = 10020  /* current FH is not set */
   };

   /*
    * Basic data types
    */
   typedef uint32_t        bitmap4<>;
   typedef uint64_t        offset4;
   typedef uint32_t        count4;
   typedef uint32_t        length4;
   typedef uint64_t        clientid4;
   typedef uint64_t        stateid4;
   typedef uint32_t        seqid4;
   typedef opaque          utf8string<>;
   typedef utf8string      component4;
   typedef component4      pathname4<>;
   typedef uint64_t        nfs_lockid4;
   typedef uint32_t        nfs_lease4;
   typedef uint32_t        nfs_lockstate4;
   typedef uint64_t        nfs_cookie4;
   typedef utf8string      linktext4;
   typedef opaque          sec_oid4<>;
   typedef uint32_t        qop4;
   typedef uint32_t        mode4;
   typedef uint32_t        writeverf4;
   typedef opaque          createverf4[NFS4_CREATEVERFSIZE];

   /*
    * Timeval
    */
   struct nfstime4 {
           int64_t         seconds;
           uint32_t        nseconds;
   };

   /*
    * File access handle
    */
   typedef opaque  nfs_fh4<NFS4_FHSIZE>;

   /*
    * File attribute definitions
    */

   /*
    * FSID structure for major/minor
    */
   struct fsid4 {
           uint64_t        major;
           uint64_t        minor;
   };

   /*
    * Filesystem locations attribute for relocation/migration
    */
   struct fs_location4 {
           utf8string      server<>;
           pathname4       rootpath;
   };

   struct fs_locations4 {
           pathname4       fs_root;
           fs_location4    locations<>;
   };
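   /*
    * (Informative) For example, a filesystem rooted at /export/home
    * with a copy available on the server named "backup" at the same
    * path could be described as:
    *
    *      fs_root   = { "export", "home" }
    *      locations = { { server   = { "backup" },
    *                      rootpath = { "export", "home" } } }
    */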
   /*
    * Various Access Control Entry definitions
    */

   /*
    * Mask that indicates which Access Control Entries are supported.
    * Values for the fattr4_aclsupport attribute.
    */
   const ACL4_SUPPORT_ALLOW_ACL    = 0x00000001;
   const ACL4_SUPPORT_DENY_ACL     = 0x00000002;
   const ACL4_SUPPORT_AUDIT_ACL    = 0x00000004;
   const ACL4_SUPPORT_ALARM_ACL    = 0x00000008;

   typedef uint32_t        acetype4;

   /*
    * acetype4 values; others can be added as needed.
    */
   const ACE4_ACCESS_ALLOWED_ACE_TYPE      = 0x00000000;
   const ACE4_ACCESS_DENIED_ACE_TYPE       = 0x00000001;
   const ACE4_SYSTEM_AUDIT_ACE_TYPE        = 0x00000002;
   const ACE4_SYSTEM_ALARM_ACE_TYPE        = 0x00000003;

   /*
    * ACE flag
    */
   typedef uint32_t        aceflag4;

   /*
    * ACE flag values
    */
   const ACE4_FILE_INHERIT_ACE             = 0x00000001;
   const ACE4_DIRECTORY_INHERIT_ACE        = 0x00000002;
   const ACE4_NO_PROPAGATE_INHERIT_ACE     = 0x00000004;
   const ACE4_INHERIT_ONLY_ACE             = 0x00000008;
   const ACE4_SUCCESSFUL_ACCESS_ACE_FLAG   = 0x00000010;
   const ACE4_FAILED_ACCESS_ACE_FLAG       = 0x00000020;
   const ACE4_IDENTIFIER_GROUP             = 0x00000040;

   /*
    * ACE mask
    */
   typedef uint32_t        acemask4;

   /*
    * ACE mask values
    */
   const ACE4_READ_DATA                    = 0x00000001;
   const ACE4_LIST_DIRECTORY               = 0x00000001;
   const ACE4_WRITE_DATA                   = 0x00000002;
   const ACE4_ADD_FILE                     = 0x00000002;
   const ACE4_APPEND_DATA                  = 0x00000004;
   const ACE4_ADD_SUBDIRECTORY             = 0x00000004;
   const ACE4_READ_STREAMS                 = 0x00000008;
   const ACE4_WRITE_STREAMS                = 0x00000010;
   const ACE4_EXECUTE                      = 0x00000020;
   const ACE4_DELETE_CHILD                 = 0x00000040;
   const ACE4_READ_ATTRIBUTES              = 0x00000080;
   const ACE4_WRITE_ATTRIBUTES             = 0x00000100;
   const ACE4_READ_CONTROL                 = 0x00000200;
   const ACE4_READ_EXTENDED_ATTRIBUTES     = 0x00000400;
   const ACE4_WRITE_EXTENDED_ATTRIBUTES    = 0x00000800;
   const ACE4_DELETE                       = 0x00010000;
   const ACE4_READ_ACL                     = 0x00020000;
   const ACE4_WRITE_ACL                    = 0x00040000;
   const ACE4_WRITE_OWNER                  = 0x00080000;
   const ACE4_SYNCHRONIZE                  = 0x00100000;

   /*
    * ACE4_GENERIC_READ -- defined as combination of
    *      ACE4_READ_CONTROL |
    *      ACE4_READ_DATA |
    *      ACE4_READ_ATTRIBUTES |
    *      ACE4_READ_EXTENDED_ATTRIBUTES |
    *      ACE4_SYNCHRONIZE
    */
   const ACE4_GENERIC_READ = 0x00100681;

   /*
    * ACE4_GENERIC_WRITE -- defined as combination of
    *      ACE4_READ_CONTROL |
    *      ACE4_WRITE_DATA |
    *      ACE4_WRITE_ATTRIBUTES |
    *      ACE4_WRITE_EXTENDED_ATTRIBUTES |
    *      ACE4_APPEND_DATA |
    *      ACE4_SYNCHRONIZE
    */
   const ACE4_GENERIC_WRITE = 0x00100B06;

   /*
    * ACE4_GENERIC_EXECUTE -- defined as combination of
    *      ACE4_READ_CONTROL
    *      ACE4_READ_ATTRIBUTES
    *      ACE4_EXECUTE
    *      ACE4_SYNCHRONIZE
    */
   const ACE4_GENERIC_EXECUTE = 0x001002A0;
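   /*
    * (Informative) Each generic mask above is just the bitwise OR of
    * the listed ACE mask bits.  For ACE4_GENERIC_READ:
    *
    *      0x00000200  ACE4_READ_CONTROL
    *    | 0x00000001  ACE4_READ_DATA
    *    | 0x00000080  ACE4_READ_ATTRIBUTES
    *    | 0x00000400  ACE4_READ_EXTENDED_ATTRIBUTES
    *    | 0x00100000  ACE4_SYNCHRONIZE
    *    ------------
    *      0x00100681
    *
    * ACE4_GENERIC_WRITE (0x00100B06) and ACE4_GENERIC_EXECUTE
    * (0x001002A0) work out the same way from their lists.
    */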
   /*
    * Access Control Entry definition
    */
   struct nfsace4 {
           acetype4        type;
           aceflag4        flag;
           acemask4        access_mask;
           utf8string      who;
   };

   /*
    * Special data/attribute associated with
    * file types NF4BLK and NF4CHR.
    */
   struct specdata4 {
           uint32_t        specdata1;
           uint32_t        specdata2;
   };

   typedef bitmap4         fattr4_supported_attrs;
   typedef nfs_ftype4      fattr4_type;
   typedef bool            fattr4_persistent_fh;
   typedef uint64_t        fattr4_change;
   typedef uint64_t        fattr4_size;
   typedef bool            fattr4_link_support;
   typedef bool            fattr4_symlink_support;
   typedef bool            fattr4_named_attr;
   typedef fsid4           fattr4_fsid;
   typedef bool            fattr4_unique_handles;
   typedef uint32_t        fattr4_lease_time;
   typedef nfsstat4        fattr4_rdattr_error;
   typedef nfsace4         fattr4_acl<>;
   typedef uint32_t        fattr4_aclsupport;
   typedef bool            fattr4_archive;
   typedef bool            fattr4_cansettime;
   typedef bool            fattr4_case_insensitive;
   typedef bool            fattr4_case_preserving;
   typedef bool            fattr4_chown_restricted;
   typedef uint64_t        fattr4_fileid;
   typedef uint64_t        fattr4_files_avail;
   typedef nfs_fh4         fattr4_filehandle;
   typedef uint64_t        fattr4_files_free;
   typedef uint64_t        fattr4_files_total;
   typedef fs_locations4   fattr4_fs_locations;
   typedef bool            fattr4_hidden;
   typedef bool            fattr4_homogeneous;
   typedef uint64_t        fattr4_maxfilesize;
   typedef uint32_t        fattr4_maxlink;
   typedef uint32_t        fattr4_maxname;
   typedef uint64_t        fattr4_maxread;
   typedef uint64_t        fattr4_maxwrite;
   typedef utf8string      fattr4_mimetype;
   typedef mode4           fattr4_mode;
   typedef bool            fattr4_no_trunc;
   typedef uint32_t        fattr4_numlinks;
   typedef utf8string      fattr4_owner;
   typedef utf8string      fattr4_owner_group;
   typedef uint64_t        fattr4_quota_hard;
   typedef uint64_t        fattr4_quota_soft;
   typedef uint64_t        fattr4_quota_used;
   typedef specdata4       fattr4_rawdev;
   typedef uint64_t        fattr4_space_avail;
   typedef uint64_t        fattr4_space_free;
   typedef uint64_t        fattr4_space_total;
   typedef uint64_t        fattr4_space_used;
   typedef bool            fattr4_system;
   typedef nfstime4        fattr4_time_access;
   typedef nfstime4        fattr4_time_backup;
   typedef nfstime4        fattr4_time_create;
   typedef nfstime4        fattr4_time_delta;
   typedef nfstime4        fattr4_time_metadata;
   typedef nfstime4        fattr4_time_modify;
   typedef utf8string      fattr4_version;
   typedef nfstime4        fattr4_volatility;

   /*
    * Mandatory Attributes
    */
   const FATTR4_SUPPORTED_ATTRS    = 0;
   const FATTR4_TYPE               = 1;
   const FATTR4_PERSISTENT_FH      = 2;
   const FATTR4_CHANGE             = 3;
   const FATTR4_SIZE               = 4;
   const FATTR4_LINK_SUPPORT       = 5;
   const FATTR4_SYMLINK_SUPPORT    = 6;
   const FATTR4_NAMED_ATTR         = 7;
   const FATTR4_FSID               = 8;
   const FATTR4_UNIQUE_HANDLES     = 9;
   const FATTR4_LEASE_TIME         = 10;
   const FATTR4_RDATTR_ERROR       = 11;

   /*
    * Recommended Attributes
    */
   const FATTR4_ACL                = 12;
   const FATTR4_ARCHIVE            = 13;
   const FATTR4_CANSETTIME         = 14;
   const FATTR4_CASE_INSENSITIVE   = 15;
   const FATTR4_CASE_PRESERVING    = 16;
   const FATTR4_CHOWN_RESTRICTED   = 17;
   const FATTR4_FILEHANDLE         = 18;
   const FATTR4_FILEID             = 19;
   const FATTR4_FILES_AVAIL        = 20;
   const FATTR4_FILES_FREE         = 21;
   const FATTR4_FILES_TOTAL        = 22;
   const FATTR4_FS_LOCATIONS       = 23;
   const FATTR4_HIDDEN             = 24;
   const FATTR4_HOMOGENEOUS        = 25;
   const FATTR4_MAXFILESIZE        = 26;
   const FATTR4_MAXLINK            = 27;
   const FATTR4_MAXNAME            = 28;
   const FATTR4_MAXREAD            = 29;
   const FATTR4_MAXWRITE           = 30;
   const FATTR4_MIMETYPE           = 31;
   const FATTR4_MODE               = 32;
   const FATTR4_NO_TRUNC           = 33;
   const FATTR4_NUMLINKS           = 34;
   const FATTR4_OWNER              = 35;
   const FATTR4_OWNER_GROUP        = 36;
   const FATTR4_QUOTA_HARD         = 37;
   const FATTR4_QUOTA_SOFT         = 38;
   const FATTR4_QUOTA_USED         = 39;
   const FATTR4_RAWDEV             = 40;
   const FATTR4_SPACE_AVAIL        = 41;
   const FATTR4_SPACE_FREE         = 42;
   const FATTR4_SPACE_TOTAL        = 43;
   const FATTR4_SPACE_USED         = 44;
   const FATTR4_SYSTEM             = 45;
   const FATTR4_TIME_ACCESS        = 46;
   const FATTR4_TIME_BACKUP        = 47;
   const FATTR4_TIME_CREATE        = 48;
   const FATTR4_TIME_DELTA         = 49;
   const FATTR4_TIME_METADATA      = 50;
   const FATTR4_TIME_MODIFY        = 51;
   const FATTR4_VERSION            = 52;
   const FATTR4_VOLATILITY         = 53;

   typedef opaque  attrlist4<>;

   /*
    * File attribute container
    */
   struct fattr4 {
           bitmap4         attrmask;
           attrlist4       attr_vals;
   };
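   /*
    * (Informative) Assuming the usual bitmap4 convention (attribute
    * number n occupies bit n mod 32 of word n / 32, least
    * significant bit first), a request for FATTR4_TYPE (1),
    * FATTR4_SIZE (4), and FATTR4_TIME_MODIFY (51) would carry the
    * two-word attrmask:
    *
    *      word 0: 0x00000012      (bits 1 and 4)
    *      word 1: 0x00080000      (bit 51 - 32 = 19)
    *
    * with attr_vals holding the XDR-encoded values in increasing
    * attribute-number order.
    */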
   /*
    * Change info for the client
    */
   struct change_info4 {
           bool            atomic;
           fattr4_change   before;
           fattr4_change   after;
   };

   struct clientaddr4 {
           /* see struct rpcb in RFC 1833 */
           string          r_netid<>;      /* network id */
           string          r_addr<>;       /* universal address */
   };

   /*
    * Callback program info as provided by the client
    */
   struct cb_client4 {
           unsigned int    cb_program;
           clientaddr4     cb_location;
   };

   /*
    * Client ID
    */
   struct nfs_client_id4 {
           opaque          verifier[4];
           opaque          id<>;
   };

   struct nfs_lockowner4 {
           clientid4       clientid;
           opaque          owner<>;
   };

   enum nfs_lock_type4 {
           READ_LT         = 1,
           WRITE_LT        = 2,
           READW_LT        = 3,    /* blocking read */
           WRITEW_LT       = 4     /* blocking write */
   };

   /*
    * ACCESS: Check access permission
    */
   const ACCESS4_READ      = 0x0001;
   const ACCESS4_LOOKUP    = 0x0002;
   const ACCESS4_MODIFY    = 0x0004;
   const ACCESS4_EXTEND    = 0x0008;
   const ACCESS4_DELETE    = 0x0010;
   const ACCESS4_EXECUTE   = 0x0020;

   struct ACCESS4args {
           /* CURRENT_FH: object */
           uint32_t        access;
   };

   struct ACCESS4resok {
           uint32_t        supported;
           uint32_t        access;
   };

   union ACCESS4res switch (nfsstat4 status) {
    case NFS4_OK:
           ACCESS4resok    resok4;
    default:
           void;
   };

   /*
    * CLOSE: Close a file and release share locks
    */
   struct CLOSE4args {
           /* CURRENT_FH: object */
           stateid4        stateid;
   };

   union CLOSE4res switch (nfsstat4 status) {
    case NFS4_OK:
           stateid4        stateid;
    default:
           void;
   };

   /*
    * COMMIT: Commit cached data on server to stable storage
    */
   struct COMMIT4args {
           /* CURRENT_FH: file */
           offset4         offset;
           count4          count;
   };

   struct COMMIT4resok {
           writeverf4      verf;
   };

   union COMMIT4res switch (nfsstat4 status) {
    case NFS4_OK:
           COMMIT4resok    resok4;
    default:
           void;
   };

   /*
    * CREATE: Create a file
    */
   union createtype4 switch (nfs_ftype4 type) {
    case NF4LNK:
           utf8string      linkdata;
    case NF4BLK:
    case NF4CHR:
           specdata4       devdata;
    case NF4SOCK:
    case NF4FIFO:
    case NF4DIR:
    case NF4ATTRDIR:
           void;
   };

   struct CREATE4args {
           /* CURRENT_FH: directory for creation */
           component4      objname;
           createtype4     objtype;
   };

   struct CREATE4resok {
           change_info4    cinfo;
   };

   union CREATE4res switch (nfsstat4 status) {
    case NFS4_OK:
           CREATE4resok    resok4;
    default:
           void;
   };

   /*
    * DELEGPURGE: Purge Delegations Awaiting Recovery
    */
   struct DELEGPURGE4args {
           clientid4       clientid;
   };

   struct DELEGPURGE4res {
           nfsstat4        status;
   };

   /*
    * DELEGRETURN: Return a delegation
    */
   struct DELEGRETURN4args {
           stateid4        stateid;
   };

   struct DELEGRETURN4res {
           nfsstat4        status;
   };

   /*
    * GETATTR: Get file attributes
    */
   struct GETATTR4args {
           /* CURRENT_FH: directory or file */
           bitmap4         attr_request;
   };

   struct GETATTR4resok {
           fattr4          obj_attributes;
   };

   union GETATTR4res switch (nfsstat4 status) {
    case NFS4_OK:
           GETATTR4resok   resok4;
    default:
           void;
   };

   /*
    * GETFH: Get current filehandle
    */
   struct GETFH4resok {
           nfs_fh4         object;
   };

   union GETFH4res switch (nfsstat4 status) {
    case NFS4_OK:
           GETFH4resok     resok4;
    default:
           void;
   };

   /*
    * LINK: Create link to an object
    */
   struct LINK4args {
           /* CURRENT_FH: file */
           nfs_fh4         dir;
           component4      newname;
   };

   struct LINK4resok {
           change_info4    cinfo;
   };

   union LINK4res switch (nfsstat4 status) {
    case NFS4_OK:
           LINK4resok      resok4;
    default:
           void;
   };

   /*
    * LOCK/LOCKT/LOCKU: Record lock management
    */
   struct LOCK4args {
           /* CURRENT_FH: file */
           nfs_lock_type4  type;
           seqid4          seqid;
           bool            reclaim;
           stateid4        stateid;
           offset4         offset;
           length4         length;
   };

   struct lockres4 {
           stateid4        stateid;
           int32_t         access;
   };

   union LOCK4res switch (nfsstat4 status) {
    case NFS4_OK:
           lockres4        result;
    default:
           void;
   };

   union LOCKT4res switch (nfsstat4 status) {
    case NFS4ERR_DENIED:
           nfs_lockowner4  owner;
    case NFS4_OK:
           void;
    default:
           void;
   };

   union LOCKU4res switch (nfsstat4 status) {
    case NFS4_OK:
           stateid4        stateid_ok;
    default:
           stateid4        stateid_oth;
   };

   /*
    * LOOKUP: Lookup filename
    */
   struct LOOKUP4args {
           /* CURRENT_FH: directory */
           pathname4       path;
   };

   struct LOOKUP4res {
           /* CURRENT_FH: object */
           nfsstat4        status;
   };

   /*
    * LOOKUPP: Lookup parent directory
    */
   struct LOOKUPP4res {
           /* CURRENT_FH: directory */
           nfsstat4        status;
   };

   /*
    * NVERIFY: Verify attributes different
    */
   struct NVERIFY4args {
           /* CURRENT_FH: object */
           bitmap4         attr_request;
           fattr4          obj_attributes;
   };

   struct NVERIFY4res {
           nfsstat4        status;
   };

   /*
    * Various definitions for OPEN
    */
   enum createmode4 {
           UNCHECKED4      = 0,
           GUARDED4        = 1,
           EXCLUSIVE4      = 2
   };

   union createhow4 switch (createmode4 mode) {
    case UNCHECKED4:
    case GUARDED4:
           fattr4          createattrs;
    case EXCLUSIVE4:
           createverf4     verf;
   };

   enum opentype4 {
           OPEN4_NOCREATE  = 0,
           OPEN4_CREATE    = 1
   };

   union openflag4 switch (opentype4 opentype) {
    case OPEN4_CREATE:
           createhow4      how;
    default:
           void;
   };

   enum limit_by4 {
           NFS_LIMIT_SIZE          = 1,
           NFS_LIMIT_BLOCKS        = 2  /* experimental; subject to
                                           change */
           /* others as needed */
   };

   struct nfs_modified_limit4 {
           uint64_t        bytes;
           uint32_t        blocksize;
   };

   union nfs_space_limit4 switch (limit_by4 limitby) {
    case NFS_LIMIT_SIZE:
           uint64_t        filesize;
    case NFS_LIMIT_BLOCKS:
           nfs_modified_limit4 mod_blocks;
   };

   /*
    * Access and Deny constants for open argument
    */
   const OPEN4_ACCESS_READ  = 0x0001;
   const OPEN4_ACCESS_WRITE = 0x0002;
   const OPEN4_ACCESS_BOTH  = 0x0003;

   const OPEN4_DENY_NONE    = 0x0000;
   const OPEN4_DENY_READ    = 0x0001;
   const OPEN4_DENY_WRITE   = 0x0002;
   const OPEN4_DENY_BOTH    = 0x0003;

   enum open_delegation_type4 {
           OPEN_DELEGATE_NONE      = 0,
           OPEN_DELEGATE_READ      = 1,
           OPEN_DELEGATE_WRITE     = 2
   };

   enum open_claim_type4 {
           CLAIM_NULL              = 0,
           CLAIM_PREVIOUS          = 1,
           CLAIM_DELEGATE_CUR      = 2,
           CLAIM_DELEGATE_PREV     = 3
   };

   struct open_claim_delegate_cur4 {
           pathname4       file;
           stateid4        delegate_stateid;
   };

   union open_claim4 switch (open_claim_type4 claim) {
    /*
     * No special rights to file.  Ordinary OPEN of the
     * specified file.
     */
    case CLAIM_NULL:
           /* CURRENT_FH: directory */
           pathname4       file;

    /*
     * Right to the file established by an open previous to server
     * reboot.  File identified by filehandle obtained at that time
     * rather than by name.
     */
    case CLAIM_PREVIOUS:
           /* CURRENT_FH: file being reclaimed */
           int32_t         delegate_type;

    /*
     * Right to file based on a delegation granted by the server.
     * File is specified by name.
     */
    case CLAIM_DELEGATE_CUR:
           /* CURRENT_FH: directory */
           open_claim_delegate_cur4 delegate_cur_info;

    /*
     * Right to file based on a delegation granted to a previous boot
     * instance of the client.  File is specified by name.
     */
    case CLAIM_DELEGATE_PREV:
           /* CURRENT_FH: directory */
           pathname4       file_delegate_prev;
   };

   /*
    * OPEN: Open a file, potentially receiving an open delegation
    */
   struct OPEN4args {
           open_claim4     claim;
           openflag4       openhow;
           nfs_lockowner4  owner;
           seqid4          seqid;
           int32_t         access;
           int32_t         deny;
   };

   /*
    * Result flags
    */
   /* Mandatory locking is in effect for this file. */
   const OPEN4_RESULT_MLOCK = 0x0001;

   struct open_read_delegation4 {
           stateid4        stateid;     /* Stateid for delegation */
           bool            recall;      /* Pre-recalled flag for
                                           delegations obtained by
                                           reclaim (CLAIM_PREVIOUS) */
           nfsace4         permissions; /* Defines users who don't
                                           need an ACCESS call to
                                           open for read */
   };

   struct open_write_delegation4 {
           stateid4        stateid;     /* Stateid for delegation */
           bool            recall;      /* Pre-recalled flag for
                                           delegations obtained by
                                           reclaim (CLAIM_PREVIOUS) */
           nfs_space_limit4 space_limit; /* Defines condition that
                                            the client must check to
                                            determine whether the
                                            file needs to be flushed
                                            to the server on close. */
           nfsace4         permissions; /* Defines users who don't
                                           need an ACCESS call as
                                           part of a delegated
                                           open. */
   };

   union open_delegation4
    switch (open_delegation_type4 delegation_type) {
    case OPEN_DELEGATE_NONE:
           void;
    case OPEN_DELEGATE_READ:
           open_read_delegation4 read;
    case OPEN_DELEGATE_WRITE:
           open_write_delegation4 write;
   };

   struct OPEN4resok {
           stateid4        stateid;     /* Stateid for open */
           uint32_t        rflags;      /* Result flags */
           int32_t         access;      /* Access granted */
           open_delegation4 delegation; /* Info on any open
                                           delegation */
   };

   union OPEN4res switch (nfsstat4 status) {
    case NFS4_OK:
           /* CURRENT_FH: opened file */
           OPEN4resok      result;
    default:
           void;
   };

   /*
    * OPENATTR: open named attributes directory
    */
   struct OPENATTR4res {
           /* CURRENT_FH: named attr directory */
           nfsstat4        status;
   };

   /*
    * PUTFH: Set current filehandle
    */
   struct PUTFH4args {
           nfs_fh4         object;
   };

   struct PUTFH4res {
           /* CURRENT_FH: */
           nfsstat4        status;
   };

   /*
    * PUTPUBFH: Set public filehandle
    */
   struct PUTPUBFH4res {
           /* CURRENT_FH: public fh */
           nfsstat4        status;
   };

   /*
    * PUTROOTFH: Set root filehandle
    */
   struct PUTROOTFH4res {
           /* CURRENT_FH: root fh */
           nfsstat4        status;
   };

   /*
    * READ: Read from file
    */
   struct READ4args {
           /* CURRENT_FH: file */
           stateid4        stateid;
           offset4         offset;
           count4          count;
   };

   struct READ4resok {
           bool            eof;
           opaque          data<>;
   };

   union READ4res switch (nfsstat4 status) {
    case NFS4_OK:
           READ4resok      resok4;
    default:
           void;
   };

   /*
    * READDIR: Read directory
    */
   struct READDIR4args {
           /* CURRENT_FH: directory */
           nfs_cookie4     cookie;
           count4          dircount;
           count4          maxcount;
           bitmap4         attr_request;
   };

   struct entry4 {
           nfs_cookie4     cookie;
           component4      name;
           fattr4          attrs;
           entry4          *nextentry;
   };

   struct dirlist4 {
           entry4          *entries;
           bool            eof;
   };

   struct READDIR4resok {
           dirlist4        reply;
   };

   union READDIR4res switch (nfsstat4 status) {
    case NFS4_OK:
           READDIR4resok   resok4;
    default:
           void;
   };
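   /*
    * (Informative) entry4 uses XDR optional data ("*", see RFC
    * 1832) to form a linked list: each entry is preceded by a
    * boolean marking whether a value follows.  A reply containing
    * entries A and B with no more entries to follow is therefore
    * encoded as:
    *
    *      TRUE, A, TRUE, B, FALSE   (the entry list)
    *      TRUE                      (eof)
    */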
   /*
    * READLINK: Read symbolic link
    */
   struct READLINK4resok {
           linktext4       link;
   };

   union READLINK4res switch (nfsstat4 status) {
    case NFS4_OK:
           READLINK4resok  resok4;
    default:
           void;
   };

   /*
    * REMOVE: Remove filesystem object
    */
   struct REMOVE4args {
           /* CURRENT_FH: directory */
           component4      target;
   };

   struct REMOVE4resok {
           change_info4    cinfo;
   };

   union REMOVE4res switch (nfsstat4 status) {
    case NFS4_OK:
           REMOVE4resok    resok4;
    default:
           void;
   };

   /*
    * RENAME: Rename directory entry
    */
   struct RENAME4args {
           /* CURRENT_FH: source directory */
           component4      oldname;
           nfs_fh4         newdir;
           component4      newname;
   };

   struct RENAME4resok {
           change_info4    source_cinfo;
           change_info4    target_cinfo;
   };

   union RENAME4res switch (nfsstat4 status) {
    case NFS4_OK:
           RENAME4resok    resok4;
    default:
           void;
   };

   /*
    * RENEW: Renew a Lease
    */
   struct RENEW4args {
           stateid4        stateid;
   };

   struct RENEW4res {
           nfsstat4        status;
   };

   /*
    * RESTOREFH: Restore saved filehandle
    */
   struct RESTOREFH4res {
           /* CURRENT_FH: value of saved fh */
           nfsstat4        status;
   };

   /*
    * SAVEFH: Save current filehandle
    */
   struct SAVEFH4res {
           /* SAVED_FH: value of current fh */
           nfsstat4        status;
   };

   /*
    * SECINFO: Obtain Available Security Mechanisms
    */
   struct SECINFO4args {
           /* CURRENT_FH: */
           component4      name;
   };

   /*
    * From RFC 2203
    */
   enum rpc_gss_svc_t {
           RPC_GSS_SVC_NONE        = 1,
           RPC_GSS_SVC_INTEGRITY   = 2,
           RPC_GSS_SVC_PRIVACY     = 3
   };

   struct rpcsec_gss_info {
           sec_oid4        oid;
           qop4            qop;
           rpc_gss_svc_t   service;
   };

   struct secinfo4 {
           unsigned int    flavor;
           opaque          flavor_info<>;  /* null for AUTH_SYS,
                                              AUTH_NONE; contains
                                              rpcsec_gss_info for
                                              RPCSEC_GSS. */
   };

   typedef secinfo4 SECINFO4resok<>;

   union SECINFO4res switch (nfsstat4 status) {
    case NFS4_OK:
           SECINFO4resok   resok4;
    default:
           void;
   };

   /*
    * SETATTR: Set attributes
    */
   struct SETATTR4args {
           /* CURRENT_FH: target object */
           stateid4        stateid;
           fattr4          obj_attributes;
   };

   struct SETATTR4res {
           nfsstat4        status;
           bitmap4         attrsset;
   };

   /*
    * SETCLIENTID
    */
   struct SETCLIENTID4args {
           seqid4          seqid;
           nfs_client_id4  client;
           cb_client4      callback;
   };

   union SETCLIENTID4res switch (nfsstat4 status) {
    case NFS4_OK:
           clientid4       clientid;
    case NFS4ERR_CLID_INUSE:
           clientaddr4     client_using;
    default:
           void;
   };

   /*
    * VERIFY: Verify attributes same
    */
   struct VERIFY4args {
           /* CURRENT_FH: object */
           bitmap4         attr_request;
           fattr4          obj_attributes;
   };

   struct VERIFY4res {
           nfsstat4        status;
   };

   /*
    * WRITE: Write to file
    */
   enum stable_how4 {
           UNSTABLE4       = 0,
           DATA_SYNC4      = 1,
           FILE_SYNC4      = 2
   };

   struct WRITE4args {
           /* CURRENT_FH: file */
           stateid4        stateid;
           offset4         offset;
           stable_how4     stable;
           opaque          data<>;
   };

   struct WRITE4resok {
           count4          count;
           stable_how4     committed;
           writeverf4      verf;
   };

   union WRITE4res switch (nfsstat4 status) {
    case NFS4_OK:
           WRITE4resok     resok4;
    default:
           void;
   };
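   /*
    * (Informative) With stable = UNSTABLE4 the server may reply
    * before the data reaches stable storage.  The client later
    * issues COMMIT and compares the returned writeverf4 with the
    * verf values from its WRITE replies; a changed verifier means
    * the server may have lost uncommitted data (e.g. it rebooted)
    * and the writes must be retransmitted.
    */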
   /*
    * Operation arrays
    */
   enum nfs_opnum4 {
           OP_ACCESS       = 2,
           OP_CLOSE        = 3,
           OP_COMMIT       = 4,
           OP_CREATE       = 5,
           OP_DELEGPURGE   = 6,
           OP_DELEGRETURN  = 7,
           OP_GETATTR      = 8,
           OP_GETFH        = 9,
           OP_LINK         = 10,
           OP_LOCK         = 11,
           OP_LOCKT        = 12,
           OP_LOCKU        = 13,
           OP_LOOKUP       = 14,
           OP_LOOKUPP      = 15,
           OP_NVERIFY      = 16,
           OP_OPEN         = 17,
           OP_OPENATTR     = 18,
           OP_PUTFH        = 19,
           OP_PUTPUBFH     = 20,
           OP_PUTROOTFH    = 21,
           OP_READ         = 22,
           OP_READDIR      = 23,
           OP_READLINK     = 24,
           OP_REMOVE       = 25,
           OP_RENAME       = 26,
           OP_RENEW        = 27,
           OP_RESTOREFH    = 28,
           OP_SAVEFH       = 29,
           OP_SECINFO      = 30,
           OP_SETATTR      = 31,
           OP_SETCLIENTID  = 32,
           OP_VERIFY       = 33,
           OP_WRITE        = 34
   };

   union nfs_argop4 switch (nfs_opnum4 argop) {
    case OP_ACCESS:        ACCESS4args opaccess;
    case OP_CLOSE:         CLOSE4args opclose;
    case OP_COMMIT:        COMMIT4args opcommit;
    case OP_CREATE:        CREATE4args opcreate;
    case OP_DELEGPURGE:    DELEGPURGE4args opdelegpurge;
    case OP_DELEGRETURN:   DELEGRETURN4args opdelegreturn;
    case OP_GETATTR:       GETATTR4args opgetattr;
    case OP_GETFH:         void;
    case OP_LINK:          LINK4args oplink;
    case OP_LOCK:          LOCK4args oplock;
    case OP_LOCKT:         LOCK4args oplockt;
    case OP_LOCKU:         LOCK4args oplocku;
    case OP_LOOKUP:        LOOKUP4args oplookup;
    case OP_LOOKUPP:       void;
    case OP_NVERIFY:       NVERIFY4args opnverify;
    case OP_OPEN:          OPEN4args opopen;
    case OP_OPENATTR:      void;
    case OP_PUTFH:         PUTFH4args opputfh;
    case OP_PUTPUBFH:      void;
    case OP_PUTROOTFH:     void;
    case OP_READ:          READ4args opread;
    case OP_READDIR:       READDIR4args opreaddir;
    case OP_READLINK:      void;
    case OP_REMOVE:        REMOVE4args opremove;
    case OP_RENAME:        RENAME4args oprename;
    case OP_RENEW:         RENEW4args oprenew;
    case OP_RESTOREFH:     void;
    case OP_SAVEFH:        void;
    case OP_SECINFO:       SECINFO4args opsecinfo;
    case OP_SETATTR:       SETATTR4args opsetattr;
    case OP_SETCLIENTID:   SETCLIENTID4args opsetclientid;
    case OP_VERIFY:        VERIFY4args opverify;
    case OP_WRITE:         WRITE4args opwrite;
   };

   union nfs_resop4 switch (nfs_opnum4 resop) {
    case OP_ACCESS:        ACCESS4res opaccess;
    case OP_CLOSE:         CLOSE4res opclose;
    case OP_COMMIT:        COMMIT4res opcommit;
    case OP_CREATE:        CREATE4res opcreate;
    case OP_DELEGPURGE:    DELEGPURGE4res opdelegpurge;
    case OP_DELEGRETURN:   DELEGRETURN4res opdelegreturn;
    case OP_GETATTR:       GETATTR4res opgetattr;
    case OP_GETFH:         GETFH4res opgetfh;
    case OP_LINK:          LINK4res oplink;
    case OP_LOCK:          LOCK4res oplock;
    case OP_LOCKT:         LOCKT4res oplockt;
    case OP_LOCKU:         LOCKU4res oplocku;
    case OP_LOOKUP:        LOOKUP4res oplookup;
    case OP_LOOKUPP:       LOOKUPP4res oplookupp;
    case OP_NVERIFY:       NVERIFY4res opnverify;
    case OP_OPEN:          OPEN4res opopen;
    case OP_OPENATTR:      OPENATTR4res opopenattr;
    case OP_PUTFH:         PUTFH4res opputfh;
    case OP_PUTPUBFH:      PUTPUBFH4res opputpubfh;
    case OP_PUTROOTFH:     PUTROOTFH4res opputrootfh;
    case OP_READ:          READ4res opread;
    case OP_READDIR:       READDIR4res opreaddir;
    case OP_READLINK:      READLINK4res opreadlink;
    case OP_REMOVE:        REMOVE4res opremove;
    case OP_RENAME:        RENAME4res oprename;
    case OP_RENEW:         RENEW4res oprenew;
    case OP_RESTOREFH:     RESTOREFH4res oprestorefh;
    case OP_SAVEFH:        SAVEFH4res opsavefh;
    case OP_SECINFO:       SECINFO4res opsecinfo;
    case OP_SETATTR:       SETATTR4res opsetattr;
    case OP_SETCLIENTID:   SETCLIENTID4res opsetclientid;
    case OP_VERIFY:        VERIFY4res opverify;
    case OP_WRITE:         WRITE4res opwrite;
   };

   struct COMPOUND4args {
           utf8string      tag;
           nfs_argop4      argarray<>;
   };

   struct COMPOUND4res {
           nfsstat4        status;
           utf8string      tag;
           nfs_resop4      resarray<>;
   };
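   /*
    * (Informative) A sketch of a typical COMPOUND request, shown as
    * the sequence of nfs_argop4 values in argarray.  Operations are
    * evaluated in order, with the current filehandle carried from
    * one operation to the next and evaluation stopping at the first
    * error:
    *
    *      PUTROOTFH               set current fh to the server root
    *      LOOKUP {"a", "b"}       descend to /a/b (pathname4)
    *      GETFH                   return the filehandle for /a/b
    *      GETATTR attr_request    return selected attributes
    */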
   /*
    * Remote file service routines
    */
   program NFS4_PROGRAM {
           version NFS_V4 {
                   void
                           NFSPROC4_NULL(void) = 0;

                   COMPOUND4res
                           NFSPROC4_COMPOUND(COMPOUND4args) = 1;
           } = 4;
   } = 100003;

   /*
    * NFS4 Callback Procedure Definitions and Program
    */

   /*
    * CB_GETATTR: Get Current Attributes
    */
   struct CB_GETATTR4args {
           nfs_fh4         fh;
           bitmap4         attr_request;
   };

   struct CB_GETATTR4resok {
           fattr4          obj_attributes;
   };

   union CB_GETATTR4res switch (nfsstat4 status) {
    case NFS4_OK:
           CB_GETATTR4resok resok4;
    default:
           void;
   };

   /*
    * CB_RECALL: Recall an Open Delegation
    */
   struct CB_RECALL4args {
           stateid4        stateid;
           bool            truncate;
           nfs_fh4         fh;
   };

   struct CB_RECALL4res {
           nfsstat4        status;
   };

   /*
    * Various definitions for CB_COMPOUND
    */
   enum nfs_cb_opnum4 {
           OP_CB_GETATTR   = 2,
           OP_CB_RECALL    = 3
   };

   union nfs_cb_argop4 switch (unsigned argop) {
    case OP_CB_GETATTR:    CB_GETATTR4args opcbgetattr;
    case OP_CB_RECALL:     CB_RECALL4args opcbrecall;
   };

   union nfs_cb_resop4 switch (unsigned resop) {
    case OP_CB_GETATTR:    CB_GETATTR4res opcbgetattr;
    case OP_CB_RECALL:     CB_RECALL4res opcbrecall;
   };

   struct CB_COMPOUND4args {
           utf8string      tag;
           nfs_cb_argop4   argarray<>;
   };

   struct CB_COMPOUND4res {
           nfsstat4        status;
           utf8string      tag;
           nfs_cb_resop4   resarray<>;
   };

   /*
    * Program number is in the transient range since the client
    * will assign the exact transient program number and provide
    * that to the server via the SETCLIENTID operation.
    */
   program NFS4_CALLBACK {
           version NFS_CB {
                   void
                           CB_NULL(void) = 0;

                   CB_COMPOUND4res
                           CB_COMPOUND(CB_COMPOUND4args) = 1;
           } = 1;
   } = 40000000;

18.  Bibliography

   [Gray]       C. Gray, D. Cheriton, "Leases: An Efficient Fault-
                Tolerant Mechanism for Distributed File Cache
                Consistency," Proceedings of the Twelfth Symposium on
                Operating Systems Principles, p. 202-210, December
                1989.

   [Juszczak]   Juszczak, Chet, "Improving the Performance and
                Correctness of an NFS Server," USENIX Conference
                Proceedings, USENIX Association, Berkeley, CA, June
                1990, pages 53-63.  Describes a reply cache
                implementation that avoids work in the server by
                handling duplicate requests.  More important, though
                listed as a side effect, the reply cache aids in the
                avoidance of destructive non-idempotent operation
                re-application -- improving correctness.

   [Kazar]      Kazar, Michael Leon, "Synchronization and Caching
                Issues in the Andrew File System," USENIX Conference
                Proceedings, USENIX Association, Berkeley, CA, Dallas
                Winter 1988, pages 27-36.  A description of the cache
                consistency scheme in AFS.  Contrasted with other
                distributed file systems.

   [Macklem]    Macklem, Rick, "Lessons Learned Tuning the 4.3BSD
                Reno Implementation of the NFS Protocol," Winter
                USENIX Conference Proceedings, USENIX Association,
                Berkeley, CA, January 1991.  Describes performance
                work in tuning the 4.3BSD Reno NFS implementation.
                Describes performance improvement (reduced CPU
                loading) through elimination of data copies.

   [Mogul]      Mogul, Jeffrey C., "A Recovery Protocol for Spritely
                NFS," USENIX File System Workshop Proceedings, Ann
                Arbor, MI, USENIX Association, Berkeley, CA, May
                1992.  Second paper on Spritely NFS proposes a
                lease-based scheme for recovering state of the
                consistency protocol.

   [Nowicki]    Nowicki, Bill, "Transport Issues in the Network File
                System," ACM SIGCOMM newsletter Computer
                Communication Review, April 1989.  A brief
                description of the basis for the dynamic
                retransmission work.

   [Pawlowski]  Pawlowski, Brian, Ron Hixon, Mark Stein, Joseph
                Tumminaro, "Network Computing in the UNIX and IBM
                Mainframe Environment," Uniforum `89 Conf. Proc.,
                (1989).  Description of an NFS server implementation
                for IBM's MVS operating system.

   [RFC1094]    Sun Microsystems, Inc., "NFS: Network File System
                Protocol Specification", RFC1094, March 1989.
                http://www.ietf.org/rfc/rfc1094.txt

   [RFC1345]    Simonsen, K., "Character Mnemonics and Character
                Sets", RFC1345, Rationel Almen Planlaegning, June
                1992.
                http://www.ietf.org/rfc/rfc1345.txt

   [RFC1813]    Callaghan, B., Pawlowski, B., Staubach, P., "NFS
                Version 3 Protocol Specification", RFC1813, Sun
                Microsystems, Inc., June 1995.
                http://www.ietf.org/rfc/rfc1813.txt
   [RFC1831]    Srinivasan, R., "RPC: Remote Procedure Call Protocol
                Specification Version 2", RFC1831, Sun Microsystems,
                Inc., August 1995.
                http://www.ietf.org/rfc/rfc1831.txt

   [RFC1832]    Srinivasan, R., "XDR: External Data Representation
                Standard", RFC1832, Sun Microsystems, Inc., August
                1995.
                http://www.ietf.org/rfc/rfc1832.txt

   [RFC1833]    Srinivasan, R., "Binding Protocols for ONC RPC
                Version 2", RFC1833, Sun Microsystems, Inc., August
                1995.
                http://www.ietf.org/rfc/rfc1833.txt

   [RFC2054]    Callaghan, B., "WebNFS Client Specification",
                RFC2054, Sun Microsystems, Inc., October 1996.
                http://www.ietf.org/rfc/rfc2054.txt

   [RFC2055]    Callaghan, B., "WebNFS Server Specification",
                RFC2055, Sun Microsystems, Inc., October 1996.
                http://www.ietf.org/rfc/rfc2055.txt

   [RFC2078]    Linn, J., "Generic Security Service Application
                Program Interface, Version 2", RFC2078, OpenVision
                Technologies, January 1997.
                http://www.ietf.org/rfc/rfc2078.txt

   [RFC2152]    Goldsmith, D., "UTF-7 A Mail-Safe Transformation
                Format of Unicode", RFC2152, Apple Computer, Inc.,
                May 1997.
                http://www.ietf.org/rfc/rfc2152.txt

   [RFC2203]    Eisler, M., Chiu, A., Ling, L., "RPCSEC_GSS Protocol
                Specification", RFC2203, Sun Microsystems, Inc.,
                September 1997.
                http://www.ietf.org/rfc/rfc2203.txt

   [RFC2279]    Yergeau, F., "UTF-8, a transformation format of ISO
                10646", RFC2279, Alis Technologies, January 1998.
                http://www.ietf.org/rfc/rfc2279.txt

   [RFC2623]    Eisler, M., "NFS Version 2 and Version 3 Security
                Issues and the NFS Protocol's Use of RPCSEC_GSS and
                Kerberos V5", RFC2623, Sun Microsystems, June 1999.
                http://www.ietf.org/rfc/rfc2623.txt

   [RFC2624]    Shepler, S., "NFS Version 4 Design Considerations",
                RFC2624, Sun Microsystems, June 1999.
                http://www.ietf.org/rfc/rfc2624.txt

   [Sandberg]   Sandberg, R., D. Goldberg, S. Kleiman, D. Walsh, B.
                Lyon, "Design and Implementation of the Sun Network
                Filesystem," USENIX Conference Proceedings, USENIX
                Association, Berkeley, CA, Summer 1985.  The basic
                paper describing the SunOS implementation of the NFS
                version 2 protocol; discusses the goals, protocol
                specification and trade-offs.

   [Srinivasan] Srinivasan, V., Jeffrey C. Mogul, "Spritely NFS:
                Implementation and Performance of Cache Consistency
                Protocols", WRL Research Report 89/5, Digital
                Equipment Corporation Western Research Laboratory,
                100 Hamilton Ave., Palo Alto, CA 94301, May 1989.
                This paper analyzes the effect of applying a Sprite-
                like consistency protocol to standard NFS.  The
                issues of recovery in a stateful environment are
                covered in [Mogul].

   [Unicode1]   "Unicode Technical Report #8 - The Unicode Standard,
                Version 2.1", Unicode, Inc., The Unicode Consortium,
                P.O. Box 700519, San Jose, CA 95710-0519 USA,
                September 1998.
                http://www.unicode.org/unicode/reports/tr8.html

   [Unicode2]   "Unsupported Scripts", Unicode, Inc., The Unicode
                Consortium, P.O. Box 700519, San Jose, CA 95710-0519
                USA, October 1998.
                http://www.unicode.org/unicode/standard/unsupported.html

   [XNFS]       The Open Group, Protocols for Interworking: XNFS,
                Version 3W, The Open Group, 1010 El Camino Real Suite
                380, Menlo Park, CA 94025, ISBN 1-85912-184-5,
                February 1998.  HTML version available:
                http://www.opengroup.org
19.  Authors and Contributors

   General feedback related to this document should be directed to:

      nfsv4-wg@sunroof.eng.sun.com

   or the editor.

19.1.  Editor's Address

   Spencer Shepler
   Sun Microsystems, Inc.
   7808 Moonflower Drive
   Austin, Texas 78750

   Phone: +1 512-349-9376
   E-mail: shepler@eng.sun.com

19.2.  Authors' Addresses

   Carl Beame
   Hummingbird Communications Ltd.

   E-mail: beame@bws.com

   Brent Callaghan
   Sun Microsystems, Inc.
   901 San Antonio Road
   Palo Alto, CA 94303

   Phone: +1 650-786-5067
   E-mail: brent.callaghan@eng.sun.com

   Mike Eisler
   Sun Microsystems, Inc.
   5565 Wilson Road
   Colorado Springs, CO 80919

   Phone: +1 719-599-9026
   E-mail: mre@eng.sun.com

   Dave Noveck
   Network Appliance
   495 East Java Drive
   Sunnyvale, CA 94089

   Phone: +1 781-861-9291
   E-mail: dave.noveck@netapp.com

   David Robinson
   Sun Microsystems, Inc.
   901 San Antonio Road
   Palo Alto, CA 94303

   Phone: +1 650-786-5088
   E-mail: david.robinson@eng.sun.com

   Robert Thurlow
   Sun Microsystems, Inc.
   901 San Antonio Road
   Palo Alto, CA 94303

   Phone: +1 650-786-5096
   E-mail: robert.thurlow@eng.sun.com

20.  Full Copyright Statement

   "Copyright (C) The Internet Society (1999).  All Rights Reserved.

   This document and translations of it may be copied and furnished
   to others, and derivative works that comment on or otherwise
   explain it or assist in its implementation may be prepared,
   copied, published and distributed, in whole or in part, without
   restriction of any kind, provided that the above copyright notice
   and this paragraph are included on all such copies and derivative
   works.  However, this document itself may not be modified in any
   way, such as by removing the copyright notice or references to the
   Internet Society or other Internet organizations, except as needed
   for the purpose of developing Internet standards in which case the
   procedures for copyrights defined in the Internet Standards
   process must be followed, or as required to translate it into
   languages other than English.

   The limited permissions granted above are perpetual and will not
   be revoked by the Internet Society or its successors or assigns.

   This document and the information contained herein is provided on
   an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET
   ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR
   IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
   PURPOSE."