NFSv4                                                         S. Shepler
Internet-Draft                                                    Editor
Intended status: Standards Track                           June 20, 2006
Expires: December 22, 2006


                         NFSv4 Minor Version 1
                 draft-ietf-nfsv4-minorversion1-03.txt

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on December 22, 2006.

Copyright Notice

   Copyright (C) The Internet Society (2006).

Abstract

   This Internet-Draft describes the NFSv4 minor version 1 protocol
   extensions.  The most significant of these extensions are commonly
   called Sessions, Directory Delegations, and parallel NFS (pNFS).

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [1].

Table of Contents

   1.  Protocol Data Types
     1.1.  Basic Data Types
     1.2.  Structured Data Types
   2.  Filehandles
     2.1.  Obtaining the First Filehandle
       2.1.1.  Root Filehandle
       2.1.2.  Public Filehandle
     2.2.  Filehandle Types
       2.2.1.  General Properties of a Filehandle
       2.2.2.  Persistent Filehandle
       2.2.3.  Volatile Filehandle
     2.3.  One Method of Constructing a Volatile Filehandle
     2.4.  Client Recovery from Filehandle Expiration
   3.  File Attributes
     3.1.  Mandatory Attributes
     3.2.  Recommended Attributes
     3.3.  Named Attributes
     3.4.  Classification of Attributes
     3.5.  Mandatory Attributes - Definitions
     3.6.  Recommended Attributes - Definitions
     3.7.  Time Access
     3.8.  Interpreting owner and owner_group
     3.9.  Character Case Attributes
     3.10. Quota Attributes
     3.11. mounted_on_fileid
     3.12. send_impl_id and recv_impl_id
     3.13. fs_layouttype
     3.14. layouttype
     3.15. layouthint
     3.16. Access Control Lists
       3.16.1.  ACE type
       3.16.2.  ACE Access Mask
       3.16.3.  ACE flag
       3.16.4.  ACE who
       3.16.5.  Mode Attribute
       3.16.6.  Interaction Between Mode and ACL Attributes
   4.  Single-server Name Space
     4.1.  Server Exports
     4.2.  Browsing Exports
     4.3.  Server Pseudo Filesystem
     4.4.  Multiple Roots
     4.5.  Filehandle Volatility
     4.6.  Exported Root
     4.7.  Mount Point Crossing
     4.8.  Security Policy and Name Space Presentation
   5.  File Locking and Share Reservations
     5.1.  Locking
       5.1.1.  Client ID
       5.1.2.  Server Release of Clientid
       5.1.3.  lock_owner and stateid Definition
       5.1.4.  Use of the stateid and Locking
       5.1.5.  Sequencing of Lock Requests
       5.1.6.  Recovery from Replayed Requests
       5.1.7.  Releasing lock_owner State
       5.1.8.  Use of Open Confirmation
     5.2.  Lock Ranges
     5.3.  Upgrading and Downgrading Locks
     5.4.  Blocking Locks
     5.5.  Lease Renewal
     5.6.  Crash Recovery
       5.6.1.  Client Failure and Recovery
       5.6.2.  Server Failure and Recovery
       5.6.3.  Network Partitions and Recovery
     5.7.  Recovery from a Lock Request Timeout or Abort
     5.8.  Server Revocation of Locks
     5.9.  Share Reservations
     5.10. OPEN/CLOSE Operations
       5.10.1.  Close and Retention of State Information
     5.11. Open Upgrade and Downgrade
     5.12. Short and Long Leases
     5.13. Clocks, Propagation Delay, and Calculating Lease Expiration
   6.  Client-Side Caching
     6.1.  Performance Challenges for Client-Side Caching
     6.2.  Delegation and Callbacks
       6.2.1.  Delegation Recovery
     6.3.  Data Caching
       6.3.1.  Data Caching and OPENs
       6.3.2.  Data Caching and File Locking
       6.3.3.  Data Caching and Mandatory File Locking
       6.3.4.  Data Caching and File Identity
     6.4.  Open Delegation
       6.4.1.  Open Delegation and Data Caching
       6.4.2.  Open Delegation and File Locks
       6.4.3.  Handling of CB_GETATTR
       6.4.4.  Recall of Open Delegation
       6.4.5.  Clients that Fail to Honor Delegation Recalls
       6.4.6.  Delegation Revocation
     6.5.  Data Caching and Revocation
       6.5.1.  Revocation Recovery for Write Open Delegation
     6.6.  Attribute Caching
     6.7.  Data and Metadata Caching and Memory Mapped Files
     6.8.  Name Caching
     6.9.  Directory Caching
   7.  Security Negotiation
   8.  Clarification of Security Negotiation in NFSv4.1
     8.1.  PUTFH + LOOKUP
     8.2.  PUTFH + LOOKUPP
     8.3.  PUTFH + SECINFO
     8.4.  PUTFH + Anything Else
   9.  NFSv4.1 Sessions
     9.1.  Sessions Background
       9.1.1.  Introduction to Sessions
       9.1.2.  Motivation
       9.1.3.  Problem Statement
       9.1.4.  NFSv4 Session Extension Characteristics
     9.2.  Transport Issues
       9.2.1.  Session Model
       9.2.2.  Connection State
       9.2.3.  NFSv4 Channels, Sessions and Connections
       9.2.4.  Reconnection, Trunking and Failover
       9.2.5.  Server Duplicate Request Cache
     9.3.  Session Initialization and Transfer Models
       9.3.1.  Session Negotiation
       9.3.2.  RDMA Requirements
       9.3.3.  RDMA Connection Resources
       9.3.4.  TCP and RDMA Inline Transfer Model
       9.3.5.  RDMA Direct Transfer Model
     9.4.  Connection Models
       9.4.1.  TCP Connection Model
       9.4.2.  Negotiated RDMA Connection Model
       9.4.3.  Automatic RDMA Connection Model
     9.5.  Buffer Management, Transfer, Flow Control
     9.6.  Retry and Replay
     9.7.  The Back Channel
     9.8.  COMPOUND Sizing Issues
     9.9.  Data Alignment
     9.10. NFSv4 Integration
       9.10.1.  Minor Versioning
       9.10.2.  Slot Identifiers and Server Duplicate Request Cache
       9.10.3.  Resolving server callback races with sessions
       9.10.4.  COMPOUND and CB_COMPOUND
       9.10.5.  eXternal Data Representation Efficiency
       9.10.6.  Effect of Sessions on Existing Operations
       9.10.7.  Authentication Efficiencies
     9.11. Sessions Security Considerations
       9.11.1.  Authentication
   10. Multi-server Name Space
     10.1.  Location attributes
     10.2.  File System Presence or Absence
     10.3.  Getting Attributes for an Absent File System
       10.3.1.  GETATTR Within an Absent File System
       10.3.2.  READDIR and Absent File Systems
     10.4.  Uses of Location Information
       10.4.1.  File System Replication
       10.4.2.  File System Migration
       10.4.3.  Referrals
     10.5.  Additional Client-side Considerations
     10.6.  Effecting File System Transitions
       10.6.1.  Transparent File System Transitions
       10.6.2.  Filehandles and File System Transitions
       10.6.3.  Fileid's and File System Transitions
       10.6.4.  Fsid's and File System Transitions
       10.6.5.  The Change Attribute and File System Transitions
       10.6.6.  Lock State and File System Transitions
       10.6.7.  Write Verifiers and File System Transitions
     10.7.  Effecting File System Referrals
       10.7.1.  Referral Example (LOOKUP)
       10.7.2.  Referral Example (READDIR)
     10.8.  The Attribute fs_absent
     10.9.  The Attribute fs_locations
     10.10. The Attribute fs_locations_info
     10.11. The Attribute fs_status
   11. Directory Delegations
     11.1.  Introduction to Directory Delegations
     11.2.  Directory Delegation Design (in brief)
     11.3.  Recommended Attributes in support of Directory Delegations
     11.4.  Delegation Recall
     11.5.  Delegation Recovery
   12. Introduction
   13. General Definitions
     13.1.  Metadata Server
     13.2.  Client
     13.3.  Storage Device
     13.4.  Storage Protocol
     13.5.  Control Protocol
     13.6.  Metadata
     13.7.  Layout
   14. pNFS protocol semantics
     14.1.  Definitions
       14.1.1.  Layout Types
       14.1.2.  Layout Iomode
       14.1.3.  Layout Segments
       14.1.4.  Device IDs
       14.1.5.  Aggregation Schemes
     14.2.  Guarantees Provided by Layouts
     14.3.  Getting a Layout
     14.4.  Committing a Layout
       14.4.1.  LAYOUTCOMMIT and mtime/atime/change
       14.4.2.  LAYOUTCOMMIT and size
       14.4.3.  LAYOUTCOMMIT and layoutupdate
     14.5.  Recalling a Layout
       14.5.1.  Basic Operation
       14.5.2.  Recall Callback Robustness
       14.5.3.  Recall/Return Sequencing
     14.6.  Metadata Server Write Propagation
     14.7.  Crash Recovery
       14.7.1.  Leases
       14.7.2.  Client Recovery
       14.7.3.  Metadata Server Recovery
       14.7.4.  Storage Device Recovery
   15. Security Considerations
     15.1.  File Layout Security
     15.2.  Object Layout Security
     15.3.  Block/Volume Layout Security
   16. The NFSv4 File Layout Type
     16.1.  File Striping and Data Access
       16.1.1.  Sparse and Dense Storage Device Data Layouts
       16.1.2.  Metadata and Storage Device Roles
       16.1.3.  Device Multipathing
       16.1.4.  Operations Issued to Storage Devices
       16.1.5.  COMMIT through metadata server
     16.2.  Global Stateid Requirements
     16.3.  The Layout Iomode
     16.4.  Storage Device State Propagation
       16.4.1.  Lock State Propagation
       16.4.2.  Open-mode Validation
       16.4.3.  File Attributes
     16.5.  Storage Device Component File Size
     16.6.  Crash Recovery Considerations
     16.7.  Security Considerations
     16.8.  Alternate Approaches
   17. Layouts and Aggregation
     17.1.  Simple Map
     17.2.  Block Extent Map
     17.3.  Striped Map (RAID 0)
     17.4.  Replicated Map
     17.5.  Concatenated Map
     17.6.  Nested Map
   18. Minor Versioning
   19. Internationalization
     19.1.  Stringprep profile for the utf8str_cs type
     19.2.  Stringprep profile for the utf8str_cis type
     19.3.  Stringprep profile for the utf8str_mixed type
     19.4.  UTF-8 Related Errors
   20. Error Definitions
   21. NFS version 4.1 Procedures
     21.1.  Procedure 0: NULL - No Operation
     21.2.  Procedure 1: COMPOUND - Compound Operations
   22. NFS version 4.1 Operations
     22.1.  Operation 3: ACCESS - Check Access Rights
     22.2.  Operation 4: CLOSE - Close File
     22.3.  Operation 5: COMMIT - Commit Cached Data
     22.4.  Operation 6: CREATE - Create a Non-Regular File Object
     22.5.  Operation 7: DELEGPURGE - Purge Delegations Awaiting
            Recovery
     22.6.  Operation 8: DELEGRETURN - Return Delegation
     22.7.  Operation 9: GETATTR - Get Attributes
     22.8.  Operation 10: GETFH - Get Current Filehandle
     22.9.  Operation 11: LINK - Create Link to a File
     22.10. Operation 12: LOCK - Create Lock
     22.11. Operation 13: LOCKT - Test For Lock
     22.12. Operation 14: LOCKU - Unlock File
     22.13. Operation 15: LOOKUP - Lookup Filename
     22.14. Operation 16: LOOKUPP - Lookup Parent Directory
     22.15. Operation 17: NVERIFY - Verify Difference in Attributes
     22.16. Operation 18: OPEN - Open a Regular File
     22.17. Operation 19: OPENATTR - Open Named Attribute Directory
     22.18. Operation 20: OPEN_CONFIRM - Confirm Open
     22.19. Operation 21: OPEN_DOWNGRADE - Reduce Open File Access
     22.20. Operation 22: PUTFH - Set Current Filehandle
     22.21. Operation 23: PUTPUBFH - Set Public Filehandle
     22.22. Operation 24: PUTROOTFH - Set Root Filehandle
     22.23. Operation 25: READ - Read from File
     22.24. Operation 26: READDIR - Read Directory
     22.25. Operation 27: READLINK - Read Symbolic Link
     22.26. Operation 28: REMOVE - Remove Filesystem Object
     22.27. Operation 29: RENAME - Rename Directory Entry
     22.28. Operation 30: RENEW - Renew a Lease
     22.29. Operation 31: RESTOREFH - Restore Saved Filehandle
     22.30. Operation 32: SAVEFH - Save Current Filehandle
     22.31. Operation 33: SECINFO - Obtain Available Security
     22.32. Operation 34: SETATTR - Set Attributes
     22.33. Operation 35: SETCLIENTID - Negotiate Clientid
     22.34. Operation 36: SETCLIENTID_CONFIRM - Confirm Clientid
     22.35. Operation 37: VERIFY - Verify Same Attributes
     22.36. Operation 38: WRITE - Write to File
     22.37. Operation 39: RELEASE_LOCKOWNER - Release Lockowner State
     22.38. Operation 10044: ILLEGAL - Illegal operation
     22.39. SECINFO_NO_NAME - Get Security on Unnamed Object
     22.40. CREATECLIENTID - Instantiate Clientid
     22.41. CREATESESSION - Create New Session and Confirm Clientid
     22.42. BIND_BACKCHANNEL - Create a callback channel binding
     22.43. DESTROYSESSION - Destroy existing session
     22.44. SEQUENCE - Supply per-procedure sequencing and control
     22.45. GET_DIR_DELEGATION - Get a directory delegation
     22.46. LAYOUTGET - Get Layout Information
     22.47. LAYOUTCOMMIT - Commit writes made using a layout
     22.48. LAYOUTRETURN - Release Layout Information
     22.49. GETDEVICEINFO - Get Device Information
     22.50. GETDEVICELIST
     22.51. WANT_DELEGATION
   23. NFS version 4.1 Callback Procedures
     23.1.  Procedure 0: CB_NULL - No Operation
     23.2.  Procedure 1: CB_COMPOUND - Compound Operations
   24. CB_RECALLCREDIT - change flow control limits
   25. CB_SEQUENCE - Supply callback channel sequencing and control
   26. CB_NOTIFY - Notify directory changes
   27. CB_RECALL_ANY - Keep any N delegations
   28. CB_SIZECHANGED
   29. CB_LAYOUTRECALL
   30. CB_PUSH_DELEG
   31. CB_RECALLABLE_OBJ_AVAIL
   32. References
     32.1.  Normative References
     32.2.  Informative References
   Appendix A.  Acknowledgments
   Author's Address
   Intellectual Property and Copyright Statements

1.  Protocol Data Types

   The syntax and semantics to describe the data types of the NFS
   version 4 protocol are defined in the XDR RFC1832 [2] and RPC
   RFC1831 [3] documents.  The next sections build upon the XDR data
   types to define types and structures specific to this protocol.

1.1.  Basic Data Types

   These are the base NFSv4 data types.

   +---------------+---------------------------------------------------+
   | Data Type     | Definition                                        |
   +---------------+---------------------------------------------------+
   | int32_t       | typedef int int32_t;                              |
   | uint32_t      | typedef unsigned int uint32_t;                    |
   | int64_t       | typedef hyper int64_t;                            |
   | uint64_t      | typedef unsigned hyper uint64_t;                  |
   | attrlist4     | typedef opaque attrlist4<>;                       |
   |               | Used for file/directory attributes                |
   | bitmap4       | typedef uint32_t bitmap4<>;                       |
   |               | Used in attribute array encoding.                 |
   | changeid4     | typedef uint64_t changeid4;                       |
   |               | Used in definition of change_info                 |
   | clientid4     | typedef uint64_t clientid4;                       |
   |               | Shorthand reference to client identification      |
   | component4    | typedef utf8str_cs component4;                    |
   |               | Represents path name components                   |
   | count4        | typedef uint32_t count4;                          |
   |               | Various count parameters (READ, WRITE, COMMIT)    |
   | length4       | typedef uint64_t length4;                         |
   |               | Describes LOCK lengths                            |
   | linktext4     | typedef utf8str_cs linktext4;                     |
   |               | Symbolic link contents                            |
   | mode4         | typedef uint32_t mode4;                           |
   |               | Mode attribute data type                          |
   | nfs_cookie4   | typedef uint64_t nfs_cookie4;                     |
   |               | Opaque cookie value for READDIR                   |
   | nfs_fh4       | typedef opaque nfs_fh4<NFS4_FHSIZE>;              |
   |               | Filehandle definition; NFS4_FHSIZE is defined as  |
   |               | 128                                               |
   | nfs_ftype4    | enum nfs_ftype4;                                  |
   |               | Various defined file types                        |
   | nfsstat4      | enum nfsstat4;                                    |
   |               | Return value for operations                       |
   | offset4       | typedef uint64_t offset4;                         |
   |               | Various offset designations (READ, WRITE, LOCK,   |
   |               | COMMIT)                                           |
   | pathname4     | typedef component4 pathname4<>;                   |
   |               | Represents path name for fs_locations             |
   | qop4          | typedef uint32_t qop4;                            |
   |               | Quality of protection designation in SECINFO      |
   | sec_oid4      | typedef opaque sec_oid4<>;                        |
   |               | Security Object Identifier.  The sec_oid4 data    |
   |               | type is not really opaque.  Instead, it contains  |
   |               | an ASN.1 OBJECT IDENTIFIER as used by GSS-API in  |
   |               | the mech_type argument to GSS_Init_sec_context.   |
   |               | See RFC2743 [4] for details.                      |
   | seqid4        | typedef uint32_t seqid4;                          |
   |               | Sequence identifier used for file locking         |
   | utf8string    | typedef opaque utf8string<>;                      |
   |               | UTF-8 encoding for strings                        |
   | utf8str_cis   | typedef opaque utf8str_cis;                       |
   |               | Case-insensitive UTF-8 string                     |
   | utf8str_cs    | typedef opaque utf8str_cs;                        |
   |               | Case-sensitive UTF-8 string                       |
   | utf8str_mixed | typedef opaque utf8str_mixed;                     |
   |               | UTF-8 strings with a case sensitive prefix and a  |
   |               | case insensitive suffix.                          |
   | verifier4     | typedef opaque verifier4[NFS4_VERIFIER_SIZE];     |
   |               | Verifier used for various operations (COMMIT,     |
   |               | CREATE, OPEN, READDIR, SETCLIENTID,               |
   |               | SETCLIENTID_CONFIRM, WRITE).  NFS4_VERIFIER_SIZE  |
   |               | is defined as 8.                                  |
   +---------------+---------------------------------------------------+

                         End of Base Data Types

                                 Table 1

1.2.  Structured Data Types

1.2.1.  nfstime4

   struct nfstime4 {
           int64_t         seconds;
           uint32_t        nseconds;
   };

   The nfstime4 structure gives the number of seconds and nanoseconds
   since midnight or 0 hour January 1, 1970 Coordinated Universal Time
   (UTC).  Values greater than zero for the seconds field denote dates
   after the 0 hour January 1, 1970.  Values less than zero for the
   seconds field denote dates before the 0 hour January 1, 1970.  In
   both cases, the nseconds field is to be added to the seconds field
   for the final time representation.  For example, if the time to be
   represented is one-half second before 0 hour January 1, 1970, the
   seconds field would have a value of negative one (-1) and the
   nseconds field would have a value of 500,000,000 (one-half second).
   Values greater than 999,999,999 for nseconds are considered invalid.

   This data type is used to pass time and date information.  A server
   converts to and from its local representation of time when
   processing time values, preserving as much accuracy as possible.  If
   the precision of timestamps stored for a filesystem object is less
   than defined, loss of precision can occur.  An adjunct time
   maintenance protocol is recommended to reduce client and server time
   skew.
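   As an illustration of the encoding just described, the mapping
   between nfstime4 and a POSIX struct timespec is direct, since both
   conventions add a non-negative nanoseconds field to a signed
   seconds field.  The following C sketch shows the conversion in each
   direction; the helper names are invented for this example and are
   not part of the protocol.

   #include <stdint.h>
   #include <time.h>

   struct nfstime4 {
           int64_t  seconds;
           uint32_t nseconds;
   };

   /* -0.5 second is { .seconds = -1, .nseconds = 500000000 } in
    * both representations, so the mapping is a field-wise copy. */
   static struct timespec
   nfstime4_to_timespec(struct nfstime4 t)
   {
           struct timespec ts;

           ts.tv_sec  = (time_t)t.seconds;
           ts.tv_nsec = (long)t.nseconds;
           return ts;
   }

   static int
   timespec_to_nfstime4(struct timespec ts, struct nfstime4 *t)
   {
           if (ts.tv_nsec < 0 || ts.tv_nsec > 999999999)
                   return -1;      /* invalid per the rule above */
           t->seconds  = (int64_t)ts.tv_sec;
           t->nseconds = (uint32_t)ts.tv_nsec;
           return 0;
   }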
1.2.2.  time_how4

   enum time_how4 {
           SET_TO_SERVER_TIME4 = 0,
           SET_TO_CLIENT_TIME4 = 1
   };

1.2.3.  settime4

   union settime4 switch (time_how4 set_it) {
    case SET_TO_CLIENT_TIME4:
           nfstime4        time;
    default:
           void;
   };

   The above definitions are used as the attribute definitions to set
   time values.  If set_it is SET_TO_SERVER_TIME4, then the server uses
   its local representation of time for the time value.

1.2.4.  specdata4

   struct specdata4 {
           uint32_t specdata1; /* major device number */
           uint32_t specdata2; /* minor device number */
   };

   This data type represents additional information for the device file
   types NF4CHR and NF4BLK.

1.2.5.  fsid4

   struct fsid4 {
           uint64_t        major;
           uint64_t        minor;
   };

1.2.6.  fs_location4

   struct fs_location4 {
           utf8str_cis     server<>;
           pathname4       rootpath;
   };

1.2.7.  fs_locations4

   struct fs_locations4 {
           pathname4       fs_root;
           fs_location4    locations<>;
   };

   The fs_location4 and fs_locations4 data types are used for the
   fs_locations recommended attribute which is used for migration and
   replication support.

1.2.8.  fattr4

   struct fattr4 {
           bitmap4         attrmask;
           attrlist4       attr_vals;
   };

   The fattr4 structure is used to represent file and directory
   attributes.

   The bitmap is a counted array of 32 bit integers used to contain bit
   values.  The position of the integer in the array that contains bit
   n can be computed from the expression (n / 32) and its bit within
   that integer is (n mod 32).

                     0           1
   +-----------+-----------+-----------+--
   |  count    | 31  ..  0 | 63  .. 32 |
   +-----------+-----------+-----------+--

1.2.9.  change_info4

   struct change_info4 {
           bool            atomic;
           changeid4       before;
           changeid4       after;
   };

   This structure is used with the CREATE, LINK, REMOVE, and RENAME
   operations to let the client know the value of the change attribute
   for the directory in which the target filesystem object resides.

1.2.10.  clientaddr4

   struct clientaddr4 {
           /* see struct rpcb in RFC1833 */
           string  r_netid<>;   /* network id */
           string  r_addr<>;    /* universal address */
   };

   The clientaddr4 structure is used as part of the SETCLIENTID
   operation, either to specify the address of the client that is using
   a clientid or as part of the callback registration.  The r_netid and
   r_addr fields are specified in RFC1833 [9], but they are
   underspecified in RFC1833 [9] as far as what they should look like
   for specific protocols.

   For TCP over IPv4 and for UDP over IPv4, the format of r_addr is the
   US-ASCII string:

      h1.h2.h3.h4.p1.p2

   The prefix, "h1.h2.h3.h4", is the standard textual form for
   representing an IPv4 address, which is always four octets long.
   Assuming big-endian ordering, h1, h2, h3, and h4 are, respectively,
   the first through fourth octets each converted to ASCII-decimal.
   Assuming big-endian ordering, p1 and p2 are, respectively, the first
   and second octets each converted to ASCII-decimal.  For example, if
   a host, in big-endian order, has an address of 0x0A010307 and there
   is a service listening on, in big-endian order, port 0x020F (decimal
   527), then the complete universal address is "10.1.3.7.2.15".

   For TCP over IPv4 the value of r_netid is the string "tcp".  For UDP
   over IPv4 the value of r_netid is the string "udp".
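   To make the encoding concrete, the following C sketch builds the
   IPv4 universal address from a host address and port given in host
   byte order; the helper name is hypothetical.  Applied to the example
   above, format_uaddr4(0x0A010307, 0x020F, ...) produces
   "10.1.3.7.2.15".

   #include <stdint.h>
   #include <stdio.h>

   /* Sketch: format r_addr for TCP or UDP over IPv4. */
   static void
   format_uaddr4(uint32_t host, uint16_t port, char *buf, size_t len)
   {
           snprintf(buf, len, "%u.%u.%u.%u.%u.%u",
                    (host >> 24) & 0xff, (host >> 16) & 0xff,
                    (host >> 8) & 0xff,   host & 0xff,
                    (port >> 8) & 0xff,   port & 0xff);
   }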
   For TCP over IPv6 and for UDP over IPv6, the format of r_addr is the
   US-ASCII string:

      x1:x2:x3:x4:x5:x6:x7:x8.p1.p2

   The suffix "p1.p2" is the service port, and is computed the same way
   as with universal addresses for TCP and UDP over IPv4.  The prefix,
   "x1:x2:x3:x4:x5:x6:x7:x8", is the standard textual form for
   representing an IPv6 address as defined in Section 2.2 of RFC1884
   [5].  Additionally, the two alternative forms specified in Section
   2.2 of RFC1884 [5] are also acceptable.

   For TCP over IPv6 the value of r_netid is the string "tcp6".  For
   UDP over IPv6 the value of r_netid is the string "udp6".

1.2.11.  cb_client4

   struct cb_client4 {
           unsigned int    cb_program;
           clientaddr4     cb_location;
   };

   This structure is used by the client to inform the server of its
   callback address; it includes the program number and client address.

1.2.12.  nfs_client_id4

   struct nfs_client_id4 {
           verifier4       verifier;
           opaque          id<NFS4_OPAQUE_LIMIT>;
   };

   This structure is part of the arguments to the SETCLIENTID
   operation.  NFS4_OPAQUE_LIMIT is defined as 1024.

1.2.13.  open_owner4

   struct open_owner4 {
           clientid4       clientid;
           opaque          owner<NFS4_OPAQUE_LIMIT>;
   };

   This structure is used to identify the owner of open state.
   NFS4_OPAQUE_LIMIT is defined as 1024.

1.2.14.  lock_owner4

   struct lock_owner4 {
           clientid4       clientid;
           opaque          owner<NFS4_OPAQUE_LIMIT>;
   };

   This structure is used to identify the owner of file locking state.
   NFS4_OPAQUE_LIMIT is defined as 1024.

1.2.15.  open_to_lock_owner4

   struct open_to_lock_owner4 {
           seqid4          open_seqid;
           stateid4        open_stateid;
           seqid4          lock_seqid;
           lock_owner4     lock_owner;
   };

   This structure is used for the first LOCK operation done for an
   open_owner4.  It provides both the open_stateid and lock_owner such
   that the transition is made from a valid open_stateid sequence to
   that of the new lock_stateid sequence.  Using this mechanism avoids
   the confirmation of the lock_owner/lock_seqid pair since it is tied
   to established state in the form of the open_stateid/open_seqid.

1.2.16.  stateid4

   struct stateid4 {
           uint32_t        seqid;
           opaque          other[12];
   };

   This structure is used for the various state sharing mechanisms
   between the client and server.  For the client, this data structure
   is read-only.  The starting value of the seqid field is undefined.
   The server is required to increment the seqid field monotonically at
   each transition of the stateid.  This is important since the client
   will inspect the seqid in OPEN stateids to determine the order of
   OPEN processing done by the server.

1.2.17.  layouttype4

   enum layouttype4 {
           LAYOUT_NFSV4_FILES  = 1,
           LAYOUT_OSD2_OBJECTS = 2,
           LAYOUT_BLOCK_VOLUME = 3
   };

   A layout type specifies the layout being used.  The implication is
   that clients have "layout drivers" that support one or more layout
   types.  The file server advertises the layout types it supports
   through the LAYOUT_TYPES file system attribute.  A client asks for
   layouts of a particular type in LAYOUTGET, and passes those layouts
   to its layout driver.

   The set of well known layout types must be defined.  As well, a
   private range of layout types is to be defined by this document.
   This would allow custom installations to introduce new layout types.
   [[Comment.1: Determine private range of layout types]]  New layout
   types must be specified in RFCs approved by the IESG before becoming
   part of the pNFS specification.

   The LAYOUT_NFSV4_FILES enumeration specifies that the NFSv4 file
   layout type is to be used.  The LAYOUT_OSD2_OBJECTS enumeration
   specifies that the object layout, as defined in [10], is to be used.
   Similarly, the LAYOUT_BLOCK_VOLUME enumeration specifies that the
   block/volume layout, as defined in [11], is to be used.
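   The "layout driver" model described above suggests a simple client-
   side dispatch from the layout type to the driver that understands
   it.  The C sketch below is one possible shape for such a dispatch;
   the driver structure and names are invented for illustration and
   are not defined by this protocol.

   struct layout_driver {
           const char *name;
           /* ... entry points for I/O using this layout type ... */
   };

   /* Assumed to be provided elsewhere in the client implementation. */
   extern struct layout_driver nfsv4_file_driver;

   static struct layout_driver *
   select_layout_driver(enum layouttype4 type)
   {
           switch (type) {
           case LAYOUT_NFSV4_FILES:
                   return &nfsv4_file_driver;
           case LAYOUT_OSD2_OBJECTS:   /* object driver, if built */
           case LAYOUT_BLOCK_VOLUME:   /* block driver, if built */
           default:
                   /* Unsupported: the client should not request
                    * layouts of this type in LAYOUTGET. */
                   return 0;
           }
   }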
1.2.18.  pnfs_deviceid4

   typedef uint32_t pnfs_deviceid4;  /* 32-bit device ID */

   Layout information includes device IDs that specify a storage device
   through a compact handle.  Addressing and type information is
   obtained with the GETDEVICEINFO operation.  A client must not assume
   that device IDs are valid across metadata server reboots.  The
   device ID is qualified by the layout type and is unique per file
   system (FSID).  This allows different layout drivers to generate
   device IDs without the need for co-ordination.  See Section 14.1.4
   for more details.

1.2.19.  pnfs_netaddr4

   struct pnfs_netaddr4 {
           string  r_netid<>;   /* network ID */
           string  r_addr<>;    /* universal address */
   };

   For a description of the r_netid and r_addr fields, see the
   descriptions provided in the clientaddr4 structure description.

1.2.20.  pnfs_devlist_item4

   struct pnfs_devlist_item4 {
           pnfs_deviceid4  id;
           opaque          device_addr<>;
   };

   An array of these values is returned by the GETDEVICELIST operation.
   They define the set of devices associated with a file system for the
   layout type specified in the GETDEVICELIST4args.

   The device address is used to set up a communication channel with
   the storage device.  Different layout types will require different
   types of structures to define how they communicate with storage
   devices.

   The opaque device_addr field must be interpreted based on the
   specified layout type.  This document defines the device address for
   the NFSv4 file layout (struct pnfs_netaddr4), which identifies a
   storage device by network IP address and port number (similar to
   struct clientaddr4).  This is sufficient for the clients to
   communicate with the NFSv4 storage devices, and may be sufficient
   for other layout types as well.  Device types for object storage
   devices and block storage devices (e.g., SCSI volume labels) will be
   defined by their respective layout specifications.

1.2.21.  pnfs_layout4

   struct pnfs_layout4 {
           offset4              offset;
           length4              length;
           pnfs_layoutiomode4   iomode;
           pnfs_layouttype4     type;
           opaque               layout<>;
   };

   The pnfs_layout4 structure defines a layout for a file.  The layout
   type specific data is opaque within this structure and must be
   interpreted based on the layout type.  Currently, only the NFSv4
   file layout type is defined; see Section 16.1 for its definition.
   Since layouts are sub-dividable, the offset and length together with
   the file's filehandle, the clientid, iomode, and layout type
   identify the layout.

   [[Comment.2: There is a discussion of moving the striping
   information, or more generally the "aggregation scheme", up to the
   generic layout level.  This creates a two-layer system where the top
   level is a switch on different data placement layouts, and the next
   level down is a switch on different data storage types.  This lets
   different layouts (e.g., striping or mirroring or redundant servers)
   be layered over different storage devices.  This would move geometry
   information out of nfsv4_file_layouttype4 and up into a generic
   pnfs_striped_layout type that would specify a set of pnfs_deviceid4
   and pnfs_devicetype4 to use for storage.  Instead of
   nfsv4_file_layouttype4, there would be pnfs_nfsv4_devicetype4.]]
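   Because layouts are sub-dividable byte ranges, implementations end
   up comparing layout segments by offset and length.  The C sketch
   below shows one such comparison; the helpers are hypothetical, and
   the treatment of the all-ones uint64 length as "to end of file" is
   an assumption of this example rather than a rule stated here.

   #include <stdint.h>

   #define ALL_ONES 0xffffffffffffffffULL

   static uint64_t
   segment_end(uint64_t offset, uint64_t length)
   {
           /* Saturate on "whole file" lengths or on overflow. */
           if (length == ALL_ONES || offset + length < offset)
                   return ALL_ONES;
           return offset + length;
   }

   /* True if two segments of the same filehandle, clientid, iomode
    * and layout type cover any byte in common. */
   static int
   segments_overlap(uint64_t off1, uint64_t len1,
                    uint64_t off2, uint64_t len2)
   {
           return off1 < segment_end(off2, len2) &&
                  off2 < segment_end(off1, len1);
   }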
1.2.22.  pnfs_layoutupdate4

   struct pnfs_layoutupdate4 {
           pnfs_layouttype4  type;
           opaque            layoutupdate_data<>;
   };

   The pnfs_layoutupdate4 structure is used by the client to return
   'updated' layout information to the metadata server at LAYOUTCOMMIT
   time.  This structure provides a channel to pass layout type
   specific information back to the metadata server.  For example, for
   block/volume layout types this could include the list of reserved
   blocks that were written.  The contents of the opaque
   layoutupdate_data argument are determined by the layout type and are
   defined in their context.  The NFSv4 file-based layout does not use
   this structure, thus the layoutupdate_data field should have a zero
   length.

1.2.23.  layouthint4

   struct pnfs_layouthint4 {
           pnfs_layouttype4  type;
           opaque            layouthint_data<>;
   };

   The layouthint4 structure is used by the client to pass in a hint
   about the type of layout it would like created for a particular
   file.  It is the structure specified by the FILE_LAYOUT_HINT
   attribute described below.  The metadata server may ignore the
   hint, or may selectively ignore fields within the hint.  This hint
   should be provided at create time as part of the initial attributes
   within OPEN.  The NFSv4 file-based layout uses the
   "nfsv4_file_layouthint" structure as defined in Section 16.1.

1.2.24.  pnfs_layoutiomode4

   enum pnfs_layoutiomode4 {
           LAYOUTIOMODE_READ = 1,
           LAYOUTIOMODE_RW   = 2,
           LAYOUTIOMODE_ANY  = 3
   };

   The iomode specifies whether the client intends to read or write
   (with the possibility of reading) the data represented by the
   layout.  The ANY iomode MUST NOT be used for LAYOUTGET; however, it
   can be used for LAYOUTRETURN and LAYOUTRECALL.  The ANY iomode
   specifies that layouts pertaining to both READ and RW iomodes are
   being returned or recalled, respectively.  The metadata server's use
   of the iomode may depend on the layout type being used.  The storage
   devices may validate I/O accesses against the iomode and reject
   invalid accesses.

1.2.25.  nfs_impl_id4

   struct nfs_impl_id4 {
           utf8str_cis   nii_domain;
           utf8str_cs    nii_name;
           nfstime4      nii_date;
   };

   This structure is used to identify client and server implementation
   details.  The nii_domain field is the DNS domain name with which the
   implementer is associated.  The nii_name field is the product name
   of the implementation and is completely free form.  It is encouraged
   that the nii_name be used to distinguish machine architecture,
   machine platforms, revisions, versions, and patch levels.  The
   nii_date field is the timestamp of when the software instance was
   published or built.

1.2.26.  impl_ident4

   struct impl_ident4 {
           clientid4            ii_clientid;
           struct nfs_impl_id4  ii_impl_id;
   };

   This structure is used for exchanging implementation identification
   between client and server.
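   As a sketch of how a client might fill in these structures for the
   send_impl_id attribute, treating the UTF-8 string types as C strings
   for brevity (all values below are invented placeholders, not
   recommendations):

   struct nfs_impl_id4 impl = {
           .nii_domain = "client.example.org",  /* implementer domain */
           .nii_name   = "Example NFS client 2.1 (x86_64, patch 3)",
           .nii_date   = { .seconds = 1150761600, .nseconds = 0 }
                         /* timestamp of the build */
   };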
2.  Filehandles

   The filehandle in the NFS protocol is a per server unique identifier
   for a filesystem object.  The contents of the filehandle are opaque
   to the client.  Therefore, the server is responsible for translating
   the filehandle to an internal representation of the filesystem
   object.

2.1.  Obtaining the First Filehandle

   The operations of the NFS protocol are defined in terms of one or
   more filehandles.  Therefore, the client needs a filehandle to
   initiate communication with the server.  With the NFS version 2
   protocol [RFC1094] and the NFS version 3 protocol [RFC1813], there
   exists an ancillary protocol to obtain this first filehandle.  The
   MOUNT protocol, RPC program number 100005, provides the mechanism of
   translating a string based filesystem path name to a filehandle
   which can then be used by the NFS protocols.

   The MOUNT protocol has deficiencies in the areas of security and use
   via firewalls.  This is one reason that the use of the public
   filehandle was introduced in [RFC2054] and [RFC2055].  With the use
   of the public filehandle in combination with the LOOKUP operation in
   the NFS version 2 and 3 protocols, it has been demonstrated that the
   MOUNT protocol is unnecessary for viable interaction between NFS
   client and server.

   Therefore, the NFS version 4 protocol will not use an ancillary
   protocol for translation from string based path names to a
   filehandle.  Two special filehandles will be used as starting points
   for the NFS client.

2.1.1.  Root Filehandle

   The first of the special filehandles is the ROOT filehandle.  The
   ROOT filehandle is the "conceptual" root of the filesystem name
   space at the NFS server.  The client uses or starts with the ROOT
   filehandle by employing the PUTROOTFH operation.  The PUTROOTFH
   operation instructs the server to set the "current" filehandle to
   the ROOT of the server's file tree.  Once this PUTROOTFH operation
   is used, the client can then traverse the entirety of the server's
   file tree with the LOOKUP operation.  A complete discussion of the
   server name space is in the section "NFS Server Name Space".

2.1.2.  Public Filehandle

   The second special filehandle is the PUBLIC filehandle.  Unlike the
   ROOT filehandle, the PUBLIC filehandle may be bound to or represent
   an arbitrary filesystem object at the server.  The server is
   responsible for this binding.  It may be that the PUBLIC filehandle
   and the ROOT filehandle refer to the same filesystem object.
   However, it is up to the administrative software at the server and
   the policies of the server administrator to define the binding of
   the PUBLIC filehandle and server filesystem object.  The client may
   not make any assumptions about this binding.  The client uses the
   PUBLIC filehandle via the PUTPUBFH operation.
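   Putting these starting points together, a client that begins at the
   ROOT filehandle might obtain its first working filehandle with a
   single COMPOUND such as the following (the export path shown is
   hypothetical):

      PUTROOTFH
      LOOKUP "export"
      LOOKUP "home"
      GETFH

   This leaves the filehandle for /export/home as the result, with no
   ancillary MOUNT protocol involved.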
2.2.  Filehandle Types

   In the NFS version 2 and 3 protocols, there was one type of
   filehandle with a single set of semantics.  This type of filehandle
   is termed "persistent" in NFS Version 4.  The semantics of a
   persistent filehandle remain the same as before.  A new type of
   filehandle introduced in NFS Version 4 is the "volatile" filehandle,
   which attempts to accommodate certain server environments.

   The volatile filehandle type was introduced to address server
   functionality or implementation issues which make correct
   implementation of a persistent filehandle infeasible.  Some server
   environments do not provide a filesystem level invariant that can be
   used to construct a persistent filehandle.  The underlying server
   filesystem may not provide the invariant or the server's filesystem
   programming interfaces may not provide access to the needed
   invariant.  Volatile filehandles may ease the implementation of
   server functionality such as hierarchical storage management or
   filesystem reorganization or migration.  However, the volatile
   filehandle increases the implementation burden for the client.

   Since the client will need to handle persistent and volatile
   filehandles differently, a file attribute is defined which may be
   used by the client to determine the filehandle types being returned
   by the server.

2.2.1.  General Properties of a Filehandle

   The filehandle contains all the information the server needs to
   distinguish an individual file.  To the client, the filehandle is
   opaque.  The client stores filehandles for use in a later request
   and can compare two filehandles from the same server for equality by
   doing a byte-by-byte comparison.  However, the client MUST NOT
   otherwise interpret the contents of filehandles.  If two filehandles
   from the same server are equal, they MUST refer to the same file.
   Servers SHOULD try to maintain a one-to-one correspondence between
   filehandles and files, but this is not required.  Clients MUST use
   filehandle comparisons only to improve performance, not for correct
   behavior.  All clients need to be prepared for situations in which
   it cannot be determined whether two filehandles denote the same
   object and, in such cases, avoid making invalid assumptions which
   might cause incorrect behavior.  Further discussion of filehandle
   and attribute comparison in the context of data caching is presented
   in the section "Data Caching and File Identity".

   As an example, in the case that two different path names when
   traversed at the server terminate at the same filesystem object, the
   server SHOULD return the same filehandle for each path.  This can
   occur if a hard link is used to create two file names which refer to
   the same underlying file object and associated data.  For example,
   if paths /a/b/c and /a/d/c refer to the same file, the server SHOULD
   return the same filehandle for both path name traversals.

2.2.2.  Persistent Filehandle

   A persistent filehandle is defined as having a fixed value for the
   lifetime of the filesystem object to which it refers.  Once the
   server creates the filehandle for a filesystem object, the server
   MUST accept the same filehandle for the object for the lifetime of
   the object.  If the server restarts or reboots, the NFS server must
   honor the same filehandle value as it did in the server's previous
   instantiation.  Similarly, if the filesystem is migrated, the new
   NFS server must honor the same filehandle as the old NFS server.

   The persistent filehandle will become stale or invalid when the
   filesystem object is removed.  When the server is presented with a
   persistent filehandle that refers to a deleted object, it MUST
   return an error of NFS4ERR_STALE.  A filehandle may become stale
   when the filesystem containing the object is no longer available.
   The file system may become unavailable if it exists on removable
   media and the media is no longer available at the server, or the
   filesystem as a whole has been destroyed, or the filesystem has
   simply been removed from the server's name space (i.e. unmounted in
   a UNIX environment).

2.2.3.  Volatile Filehandle

   A volatile filehandle does not share the same longevity
   characteristics of a persistent filehandle.  The server may
   determine that a volatile filehandle is no longer valid at many
   different points in time.  If the server can definitively determine
   that a volatile filehandle refers to an object that has been
   removed, the server should return NFS4ERR_STALE to the client (as is
   the case for persistent filehandles).  In all other cases where the
   server determines that a volatile filehandle can no longer be used,
   it should return an error of NFS4ERR_FHEXPIRED.
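   A client-side C sketch of the distinction between the two errors
   (the recovery helpers are hypothetical; Section 2.4 discusses
   recovery in detail):

   /* Hypothetical recovery hooks; the error constants are those of
    * the NFSv4 XDR definition. */
   void drop_cached_handle(nfs_fh4 *fh);
   void recover_by_pathwalk(nfs_fh4 *fh);

   static void
   handle_fh_error(int status, nfs_fh4 *fh)
   {
           switch (status) {
           case NFS4ERR_STALE:
                   /* Object is gone: the handle is permanently
                    * unusable, so discard state keyed on it. */
                   drop_cached_handle(fh);
                   break;
           case NFS4ERR_FHEXPIRED:
                   /* Volatile handle expired: re-derive it, e.g. by
                    * looking up stored component names again. */
                   recover_by_pathwalk(fh);
                   break;
           default:
                   break;  /* other errors handled normally */
           }
   }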
   The mandatory attribute "fh_expire_type" is used by the client to
   determine what type of filehandle the server is providing for a
   particular filesystem.  This attribute is a bitmask with the
   following values:

   FH4_PERSISTENT  The value of FH4_PERSISTENT is used to indicate a
      persistent filehandle, which is valid until the object is
      removed from the filesystem.  The server will not return
      NFS4ERR_FHEXPIRED for this filehandle.  FH4_PERSISTENT is
      defined as a value in which none of the bits specified below are
      set.

   FH4_VOLATILE_ANY  The filehandle may expire at any time, except as
      specifically excluded (i.e. FH4_NOEXPIRE_WITH_OPEN).

   FH4_NOEXPIRE_WITH_OPEN  May only be set when FH4_VOLATILE_ANY is
      set.  If this bit is set, then the meaning of FH4_VOLATILE_ANY
      is qualified to exclude any expiration of the filehandle when it
      is open.

   FH4_VOL_MIGRATION  The filehandle will expire as a result of a file
      system transition (migration or replication), in those cases in
      which the continuity of filehandle use is not specified by
      _handle_ class information within the fs_locations_info
      attribute.  When this bit is set, clients without access to
      fs_locations_info information should assume file handles will
      expire on file system transitions.

   FH4_VOL_RENAME  The filehandle will expire during rename.  This
      includes a rename by the requesting client or a rename by any
      other client.  If FH4_VOLATILE_ANY is set, FH4_VOL_RENAME is
      redundant.

   Servers which provide volatile filehandles that may expire while
   open require special care as regards handling of RENAMEs and
   REMOVEs.  This situation can arise if FH4_VOL_MIGRATION or
   FH4_VOL_RENAME is set, if FH4_VOLATILE_ANY is set and
   FH4_NOEXPIRE_WITH_OPEN is not set, or if a non-readonly file system
   has a transition target in a different _handle_ class.  In these
   cases, the server should deny a RENAME or REMOVE that would affect
   an OPEN file of any of the components leading to the OPEN file.  In
   addition, the server should deny all RENAME or REMOVE requests
   during the grace period upon server restart, in order to make sure
   that reclaims of files where filehandles may have expired do not do
   a reclaim for the wrong file.

2.3.  One Method of Constructing a Volatile Filehandle

   A volatile filehandle, while opaque to the client, could contain:

      [volatile bit = 1 | server boot time | slot | generation number]

   o  slot is an index in the server volatile filehandle table

   o  generation number is the generation number for the table entry/
      slot

   When the client presents a volatile filehandle, the server makes
   the following checks, which assume that the check for the volatile
   bit has passed.  If the server boot time is less than the current
   server boot time, return NFS4ERR_FHEXPIRED.  If slot is out of
   range, return NFS4ERR_BADHANDLE.  If the generation number does not
   match, return NFS4ERR_FHEXPIRED.

   When the server reboots, the table is gone (it is volatile).

   If volatile bit is 0, then it is a persistent filehandle with a
   different structure following it.
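   The checks above translate almost directly into code.  The C sketch
   below assumes the example layout has already been decoded into a
   structure; the types, field extraction, and table representation are
   hypothetical, and the error constants are those of the NFSv4 XDR
   definition.

   #include <stdint.h>

   struct vfh {                    /* decoded volatile filehandle */
           uint64_t boot_time;
           uint32_t slot;
           uint32_t generation;
   };

   struct vfh_table {
           uint64_t boot_time;     /* current server boot time */
           uint32_t nslots;
           uint32_t *generation;   /* generation of each slot */
   };

   static int
   check_volatile_fh(const struct vfh_table *t, const struct vfh *fh)
   {
           if (fh->boot_time < t->boot_time)
                   return NFS4ERR_FHEXPIRED;  /* older server instance */
           if (fh->slot >= t->nslots)
                   return NFS4ERR_BADHANDLE;  /* slot out of range */
           if (fh->generation != t->generation[fh->slot])
                   return NFS4ERR_FHEXPIRED;  /* entry was reused */
           return NFS4_OK;
   }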
2.4.  Client Recovery from Filehandle Expiration

   If possible, the client SHOULD recover from the receipt of an
   NFS4ERR_FHEXPIRED error.  The client must take on additional
   responsibility so that it may prepare itself to recover from the
   expiration of a volatile filehandle.  If the server returns
   persistent filehandles, the client does not need these additional
   steps.

   For volatile filehandles, most commonly the client will need to
   store the component names leading up to and including the
   filesystem object in question.  With these names, the client should
   be able to recover by finding a filehandle in the name space that
   is still available or by starting at the root of the server's
   filesystem name space.

   If the expired filehandle refers to an object that has been removed
   from the filesystem, obviously the client will not be able to
   recover from the expired filehandle.

   It is also possible that the expired filehandle refers to a file
   that has been renamed.  If the file was renamed by another client,
   again it is possible that the original client will not be able to
   recover.  However, in the case that the client itself is renaming
   the file and the file is open, it is possible that the client may
   be able to recover.  The client can determine the new path name
   based on the processing of the rename request.  The client can then
   regenerate the new filehandle based on the new path name.  The
   client could also use the compound operation mechanism to construct
   a set of operations like:

      RENAME A B
      LOOKUP B
      GETFH

   Note that the COMPOUND procedure does not provide atomicity.  This
   example only reduces the overhead of recovering from an expired
   filehandle.

3.  File Attributes

   To meet the requirements of extensibility and increased
   interoperability with non-UNIX platforms, attributes must be
   handled in a flexible manner.  The NFS version 3 fattr3 structure
   contains a fixed list of attributes that not all clients and
   servers are able to support or care about.  The fattr3 structure
   cannot be extended as new needs arise and it provides no way to
   indicate non-support.  With the NFS version 4 protocol, the client
   is able to query what attributes the server supports and construct
   requests with only those supported attributes (or a subset
   thereof).

   To this end, attributes are divided into three groups: mandatory,
   recommended, and named.  Both mandatory and recommended attributes
   are supported in the NFS version 4 protocol by a specific and well-
   defined encoding and are identified by number.  They are requested
   by setting a bit in the bit vector sent in the GETATTR request; the
   server response includes a bit vector to list what attributes were
   returned in the response.  New mandatory or recommended attributes
   may be added to the NFS protocol between major revisions by
   publishing a standards-track RFC which allocates a new attribute
   number value and defines the encoding for the attribute.  See the
   section "Minor Versioning" for further discussion.
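   The bit vector is the counted bitmap4 array of Section 1.2.8, where
   bit n lives in word n / 32 at bit position n mod 32.  A minimal C
   sketch of setting and testing attribute bits follows; the helper
   names are invented, and the attribute numbers used are those of the
   tables below.

   #include <stdint.h>

   static void
   attr_set(uint32_t *bitmap, unsigned int n)
   {
           bitmap[n / 32] |= (uint32_t)1 << (n % 32);
   }

   static int
   attr_isset(const uint32_t *bitmap, unsigned int nwords,
              unsigned int n)
   {
           return n / 32 < nwords &&
                  (bitmap[n / 32] & ((uint32_t)1 << (n % 32))) != 0;
   }

   static void
   example_request(void)
   {
           uint32_t req[2] = { 0, 0 };  /* two-word counted array */

           attr_set(req, 1);            /* type */
           attr_set(req, 3);            /* change */
   }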
   Named attributes are accessed by the new OPENATTR operation, which
   accesses a hidden directory of attributes associated with a file
   system object.  OPENATTR takes a filehandle for the object and
   returns the filehandle for the attribute hierarchy.  The filehandle
   for the named attributes is a directory object accessible by LOOKUP
   or READDIR and contains files whose names represent the named
   attributes and whose data bytes are the value of the attribute.
   For example:

   +----------+-----------+---------------------------------+
   | LOOKUP   | "foo"     | ; look up file                  |
   | GETATTR  | attrbits  |                                 |
   | OPENATTR |           | ; access foo's named attributes |
   | LOOKUP   | "x11icon" | ; look up specific attribute    |
   | READ     | 0,4096    | ; read stream of bytes          |
   +----------+-----------+---------------------------------+

   Named attributes are intended for data needed by applications
   rather than by an NFS client implementation.  NFS implementors are
   strongly encouraged to define their new attributes as recommended
   attributes by bringing them to the IETF standards-track process.

   The set of attributes which are classified as mandatory is
   deliberately small since servers must do whatever it takes to
   support them.  A server should support as many of the recommended
   attributes as possible but by their definition, the server is not
   required to support all of them.  Attributes are deemed mandatory
   if the data is both needed by a large number of clients and is not
   otherwise reasonably computable by the client when support is not
   provided on the server.

   Note that the hidden directory returned by OPENATTR is a
   convenience for protocol processing.  The client should not make
   any assumptions about the server's implementation of named
   attributes and whether the underlying filesystem at the server has
   a named attribute directory or not.  Therefore, operations such as
   SETATTR and GETATTR on the named attribute directory are undefined.

3.1.  Mandatory Attributes

   These MUST be supported by every NFS version 4 client and server in
   order to ensure a minimum level of interoperability.  The server
   must store and return these attributes and the client must be able
   to function with an attribute set limited to these attributes.
   With just the mandatory attributes some client functionality may be
   impaired or limited in some ways.  A client may ask for any of
   these attributes to be returned by setting a bit in the GETATTR
   request and the server must return their value.

3.2.  Recommended Attributes

   These attributes are understood well enough to warrant support in
   the NFS version 4 protocol.  However, they may not be supported on
   all clients and servers.  A client may ask for any of these
   attributes to be returned by setting a bit in the GETATTR request
   but must handle the case where the server does not return them.  A
   client may ask for the set of attributes the server supports and
   should not request attributes the server does not support.  A
   server should be tolerant of requests for unsupported attributes
   and simply not return them rather than considering the request an
   error.  It is expected that servers will support all attributes
   they comfortably can and only fail to support attributes which are
   difficult to support in their operating environments.  A server
   should provide attributes whenever they don't have to "tell lies"
   to the client.  For example, a file modification time should be
   either an accurate time or should not be supported by the server.
   This will not always be comfortable to clients, but the client is
   better positioned to decide whether and how to fabricate or
   construct an attribute or whether to do without the attribute.
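   In code, this means a client consults the bitmap returned with the
   GETATTR response before decoding any recommended attribute, rather
   than assuming its request was honored.  A C fragment as a sketch,
   reusing the hypothetical bitmap helpers from the earlier example
   (FATTR4_OWNER is the attribute number constant for owner):

   /* res_attrmask/res_nwords: the decoded bitmap from the reply. */
   if (attr_isset(res_attrmask, res_nwords, FATTR4_OWNER)) {
           /* decode the owner string from the returned attr_vals */
   } else {
           /* Server does not supply owner for this object: fall
            * back locally (e.g. map to "nobody") rather than
            * treating the reply as an error. */
   }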
3.3. Named Attributes

These attributes are not supported by direct encoding in the NFS Version 4 protocol but are accessed by string names rather than numbers and correspond to an uninterpreted stream of bytes which are stored with the filesystem object. The name space for these attributes may be accessed by using the OPENATTR operation. The OPENATTR operation returns a filehandle for a virtual "attribute directory" and further perusal of the name space may be done using READDIR and LOOKUP operations on this filehandle. Named attributes may then be examined or changed by normal READ, WRITE, and CREATE operations on the filehandles returned from READDIR and LOOKUP. Named attributes may have attributes.

It is recommended that servers support arbitrary named attributes. A client should not depend on the ability to store any named attributes in the server's filesystem. If a server does support named attributes, a client which is also able to handle them should be able to copy a file's data and meta-data with complete transparency from one location to another; this would imply that names allowed for regular directory entries are valid for named attribute names as well.

Names of attributes will not be controlled by this document or other IETF standards track documents. See the section "IANA Considerations" for further discussion.

3.4. Classification of Attributes

Each of the Mandatory and Recommended attributes can be classified in one of three categories: per server, per filesystem, or per filesystem object. Note that it is possible that some per filesystem attributes may vary within the filesystem. See the "homogeneous" attribute for its definition. Note that the attributes time_access_set and time_modify_set are not listed in this section because they are write-only attributes corresponding to time_access and time_modify, and are used in a special instance of SETATTR.

o The per server attribute is:

      lease_time

o The per filesystem attributes are:

      supp_attr, fh_expire_type, link_support, symlink_support,
      unique_handles, aclsupport, cansettime, case_insensitive,
      case_preserving, chown_restricted, files_avail, files_free,
      files_total, fs_locations, homogeneous, maxfilesize, maxname,
      maxread, maxwrite, no_trunc, space_avail, space_free,
      space_total, time_delta, fs_layouttype, send_impl_id,
      recv_impl_id

o The per filesystem object attributes are:

      type, change, size, named_attr, fsid, rdattr_error, filehandle,
      ACL, archive, fileid, hidden, maxlink, mimetype, mode, numlinks,
      owner, owner_group, rawdev, space_used, system, time_access,
      time_backup, time_create, time_metadata, time_modify,
      mounted_on_fileid, layouttype, layouthint, layout_blksize,
      layout_alignment

For quota_avail_hard, quota_avail_soft, and quota_used see their definitions below for the appropriate classification.

3.5. Mandatory Attributes - Definitions

   name            |  # | Data Type  | Access | Description
   ----------------+----+------------+--------+------------------------
   supp_attr       |  0 | bitmap     | READ   | The bit vector which would retrieve all mandatory and recommended attributes that are supported for this object. The scope of this attribute applies to all objects with a matching fsid.
   type            |  1 | nfs4_ftype | READ   | The type of the object (file, directory, symlink, etc.)
   fh_expire_type  |  2 | uint32     | READ   | Server uses this to specify filehandle expiration behavior to the client. See the section "Filehandles" for additional description.
   change          |  3 | uint64     | READ   | A value created by the server that the client can use to determine if file data, directory contents or attributes of the object have been modified. The server may return the object's time_metadata attribute for this attribute's value, but only if the filesystem object cannot be updated more frequently than the resolution of time_metadata.
   size            |  4 | uint64     | R/W    | The size of the object in bytes.
   link_support    |  5 | bool       | READ   | True, if the object's filesystem supports hard links.
   symlink_support |  6 | bool       | READ   | True, if the object's filesystem supports symbolic links.
   named_attr      |  7 | bool       | READ   | True, if this object has named attributes. In other words, this object has a non-empty named attribute directory.
   fsid            |  8 | fsid4      | READ   | Unique filesystem identifier for the filesystem holding this object. fsid contains major and minor components, each of which are uint64.
   unique_handles  |  9 | bool       | READ   | True, if two distinct filehandles are guaranteed to refer to two different filesystem objects.
   lease_time      | 10 | nfs_lease4 | READ   | Duration of leases at server in seconds.
   rdattr_error    | 11 | enum       | READ   | Error returned from getattr during readdir.
   filehandle      | 19 | nfs_fh4    | READ   | The filehandle of this object (primarily for readdir requests).
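The change attribute defined above is the basis for client cache validation. The following sketch is illustrative only; fetch_change_attr() is a hypothetical helper that issues a GETATTR requesting attribute 3 (change).

   #include <stdint.h>
   #include <stdbool.h>

   struct cached_object {
       uint64_t change;   /* change value recorded when the cache was filled */
       /* ... cached data or attributes ... */
   };

   /* Hypothetical helper: GETATTR of the "change" attribute. */
   extern int fetch_change_attr(const char *path, uint64_t *change);

   bool cache_is_valid(const char *path, const struct cached_object *obj)
   {
       uint64_t now;

       if (fetch_change_attr(path, &now) != 0)
           return false;            /* on error, revalidate everything */
       return now == obj->change;   /* any change invalidates the cache */
   }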
3.6. Recommended Attributes - Definitions

   name               |  #  | Data Type    | Access | Description
   -------------------+-----+--------------+--------+------------------
   ACL                | 12  | nfsace4<>    | R/W    | The access control list for the object.
   aclsupport         | 13  | uint32       | READ   | Indicates what types of ACLs are supported on the current filesystem.
   archive            | 14  | bool         | R/W    | True, if this file has been archived since the time of last modification (deprecated in favor of time_backup).
   cansettime         | 15  | bool         | READ   | True, if the server is able to change the times for a filesystem object as specified in a SETATTR operation.
   case_insensitive   | 16  | bool         | READ   | True, if filename comparisons on this filesystem are case insensitive.
   case_preserving    | 17  | bool         | READ   | True, if filename case on this filesystem is preserved.
   chown_restricted   | 18  | bool         | READ   | If TRUE, the server will reject any request to change either the owner or the group associated with a file if the caller is not a privileged user (for example, "root" in UNIX operating environments or, in Windows 2000, the "Take Ownership" privilege).
   fileid             | 20  | uint64       | READ   | A number uniquely identifying the file within the filesystem.
   files_avail        | 21  | uint64       | READ   | File slots available to this user on the filesystem containing this object - this should be the smallest relevant limit.
   files_free         | 22  | uint64       | READ   | Free file slots on the filesystem containing this object - this should be the smallest relevant limit.
   files_total        | 23  | uint64       | READ   | Total file slots on the filesystem containing this object.
   fs_locations       | 24  | fs_locations | READ   | Locations where this filesystem may be found. If the server returns NFS4ERR_MOVED as an error, this attribute MUST be supported.
   hidden             | 25  | bool         | R/W    | True, if the file is considered hidden with respect to the Windows API.
   homogeneous        | 26  | bool         | READ   | True, if this object's filesystem is homogeneous, i.e., per filesystem attributes are the same for all of the filesystem's objects.
   maxfilesize        | 27  | uint64       | READ   | Maximum supported file size for the filesystem of this object.
   maxlink            | 28  | uint32       | READ   | Maximum number of links for this object.
   maxname            | 29  | uint32       | READ   | Maximum filename size supported for this object.
   maxread            | 30  | uint64       | READ   | Maximum read size supported for this object.
   maxwrite           | 31  | uint64       | READ   | Maximum write size supported for this object. This attribute SHOULD be supported if the file is writable. Lack of this attribute can lead to the client either wasting bandwidth or not receiving the best performance.
   mimetype           | 32  | utf8<>       | R/W    | MIME body type/subtype of this object.
   mode               | 33  | mode4        | R/W    | UNIX-style mode and permission bits for this object.
   no_trunc           | 34  | bool         | READ   | If TRUE, a name longer than name_max results in an error being returned and the name is not truncated.
   numlinks           | 35  | uint32       | READ   | Number of hard links to this object.
   owner              | 36  | utf8<>       | R/W    | The string name of the owner of this object.
   owner_group        | 37  | utf8<>       | R/W    | The string name of the group ownership of this object.
   quota_avail_hard   | 38  | uint64       | READ   | For definition see "Quota Attributes" section below.
   quota_avail_soft   | 39  | uint64       | READ   | For definition see "Quota Attributes" section below.
   quota_used         | 40  | uint64       | READ   | For definition see "Quota Attributes" section below.
   rawdev             | 41  | specdata4    | READ   | Raw device identifier; UNIX device major/minor node information. If the value of type is not NF4BLK or NF4CHR, the value returned SHOULD NOT be considered useful.
   space_avail        | 42  | uint64       | READ   | Disk space in bytes available to this user on the filesystem containing this object - this should be the smallest relevant limit.
   space_free         | 43  | uint64       | READ   | Free disk space in bytes on the filesystem containing this object - this should be the smallest relevant limit.
   space_total        | 44  | uint64       | READ   | Total disk space in bytes on the filesystem containing this object.
   space_used         | 45  | uint64       | READ   | Number of filesystem bytes allocated to this object.
   system             | 46  | bool         | R/W    | True, if this file is a "system" file with respect to the Windows API.
   time_access        | 47  | nfstime4     | READ   | The time of last access to the object by a read that was satisfied by the server.
   time_access_set    | 48  | settime4     | WRITE  | Set the time of last access to the object. SETATTR use only.
   time_backup        | 49  | nfstime4     | R/W    | The time of last backup of the object.
   time_create        | 50  | nfstime4     | R/W    | The time of creation of the object. This attribute does not have any relation to the traditional UNIX file attribute "ctime" or "change time".
   time_delta         | 51  | nfstime4     | READ   | Smallest useful server time granularity.
   time_metadata      | 52  | nfstime4     | READ   | The time of last meta-data modification of the object.
   time_modify        | 53  | nfstime4     | READ   | The time of last modification to the object.
   time_modify_set    | 54  | settime4     | WRITE  | Set the time of last modification to the object. SETATTR use only.
   mounted_on_fileid  | 55  | uint64       | READ   | Like fileid, but if the target filehandle is the root of a filesystem, return the fileid of the underlying directory.
   send_impl_id       | TBD | impl_ident4  | WRITE  | Client provides server with implementation identity via SETATTR.
   recv_impl_id       | TBD | nfs_impl_id4 | READ   | Client obtains the server's implementation identity via GETATTR.
   dir_notif_delay    | TBD | R/W          | READ   | Notification delays on directory attributes.
   dirent_notif_delay | TBD | R/W          | READ   | Notification delays on child attributes.
   fs_layouttype      | TBD | layouttype4  | READ   | Layout types available for the filesystem.
   layouttype         | TBD | layouttype4  | READ   | Layout types available for the file.
   layouthint         | TBD | layouthint4  | WRITE  | Client specified hint for file layout.
   layout_blksize     | TBD | uint32_t     | READ   | Preferred block size for layout related I/O.
   layout_alignment   | TBD | uint32_t     | READ   | Preferred alignment for layout related I/O.
   fs_absent          | TBD | bool         | READ   | Whether the current filesystem is present or absent.
   fs_locations_info  | TBD |              | READ   | Full function filesystem location.
   fs_status          | TBD | fs4_status   | READ   | Generic filesystem type information.
                      | TBD |              | READ   | desc
                      | TBD |              | READ   | desc

3.7. Time Access

As defined above, the time_access attribute represents the time of last access to the object by a read that was satisfied by the server. The notion of what is an "access" depends on the server's operating environment and/or the server's filesystem semantics. For example, for servers obeying POSIX semantics, time_access would be updated only by the READLINK, READ, and READDIR operations and not by any of the operations that modify the content of the object. Of course, setting the corresponding time_access_set attribute is another way to modify the time_access attribute.

Whenever the file object resides on a writable filesystem, the server should make best efforts to record time_access into stable storage. However, to mitigate the performance effects of doing so, and most especially whenever the server is satisfying the read of the object's content from its cache, the server MAY cache access time updates and lazily write them to stable storage. It is also acceptable to give administrators of the server the option to disable time_access updates.
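A minimal sketch of the lazy update strategy described above, assuming a hypothetical per-object in-memory state and externally provided storage helpers; none of these names are part of the protocol.

   #include <stdint.h>
   #include <stdbool.h>

   struct inode_state {
       int64_t atime_seconds;   /* last access, in seconds */
       bool    atime_dirty;     /* true if not yet on stable storage */
   };

   extern int64_t current_time_seconds(void);
   extern void write_atime_to_stable_storage(struct inode_state *ino);

   /* Called on every read satisfied by the server (possibly from cache):
    * the update is only noted in memory. */
   void note_read_access(struct inode_state *ino)
   {
       ino->atime_seconds = current_time_seconds();
       ino->atime_dirty = true;       /* defer the stable-storage write */
   }

   /* Called lazily, e.g. periodically or on eviction. */
   void flush_atime(struct inode_state *ino)
   {
       if (ino->atime_dirty) {
           write_atime_to_stable_storage(ino);
           ino->atime_dirty = false;
       }
   }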
3.8. Interpreting owner and owner_group

The recommended attributes "owner" and "owner_group" (and also users and groups within the "acl" attribute) are represented in terms of a UTF-8 string. To avoid a representation that is tied to a particular underlying implementation at the client or server, the use of the UTF-8 string has been chosen. Note that section 6.1 of [RFC2624] provides additional rationale. It is expected that the client and server will have their own local representation of owner and owner_group that is used for local storage or presentation to the end user. Therefore, it is expected that when these attributes are transferred between the client and server, the local representation is translated to a syntax of the form "user@dns_domain". This allows a client and server that do not use the same local representation to translate to a common syntax that can be interpreted by both.

Similarly, security principals may be represented in different ways by different security mechanisms. Servers normally translate these representations into a common format, generally that used by local storage, to serve as a means of identifying the users corresponding to these security principals. When these local identifiers are translated to the form of the owner attribute, associated with files created by such principals, they identify, in a common format, the users associated with each corresponding set of security principals.

The translation used to interpret owner and group strings is not specified as part of the protocol. This allows various solutions to be employed. For example, a local translation table may be consulted that maps from a numeric id to the user@dns_domain syntax. A name service may also be used to accomplish the translation. A server may provide a more general service, not limited by any particular translation (which would only translate a limited set of possible strings), by storing the owner and owner_group attributes in local storage without any translation. Alternatively, it may augment a translation method by storing the entire string for attributes for which no translation is available, while using the local representation for those cases in which a translation is available.

Servers that do not provide support for all possible values of the owner and owner_group attributes should return an error (NFS4ERR_BADOWNER) when a string is presented that has no translation, as the value to be set for a SETATTR of the owner, owner_group, or acl attributes. When a server does accept an owner or owner_group value as valid on a SETATTR (and similarly for the owner and group strings in an acl), it is promising to return that same string when a corresponding GETATTR is done. Configuration changes and ill-constructed name translations (those that contain aliasing) may make that promise impossible to honor. Servers should make appropriate efforts to avoid a situation in which these attributes have their values changed when no real change to ownership has occurred.
The "dns_domain" portion of the owner string is meant to be a DNS domain name. For example, user@ietf.org. Servers should accept as valid a set of users for at least one domain. A server may treat other domains as having no valid translations. A more general service is provided when a server is capable of accepting users for multiple domains, or for all domains, subject to security constraints.

In the case where there is no translation available to the client or server, the attribute value must be constructed without the "@". Therefore, the absence of the @ from the owner or owner_group attribute signifies that no translation was available at the sender and that the receiver of the attribute should not use that string as a basis for translation into its own internal format. Even though the attribute value can not be translated, it may still be useful. In the case of a client, the attribute string may be used for local display of ownership.

To provide a greater degree of compatibility with previous versions of NFS (i.e. v2 and v3), which identified users and groups by 32-bit unsigned uid's and gid's, owner and group strings that consist of decimal numeric values with no leading zeros can be given a special interpretation by clients and servers which choose to provide such support. The receiver may treat such a user or group string as representing the same user as would be represented by a v2/v3 uid or gid having the corresponding numeric value. A server is not obligated to accept such a string, but may return an NFS4ERR_BADOWNER instead. To avoid this mechanism being used to subvert user and group translation, so that a client might pass all of the owners and groups in numeric form, a server SHOULD return an NFS4ERR_BADOWNER error when there is a valid translation for the user or owner designated in this way. In that case, the client must use the appropriate name@domain string and not the special form for compatibility.

The owner string "nobody" may be used to designate an anonymous user, which will be associated with a file created by a security principal that cannot be mapped through normal means to the owner attribute.
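The receiver-side rules above (name@domain, numeric v2/v3 form with no leading zeros, untranslatable strings without "@") can be summarized in a short classification sketch; the enum and function names are invented for this illustration.

   #include <ctype.h>
   #include <string.h>

   enum owner_form {
       OWNER_NAME_AT_DOMAIN,   /* "user@dns_domain": translate locally   */
       OWNER_NUMERIC,          /* decimal, no leading zeros: v2/v3 uid   */
       OWNER_UNTRANSLATED      /* no "@": display only, do not translate */
   };

   enum owner_form classify_owner(const char *s)
   {
       size_t i, n = strlen(s);

       if (strchr(s, '@') != NULL)
           return OWNER_NAME_AT_DOMAIN;
       /* Numeric form: all digits, and no leading zero unless "0". */
       for (i = 0; i < n; i++)
           if (!isdigit((unsigned char)s[i]))
               return OWNER_UNTRANSLATED;
       if (n == 0 || (n > 1 && s[0] == '0'))
           return OWNER_UNTRANSLATED;
       return OWNER_NUMERIC;
   }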
3.9. Character Case Attributes

With respect to the case_insensitive and case_preserving attributes, each UCS-4 character (which UTF-8 encodes) has a "long descriptive name" [RFC1345] which may or may not include the word "CAPITAL" or "SMALL". The presence of SMALL or CAPITAL allows an NFS server to implement unambiguous and efficient table driven mappings for case insensitive comparisons, and non-case-preserving storage. For general character handling and internationalization issues, see the section "Internationalization".

3.10. Quota Attributes

For the attributes related to filesystem quotas, the following definitions apply:

quota_avail_soft The value in bytes which represents the amount of additional disk space that can be allocated to this file or directory before the user may reasonably be warned. It is understood that this space may be consumed by allocations to other files or directories, though there is a rule as to which other files or directories.

quota_avail_hard The value in bytes which represents the amount of additional disk space beyond the current allocation that can be allocated to this file or directory before further allocations will be refused. It is understood that this space may be consumed by allocations to other files or directories.

quota_used The value in bytes which represents the amount of disk space used by this file or directory and possibly a number of other similar files or directories, where the set of "similar" meets at least the criterion that allocating space to any file or directory in the set will reduce the "quota_avail_hard" of every other file or directory in the set.

Note that there may be a number of distinct but overlapping sets of files or directories for which a quota_used value is maintained, e.g. "all files with a given owner", "all files with a given group owner", etc. The server is at liberty to choose any of those sets but should do so in a repeatable way. The rule may be configured per-filesystem or may be "choose the set with the smallest quota".
3.11. mounted_on_fileid

UNIX-based operating environments connect a filesystem into the namespace by connecting (mounting) the filesystem onto the existing file object (the mount point, usually a directory) of an existing filesystem. When the mount point's parent directory is read via an API like readdir(), the return results are directory entries, each with a component name and a fileid. The fileid of the mount point's directory entry will be different from the fileid that the stat() system call returns. The stat() system call is returning the fileid of the root of the mounted filesystem, whereas readdir() is returning the fileid that stat() would have returned before any filesystems were mounted on the mount point.

Unlike NFS version 3, NFS version 4 allows a client's LOOKUP request to cross other filesystems. The client detects the filesystem crossing whenever the filehandle argument of LOOKUP has an fsid attribute different from that of the filehandle returned by LOOKUP. A UNIX-based client will consider this a "mount point crossing". UNIX has a legacy scheme for allowing a process to determine its current working directory. This relies on readdir() of a mount point's parent and stat() of the mount point returning fileids as previously described. The mounted_on_fileid attribute corresponds to the fileid that readdir() would have returned as described previously.

While the NFS version 4 client could simply fabricate a fileid corresponding to what mounted_on_fileid provides (and if the server does not support mounted_on_fileid, the client has no choice), there is a risk that the client will generate a fileid that conflicts with one that is already assigned to another object in the filesystem. Instead, if the server can provide the mounted_on_fileid, the potential for client operational problems in this area is eliminated.

If the server detects that there is no mount point at the target file object, then the value for mounted_on_fileid that it returns is the same as that of the fileid attribute.

The mounted_on_fileid attribute is RECOMMENDED, so the server SHOULD provide it if possible, and for a UNIX-based server, this is straightforward. Usually, mounted_on_fileid will be requested during a READDIR operation, in which case it is trivial (at least for UNIX-based servers) to return mounted_on_fileid since it is equal to the fileid of a directory entry returned by readdir(). If mounted_on_fileid is requested in a GETATTR operation, the server should obey an invariant that has it returning a value that is equal to the fileid of the file object's entry in the object's parent directory, i.e. what readdir() would have returned. Some operating environments allow a series of two or more filesystems to be mounted onto a single mount point. In this case, for the server to obey the aforementioned invariant, it will need to find the base mount point, and not the intermediate mount points.

3.12. send_impl_id and recv_impl_id

These recommended attributes are used to identify the client and server. In the case of the send_impl_id attribute, the client sends its clientid4 value along with the nfs_impl_id4. The use of the clientid4 value allows the server to identify and match specific client interaction. In the case of the recv_impl_id attribute, the client receives the nfs_impl_id4 value.

Access to this identification information can be most useful at both client and server. Being able to identify specific implementations can help in planning by administrators or implementers. For example, diagnostic software may extract this information in an attempt to identify implementation problems, performance workload behaviors or general usage statistics. Since the intent of having access to this information is for planning or general diagnosis only, the client and server MUST NOT interpret this implementation identity information in a way that affects interoperational behavior of the implementation. The reason is that if clients and servers did such a thing, they might use fewer capabilities of the protocol than the peer can support, or the client and server might refuse to interoperate.

Because it is likely some implementations will violate the protocol specification and interpret the identity information, implementations MUST allow the users of the NFSv4 client and server to set the contents of the sent nfs_impl_id structure to any value. Even though these attributes are recommended, if the server supports one of them it MUST support the other.

3.13. fs_layouttype

This attribute applies to a file system and indicates what layout types are supported by the file system. We expect this attribute to be queried when a client encounters a new fsid. This attribute is used by the client to determine if it has applicable layout drivers.

3.14. layouttype

This attribute indicates the particular layout type(s) used for a file. This is for informational purposes only. The client needs to use the LAYOUTGET operation in order to get enough information (e.g., specific device information) to perform I/O.

3.15. layouthint

This attribute may be set on newly created files to influence the metadata server's choice for the file's layout. It is suggested that this attribute be set as one of the initial attributes within the OPEN call. The metadata server may ignore this attribute. This attribute is a subset of the layout structure returned by LAYOUTGET. For example, instead of specifying particular devices, this would be used to suggest the stripe width of a file. It is up to the server implementation to determine which fields within the layout it uses.

[[Comment.3: it has been suggested that the HINT is a well defined type other than pnfs_layoutdata4, similar to pnfs_layoutupdate4.]]
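The fs_layouttype check described in Section 3.13 might look like the following sketch. The layouttype4 values and the driver table are placeholders invented for illustration; actual layout type values are assigned elsewhere in this specification.

   #include <stddef.h>

   typedef int layouttype4;

   struct layout_driver {
       layouttype4 type;
       const char *name;
   };

   /* Hypothetical set of layout drivers this client was built with;
    * the type numbers are placeholders, not assigned values. */
   static const struct layout_driver drivers[] = {
       { 1, "files" },
       { 2, "objects" },
   };

   /* Given the fs_layouttype values returned for a new fsid, find an
    * applicable local driver, or NULL to fall back to normal NFS I/O. */
   const struct layout_driver *
   find_layout_driver(const layouttype4 *fs_types, int ntypes)
   {
       for (int i = 0; i < ntypes; i++)
           for (size_t j = 0; j < sizeof(drivers)/sizeof(drivers[0]); j++)
               if (drivers[j].type == fs_types[i])
                   return &drivers[j];
       return NULL;
   }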
3.16. Access Control Lists

The NFS version 4 ACL attribute is an array of access control entries (ACEs). Although the client can read and write the ACL attribute, the NFSv4 model is that the server does all access control based on the server's interpretation of the ACL. If at any point the client wants to check access without issuing an operation that modifies or reads data or metadata, the client can use the OPEN and ACCESS operations to do so. There are various access control entry types, as defined in Section 3.16.1. The server is able to communicate which ACE types are supported by returning the appropriate value within the aclsupport attribute. Each ACE covers one or more operations on a file or directory as described in Section 3.16.2. It may also contain one or more flags that modify the semantics of the ACE as defined in Section 3.16.3.

The NFS ACE attribute is defined as follows:

   typedef uint32_t acetype4;
   typedef uint32_t aceflag4;
   typedef uint32_t acemask4;

   struct nfsace4 {
       acetype4       type;
       aceflag4       flag;
       acemask4       access_mask;
       utf8str_mixed  who;
   };

To determine if a request succeeds, each nfsace4 entry is processed in order by the server. Only ACEs which have a "who" that matches the requester are considered. Each ACE is processed until all of the bits of the requester's access have been ALLOWED. Once a bit (see below) has been ALLOWED by an ACCESS_ALLOWED_ACE, it is no longer considered in the processing of later ACEs. If an ACCESS_DENIED_ACE is encountered where the requester's access still has unALLOWED bits in common with the "access_mask" of the ACE, the request is denied. However, unlike the ALLOWED and DENIED ACE types, the ALARM and AUDIT ACE types do not affect a requester's access, and instead are for triggering events as a result of a requester's access attempt. Therefore, all AUDIT and ALARM ACEs are processed until the end of the ACL. When the ACL is fully processed, if there are bits in the requester's mask that have been neither ALLOWED nor DENIED, the access is denied.

Independent of this processing, servers may have other restrictions or implementation-defined security policies in place; in those cases, access may be decided outside of what is in the ACL. Examples of such security policies or restrictions are:

o The owner of the file will always be granted ACE4_WRITE_ACL and ACE4_READ_ACL permissions. This prevents the user from getting into the situation where they can't ever modify the ACL.

o The ACL may say that an entity is to be granted ACE4_WRITE_DATA permission, but the file system is mounted read only; therefore write access is denied.

As mentioned before, this is one of the reasons that client implementations are not recommended to do their own access checking.
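The evaluation procedure just described can be made concrete with a short sketch. This is an illustration of the algorithm in this section, not server implementation guidance; the simplified ace structure and who_matches() helper are invented for the example, and AUDIT/ALARM triggering is omitted.

   #include <stdint.h>
   #include <stdbool.h>
   #include <string.h>

   typedef uint32_t acetype4;
   typedef uint32_t acemask4;

   #define ACE4_ACCESS_ALLOWED_ACE_TYPE 0x00000000
   #define ACE4_ACCESS_DENIED_ACE_TYPE  0x00000001

   struct ace {
       acetype4    type;          /* ALLOW or DENY (AUDIT/ALARM omitted) */
       acemask4    access_mask;
       const char *who;
   };

   /* Hypothetical: real matching must handle OWNER@/GROUP@/EVERYONE@. */
   static bool who_matches(const char *who, const char *requester)
   {
       return strcmp(who, requester) == 0;
   }

   /* Returns true if "requested" access is granted to "requester". */
   bool acl_check(const struct ace *acl, int nace,
                  const char *requester, acemask4 requested)
   {
       acemask4 needed = requested;   /* bits not yet ALLOWED */

       for (int i = 0; i < nace && needed != 0; i++) {
           if (!who_matches(acl[i].who, requester))
               continue;
           if (acl[i].type == ACE4_ACCESS_ALLOWED_ACE_TYPE)
               needed &= ~acl[i].access_mask;    /* bits now ALLOWED */
           else if (acl[i].type == ACE4_ACCESS_DENIED_ACE_TYPE &&
                    (needed & acl[i].access_mask))
               return false;                     /* explicit denial */
       }
       return needed == 0;   /* undetermined bits remain: deny */
   }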
The NFS version 4 ACL model is quite rich. Some server platforms may provide access control functionality that goes beyond the UNIX-style mode attribute, but which is not as rich as the NFS ACL model. So that users can take advantage of this more limited functionality, the server may indicate that it supports ACLs as long as it follows the guidelines for mapping between its ACL model and the NFS version 4 ACL model.

The situation is complicated by the fact that a server may have multiple modules that enforce ACLs. For example, the enforcement for NFS version 4 access may be different from the enforcement for local access, and both may be different from the enforcement for access through other protocols such as SMB. So it may be useful for a server to accept an ACL even if not all of its modules are able to support it. The guiding principle in all cases is that the server must not accept ACLs that appear to make the file more secure than it really is.

3.16.1. ACE type

   Type   Description
   _____________________________________________________
   ALLOW  Explicitly grants the access defined in
          acemask4 to the file or directory.

   DENY   Explicitly denies the access defined in
          acemask4 to the file or directory.

   AUDIT  LOG (system dependent) any access attempt to a
          file or directory which uses any of the access
          methods specified in acemask4.

   ALARM  Generate a system ALARM (system dependent) when
          any access attempt is made to a file or
          directory for the access methods specified in
          acemask4.

A server need not support all of the above ACE types. The bitmask constants used to represent the above definitions within the aclsupport attribute are as follows:

   const ACL4_SUPPORT_ALLOW_ACL = 0x00000001;
   const ACL4_SUPPORT_DENY_ACL  = 0x00000002;
   const ACL4_SUPPORT_AUDIT_ACL = 0x00000004;
   const ACL4_SUPPORT_ALARM_ACL = 0x00000008;

The semantics of the "type" field follow the descriptions provided above. The constants used for the type field (acetype4) are as follows:

   const ACE4_ACCESS_ALLOWED_ACE_TYPE = 0x00000000;
   const ACE4_ACCESS_DENIED_ACE_TYPE  = 0x00000001;
   const ACE4_SYSTEM_AUDIT_ACE_TYPE   = 0x00000002;
   const ACE4_SYSTEM_ALARM_ACE_TYPE   = 0x00000003;

Clients should not attempt to set an ACE unless the server claims support for that ACE type. If the server receives a request to set an ACE that it cannot store, it MUST reject the request with NFS4ERR_ATTRNOTSUPP. If the server receives a request to set an ACE that it can store but cannot enforce, the server SHOULD reject the request with NFS4ERR_ATTRNOTSUPP.

Example: suppose a server can enforce NFS ACLs for NFS access but cannot enforce ACLs for local access. If arbitrary processes can run on the server, then the server SHOULD NOT indicate ACL support. On the other hand, if only trusted administrative programs run locally, then the server may indicate ACL support.
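A client can honor the "should not attempt to set" guidance above by mapping an ACE's type to the corresponding aclsupport bit before issuing SETATTR. A minimal sketch, reusing the constants just defined:

   #include <stdint.h>
   #include <stdbool.h>

   #define ACL4_SUPPORT_ALLOW_ACL 0x00000001
   #define ACL4_SUPPORT_DENY_ACL  0x00000002
   #define ACL4_SUPPORT_AUDIT_ACL 0x00000004
   #define ACL4_SUPPORT_ALARM_ACL 0x00000008

   #define ACE4_ACCESS_ALLOWED_ACE_TYPE 0x00000000
   #define ACE4_ACCESS_DENIED_ACE_TYPE  0x00000001
   #define ACE4_SYSTEM_AUDIT_ACE_TYPE   0x00000002
   #define ACE4_SYSTEM_ALARM_ACE_TYPE   0x00000003

   /* True if an ACE of the given type may be sent to a server whose
    * aclsupport attribute value is "aclsupport". */
   bool ace_type_supported(uint32_t acetype, uint32_t aclsupport)
   {
       switch (acetype) {
       case ACE4_ACCESS_ALLOWED_ACE_TYPE:
           return (aclsupport & ACL4_SUPPORT_ALLOW_ACL) != 0;
       case ACE4_ACCESS_DENIED_ACE_TYPE:
           return (aclsupport & ACL4_SUPPORT_DENY_ACL) != 0;
       case ACE4_SYSTEM_AUDIT_ACE_TYPE:
           return (aclsupport & ACL4_SUPPORT_AUDIT_ACL) != 0;
       case ACE4_SYSTEM_ALARM_ACE_TYPE:
           return (aclsupport & ACL4_SUPPORT_ALARM_ACL) != 0;
       default:
           return false;
       }
   }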
3.16.2. ACE Access Mask

The access_mask field contains values based on the following:

   ACE4_READ_DATA
      Operation(s) affected: READ, OPEN
      Discussion: Permission to read the data of the file.

   ACE4_LIST_DIRECTORY
      Operation(s) affected: READDIR
      Discussion: Permission to list the contents of a directory.

   ACE4_WRITE_DATA
      Operation(s) affected: WRITE, OPEN
      Discussion: Permission to modify a file's data anywhere in the
      file's offset range. This includes the ability to write to any
      arbitrary offset and, as a result, to grow the file.

   ACE4_ADD_FILE
      Operation(s) affected: CREATE, OPEN
      Discussion: Permission to add a new file in a directory. The
      CREATE operation is affected when nfs_ftype4 is NF4LNK, NF4BLK,
      NF4CHR, NF4SOCK, or NF4FIFO. (NF4DIR is not listed because it is
      covered by ACE4_ADD_SUBDIRECTORY.) OPEN is affected when used to
      create a regular file.

   ACE4_APPEND_DATA
      Operation(s) affected: WRITE, OPEN
      Discussion: The ability to modify a file's data, but only
      starting at EOF. This allows for the notion of append-only
      files, by allowing ACE4_APPEND_DATA and denying ACE4_WRITE_DATA
      to the same user or group. If a file has an ACL such as the one
      described above and a WRITE request is made for somewhere other
      than EOF, the server SHOULD return NFS4ERR_ACCESS.

   ACE4_ADD_SUBDIRECTORY
      Operation(s) affected: CREATE
      Discussion: Permission to create a subdirectory in a directory.
      The CREATE operation is affected when nfs_ftype4 is NF4DIR.

   ACE4_READ_NAMED_ATTRS
      Operation(s) affected: OPENATTR
      Discussion: Permission to read the named attributes of a file or
      to lookup the named attributes directory. OPENATTR is affected
      when it is not used to create a named attribute directory. This
      is when (1) createdir is TRUE, but a named attribute directory
      already exists, or (2) createdir is FALSE.

   ACE4_WRITE_NAMED_ATTRS
      Operation(s) affected: OPENATTR
      Discussion: Permission to write the named attributes of a file
      or to create a named attribute directory. OPENATTR is affected
      when it is used to create a named attribute directory. This is
      when createdir is TRUE and no named attribute directory exists.
      The ability to check whether or not a named attribute directory
      exists depends on the ability to look it up; therefore, users
      also need the ACE4_READ_NAMED_ATTRS permission in order to
      create a named attribute directory.

   ACE4_EXECUTE
      Operation(s) affected: LOOKUP
      Discussion: Permission to execute a file or traverse/search a
      directory.

   ACE4_DELETE_CHILD
      Operation(s) affected: REMOVE
      Discussion: Permission to delete a file or directory within a
      directory. See section "ACE4_DELETE vs. ACE4_DELETE_CHILD" for
      information on how these two access mask bits interact.

   ACE4_READ_ATTRIBUTES
      Operation(s) affected: GETATTR of file system object attributes
      Discussion: The ability to read basic attributes (non-ACLs) of a
      file. On a UNIX system, basic attributes can be thought of as
      the stat level attributes. Allowing this access mask bit would
      mean the entity can execute "ls -l" and stat.

   ACE4_WRITE_ATTRIBUTES
      Operation(s) affected: SETATTR of time_access_set, time_backup,
      time_create, time_modify_set
      Discussion: Permission to change the times associated with a
      file or directory to an arbitrary value. A user having
      ACE4_WRITE_DATA permission but lacking ACE4_WRITE_ATTRIBUTES
      must be allowed to implicitly set the times associated with a
      file.

   ACE4_DELETE
      Operation(s) affected: REMOVE
      Discussion: Permission to delete the file or directory. See
      section "ACE4_DELETE vs. ACE4_DELETE_CHILD" for information on
      how these two access mask bits interact.

   ACE4_READ_ACL
      Operation(s) affected: GETATTR of acl
      Discussion: Permission to read the ACL.

   ACE4_WRITE_ACL
      Operation(s) affected: SETATTR of acl and mode
      Discussion: Permission to write the acl and mode attributes.

   ACE4_WRITE_OWNER
      Operation(s) affected: SETATTR of owner and owner_group
      Discussion: Permission to write the owner and owner_group
      attributes. On UNIX systems, this is the ability to execute
      chown or chgrp.

   ACE4_SYNCHRONIZE
      Operation(s) affected: NONE
      Discussion: Permission to access the file locally at the server
      with synchronized reads and writes.

The bitmask constants used for the access mask field are as follows:

   const ACE4_READ_DATA         = 0x00000001;
   const ACE4_LIST_DIRECTORY    = 0x00000001;
   const ACE4_WRITE_DATA        = 0x00000002;
   const ACE4_ADD_FILE          = 0x00000002;
   const ACE4_APPEND_DATA       = 0x00000004;
   const ACE4_ADD_SUBDIRECTORY  = 0x00000004;
   const ACE4_READ_NAMED_ATTRS  = 0x00000008;
   const ACE4_WRITE_NAMED_ATTRS = 0x00000010;
   const ACE4_EXECUTE           = 0x00000020;
   const ACE4_DELETE_CHILD      = 0x00000040;
   const ACE4_READ_ATTRIBUTES   = 0x00000080;
   const ACE4_WRITE_ATTRIBUTES  = 0x00000100;
   const ACE4_DELETE            = 0x00010000;
   const ACE4_READ_ACL          = 0x00020000;
   const ACE4_WRITE_ACL         = 0x00040000;
   const ACE4_WRITE_OWNER       = 0x00080000;
   const ACE4_SYNCHRONIZE       = 0x00100000;
Server implementations need not provide the granularity of control that is implied by this list of masks. For example, POSIX-based systems might not distinguish APPEND_DATA (the ability to append to a file) from WRITE_DATA (the ability to modify existing contents); both masks would be tied to a single "write" permission. When such a server returns attributes to the client, it would show both APPEND_DATA and WRITE_DATA if and only if the write permission is enabled.

If a server receives a SETATTR request that it cannot accurately implement, it should err in the direction of more restricted access. For example, suppose a server cannot distinguish overwriting data from appending new data, as described in the previous paragraph. If a client submits an ACE where APPEND_DATA is set but WRITE_DATA is not (or vice versa), the server should reject the request with NFS4ERR_ATTRNOTSUPP. Nonetheless, if the ACE has type DENY, the server may silently turn on the other bit, so that both APPEND_DATA and WRITE_DATA are denied.
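For a POSIX-style server of the kind described above, the coupling of WRITE_DATA and APPEND_DATA might be expressed as in the following sketch of the behavior in the two preceding paragraphs; the enum and function names are invented, and POSIX write-bit handling is simplified.

   #include <stdint.h>
   #include <stdbool.h>

   #define ACE4_WRITE_DATA  0x00000002
   #define ACE4_APPEND_DATA 0x00000004

   enum { ALLOW_ACE, DENY_ACE };

   /* Reporting: a single POSIX "write" permission maps to both bits. */
   uint32_t report_write_bits(bool posix_write_allowed)
   {
       return posix_write_allowed ? (ACE4_WRITE_DATA | ACE4_APPEND_DATA)
                                  : 0;
   }

   /* Accepting a SETATTR: an ALLOW ACE with only one of the two bits
    * cannot be implemented accurately and is rejected; a DENY ACE may
    * have the missing bit silently turned on (more restrictive).
    * Returns 0 to accept, -1 to reject with NFS4ERR_ATTRNOTSUPP. */
   int accept_write_mask(int ace_type, uint32_t *mask)
   {
       uint32_t both = ACE4_WRITE_DATA | ACE4_APPEND_DATA;
       uint32_t have = *mask & both;

       if (have == 0 || have == both)
           return 0;               /* accurately implementable as-is */
       if (ace_type == DENY_ACE) {
           *mask |= both;          /* deny both bits together */
           return 0;
       }
       return -1;
   }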
3.16.2.1. ACE4_DELETE vs. ACE4_DELETE_CHILD

There are two separate access mask bits that govern the ability to delete a file: ACE4_DELETE and ACE4_DELETE_CHILD. ACE4_DELETE is intended to be specified by the ACL for the object to be deleted, and ACE4_DELETE_CHILD is intended to be specified by the ACL of the parent directory.

In addition to ACE4_DELETE and ACE4_DELETE_CHILD, many systems also consider the "sticky bit" (MODE4_SVTX) and the appropriate "write" mode bit when determining whether to allow a file to be deleted. The mode bit for write corresponds to ACE4_WRITE_DATA, which is the same physical bit as ACE4_ADD_FILE. Therefore, ACE4_ADD_FILE can come into play when determining permission to delete.

In the algorithm below, the strategy is that ACE4_DELETE and ACE4_DELETE_CHILD take precedence over the sticky bit, and the sticky bit takes precedence over the "write" mode bits (reflected in ACE4_ADD_FILE). Server implementations SHOULD grant or deny permission to delete based on the following algorithm:

   if ACE4_EXECUTE is denied by the parent directory ACL:
       deny delete
   else if ACE4_EXECUTE is unspecified by the parent directory ACL:
       deny delete
   else if ACE4_DELETE is allowed by the target object ACL:
       allow delete
   else if ACE4_DELETE_CHILD is allowed by the parent directory ACL:
       allow delete
   else if ACE4_DELETE_CHILD is denied by the parent directory ACL:
       deny delete
   else if ACE4_ADD_FILE is allowed by the parent directory ACL:
       if MODE4_SVTX is set for the parent directory:
           if the principal owns the parent directory OR
              the principal owns the target object OR
              ACE4_WRITE_DATA is allowed by the target object ACL:
               allow delete
           else:
               deny delete
       else:
           allow delete
   else:
       deny delete

3.16.3. ACE flag

The "flag" field contains values based on the following descriptions.

   ACE4_FILE_INHERIT_ACE
      Can be placed on a directory and indicates that this ACE should
      be added to each new non-directory file created.

   ACE4_DIRECTORY_INHERIT_ACE
      Can be placed on a directory and indicates that this ACE should
      be added to each new directory created.

   ACE4_INHERIT_ONLY_ACE
      Can be placed on a directory but does not apply to the
      directory, only to newly created files/directories as specified
      by the above two flags.

   ACE4_NO_PROPAGATE_INHERIT_ACE
      Can be placed on a directory. Normally when a new directory is
      created and an ACE exists on the parent directory which is
      marked ACE4_DIRECTORY_INHERIT_ACE, two ACEs are placed on the
      new directory: one for the directory itself and one which is an
      inheritable ACE for newly created directories. This flag tells
      the server to not place an ACE on the newly created directory
      which is inheritable by subdirectories of the created directory.

   ACE4_SUCCESSFUL_ACCESS_ACE_FLAG
   ACE4_FAILED_ACCESS_ACE_FLAG
      The ACE4_SUCCESSFUL_ACCESS_ACE_FLAG (SUCCESS) and
      ACE4_FAILED_ACCESS_ACE_FLAG (FAILED) flag bits relate only to
      ACE4_SYSTEM_AUDIT_ACE_TYPE (AUDIT) and
      ACE4_SYSTEM_ALARM_ACE_TYPE (ALARM) ACE types. If during the
      processing of the file's ACL, the server encounters an AUDIT or
      ALARM ACE that matches the principal attempting the OPEN, the
      server notes that fact, and the presence, if any, of the SUCCESS
      and FAILED flags encountered in the AUDIT or ALARM ACE. Once the
      server completes the ACL processing, the share reservation
      processing, and the OPEN call, it then notes if the OPEN
      succeeded or failed. If the OPEN succeeded, and if the SUCCESS
      flag was set for a matching AUDIT or ALARM ACE, then the
      appropriate AUDIT or ALARM event occurs. If the OPEN failed, and
      if the FAILED flag was set for the matching AUDIT or ALARM ACE,
      then the appropriate AUDIT or ALARM event occurs. Either or both
      of the SUCCESS or FAILED flags can be set, but if neither is
      set, the AUDIT or ALARM ACE is not useful.

      The previously described processing applies to the ACCESS
      operation as well, except that "success" or "failure" does not
      mean whether ACCESS returns NFS4_OK or not. Success means that
      ACCESS returns all requested and supported bits; failure means
      that ACCESS failed to return a bit that was requested and
      supported.

   ACE4_IDENTIFIER_GROUP
      Indicates that the "who" refers to a GROUP as defined under UNIX
      or a GROUP ACCOUNT as defined under Windows. Clients and servers
      may ignore the ACE4_IDENTIFIER_GROUP flag on ACEs with a who
      value equal to one of the special identifiers outlined in
      section "ACE who".
The bitmask constants used for the flag field are as follows:

   const ACE4_FILE_INHERIT_ACE           = 0x00000001;
   const ACE4_DIRECTORY_INHERIT_ACE      = 0x00000002;
   const ACE4_NO_PROPAGATE_INHERIT_ACE   = 0x00000004;
   const ACE4_INHERIT_ONLY_ACE           = 0x00000008;
   const ACE4_SUCCESSFUL_ACCESS_ACE_FLAG = 0x00000010;
   const ACE4_FAILED_ACCESS_ACE_FLAG     = 0x00000020;
   const ACE4_IDENTIFIER_GROUP           = 0x00000040;

A server need not support any of these flags. If the server supports flags that are similar to, but not exactly the same as, these flags, the implementation may define a mapping between the protocol-defined flags and the implementation-defined flags. Again, the guiding principle is that the file not appear to be more secure than it really is.

For example, suppose a client tries to set an ACE with ACE4_FILE_INHERIT_ACE set but not ACE4_DIRECTORY_INHERIT_ACE. If the server does not support any form of ACL inheritance, the server should reject the request with NFS4ERR_ATTRNOTSUPP. If the server supports a single "inherit ACE" flag that applies to both files and directories, the server may reject the request (i.e., requiring the client to set both the file and directory inheritance flags). The server may also accept the request and silently turn on the ACE4_DIRECTORY_INHERIT_ACE flag.

3.16.4. ACE who

There are several special identifiers ("who") which need to be understood universally, rather than in the context of a particular DNS domain. Some of these identifiers cannot be understood when an NFS client accesses the server, but have meaning when a local process accesses the file. The ability to display and modify these permissions is permitted over NFS, even if none of the access methods on the server understands the identifiers.

   Who              Description
   _______________________________________________________________
   "OWNER"          The owner of the file.
   "GROUP"          The group associated with the file.
   "EVERYONE"       The world, including the owner and owning group.
   "INTERACTIVE"    Accessed from an interactive terminal.
   "NETWORK"        Accessed via the network.
   "DIALUP"         Accessed as a dialup user to the server.
   "BATCH"          Accessed from a batch job.
   "ANONYMOUS"      Accessed without any authentication.
   "AUTHENTICATED"  Any authenticated user (opposite of ANONYMOUS).
   "SERVICE"        Access from a system service.

To avoid conflict, these special identifiers are distinguished by an appended "@" and should appear in the form "xxxx@" (note: no domain name after the "@"). For example: ANONYMOUS@.

3.16.4.1. Discussion on EVERYONE@

It is important to note that "EVERYONE@" is not equivalent to the UNIX "other" entity. This is because, by definition, UNIX "other" does not include the owner or owning group of a file. "EVERYONE@" means literally everyone, including the owner or owning group.

3.16.4.2. Discussion on OWNER@ and GROUP@

Due to the use of the special identifiers "OWNER@" and "GROUP@" to indicate that an ACE applies to the owner and owning group, respectively, associated with a file, the ACL cannot be used to determine the owner and owning group of a file. This information should be indicated by the values of the owner and owner_group file attributes returned by the server.
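A sketch of matching the special identifiers against a requester, reflecting the EVERYONE@ semantics just discussed. The struct and helper name are invented for this illustration; a real implementation must also resolve ordinary name@domain principals and the remaining special identifiers.

   #include <string.h>
   #include <stdbool.h>

   struct requester {
       const char *principal;   /* e.g. "alice@example.net" (hypothetical) */
       bool is_owner;           /* matches the file's owner attribute */
       bool in_owning_group;    /* member of the file's owner_group */
   };

   bool who_applies(const char *who, const struct requester *r)
   {
       if (strcmp(who, "OWNER@") == 0)
           return r->is_owner;
       if (strcmp(who, "GROUP@") == 0)
           return r->in_owning_group;
       if (strcmp(who, "EVERYONE@") == 0)
           return true;   /* literally everyone, including owner and group */
       return strcmp(who, r->principal) == 0;
   }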
3.16.5. Mode Attribute

The NFS version 4 mode attribute is based on the UNIX mode bits. The following bits are defined:

   const MODE4_SUID = 0x800;  /* set user id on execution */
   const MODE4_SGID = 0x400;  /* set group id on execution */
   const MODE4_SVTX = 0x200;  /* save text even after use */
   const MODE4_RUSR = 0x100;  /* read permission: owner */
   const MODE4_WUSR = 0x080;  /* write permission: owner */
   const MODE4_XUSR = 0x040;  /* execute permission: owner */
   const MODE4_RGRP = 0x020;  /* read permission: group */
   const MODE4_WGRP = 0x010;  /* write permission: group */
   const MODE4_XGRP = 0x008;  /* execute permission: group */
   const MODE4_ROTH = 0x004;  /* read permission: other */
   const MODE4_WOTH = 0x002;  /* write permission: other */
   const MODE4_XOTH = 0x001;  /* execute permission: other */

Bits MODE4_RUSR, MODE4_WUSR, and MODE4_XUSR apply to the principal identified in the owner attribute. Bits MODE4_RGRP, MODE4_WGRP, and MODE4_XGRP apply to the principals identified in the owner_group attribute. Bits MODE4_ROTH, MODE4_WOTH, and MODE4_XOTH apply to any principal that does not match that in the owner attribute and does not have a group matching that of the owner_group attribute.

The remaining bits are not defined by this protocol and MUST NOT be used. The minor version mechanism must be used to define further bit usage.

Note that in UNIX, if a file has the MODE4_SGID bit set and no MODE4_XGRP bit set, then READ and WRITE must use mandatory file locking.

3.16.6. Interaction Between Mode and ACL Attributes

As defined, there is a certain amount of overlap between the ACL and mode file attributes. Even though there is overlap, ACLs do not contain all the information specified by a mode, and modes cannot possibly contain all the information specified by an ACL. For servers that support both mode and ACL, the mode's MODE4_R*, MODE4_W* and MODE4_X* values should be computed from the ACL and should be recomputed upon each SETATTR of ACL. Similarly, upon SETATTR of mode, the ACL should be modified in order to allow the mode computed from the ACL to be the same as the mode given to SETATTR. The mode computed from any given ACL should be deterministic: given an ACL, the same mode will always be computed.

For servers that support ACL and not mode, clients may handle applications which set and get the mode by creating the correct ACL to send to the server and by computing the mode from the ACL, respectively. In this case, the methods used by the server to keep the mode in sync with the ACL can also be used by the client. These methods are explained in Section 3.16.6.1, Section 3.16.6.2, and Section 3.16.6.3.

Since the mode cannot possibly represent all of the information that is defined by an ACL, there are some discrepancies to be aware of. As explained in the section "Deficiencies in a Mode Representation of an ACL", the mode bits computed from the ACL could potentially convey more restrictive permissions than what would be granted via the ACL. Because of this, it is not recommended that clients do their own access checks based on the mode of a file.

Because the mode attribute includes bits (i.e. MODE4_SUID, MODE4_SGID, MODE4_SVTX) that have nothing to do with ACL semantics, it is permitted for clients to specify both the ACL attribute and mode in the same SETATTR operation. However, because there is no prescribed order for processing the attributes in a SETATTR, clients may see differing results. For recommendations on how to achieve consistent behavior, see Section 3.16.6.4.
3.16.6.1. Recomputing mode upon SETATTR of ACL

Keeping the mode and ACL attributes synchronized is important, but as mentioned previously, the mode cannot possibly represent all of the information in the ACL. Still, the mode should be modified to represent the access as accurately as possible.

The general algorithm to assign a new mode attribute to an object based on a new ACL being set is:

1. Walk through the ACEs in order, looking for ACEs with a "who" value of OWNER@, GROUP@, or EVERYONE@.

2. It is understood that ACEs with a "who" value of OWNER@ affect the *USR bits of the mode, GROUP@ affect *GRP bits, and EVERYONE@ affect *USR, *GRP, and *OTH bits.

3. If such an ACE specifies ALLOW or DENY for ACE4_READ_DATA, ACE4_WRITE_DATA, or ACE4_EXECUTE, and the mode bits affected have not been determined yet, set them to one (if ALLOW) or zero (if DENY).

4. Upon completion, any mode bits as yet undetermined have a value of zero.

This pseudocode more precisely describes the algorithm:

   /* octal constants for the mode bits */
   SUID = 04000
   SGID = 02000
   SVTX = 01000
   RUSR = 0400
   WUSR = 0200
   XUSR = 0100
   RGRP = 0040
   WGRP = 0020
   XGRP = 0010
   ROTH = 0004
   WOTH = 0002
   XOTH = 0001

   /*
    * old_mode represents the previous value
    * of the mode of the object.
    */
   mode_t mode = 0, seen = 0;

   for each ACE a {
       if a.type is ALLOW or DENY and
          ACE4_INHERIT_ONLY_ACE is not set in a.flags {
           if a.who is OWNER@ {
               if ((a.mask & ACE4_READ_DATA) && (! (seen & RUSR))) {
                   seen |= RUSR;
                   if a.type is ALLOW { mode |= RUSR; }
               }
               if ((a.mask & ACE4_WRITE_DATA) && (! (seen & WUSR))) {
                   seen |= WUSR;
                   if a.type is ALLOW { mode |= WUSR; }
               }
               if ((a.mask & ACE4_EXECUTE) && (! (seen & XUSR))) {
                   seen |= XUSR;
                   if a.type is ALLOW { mode |= XUSR; }
               }
           } else if a.who is GROUP@ {
               if ((a.mask & ACE4_READ_DATA) && (! (seen & RGRP))) {
                   seen |= RGRP;
                   if a.type is ALLOW { mode |= RGRP; }
               }
               if ((a.mask & ACE4_WRITE_DATA) && (! (seen & WGRP))) {
                   seen |= WGRP;
                   if a.type is ALLOW { mode |= WGRP; }
               }
               if ((a.mask & ACE4_EXECUTE) && (! (seen & XGRP))) {
                   seen |= XGRP;
                   if a.type is ALLOW { mode |= XGRP; }
               }
           } else if a.who is EVERYONE@ {
               if (a.mask & ACE4_READ_DATA) {
                   if ! (seen & RUSR) {
                       seen |= RUSR;
                       if a.type is ALLOW { mode |= RUSR; }
                   }
                   if ! (seen & RGRP) {
                       seen |= RGRP;
                       if a.type is ALLOW { mode |= RGRP; }
                   }
                   if ! (seen & ROTH) {
                       seen |= ROTH;
                       if a.type is ALLOW { mode |= ROTH; }
                   }
               }
               if (a.mask & ACE4_WRITE_DATA) {
                   if ! (seen & WUSR) {
                       seen |= WUSR;
                       if a.type is ALLOW { mode |= WUSR; }
                   }
                   if ! (seen & WGRP) {
                       seen |= WGRP;
                       if a.type is ALLOW { mode |= WGRP; }
                   }
                   if ! (seen & WOTH) {
                       seen |= WOTH;
                       if a.type is ALLOW { mode |= WOTH; }
                   }
               }
               if (a.mask & ACE4_EXECUTE) {
                   if ! (seen & XUSR) {
                       seen |= XUSR;
                       if a.type is ALLOW { mode |= XUSR; }
                   }
                   if ! (seen & XGRP) {
                       seen |= XGRP;
                       if a.type is ALLOW { mode |= XGRP; }
                   }
                   if ! (seen & XOTH) {
                       seen |= XOTH;
                       if a.type is ALLOW { mode |= XOTH; }
                   }
               }
           }
       }
   }
   return mode | (old_mode & (SUID | SGID | SVTX))
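As a worked example of the algorithm above (illustrative, not part of the specification): consider an ACL consisting of, in order, an ALLOW ACE for OWNER@ with mask ACE4_READ_DATA | ACE4_WRITE_DATA, followed by an ALLOW ACE for EVERYONE@ with mask ACE4_READ_DATA. The first ACE sets, and marks as seen, RUSR and WUSR. The second ACE sets RGRP and ROTH (RUSR is already marked seen, so it is skipped). No other bits are ever determined, so they remain zero, and the computed permission bits are RUSR | WUSR | RGRP | ROTH, i.e. 0644, with any SUID/SGID/SVTX bits carried over from old_mode.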
3.16.6.2.  Applying the mode given to CREATE or OPEN to an inherited ACL

The goal of implementing ACL inheritance is for newly created objects to inherit the ACLs they were intended to inherit, but without disregarding the mode that is given with the arguments to the CREATE or OPEN operations.

The general algorithm is as follows:

1.  Form an ACL on the newly created object that is the concatenation of all inheritable ACEs from its parent directory.  Note that there may be zero inheritable ACEs; thus, an object may start with an empty ACL.

2.  For each ACE in the new ACL, adjust its flags if necessary, and possibly create two ACEs in place of one.  This is necessary to honor the intent of the inheritance-related flags and to preserve information about the original inheritable ACEs in the case that they will be modified by other steps.  The algorithm is as follows (a sketch of this per-ACE adjustment in C follows the list):

    A.  If ACE4_NO_PROPAGATE_INHERIT_ACE is set, or if the object being created is not a directory, then clear the following flags:

           ACE4_NO_PROPAGATE_INHERIT_ACE
           ACE4_FILE_INHERIT_ACE
           ACE4_DIRECTORY_INHERIT_ACE
           ACE4_INHERIT_ONLY_ACE

        Continue on to the next ACE.

    B.  If the object being created is a directory and ACE4_FILE_INHERIT_ACE is set, but ACE4_DIRECTORY_INHERIT_ACE is NOT set, then we ensure that ACE4_INHERIT_ONLY_ACE is set.  Continue on to the next ACE.  Otherwise:

    C.  If the type of the ACE is neither ALLOW nor DENY, then continue on to the next ACE.

    D.  Copy the original ACE into a second, adjacent ACE.

    E.  On the first ACE, ensure that ACE4_INHERIT_ONLY_ACE is set.

    F.  On the second ACE, clear the following flags:

           ACE4_NO_PROPAGATE_INHERIT_ACE
           ACE4_FILE_INHERIT_ACE
           ACE4_DIRECTORY_INHERIT_ACE
           ACE4_INHERIT_ONLY_ACE

    G.  On the second ACE, if the type field is ALLOW, an implementation MAY clear the following mask bits:

           ACE4_WRITE_ACL
           ACE4_WRITE_OWNER

3.  To ensure that the mode is honored, apply the algorithm for applying a mode to a file/directory with an existing ACL on the new object, as described in Section 3.16.6.3, using the mode that is to be used for file creation.
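The following C fragment is a minimal, non-normative sketch of the per-ACE adjustment in step 2 above.  The flag constant values, type names, and function name are assumptions made for the example; the on-the-wire values are those defined by the protocol's ACE flag definitions, and the "who" and mask fields are omitted for brevity.

   #include <stdbool.h>
   #include <stdint.h>

   /* Hypothetical flag values for the sketch only. */
   #define F_FILE_INHERIT  0x1
   #define F_DIR_INHERIT   0x2
   #define F_NO_PROPAGATE  0x4
   #define F_INHERIT_ONLY  0x8
   #define F_ALL_INHERIT (F_FILE_INHERIT | F_DIR_INHERIT | \
                          F_NO_PROPAGATE | F_INHERIT_ONLY)

   enum acetype { ALLOW, DENY, AUDIT, ALARM };

   struct ace { enum acetype type; uint32_t flags, mask; };

   /* Adjust one inherited ACE per step 2; writes one or two ACEs
      to out[] and returns how many were produced. */
   static int adjust_inherited_ace(struct ace in, bool is_dir,
                                   struct ace out[2])
   {
       out[0] = in;
       /* Step A: inheritance stops here; keep only the
          effective ACE, with inheritance flags cleared. */
       if ((in.flags & F_NO_PROPAGATE) || !is_dir) {
           out[0].flags &= ~F_ALL_INHERIT;
           return 1;
       }
       /* Step B: a file-only inheritable ACE on a new directory
          is carried along for future files but has no effect on
          the directory itself. */
       if ((in.flags & F_FILE_INHERIT) &&
           !(in.flags & F_DIR_INHERIT)) {
           out[0].flags |= F_INHERIT_ONLY;
           return 1;
       }
       /* Step C: only ALLOW and DENY ACEs are split. */
       if (in.type != ALLOW && in.type != DENY)
           return 1;
       /* Steps D-F: the first copy stays inherit-only; the second
          is the effective ACE with inheritance flags cleared. */
       out[0].flags |= F_INHERIT_ONLY;
       out[1] = in;
       out[1].flags &= ~F_ALL_INHERIT;
       /* Step G (optional, not shown): an implementation MAY clear
          ACE4_WRITE_ACL and ACE4_WRITE_OWNER in out[1].mask when
          the type is ALLOW. */
       return 2;
   }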
3.16.6.3.  Applying a Mode to an Existing ACL

An existing ACL can mean two things in this context.  One, that a file/directory already exists and it has an ACL.  Two, that a directory has inheritable ACEs that will make up the ACL for any new files or directories created therein.

The high-level goal of the behavior when a mode is set on a file with an existing ACL is to take the new mode into account, without needing to delete a pre-existing ACL.  When a mode is applied to an object, e.g. via SETATTR or CREATE/OPEN, the ACL must be modified to accommodate the mode.

1.  The ACL is traversed, one ACE at a time.  For each ACE:

    1.  If the type of the ACE is neither ALLOW nor DENY, the ACE is left unchanged.  Continue to the next ACE.

    2.  If the ACE4_INHERIT_ONLY_ACE flag is set on the ACE, it is left unchanged.  Continue to the next ACE.

    3.  If either or both of ACE4_FILE_INHERIT_ACE or ACE4_DIRECTORY_INHERIT_ACE are set:

        1.  A copy of the ACE is made, and placed in the ACL immediately following the current ACE.

        2.  In the first ACE, the flag ACE4_INHERIT_ONLY_ACE is set.

        3.  In the second ACE, the following flags are cleared:

               ACE4_FILE_INHERIT_ACE
               ACE4_DIRECTORY_INHERIT_ACE
               ACE4_NO_PROPAGATE_INHERIT_ACE

        The algorithm continues on with the second ACE.

    4.  If the "who" field is one of the following:

           OWNER@
           GROUP@
           EVERYONE@

        then the following mask bits are cleared:

           ACE4_READ_DATA / ACE4_LIST_DIRECTORY
           ACE4_WRITE_DATA / ACE4_ADD_FILE
           ACE4_APPEND_DATA / ACE4_ADD_SUBDIRECTORY
           ACE4_EXECUTE

        At this point, we proceed to the next ACE.

    5.  Otherwise, if the "who" field did not match one of OWNER@, GROUP@, or EVERYONE@, the following steps SHOULD be performed.

        1.  If the type of the ACE is ALLOW, we check the preceding ACE (if any).  If it does not meet all of the following criteria:

            1.  The type field is DENY.

            2.  The who field is the same as the current ACE.

            3.  The flag bit ACE4_IDENTIFIER_GROUP is the same as it is in the current ACE, and no other flag bits are set.

            4.  The mask bits are a subset of the mask bits of the current ACE, and are also a subset of the following:

                   ACE4_READ_DATA / ACE4_LIST_DIRECTORY
                   ACE4_WRITE_DATA / ACE4_ADD_FILE
                   ACE4_APPEND_DATA / ACE4_ADD_SUBDIRECTORY
                   ACE4_EXECUTE

            then an ACE of type DENY, with a who equal to the current ACE, flag bits equal to (current ACE's flag bits & ACE4_IDENTIFIER_GROUP), and no mask bits, is prepended.

        2.  The following modifications are made to the prepended ACE.  The intent is to mask the following ACE to disallow ACE4_READ_DATA, ACE4_WRITE_DATA, ACE4_APPEND_DATA, or ACE4_EXECUTE, based upon the group permissions of the new mode.  As a special case, if the ACE matches the current owner of the file, the owner bits are used, rather than the group bits.  This is reflected in the algorithm below.

            Let there be three bits defined:

               #define READ  04
               #define WRITE 02
               #define EXEC  01

            Let "amode" be the new mode, right-shifted three bits, in order to have the group permission bits placed in the three low order bits of amode, i.e.

               amode = mode >> 3

            If ACE4_IDENTIFIER_GROUP is not set in the flags, and the "who" field of the ACE matches the owner of the file, we shift amode three more bits, in order to have the owner permission bits placed in the three low order bits of amode:

               amode = amode >> 3

            amode is now used as follows:

               If ACE4_READ_DATA is set on the current ACE:
                   If READ is set on amode:
                       ACE4_READ_DATA is cleared on the prepended ACE
                   else:
                       ACE4_READ_DATA is set on the prepended ACE

               If ACE4_WRITE_DATA is set on the current ACE:
                   If WRITE is set on amode:
                       ACE4_WRITE_DATA is cleared on the prepended ACE
                   else:
                       ACE4_WRITE_DATA is set on the prepended ACE

               If ACE4_APPEND_DATA is set on the current ACE:
                   If WRITE is set on amode:
                       ACE4_APPEND_DATA is cleared on the prepended ACE
                   else:
                       ACE4_APPEND_DATA is set on the prepended ACE

               If ACE4_EXECUTE is set on the current ACE:
                   If EXEC is set on amode:
                       ACE4_EXECUTE is cleared on the prepended ACE
                   else:
                       ACE4_EXECUTE is set on the prepended ACE

        3.  To conform with POSIX, and to prevent cases where the owner of the file is given permissions via an explicit group, we implement the following step.

            If ACE4_IDENTIFIER_GROUP is set in the flags field of the ALLOW ACE:

               Let "mode" be the mode being set:

                  extramode = (mode >> 3) & 07
                  ownermode = mode >> 6
                  extramode &= ~ownermode

               If extramode is not zero:

                  If extramode & READ:
                      Clear ACE4_READ_DATA in both the prepended
                      DENY ACE and the ALLOW ACE

                  If extramode & WRITE:
                      Clear ACE4_WRITE_DATA and ACE4_APPEND_DATA in
                      both the prepended DENY ACE and the ALLOW ACE

                  If extramode & EXEC:
                      Clear ACE4_EXECUTE in both the prepended DENY
                      ACE and the ALLOW ACE

2.  If there are at least six ACEs, the final six ACEs are examined.  If they are not equal to the following ACEs:

       A1)  OWNER@:::DENY

       A2)  OWNER@:ACE4_WRITE_ACL/ACE4_WRITE_OWNER/
            ACE4_WRITE_ATTRIBUTES/ACE4_WRITE_NAMED_ATTRIBUTES::ALLOW

       A3)  GROUP@::ACE4_IDENTIFIER_GROUP:DENY

       A4)  GROUP@::ACE4_IDENTIFIER_GROUP:ALLOW

       A5)  EVERYONE@:ACE4_WRITE_ACL/ACE4_WRITE_OWNER/
            ACE4_WRITE_ATTRIBUTES/ACE4_WRITE_NAMED_ATTRIBUTES::DENY

       A6)  EVERYONE@:ACE4_READ_ACL/ACE4_READ_ATTRIBUTES/
            ACE4_READ_NAMED_ATTRIBUTES/ACE4_SYNCHRONIZE::ALLOW

    then six ACEs matching the above are appended.

3.  The final six ACEs are adjusted according to the incoming mode.

       /* octal constants for the mode bits */
       RUSR = 0400
       WUSR = 0200
       XUSR = 0100
       RGRP = 0040
       WGRP = 0020
       XGRP = 0010
       ROTH = 0004
       WOTH = 0002
       XOTH = 0001

       If RUSR is set:
           set ACE4_READ_DATA in A2
       else:
           set ACE4_READ_DATA in A1

       If WUSR is set:
           set ACE4_WRITE_DATA and ACE4_APPEND_DATA in A2
       else:
           set ACE4_WRITE_DATA and ACE4_APPEND_DATA in A1

       If XUSR is set:
           set ACE4_EXECUTE in A2
       else:
           set ACE4_EXECUTE in A1

       If RGRP is set:
           set ACE4_READ_DATA in A4
       else:
           set ACE4_READ_DATA in A3

       If WGRP is set:
           set ACE4_WRITE_DATA and ACE4_APPEND_DATA in A4
       else:
           set ACE4_WRITE_DATA and ACE4_APPEND_DATA in A3

       If XGRP is set:
           set ACE4_EXECUTE in A4
       else:
           set ACE4_EXECUTE in A3

       If ROTH is set:
           set ACE4_READ_DATA in A6
       else:
           set ACE4_READ_DATA in A5

       If WOTH is set:
           set ACE4_WRITE_DATA and ACE4_APPEND_DATA in A6
       else:
           set ACE4_WRITE_DATA and ACE4_APPEND_DATA in A5

       If XOTH is set:
           set ACE4_EXECUTE in A6
       else:
           set ACE4_EXECUTE in A5
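To make the amode computation in step 5.2 above concrete, the following small C program (illustrative only; the mode value is chosen arbitrarily) evaluates the shifts for a mode of 0750:

   #include <stdio.h>

   #define READ  04
   #define WRITE 02
   #define EXEC  01

   int main(void)
   {
       unsigned mode = 0750;          /* example mode being applied */
       unsigned amode = mode >> 3;    /* group bits: 05 = READ|EXEC */
       /* For an ACE whose "who" matches the owner (and with
          ACE4_IDENTIFIER_GROUP clear), shift three more bits: */
       unsigned owner_amode = amode >> 3;   /* owner bits: 07 */

       /* WRITE is absent from the group amode, so a group-flagged
          ACE allowing ACE4_WRITE_DATA gets ACE4_WRITE_DATA set in
          its prepended DENY ACE; the owner amode of 07 causes
          nothing to be denied for an owner-matching ACE. */
       printf("group amode %o, owner amode %o\n",
              amode & 07, owner_amode & 07);
       return 0;
   }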
3.16.6.4.  ACL and mode in the same SETATTR

The only reason that a mode and ACL should be set in the same SETATTR is if the user wants to set the SUID, SGID, and SVTX bits along with setting the permissions by means of an ACL.  There is still no way to enforce which order the attributes will be set in, and it is likely that different orders of operations will produce different results.

3.16.6.4.1.  Client Side Recommendations

If an application needs to enforce a certain behavior, it is recommended that client implementations set mode and ACL in separate SETATTR requests.  This will produce consistent and expected results.

If an application wants to set SUID, SGID, and SVTX bits and an ACL:

   In the first SETATTR, set the mode with the SUID, SGID, and SVTX bits as desired and all other bits with a value of 0.

   In a following SETATTR (preferably in the same COMPOUND), set the ACL.
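As a non-normative illustration of this recommendation, the two requests might be carried in a single COMPOUND; the operation names are the protocol's, while the attribute contents are left abstract:

   COMPOUND {
      PUTFH   (target object)
      SETATTR (mode = <SUID/SGID/SVTX bits as desired,
                       all other bits zero>)
      SETATTR (acl  = <desired ACL>)
   }

Because the two SETATTRs are distinct operations, the ACL is applied after the mode, and the result does not depend on any server-chosen ordering of attributes within one SETATTR.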
How can "lisagab" make sure that any new files that she creates in this shared project directory do not inherit anything that could compromise the security of her work? More relevant to the implementors of NFS version 4 clients and servers is the question of how to communicate the fact that user, "lisagab", doesn't want any permissions to be inherited to her newly created file or directory. To do this, implementors should standardize on what the behavior of CREATE and OPEN must be if: 1. just mode is given In this case, inheritance will take place, but the mode will be applied to the inherited ACL as described in Section 3.16.6.1, thereby modifying the ACL. 2. just ACL is given In this case, inheritance will not take place, and the ACL as defined in the CREATE or OPEN will be set without modification. Shepler Expires December 22, 2006 [Page 67] Internet-Draft NFSv4 Minor Version 1 June 2006 3. both mode and ACL are given In this case, implementors should verify that the mode and ACL don't conflict, i.e. the mode computed from the given ACL must be the same as the given mode. The algorithm for assigning a new mode based on the ACL can be used. This is described in Section 3.16.6.1) If a server receives a request to set both mode and ACL, but the two conflict, the server should return NFS4ERR_INVAL. If the mode and ACL don't conflict, inheritance will not take placeand both, the mode and ACL, will be set without modification. 4. neither mode nor ACL are given In this case, inheritance will take place and no modifications to the ACL will happen. It is worth noting that if no inheritable ACEs exist on the parent directory, the file will be created with an empty ACL, thus granting no accesses. 3.16.6.6. Deficiencies in a Mode Representation of an ACL In the presence of an ACL, there are certain cases when the representation of the mode is not guaranteed to be accurate. An example of a situation is detailed below. As mentioned in Section 3.16.6, the representation of the mode is deterministic, but not guaranteed to be accurate. The mode bits potentially convey a more restrictive permission than what will actually be granted via the ACL. Given the following ACL of two ACEs: GROUP@:ACE4_READ_DATA/ACE4_WRITE_DATA/ACE4_EXECUTE: ACE4_IDENTIFIER_GROUP:ALLOW EVERYONE@:ACE4_READ_DATA/ACE4_WRITE_DATA/ACE4_EXECUTE::DENY we would compute a mode of 0070. However, it is possible, even likely, that the owner might be a member of the object's owning group, and thus, the owner would be granted read, write, and execute access to the object. This would conflict with the mode of 0070, where an owner would be denied this access. The only way to overcome this deficiency would be to determine whether the object's owner is a member of the object's owning group. This is difficult, but worse, on a POSIX or any UNIX-like system, it is a process' membership in a group that is important, not a user's. Thus, any fixed mode intended to represent the above ACL can be incorrect. Shepler Expires December 22, 2006 [Page 68] Internet-Draft NFSv4 Minor Version 1 June 2006 Example: administrative databases (possibly /etc/passwd and /etc/ group) indicate that the user "bob" is a member of the group "staff". An object has the ACL given above, is owned by "bob", and has an owning group of "staff". User "bob" has logged into the system, and thus processes have been created owned by "bob" and having membership in group "staff". 
The only way to overcome this deficiency would be to determine whether the object's owner is a member of the object's owning group.  This is difficult, but worse, on a POSIX or any UNIX-like system, it is a process' membership in a group that is important, not a user's.  Thus, any fixed mode intended to represent the above ACL can be incorrect.

Example: administrative databases (possibly /etc/passwd and /etc/group) indicate that the user "bob" is a member of the group "staff".  An object has the ACL given above, is owned by "bob", and has an owning group of "staff".  User "bob" has logged into the system, and thus processes have been created owned by "bob" and having membership in group "staff".

A mode representation of the above ACL could thus be 0770, due to user "bob" having membership in group "staff".

Now, the administrative databases are changed, such that user "bob" is no longer in group "staff".  User "bob" logs in to the system again, and thus more processes are created, this time owned by "bob" but NOT in group "staff".  A mode of 0770 is inaccurate for processes not belonging to group "staff".  But even if the mode of the file were proactively changed to 0070 at the time the group database was edited, mode 0070 would be inaccurate for the pre-existing processes owned by user "bob" and having membership in group "staff".

4.  Single-server Name Space

This chapter describes the NFSv4 single-server name space.  Single-server namespaces may be presented directly to clients, or they may be used as a basis to form larger multi-server namespaces (e.g. site-wide or organization-wide) to be presented to clients, as described in Section 10.

4.1.  Server Exports

On a UNIX server, the name space describes all the files reachable by pathnames under the root directory or "/".  On a Windows NT server the name space constitutes all the files on disks named by mapped disk letters.  NFS server administrators rarely make the entire server's filesystem name space available to NFS clients.  More often portions of the name space are made available via an "export" feature.  In previous versions of the NFS protocol, the root filehandle for each export is obtained through the MOUNT protocol; the client sends a string that identifies the export of name space and the server returns the root filehandle for it.  The MOUNT protocol supports an EXPORTS procedure that will enumerate the server's exports.

4.2.  Browsing Exports

The NFS version 4 protocol provides a root filehandle that clients can use to obtain filehandles for the exports of a particular server, via a series of LOOKUP operations within a COMPOUND, to traverse a path.  A common user experience is to use a graphical user interface (perhaps a file "Open" dialog window) to find a file via progressive browsing through a directory tree.  The client must be able to move from one export to another export via single-component, progressive LOOKUP operations.

This style of browsing is not well supported by the NFS version 2 and 3 protocols.  The client expects all LOOKUP operations to remain within a single server filesystem.  For example, the device attribute will not change.  This prevents a client from taking name space paths that span exports.

An automounter on the client can obtain a snapshot of the server's name space using the EXPORTS procedure of the MOUNT protocol.  If it understands the server's pathname syntax, it can create an image of the server's name space on the client.  The parts of the name space that are not exported by the server are filled in with a "pseudo filesystem" that allows the user to browse from one mounted filesystem to another.  There is a drawback to this representation of the server's name space on the client: it is static.  If the server administrator adds a new export the client will be unaware of it.

4.3.  Server Pseudo Filesystem

NFS version 4 servers avoid this name space inconsistency by presenting all the exports for a given server within the framework of a single namespace for that server.  An NFS version 4 client uses LOOKUP and READDIR operations to browse seamlessly from one export to another.
Portions of the server name space that are not exported are bridged via a "pseudo filesystem" that provides a view of exported directories only.  A pseudo filesystem has a unique fsid and behaves like a normal, read-only filesystem.

Based on the construction of the server's name space, it is possible that multiple pseudo filesystems may exist.  For example,

   /a              pseudo filesystem
   /a/b            real filesystem
   /a/b/c          pseudo filesystem
   /a/b/c/d        real filesystem

Each of the pseudo filesystems is considered a separate entity and therefore will have its own unique fsid.

4.4.  Multiple Roots

The DOS and Windows operating environments are sometimes described as having "multiple roots".  Filesystems are commonly represented as disk letters.  MacOS represents filesystems as top level names.  NFS version 4 servers for these platforms can construct a pseudo filesystem above these root names so that disk letters or volume names are simply directory names in the pseudo root.

4.5.  Filehandle Volatility

The nature of the server's pseudo filesystem is that it is a logical representation of filesystem(s) available from the server.  Therefore, the pseudo filesystem is most likely constructed dynamically when the server is first instantiated.  It is expected that the pseudo filesystem may not have an on-disk counterpart from which persistent filehandles could be constructed.  Even though it is preferable that the server provide persistent filehandles for the pseudo filesystem, the NFS client should expect that pseudo filesystem filehandles are volatile.  This can be confirmed by checking the associated "fh_expire_type" attribute for those filehandles in question.  If the filehandles are volatile, the NFS client must be prepared to recover a filehandle value (e.g. with a series of LOOKUP operations) when receiving an error of NFS4ERR_FHEXPIRED.

4.6.  Exported Root

If the server's root filesystem is exported, one might conclude that a pseudo filesystem is unneeded.  This is not necessarily so.  Assume the following filesystems on a server:

   /       disk1  (exported)
   /a      disk2  (not exported)
   /a/b    disk3  (exported)

Because disk2 is not exported, disk3 cannot be reached with simple LOOKUPs.  The server must bridge the gap with a pseudo filesystem.

4.7.  Mount Point Crossing

The server filesystem environment may be constructed in such a way that one filesystem contains a directory which is 'covered' or mounted upon by a second filesystem.  For example:

   /a/b            (filesystem 1)
   /a/b/c/d        (filesystem 2)

The pseudo filesystem for this server may be constructed to look like:

   /               (place holder/not exported)
   /a/b            (filesystem 1)
   /a/b/c/d        (filesystem 2)

It is the server's responsibility to present a pseudo filesystem that is complete to the client.  If the client sends a LOOKUP request for the path "/a/b/c/d", the server's response is the filehandle of the filesystem "/a/b/c/d".  In previous versions of the NFS protocol, the server would respond with the filehandle of directory "/a/b/c/d" within the filesystem "/a/b".

The NFS client will be able to determine if it crosses a server mount point by a change in the value of the "fsid" attribute.
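A minimal, non-normative sketch of that client-side check follows; the helper name is an assumption, and the fsid values are presumed to have been fetched via GETATTR (for example, in the same COMPOUND as the LOOKUP):

   #include <stdbool.h>
   #include <stdint.h>

   /* The fsid attribute: a major/minor pair of unsigned integers. */
   struct fsid4 { uint64_t major; uint64_t minor; };

   /* After a LOOKUP, compare the child's fsid with the parent's;
      a difference indicates a mount point (or pseudo filesystem
      boundary) has been crossed. */
   static bool crossed_mount_point(const struct fsid4 *parent,
                                   const struct fsid4 *child)
   {
       return parent->major != child->major ||
              parent->minor != child->minor;
   }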
4.8.  Security Policy and Name Space Presentation

The application of the server's security policy needs to be carefully considered by the implementor.  One may choose to limit the viewability of portions of the pseudo filesystem based on the server's perception of the client's ability to authenticate itself properly.  However, with the support of multiple security mechanisms and the ability to negotiate the appropriate use of these mechanisms, the server is unable to properly determine if a client will be able to authenticate itself.  If, based on its policies, the server chooses to limit the contents of the pseudo filesystem, the server may effectively hide filesystems from a client that may otherwise have legitimate access.

As suggested practice, the server should apply the security policy of a shared resource in the server's namespace to the components of the resource's ancestors.  For example:

   /
   /a/b
   /a/b/c

The /a/b/c directory is a real filesystem and is the shared resource.  The security policy for /a/b/c is Kerberos with integrity.  The server should apply the same security policy to /, /a, and /a/b.  This allows for the extension of the protection of the server's namespace to the ancestors of the real shared resource.

For the case of the use of multiple, disjoint security mechanisms in the server's resources, the security for a particular object in the server's namespace should be the union of all security mechanisms of all direct descendants.

5.  File Locking and Share Reservations

Integrating locking into the NFS protocol necessarily causes it to be stateful.  With the inclusion of share reservations the protocol becomes substantially more dependent on state than the traditional combination of NFS and NLM [XNFS].  There are three components to making this state manageable:

   o  Clear division between client and server

   o  Ability to reliably detect inconsistency in state between client and server

   o  Simple and robust recovery mechanisms

In this model, the server owns the state information.  The client communicates its view of this state to the server as needed.  The client is also able to detect inconsistent state before modifying a file.

To support Win32 share reservations it is necessary to atomically OPEN or CREATE files.  Having a separate share/unshare operation would not allow correct implementation of the Win32 OpenFile API.  In order to correctly implement share semantics, the previous NFS protocol mechanisms used when a file is opened or created (LOOKUP, CREATE, ACCESS) need to be replaced.  The NFS version 4 protocol has an OPEN operation that subsumes the NFS version 3 methodology of LOOKUP, CREATE, and ACCESS.  However, because many operations require a filehandle, the traditional LOOKUP is preserved to map a file name to filehandle without establishing state on the server.  The policy of granting access or modifying files is managed by the server based on the client's state.  These mechanisms can implement policy ranging from advisory-only locking to full mandatory locking.

5.1.  Locking

It is assumed that manipulating a lock is rare when compared to READ and WRITE operations.  It is also assumed that crashes and network partitions are relatively rare.  Therefore it is important that the READ and WRITE operations have a lightweight mechanism to indicate if they possess a held lock.  A lock request contains the heavyweight information required to establish a lock and uniquely define the lock owner.
The following sections describe the transition from the heavyweight information to the eventual stateid used for most client and server locking and lease interactions.

5.1.1.  Client ID

For each LOCK request, the client must identify itself to the server.  This is done in such a way as to allow for correct lock identification and crash recovery.  A sequence of a SETCLIENTID operation followed by a SETCLIENTID_CONFIRM operation is required to establish the identification onto the server.  Establishment of identification by a new incarnation of the client also has the effect of immediately breaking any leased state that a previous incarnation of the client might have had on the server, as opposed to forcing the new client incarnation to wait for the leases to expire.  Breaking the lease state amounts to the server removing all lock, share reservation, and, where the server is not supporting the CLAIM_DELEGATE_PREV claim type, all delegation state associated with the same client with the same identity.  For discussion of delegation state recovery, see the section "Delegation Recovery".

Client identification is encapsulated in the following structure:

   struct nfs_client_id4 {
           verifier4       verifier;
           opaque          id<NFS4_OPAQUE_LIMIT>;
   };

The first field, verifier, is a client incarnation verifier that is used to detect client reboots.  Only if the verifier is different from that which the server has previously recorded for the client (as identified by the second field of the structure, id) does the server start the process of canceling the client's leased state.

The second field, id, is a variable length string that uniquely defines the client.

There are several considerations for how the client generates the id string:

o  The string should be unique so that multiple clients do not present the same string.  The consequences of two clients presenting the same string range from one client getting an error to one client having its leased state abruptly and unexpectedly canceled.

o  The string should be selected so that subsequent incarnations (e.g. reboots) of the same client cause the client to present the same string.  The implementor is cautioned against an approach that requires the string to be recorded in a local file because this precludes the use of the implementation in an environment where there is no local disk and all file access is from an NFS version 4 server.

o  The string should be different for each server network address that the client accesses, rather than common to all server network addresses.  The reason is that it may not be possible for the client to tell if the same server is listening on multiple network addresses.  If the client issues SETCLIENTID with the same id string to each network address of such a server, the server will think it is the same client, and each successive SETCLIENTID will cause the server to begin the process of removing the client's previous leased state.

o  The algorithm for generating the string should not assume that the client's network address won't change.  This includes changes between client incarnations and even changes while the client is still running in its current incarnation.
   This means that if the client includes just the client's and server's network address in the id string, there is a real risk, after the client gives up the network address, that another client, using a similar algorithm for generating the id string, will generate a conflicting id string.

o  Given the above considerations, an example of a well-generated id string is one that includes (a non-normative sketch of such a composition follows this list):

   o  The server's network address.

   o  The client's network address.

   o  For a user level NFS version 4 client, additional information to distinguish the client from other user level clients running on the same host, such as a process id or other unique sequence.

   o  Additional information that tends to be unique, such as one or more of:

      *  The client machine's serial number (for privacy reasons, it is best to perform some one way function on the serial number).

      *  A MAC address.

      *  The timestamp of when the NFS version 4 software was first installed on the client (though this is subject to the previously mentioned caution about using information that is stored in a file, because the file might only be accessible over NFS version 4).

      *  A true random number.  However, since this number ought to be the same between client incarnations, this shares the same problem as using the timestamp of the software installation.
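The fragment below is one plausible way to compose such an id string; it is a sketch only, every name in it is an assumption made for the example, and the inputs (addresses, a stable per-host uuid) are presumed to be gathered elsewhere by the implementation:

   #include <stdio.h>

   /* Compose a long-form client id string from the elements
      recommended above: stable across reboots, distinct per
      server network address, and unlikely to collide with
      another client's string. */
   static int make_client_id(char *buf, size_t len,
                             const char *client_addr,
                             const char *server_addr,
                             const char *host_uuid)
   {
       return snprintf(buf, len, "%s/%s/%s",
                       client_addr, server_addr, host_uuid);
   }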
As a security measure, the server MUST NOT cancel a client's leased state if the principal that established the state for a given id string is not the same as the principal issuing the SETCLIENTID.

Note that SETCLIENTID and SETCLIENTID_CONFIRM have a secondary purpose of establishing the information the server needs to make callbacks to the client for the purpose of supporting delegations.  It is permitted to change this information via SETCLIENTID and SETCLIENTID_CONFIRM within the same incarnation of the client without removing the client's leased state.

Once a SETCLIENTID and SETCLIENTID_CONFIRM sequence has successfully completed, the client uses the shorthand client identifier, of type clientid4, instead of the longer and less compact nfs_client_id4 structure.  This shorthand client identifier (a clientid) is assigned by the server and should be chosen so that it will not conflict with a clientid previously assigned by the server.  This applies across server restarts or reboots.

When a clientid is presented to a server and that clientid is not recognized, as would happen after a server reboot, the server will reject the request with the error NFS4ERR_STALE_CLIENTID.  When this happens, the client must obtain a new clientid by use of the SETCLIENTID operation and then proceed to any other necessary recovery for the server reboot case (see the section "Server Failure and Recovery").

The client must also employ the SETCLIENTID operation when it receives a NFS4ERR_STALE_STATEID error using a stateid derived from its current clientid, since this also indicates a server reboot, which has invalidated the existing clientid (see the section "lock_owner and stateid Definition" for details).

See the detailed descriptions of SETCLIENTID and SETCLIENTID_CONFIRM for a complete specification of the operations.

5.1.2.  Server Release of Clientid

If the server determines that the client holds no associated state for its clientid, the server may choose to release the clientid.  The server may make this choice for an inactive client so that resources are not consumed by those intermittently active clients.  If the client contacts the server after this release, the server must ensure the client receives the appropriate error so that it will use the SETCLIENTID/SETCLIENTID_CONFIRM sequence to establish a new identity.

It should be clear that the server must be very hesitant to release a clientid since the resulting work on the client to recover from such an event will be the same burden as if the server had failed and restarted.  Typically a server would not release a clientid unless there had been no activity from that client for many minutes.

Note that if the id string in a SETCLIENTID request is properly constructed, and if the client takes care to use the same principal for each successive use of SETCLIENTID, then, barring an active denial of service attack, NFS4ERR_CLID_INUSE should never be returned.

However, client bugs, server bugs, or perhaps a deliberate change of the principal owner of the id string (such as the case of a client that changes security flavors, and under the new flavor there is no mapping to the previous owner) will in rare cases result in NFS4ERR_CLID_INUSE.  In that event, when the server gets a SETCLIENTID for a client id that currently has no state, or that has state but whose lease has expired, rather than returning NFS4ERR_CLID_INUSE, the server MUST allow the SETCLIENTID, and confirm the new clientid if followed by the appropriate SETCLIENTID_CONFIRM.

5.1.3.  lock_owner and stateid Definition

When requesting a lock, the client must present to the server the clientid and an identifier for the owner of the requested lock.  These two fields are referred to as the lock_owner, and the definition of those fields is:

   o  A clientid returned by the server as part of the client's use of the SETCLIENTID operation.

   o  A variable length opaque array used to uniquely define the owner of a lock managed by the client.  This may be a thread id, process id, or other unique value.

When the server grants the lock, it responds with a unique stateid.  The stateid is used as a shorthand reference to the lock_owner, since the server will be maintaining the correspondence between them.

The server is free to form the stateid in any manner that it chooses as long as it is able to recognize invalid and out-of-date stateids.  This requirement includes those stateids generated by earlier instances of the server.  From this, the client can be properly notified of a server restart.  This notification will occur when the client presents a stateid to the server from a previous instantiation.

The server must be able to distinguish the following situations and return the error as specified:

   o  The stateid was generated by an earlier server instance (i.e. before a server reboot).  The error NFS4ERR_STALE_STATEID should be returned.

   o  The stateid was generated by the current server instance but the stateid no longer designates the current locking state for the lockowner-file pair in question (i.e. one or more locking operations have occurred).  The error NFS4ERR_OLD_STATEID should be returned.  This error condition will only occur when the client issues a locking request which changes a stateid while an I/O request that uses that stateid is outstanding.

   o  The stateid was generated by the current server instance but the stateid does not designate a locking state for any active lockowner-file pair.
      The error NFS4ERR_BAD_STATEID should be returned.  This error condition will occur when there has been a logic error on the part of the client or server.  This should not happen.

One mechanism that may be used to satisfy these requirements is for the server to:

   o  divide the "other" field of each stateid into two fields:

      *  a server verifier which uniquely designates a particular server instantiation, and

      *  an index into a table of locking-state structures;

   o  utilize the "seqid" field of each stateid, such that seqid is monotonically incremented for each stateid that is associated with the same index into the locking-state table.

By matching the incoming stateid and its field values with the state held at the server, the server is able to easily determine if a stateid is valid for its current instantiation and state.  If the stateid is not valid, the appropriate error can be supplied to the client.
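The following C fragment sketches that mechanism.  The stateid layout (4-byte seqid plus a 12-byte "other" field) is the protocol's; the particular division of "other" into a boot verifier and table index, and all helper names, are one possible server-side choice assumed for the example, and the seqid comparison is deliberately simplified (a real server would also treat a seqid beyond the current one, or a never-allocated table entry, as NFS4ERR_BAD_STATEID):

   #include <stdint.h>
   #include <string.h>

   struct stateid4 { uint32_t seqid; uint8_t other[12]; };

   enum stateid_check { SID_OK, SID_STALE, SID_OLD, SID_BAD };

   /* boot_verifier identifies this server instantiation;
      cur_seqid[] holds the current seqid for each entry of the
      locking-state table (both assumed to exist elsewhere). */
   static enum stateid_check check_stateid(const struct stateid4 *sid,
                                           uint64_t boot_verifier,
                                           const uint32_t *cur_seqid,
                                           uint32_t table_size)
   {
       uint64_t verf;
       uint32_t idx;

       memcpy(&verf, sid->other, sizeof verf);
       memcpy(&idx, sid->other + 8, sizeof idx);

       if (verf != boot_verifier)
           return SID_STALE;            /* NFS4ERR_STALE_STATEID */
       if (idx >= table_size)
           return SID_BAD;              /* NFS4ERR_BAD_STATEID */
       if (sid->seqid != cur_seqid[idx])
           return SID_OLD;              /* NFS4ERR_OLD_STATEID */
       return SID_OK;
   }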
5.1.4.  Use of the stateid and Locking

All READ, WRITE and SETATTR operations contain a stateid.  For the purposes of this section, SETATTR operations which change the size attribute of a file are treated as if they are writing the area between the old and new size (i.e. the range truncated or added to the file by means of the SETATTR), even where SETATTR is not explicitly mentioned in the text.

If the lock_owner performs a READ or WRITE in a situation in which it has established a lock or share reservation on the server (any OPEN constitutes a share reservation), the stateid (previously returned by the server) must be used to indicate what locks, including both record locks and share reservations, are held by the lockowner.  If no state is established by the client, either record lock or share reservation, a stateid of all bits 0 is used.  Regardless of whether a stateid of all bits 0 or a stateid returned by the server is used, if there is a conflicting share reservation or mandatory record lock held on the file, the server MUST refuse to service the READ or WRITE operation.

Share reservations are established by OPEN operations and by their nature are mandatory in that when the OPEN denies READ or WRITE operations, that denial results in such operations being rejected with error NFS4ERR_LOCKED.  Record locks may be implemented by the server as either mandatory or advisory, or the choice of mandatory or advisory behavior may be determined by the server on the basis of the file being accessed (for example, some UNIX-based servers support a "mandatory lock bit" on the mode attribute such that if set, record locks are required on the file before I/O is possible).  When record locks are advisory, they only prevent the granting of conflicting lock requests and have no effect on READs or WRITEs.  Mandatory record locks, however, prevent conflicting I/O operations.  When they are attempted, they are rejected with NFS4ERR_LOCKED.  When the client gets NFS4ERR_LOCKED on a file it knows it has the proper share reservation for, it will need to issue a LOCK request on the region of the file that includes the region the I/O was to be performed on, with an appropriate locktype (i.e. READ*_LT for a READ operation, WRITE*_LT for a WRITE operation).

With NFS version 3, there was no notion of a stateid, so there was no way to tell if the application process of the client sending the READ or WRITE operation had also acquired the appropriate record lock on the file.  Thus there was no way to implement mandatory locking.  With the stateid construct, this barrier has been removed.

Note that for UNIX environments that support mandatory file locking, the distinction between advisory and mandatory locking is subtle.  In fact, advisory and mandatory record locks are exactly the same insofar as the APIs and requirements on implementation.  If the mandatory lock attribute is set on the file, the server checks to see if the lockowner has an appropriate shared (read) or exclusive (write) record lock on the region it wishes to read or write to.  If there is no appropriate lock, the server checks if there is a conflicting lock (which can be done by attempting to acquire the conflicting lock on behalf of the lockowner, and if successful, releasing the lock after the READ or WRITE is done), and if there is, the server returns NFS4ERR_LOCKED.

For Windows environments, there are no advisory record locks, so the server always checks for record locks during I/O requests.

Thus, the NFS version 4 LOCK operation does not need to distinguish between advisory and mandatory record locks.  It is the NFS version 4 server's processing of the READ and WRITE operations that introduces the distinction.

Every stateid other than the special stateid values noted in this section, whether returned by an OPEN-type operation (i.e. OPEN, OPEN_DOWNGRADE) or by a LOCK-type operation (i.e. LOCK or LOCKU), defines an access mode for the file (i.e. READ, WRITE, or READ-WRITE) as established by the original OPEN which began the stateid sequence, and as modified by subsequent OPENs and OPEN_DOWNGRADEs within that stateid sequence.  When a READ, WRITE, or SETATTR which specifies the size attribute is done, the operation is subject to checking against the access mode to verify that the operation is appropriate given the OPEN with which the operation is associated.

In the case of WRITE-type operations (i.e. WRITEs and SETATTRs which set size), the server must verify that the access mode allows writing and return an NFS4ERR_OPENMODE error if it does not.  In the case of READ, the server may perform the corresponding check on the access mode, or it may choose to allow READ on opens for WRITE only, to accommodate clients whose write implementation may unavoidably do reads (e.g. due to buffer cache constraints).  However, even if READs are allowed in these circumstances, the server MUST still check for locks that conflict with the READ (e.g. another OPEN specifying denial of READs).  Note that a server which does enforce the access mode check on READs need not explicitly check for conflicting share reservations since the existence of OPEN for read access guarantees that no conflicting share reservation can exist.

A stateid of all bits 1 (one) MAY allow READ operations to bypass locking checks at the server.  However, WRITE operations with a stateid of all bits 1 (one) MUST NOT bypass locking checks and are treated exactly the same as if a stateid of all bits 0 were used.

A lock may not be granted while a READ or WRITE operation using one of the special stateids is being performed and the range of the lock request conflicts with the range of the READ or WRITE operation.
For the purposes of this paragraph, a conflict occurs when a shared lock is requested and a WRITE operation is being performed, or an exclusive lock is requested and either a READ or a WRITE operation is being performed.  A SETATTR that sets size is treated similarly to a WRITE as discussed above.

5.1.5.  Sequencing of Lock Requests

Locking is different from most NFS operations as it requires "at-most-one" semantics that are not provided by ONCRPC.  ONCRPC over a reliable transport is not sufficient because a sequence of locking requests may span multiple TCP connections.  In the face of retransmission or reordering, lock or unlock requests must have a well defined and consistent behavior.  To accomplish this, each lock request contains a sequence number that is a consecutively increasing integer.  Different lock_owners have different sequences.  The server maintains the last sequence number (L) received and the response that was returned.  The first request issued for any given lock_owner is issued with a sequence number of zero.

Note that for requests that contain a sequence number, for each lock_owner, there should be no more than one outstanding request.

If a request (r) with a previous sequence number (r < L) is received, it is rejected with the return of error NFS4ERR_BAD_SEQID.  Given a properly-functioning client, the response to (r) must have been received before the last request (L) was sent.  If a duplicate of the last request (r == L) is received, the stored response is returned.  If a request beyond the next sequence (e.g. r == L + 2) is received, it is rejected with the return of error NFS4ERR_BAD_SEQID.  Sequence history is reinitialized whenever the SETCLIENTID/SETCLIENTID_CONFIRM sequence changes the client verifier.

Since the sequence number is represented with an unsigned 32-bit integer, the arithmetic involved with the sequence number is mod 2^32.  For an example of modulo arithmetic involving sequence numbers see [RFC793].

It is critical that the server maintain the last response sent to the client to provide a more reliable cache of duplicate non-idempotent requests than that of the traditional cache described in [Juszczak].  The traditional duplicate request cache uses a least recently used algorithm for removing unneeded requests.  However, the last lock request and response on a given lock_owner must be cached as long as the lock state exists on the server.

The client MUST monotonically increment the sequence number for the CLOSE, LOCK, LOCKU, OPEN, OPEN_CONFIRM, and OPEN_DOWNGRADE operations.  This is true even in the event that the previous operation that used the sequence number received an error.  The only exception to this rule is if the previous operation received one of the following errors: NFS4ERR_STALE_CLIENTID, NFS4ERR_STALE_STATEID, NFS4ERR_BAD_STATEID, NFS4ERR_BAD_SEQID, NFS4ERR_BADXDR, NFS4ERR_RESOURCE, NFS4ERR_NOFILEHANDLE.
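The sequencing rules above reduce to a small per-lock_owner check; the following non-normative C fragment encodes them (function and enum names are assumptions for the example):

   #include <stdint.h>

   enum seq_action { SEQ_OK, SEQ_REPLAY, SEQ_BAD };

   /* r is the incoming sequence number; last is the last one
      received for this lock_owner.  r == last is a replay (the
      stored response is returned), r == last + 1 is the next
      request, and anything else draws NFS4ERR_BAD_SEQID.
      Unsigned arithmetic provides the mod 2^32 behavior. */
   static enum seq_action check_seqid(uint32_t r, uint32_t last)
   {
       if (r == last)
           return SEQ_REPLAY;      /* resend stored response */
       if (r == last + 1)          /* wraps modulo 2^32 */
           return SEQ_OK;          /* process, then cache result */
       return SEQ_BAD;             /* NFS4ERR_BAD_SEQID */
   }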
5.1.6.  Recovery from Replayed Requests

As described above, the sequence number is per lock_owner.  As long as the server maintains the last sequence number received and follows the methods described above, there are no risks of a Byzantine router re-sending old requests.  The server need only maintain the (lock_owner, sequence number) state as long as there are open files or closed files with locks outstanding.  LOCK, LOCKU, OPEN, OPEN_DOWNGRADE, and CLOSE each contain a sequence number, and therefore the risk of the replay of these operations resulting in undesired effects is non-existent while the server maintains the lock_owner state.

5.1.7.  Releasing lock_owner State

When a particular lock_owner no longer holds open or file locking state at the server, the server may choose to release the sequence number state associated with the lock_owner.  The server may make this choice based on lease expiration, for the reclamation of server memory, or other implementation specific details.  In any event, the server is able to do this safely only when the lock_owner is no longer being utilized by the client.  The server may choose to hold the lock_owner state in the event that retransmitted requests are received.  However, the period to hold this state is implementation specific.

In the case that a LOCK, LOCKU, OPEN_DOWNGRADE, or CLOSE is retransmitted after the server has previously released the lock_owner state, the server will find that the lock_owner has no files open and an error will be returned to the client.  If the lock_owner does have a file open, the stateid will not match and again an error is returned to the client.

5.1.8.  Use of Open Confirmation

In the case that an OPEN is retransmitted and the lock_owner is being used for the first time, or the lock_owner state has been previously released by the server, the use of the OPEN_CONFIRM operation will prevent incorrect behavior.  When the server observes the use of the lock_owner for the first time, it will direct the client to perform the OPEN_CONFIRM for the corresponding OPEN.  This sequence establishes the use of a lock_owner and associated sequence number.  Since the OPEN_CONFIRM sequence connects a new open_owner on the server with an existing open_owner on a client, the sequence number may have any value.  The OPEN_CONFIRM step assures the server that the value received is the correct one.  See the section "OPEN_CONFIRM - Confirm Open" for further details.

There are a number of situations in which the requirement to confirm an OPEN would pose difficulties for the client and server, in that they would be prevented from acting in a timely fashion on information received, because that information would be provisional, subject to deletion upon non-confirmation.  Fortunately, these are situations in which the server can avoid the need for confirmation when responding to open requests.  The two constraints are:

   o  The server must not bestow a delegation for any open which would require confirmation.

   o  The server MUST NOT require confirmation on a reclaim-type open (i.e. one specifying claim type CLAIM_PREVIOUS or CLAIM_DELEGATE_PREV).

These constraints are related in that reclaim-type opens are the only ones in which the server may be required to send a delegation.  For CLAIM_NULL, sending the delegation is optional, while for CLAIM_DELEGATE_CUR, no delegation is sent.

Delegations being sent with an open requiring confirmation are troublesome because recovering from non-confirmation adds undue complexity to the protocol, while requiring confirmation on reclaim-type opens poses difficulties in that the inability to resolve the status of the reclaim until lease expiration may make it difficult to have timely determination of the set of locks being reclaimed (since the grace period may expire).
Requiring open confirmation on reclaim-type opens is avoidable because of the nature of the environments in which such opens are done.  For CLAIM_PREVIOUS opens, this is immediately after server reboot, so there should be no time for lockowners to be created, found to be unused, and recycled.  For CLAIM_DELEGATE_PREV opens, we are dealing with a client reboot situation.  A server which supports delegation can be sure that no lockowners for that client have been recycled since client initialization and thus can ensure that confirmation will not be required.

5.2.  Lock Ranges

The protocol allows a lock owner to request a lock with a byte range and then either upgrade or unlock a sub-range of the initial lock.  It is expected that this will be an uncommon type of request.  In any case, servers or server filesystems may not be able to support sub-range lock semantics.  In the event that a server receives a locking request that represents a sub-range of current locking state for the lock owner, the server is allowed to return the error NFS4ERR_LOCK_RANGE to signify that it does not support sub-range lock operations.  Therefore, the client should be prepared to receive this error and, if appropriate, report the error to the requesting application.

The client is discouraged from combining multiple independent locking ranges that happen to be adjacent into a single request, since the server may not support sub-range requests and for reasons related to the recovery of file locking state in the event of server failure.  As discussed in the section "Server Failure and Recovery" below, the server may employ certain optimizations during recovery that work effectively only when the client's behavior during lock recovery is similar to the client's locking behavior prior to server failure.

5.3.  Upgrading and Downgrading Locks

If a client has a write lock on a record, it can request an atomic downgrade of the lock to a read lock via the LOCK request, by setting the type to READ_LT.  If the server supports atomic downgrade, the request will succeed.  If not, it will return NFS4ERR_LOCK_NOTSUPP.  The client should be prepared to receive this error, and if appropriate, report the error to the requesting application.

If a client has a read lock on a record, it can request an atomic upgrade of the lock to a write lock via the LOCK request by setting the type to WRITE_LT or WRITEW_LT.  If the server does not support atomic upgrade, it will return NFS4ERR_LOCK_NOTSUPP.  If the upgrade can be achieved without an existing conflict, the request will succeed.  Otherwise, the server will return either NFS4ERR_DENIED or NFS4ERR_DEADLOCK.  The error NFS4ERR_DEADLOCK is returned if the client issued the LOCK request with the type set to WRITEW_LT and the server has detected a deadlock.  The client should be prepared to receive such errors and, if appropriate, report the error to the requesting application.

5.4.  Blocking Locks

Some clients require the support of blocking locks.  The NFS version 4 protocol must not rely on a callback mechanism and therefore is unable to notify a client when a previously denied lock has been granted.  Clients have no choice but to continually poll for the lock.  This presents a fairness problem.  Two new lock types are added, READW and WRITEW, and are used to indicate to the server that the client is requesting a blocking lock.
The server should maintain an ordered list of pending blocking locks.  When the conflicting lock is released, the server may wait the lease period for the first waiting client to re-request the lock.  After the lease period expires, the next waiting client request is allowed the lock.  Clients are required to poll at an interval sufficiently small that it is likely to acquire the lock in a timely manner.  The server is not required to maintain a list of pending blocked locks, since it is used to increase fairness and not for correct operation.  Because of the unordered nature of crash recovery, storing of lock state to stable storage would be required to guarantee ordered granting of blocking locks.

Servers may also note the lock types and delay returning denial of the request to allow extra time for a conflicting lock to be released, allowing a successful return.  In this way, clients can avoid the burden of needlessly frequent polling for blocking locks.  The server should take care in the length of delay in the event the client retransmits the request.

5.5.  Lease Renewal

The purpose of a lease is to allow a server to remove stale locks that are held by a client that has crashed or is otherwise unreachable.  It is not a mechanism for cache consistency, and lease renewals may not be denied if the lease interval has not expired.

The following events cause implicit renewal of all of the leases for a given client (i.e. all those sharing a given clientid).  Each of these is a positive indication that the client is still active and that the associated state held at the server, for the client, is still valid.

   o  An OPEN with a valid clientid.

   o  Any operation made with a valid stateid (CLOSE, DELEGRETURN, LOCK, LOCKU, OPEN, OPEN_CONFIRM, OPEN_DOWNGRADE, READ, SETATTR, WRITE).  This does not include the special stateids of all bits 0 or all bits 1.

      Note that if the client had restarted or rebooted, the client would not be making these requests without issuing the SETCLIENTID/SETCLIENTID_CONFIRM sequence.  The use of the SETCLIENTID/SETCLIENTID_CONFIRM sequence (one that changes the client verifier) notifies the server to drop the locking state associated with the client.  SETCLIENTID/SETCLIENTID_CONFIRM never renews a lease.

      If the server has rebooted, the stateids (NFS4ERR_STALE_STATEID error) or the clientid (NFS4ERR_STALE_CLIENTID error) will not be valid, hence preventing spurious renewals.

This approach allows for low overhead lease renewal which scales well.  In the typical case no extra RPC calls are required for lease renewal, and in the worst case one RPC is required every lease period (i.e. a RENEW operation).  The number of locks held by the client is not a factor since all state for the client is involved with the lease renewal action.

Since all operations that create a new lease also renew existing leases, the server must maintain a common lease expiration time for all valid leases for a given client.  This lease time can then be easily updated upon implicit lease renewal actions.
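A minimal sketch of that bookkeeping follows; it assumes a single expiration time per clientid, and all names in it are invented for the example:

   #include <stdint.h>
   #include <time.h>

   struct client_lease {
       uint64_t clientid;
       time_t   expires;
   };

   /* Called on any operation that implicitly renews the lease.
      One common expiration covers every lock, open, and
      delegation held by the client, so renewal is O(1)
      regardless of how much state the client holds. */
   static void renew_lease(struct client_lease *cl, time_t now,
                           unsigned lease_seconds)
   {
       cl->expires = now + lease_seconds;
   }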
5.6.  Crash Recovery

The important requirement in crash recovery is that both the client and the server know when the other has failed.  Additionally, it is required that a client sees a consistent view of data across server restarts or reboots.  All READ and WRITE operations that may have been queued within the client or network buffers must wait until the client has successfully recovered the locks protecting the READ and WRITE operations.

5.6.1.  Client Failure and Recovery

In the event that a client fails, the server may recover the client's locks when the associated leases have expired.  Conflicting locks from another client may only be granted after this lease expiration.  If the client is able to restart or reinitialize within the lease period, the client may be forced to wait the remainder of the lease period before obtaining new locks.

To minimize client delay upon restart, lock requests are associated with an instance of the client by a client-supplied verifier.  This verifier is part of the initial SETCLIENTID call made by the client.  The server returns a clientid as a result of the SETCLIENTID operation.  The client then confirms the use of the clientid with SETCLIENTID_CONFIRM.  The clientid in combination with an opaque owner field is then used by the client to identify the lock owner for OPEN.  This chain of associations is then used to identify all locks for a particular client.

Since the verifier will be changed by the client upon each initialization, the server can compare a new verifier to the verifier associated with currently held locks and determine that they do not match.  This signifies the client's new instantiation and subsequent loss of locking state.  As a result, the server is free to release all locks held which are associated with the old clientid which was derived from the old verifier.

Note that the verifier must have the same uniqueness properties as the verifier for the COMMIT operation.

5.6.2.  Server Failure and Recovery

If the server loses locking state (usually as a result of a restart or reboot), it must allow clients time to discover this fact and re-establish the lost locking state.  The client must be able to re-establish the locking state without having the server deny valid requests because the server has granted conflicting access to another client.  Likewise, if there is the possibility that clients have not yet re-established their locking state for a file, the server must disallow READ and WRITE operations for that file.  The duration of this recovery period is equal to the duration of the lease period.

A client can determine that server failure (and thus loss of locking state) has occurred when it receives one of two errors.  The NFS4ERR_STALE_STATEID error indicates a stateid invalidated by a reboot or restart.  The NFS4ERR_STALE_CLIENTID error indicates a clientid invalidated by reboot or restart.  When either of these is received, the client must establish a new clientid (see the section "Client ID") and re-establish the locking state as discussed below.

The period of special handling of locking and READs and WRITEs, equal in duration to the lease period, is referred to as the "grace period".  During the grace period, clients recover locks and the associated state by reclaim-type locking requests (i.e. LOCK requests with reclaim set to true and OPEN operations with a claim type of CLAIM_PREVIOUS).  During the grace period, the server must reject READ and WRITE operations and non-reclaim locking requests (i.e. other LOCK and OPEN operations) with an error of NFS4ERR_GRACE.

If the server can reliably determine that granting a non-reclaim request will not conflict with reclamation of locks by other clients, the NFS4ERR_GRACE error does not have to be returned and the non-reclaim client request can be serviced.
For the server to be able to service READ and WRITE operations during
the grace period, it must again be able to guarantee that no possible
conflict could arise between an impending reclaim locking request and
the READ or WRITE operation.  If the server is unable to offer that
guarantee, the NFS4ERR_GRACE error must be returned to the client.

For a server to provide simple, valid handling during the grace
period, the easiest method is to simply reject all non-reclaim locking
requests and READ and WRITE operations by returning the NFS4ERR_GRACE
error.  However, a server may keep information about granted locks in
stable storage.  With this information, the server could determine if
a regular lock or READ or WRITE operation can be safely processed.

For example, if a count of locks on a given file is available in
stable storage, the server can track reclaimed locks for the file,
and when all reclaims have been processed, non-reclaim locking
requests may be processed.  This way the server can ensure that
non-reclaim locking requests will not conflict with potential reclaim
requests.  With respect to I/O requests, if the server is able to
determine that there are no outstanding reclaim requests for a file by
information from stable storage or another similar mechanism, the
processing of I/O requests could proceed normally for the file.

To reiterate, for a server that allows non-reclaim lock and I/O
requests to be processed during the grace period, it MUST determine
that no lock subsequently reclaimed will be rejected and that no lock
subsequently reclaimed would have prevented any I/O operation
processed during the grace period.

Clients should be prepared for the return of NFS4ERR_GRACE errors for
non-reclaim lock and I/O requests.  In this case the client should
employ a retry mechanism for the request.  A delay (on the order of
several seconds) between retries should be used to avoid overwhelming
the server.  Further discussion of the general issue is included in
[Floyd].  The client must account for the server that is able to
perform I/O and non-reclaim locking requests within the grace period
as well as those that cannot do so.

A reclaim-type locking request outside the server's grace period can
only succeed if the server can guarantee that no conflicting lock or
I/O request has been granted since reboot or restart.

A server may, upon restart, establish a new value for the lease
period.  Therefore, clients should, once a new clientid is
established, refetch the lease_time attribute and use it as the basis
for lease renewal for the lease associated with that server.  However,
the server must establish, for this restart event, a grace period at
least as long as the lease period for the previous server
instantiation.  This allows the client state obtained during the
previous server instance to be reliably re-established.

5.6.3.  Network Partitions and Recovery

If the duration of a network partition is greater than the lease
period provided by the server, the server will not have received a
lease renewal from the client.  If this occurs, the server may free
all locks held for the client.  As a result, all stateids held by the
client will become invalid or stale.

Once the client is able to reach the server after such a network
partition, all I/O submitted by the client with the now invalid
stateids will fail, with the server returning the error
NFS4ERR_EXPIRED.  Once this error is received, the client will
suitably notify the application that held the lock.

As a courtesy to the client or as an optimization, the server may
continue to hold locks on behalf of a client for which recent
communication has extended beyond the lease period.  If the server
receives a lock or I/O request that conflicts with one of these
courtesy locks, the server must free the courtesy lock and grant the
new request.

When a network partition is combined with a server reboot, there are
edge conditions that place requirements on the server in order to
avoid silent data corruption following the server reboot.  Two of
these edge conditions are known, and are discussed below.

The first edge condition has the following scenario:

1.  Client A acquires a lock.

2.  Client A and server experience mutual network partition, such
    that client A is unable to renew its lease.

3.  Client A's lease expires, so the server releases the lock.

4.  Client B acquires a lock that would have conflicted with that of
    client A.

5.  Client B releases the lock.

6.  Server reboots.

7.  Network partition between client A and server heals.

8.  Client A issues a RENEW operation, and gets back
    NFS4ERR_STALE_CLIENTID.

9.  Client A reclaims its lock within the server's grace period.

Thus, at the final step, the server has erroneously granted client A's
lock reclaim.  If client B modified the object the lock was
protecting, client A will experience object corruption.

The second known edge condition follows:

1.   Client A acquires a lock.

2.   Server reboots.

3.   Client A and server experience mutual network partition, such
     that client A is unable to reclaim its lock within the grace
     period.

4.   Server's reclaim grace period ends.  Client A has no locks
     recorded on the server.

5.   Client B acquires a lock that would have conflicted with that of
     client A.

6.   Client B releases the lock.

7.   Server reboots a second time.

8.   Network partition between client A and server heals.

9.   Client A issues a RENEW operation, and gets back
     NFS4ERR_STALE_CLIENTID.

10.  Client A reclaims its lock within the server's grace period.

As with the first edge condition, the final step of the scenario of
the second edge condition has the server erroneously granting client
A's lock reclaim.

Solving the first and second edge conditions requires that the server
either assume, after it reboots, that an edge condition has occurred
and thus return NFS4ERR_NO_GRACE for all reclaim attempts, or record
some information in stable storage.  The amount of information the
server records in stable storage is in inverse proportion to how harsh
the server wants to be whenever the edge conditions occur.  A server
that is completely tolerant of all edge conditions will record in
stable storage every lock that is acquired, removing the lock record
from stable storage only when the lock is unlocked by the client and
the lock's lockowner advances the sequence number such that the lock
release is not the last stateful event for the lockowner's sequence.

For the two aforementioned edge conditions, the harshest a server can
be, and still support a grace period for reclaims, requires that the
server record some minimal information in stable storage.  For
example, a server implementation could, for each client, save in
stable storage a record containing:

o  the client's id string

o  a boolean that indicates if the client's lease expired or if there
   was administrative intervention (see the section "Server Revocation
   of Locks") to revoke a record lock, share reservation, or
   delegation

o  a timestamp that is updated the first time, after a server boot or
   reboot, that the client acquires record locking, share reservation,
   or delegation state on the server.  The timestamp need not be
   updated on subsequent lock requests until the server reboots.

The server implementation would also record in stable storage the
timestamps from the two most recent server reboots.

Assuming the above record keeping, for the first edge condition, after
the server reboots, the record that client A's lease expired means
that another client could have acquired a conflicting record lock,
share reservation, or delegation.  Hence the server must reject a
reclaim from client A with the error NFS4ERR_NO_GRACE.

For the second edge condition, after the server reboots for a second
time, the record that the client had an unexpired record lock, share
reservation, or delegation established before the server's previous
incarnation means that the server must reject a reclaim from client A
with the error NFS4ERR_NO_GRACE.

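The record keeping above might be laid out as in the following sketch.
The structure, field names, and status values are hypothetical, but
the two rejection tests correspond to the two edge conditions:

   #include <stdbool.h>
   #include <time.h>

   enum nfs_status { NFS4_OK, NFS4ERR_NO_GRACE };  /* placeholders */

   struct client_record {
       char   id_string[128];  /* the client's id string */
       bool   lease_expired;   /* lease expiry or administrative
                                  revocation occurred */
       time_t first_state;     /* first acquisition of locking state
                                  after a server boot */
   };

   static time_t prev_boot;    /* the earlier of the two most recent
                                  recorded server reboot timestamps */

   static enum nfs_status check_reclaim(const struct client_record *r)
   {
       if (r->lease_expired)           /* first edge condition */
           return NFS4ERR_NO_GRACE;
       if (r->first_state < prev_boot) /* second edge condition: state
                                          predates the previous server
                                          incarnation */
           return NFS4ERR_NO_GRACE;
       return NFS4_OK;
   }
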
Regardless of the level and approach to record keeping, the server
MUST implement one of the following strategies (which apply to
reclaims of share reservations, record locks, and delegations):

1.  Reject all reclaims with NFS4ERR_NO_GRACE.  This is extremely
    harsh, but necessary if the server does not want to record lock
    state in stable storage.

2.  Record sufficient state in stable storage such that all known edge
    conditions involving server reboot, including the two noted in
    this section, are detected.  False positives are acceptable.  Note
    that at this time, it is not known if there are other edge
    conditions.

If, after a server reboot, the server determines that there is
unrecoverable damage or corruption to the stable storage, then for all
clients and/or locks affected, the server MUST return
NFS4ERR_NO_GRACE.

A mandate for the client's handling of the NFS4ERR_NO_GRACE error is
outside the scope of this specification, since the strategies for such
handling are very dependent on the client's operating environment.
However, one potential approach is described below.

When the client receives NFS4ERR_NO_GRACE, it could examine the change
attribute of the objects for which the client is trying to reclaim
state, and use that to determine whether to re-establish the state via
normal OPEN or LOCK requests.  This is acceptable provided the
client's operating environment allows it.  In other words, the client
implementor is advised to document this behavior for users.  The
client could also inform the application that its record lock or share
reservations (whether they were delegated or not) have been lost, such
as via a UNIX signal, a GUI pop-up window, etc.

See the section "Data Caching and Revocation" for a discussion of what
the client should do for dealing with unreclaimed delegations on
client state.  For further discussion of revocation of locks see the
section "Server Revocation of Locks".

5.7.  Recovery from a Lock Request Timeout or Abort

In the event a lock request times out, a client may decide not to
retry the request.  The client may also abort the request when the
process for which it was issued is terminated (e.g. in UNIX due to a
signal).  It is possible though that the server received the request
and acted upon it.  This would change the state on the server without
the client being aware of the change.  It is paramount that the client
re-synchronize state with the server before it attempts any other
operation that takes a seqid and/or a stateid with the same
lock_owner.  This is straightforward to do without a special
re-synchronize operation.

Since the server maintains the last lock request and response received
for each lock_owner, the client should cache the last lock request it
sent for which it did not receive a response.  From this, the next
time the client does a lock operation for the lock_owner, it can send
the cached request, if there is one, and if the request was one that
established state (e.g. a LOCK or OPEN operation), the server will
return the cached result or, if it never saw the request, perform it.
The client can follow up with a request to remove the state (e.g. a
LOCKU or CLOSE operation).  With this approach, the sequencing and
stateid information on the client and server for the given lock_owner
will re-synchronize, and in turn the lock state will re-synchronize.

5.8.  Server Revocation of Locks

At any point, the server can revoke locks held by a client, and the
client must be prepared for this event.  When the client detects that
its locks have been or may have been revoked, the client is
responsible for validating the state information between itself and
the server.  Validating locking state for the client means that it
must verify or reclaim state for each lock currently held.

The first instance of lock revocation is upon server reboot or
re-initialization.  In this instance the client will receive an error
(NFS4ERR_STALE_STATEID or NFS4ERR_STALE_CLIENTID) and the client will
proceed with normal crash recovery as described in the previous
section.

The second lock revocation event is the inability to renew the lease
before expiration.  While this is considered a rare or unusual event,
the client must be prepared to recover.  Both the server and client
will be able to detect the failure to renew the lease and are capable
of recovering without data corruption.  The server tracks the last
renewal event serviced for the client and knows when the lease will
expire.  Similarly, the client must track operations which will renew
the lease period.  Using the time that each such request was sent and
the time that the corresponding reply was received, the client should
bound the time that the corresponding renewal could have occurred on
the server and thus determine if it is possible that a lease period
expiration could have occurred.

The third lock revocation event can occur as a result of
administrative intervention within the lease period.  While this is
considered a rare event, it is possible that the server's
administrator has decided to release or revoke a particular lock held
by the client.

As a result of the revocation, the client will receive an error of
NFS4ERR_ADMIN_REVOKED.  In this instance the client may assume that
only the lock_owner's locks have been lost.  The client notifies the
lock holder appropriately.  The client may not assume the lease period
has been renewed as a result of the failed operation.

When the client determines the lease period may have expired, the
client must mark all locks held for the associated lease as
"unvalidated".  This means the client has been unable to re-establish
or confirm the appropriate lock state with the server.  As described
in the previous section on crash recovery, there are scenarios in
which the server may grant conflicting locks after the lease period
has expired for a client.  When it is possible that the lease period
has expired, the client must validate each lock currently held to
ensure that a conflicting lock has not been granted.  The client may
accomplish this task by issuing an I/O request, either a pending I/O
or a zero-length read, specifying the stateid associated with the lock
in question.  If the response to the request is success, the client
has validated all of the locks governed by that stateid and
re-established the appropriate state between itself and the server.
If the I/O request is not successful, then one or more of the locks
associated with the stateid was revoked by the server and the client
must notify the owner.

5.9.  Share Reservations

A share reservation is a mechanism to control access to a file.  It is
a separate and independent mechanism from record locking.  When a
client opens a file, it issues an OPEN operation to the server
specifying the type of access required (READ, WRITE, or BOTH) and the
type of access to deny others (deny NONE, READ, WRITE, or BOTH).  If
the OPEN fails, the client will fail the application's open request.

Pseudo-code definition of the semantics:

   if (request.access == 0)
           return (NFS4ERR_INVAL)
   else if ((request.access & file_state.deny) ||
            (request.deny & file_state.access))
           return (NFS4ERR_DENIED)

This checking of share reservations on OPEN is done with no exception
for an existing OPEN for the same open_owner.

The constants used for the OPEN and OPEN_DOWNGRADE operations for the
access and deny fields are as follows:

   const OPEN4_SHARE_ACCESS_READ   = 0x00000001;
   const OPEN4_SHARE_ACCESS_WRITE  = 0x00000002;
   const OPEN4_SHARE_ACCESS_BOTH   = 0x00000003;

   const OPEN4_SHARE_DENY_NONE     = 0x00000000;
   const OPEN4_SHARE_DENY_READ     = 0x00000001;
   const OPEN4_SHARE_DENY_WRITE    = 0x00000002;
   const OPEN4_SHARE_DENY_BOTH     = 0x00000003;

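A compilable C rendering of the pseudo-code and constants above might
look like the following.  The status names are placeholders, and
file_state is assumed to carry the union of the access and deny bits
of all existing OPENs for the file:

   #include <stdint.h>

   #define OPEN4_SHARE_ACCESS_READ   0x00000001
   #define OPEN4_SHARE_ACCESS_WRITE  0x00000002
   #define OPEN4_SHARE_ACCESS_BOTH   0x00000003
   #define OPEN4_SHARE_DENY_NONE     0x00000000
   #define OPEN4_SHARE_DENY_READ     0x00000001
   #define OPEN4_SHARE_DENY_WRITE    0x00000002
   #define OPEN4_SHARE_DENY_BOTH     0x00000003

   enum nfs_status { NFS4_OK, NFS4ERR_INVAL, NFS4ERR_DENIED };

   struct share_state {
       uint32_t access;   /* union of access bits of existing OPENs */
       uint32_t deny;     /* union of deny bits of existing OPENs */
   };

   static enum nfs_status check_share(struct share_state request,
                                      struct share_state file_state)
   {
       if (request.access == 0)
           return NFS4ERR_INVAL;
       if ((request.access & file_state.deny) ||
           (request.deny & file_state.access))
           return NFS4ERR_DENIED;
       return NFS4_OK;
   }
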
5.10.  OPEN/CLOSE Operations

To provide correct share semantics, a client MUST use the OPEN
operation to obtain the initial filehandle and indicate the desired
access and what, if any, access to deny.  Even if the client intends
to use a stateid of all 0's or all 1's, it must still obtain the
filehandle for the regular file with the OPEN operation so the
appropriate share semantics can be applied.  For clients that do not
have a deny mode built into their open programming interfaces, deny
equal to NONE should be used.

The OPEN operation with the CREATE flag also subsumes the CREATE
operation for regular files as used in previous versions of the NFS
protocol.  This allows a create with a share to be done atomically.

The CLOSE operation removes all share reservations held by the
lock_owner on that file.  If record locks are held, the client SHOULD
release all locks before issuing a CLOSE.  The server MAY free all
outstanding locks on CLOSE, but some servers may not support the CLOSE
of a file that still has record locks held.  The server MUST return
failure, NFS4ERR_LOCKS_HELD, if any locks would exist after the CLOSE.

The LOOKUP operation will return a filehandle without establishing any
lock state on the server.  Without a valid stateid, the server will
assume the client has the least access.  For example, a file opened
with deny READ/WRITE cannot be accessed using a filehandle obtained
through LOOKUP because it would not have a valid stateid (i.e. using a
stateid of all bits 0 or all bits 1).

5.10.1.  Close and Retention of State Information

Since a CLOSE operation requests deallocation of a stateid, dealing
with retransmission of the CLOSE may pose special difficulties: the
state information, which normally would be used to determine the state
of the open file being designated, might be deallocated, resulting in
an NFS4ERR_BAD_STATEID error.

Servers may deal with this problem in a number of ways.  To provide
the greatest degree of assurance that the protocol is being used
properly, a server should, rather than deallocate the stateid, mark it
as close-pending, and retain the stateid with this status until later
deallocation.  In this way, a retransmitted CLOSE can be recognized
since the stateid points to state information with this distinctive
status, so that it can be handled without error.  When adopting this
strategy, a server should retain the state information until the
earliest of:

o  Another validly sequenced request for the same lockowner, that is
   not a retransmission.

o  The time that a lockowner is freed by the server due to a period
   with no activity.

o  All locks for the client are freed as a result of a SETCLIENTID.

Servers may avoid this complexity, at the cost of less complete
protocol error checking, by simply responding NFS4_OK in the event of
a CLOSE for a deallocated stateid, on the assumption that this case
must be caused by a retransmitted CLOSE.  When adopting this approach,
it is desirable to at least log an error when returning a no-error
indication in this situation.  If the server maintains a reply-cache
mechanism, it can verify that the CLOSE is indeed a retransmission and
avoid error logging in most cases.

5.11.  Open Upgrade and Downgrade

When an OPEN is done for a file and the lockowner for which the open
is being done already has the file open, the result is to upgrade the
open file status maintained on the server to include the access and
deny bits specified by the new OPEN as well as those for the existing
OPEN.  The result is that there is one open file, as far as the
protocol is concerned, and it includes the union of the access and
deny bits for all of the OPEN requests completed.  Only a single CLOSE
will be done to reset the effects of both OPENs.  Note that the
client, when issuing the OPEN, may not know that the same file is in
fact being opened.

The above only applies if both OPENs result in the OPENed object being
designated by the same filehandle.  When the server chooses to export
multiple filehandles corresponding to the same file object and returns
different filehandles on two different OPENs of the same file object,
the server MUST NOT "OR" together the access and deny bits and
coalesce the two open files.

Instead the server must maintain separate OPENs with separate
stateids, and will require separate CLOSEs to free them.

When multiple open files on the client are merged into a single open
file object on the server, the close of one of the open files (on the
client) may necessitate change of the access and deny status of the
open file on the server.  This is because the union of the access and
deny bits for the remaining opens may be smaller (i.e. a proper
subset) than previously.  The OPEN_DOWNGRADE operation is used to make
the necessary change, and the client should use it to update the
server so that share reservation requests by other clients are handled
properly.

5.12.  Short and Long Leases

When determining the time period for the server lease, the usual lease
tradeoffs apply.  Short leases are good for fast server recovery at a
cost of increased RENEW or READ (with zero length) requests.  Longer
leases are certainly kinder and gentler to servers trying to handle
very large numbers of clients.  The number of RENEW requests drops in
proportion to the lease time.  The disadvantages of long leases are
slower recovery after server failure (the server must wait for the
leases to expire and the grace period to elapse before granting new
lock requests) and increased file contention (if a client fails to
transmit an unlock request, then the server must wait for lease
expiration before granting new locks).

Long leases are usable if the server is able to store lease state in
non-volatile memory.  Upon recovery, the server can reconstruct the
lease state from its non-volatile memory and continue operation with
its clients; in that case long leases are not an issue.

5.13.  Clocks, Propagation Delay, and Calculating Lease Expiration

To avoid the need for synchronized clocks, lease times are granted by
the server as a time delta.  However, there is a requirement that the
client and server clocks do not drift excessively over the duration of
the lock.  There is also the issue of propagation delay across the
network, which could easily be several hundred milliseconds, as well
as the possibility that requests will be lost and need to be
retransmitted.

To take propagation delay into account, the client should subtract it
from lease times (e.g. if the client estimates the one-way propagation
delay as 200 msec, then it can assume that the lease is already 200
msec old when it gets it).  In addition, it will take another 200 msec
to get a response back to the server.  So the client must send a lock
renewal or write data back to the server 400 msec before the lease
would expire.

The server's lease period configuration should take into account the
network distance of the clients that will be accessing the server's
resources.  It is expected that the lease period will take into
account the network propagation delays and other network delay factors
for the client population.  Since the protocol does not allow for an
automatic method to determine an appropriate lease period, the
server's administrator may have to tune the lease period.

6.  Client-Side Caching

Client-side caching of data, of file attributes, and of file names is
essential to providing good performance with the NFS protocol.
Providing distributed cache coherence is a difficult problem, and
previous versions of the NFS protocol have not attempted it.

Instead, several NFS client implementation techniques have been used
to reduce the problems that a lack of coherence poses for users.
These techniques have not been clearly defined by earlier protocol
specifications, and it is often unclear what is valid or invalid
client behavior.

The NFS version 4 protocol uses many techniques similar to those that
have been used in previous protocol versions.  The NFS version 4
protocol does not provide distributed cache coherence.  However, it
defines a more limited set of caching guarantees to allow locks and
share reservations to be used without destructive interference from
client-side caching.

In addition, the NFS version 4 protocol introduces a delegation
mechanism which allows many decisions normally made by the server to
be made locally by clients.  This mechanism provides efficient support
of the common cases where sharing is infrequent or where sharing is
read-only.

6.1.  Performance Challenges for Client-Side Caching

Caching techniques used in previous versions of the NFS protocol have
been successful in providing good performance.  However, several
scalability challenges can arise when those techniques are used with
very large numbers of clients.  This is particularly true when clients
are geographically distributed, which classically increases the
latency for cache revalidation requests.

The previous versions of the NFS protocol repeat their file data cache
validation requests at the time the file is opened.  This behavior can
have serious performance drawbacks.  A common case is one in which a
file is only accessed by a single client.  Therefore, sharing is
infrequent.  In this case, repeated reference to the server to find
that no conflicts exist is expensive.  A better option with regard to
performance is to allow a client that repeatedly opens a file to do so
without reference to the server.  This is done until potentially
conflicting operations from another client actually occur.

A similar situation arises in connection with file locking.  Sending
file lock and unlock requests to the server, as well as the read and
write requests necessary to make data caching consistent with the
locking semantics (see the section "Data Caching and File Locking"),
can severely limit performance.  When locking is used to provide
protection against infrequent conflicts, a large penalty is incurred.
This penalty may discourage the use of file locking by applications.

The NFS version 4 protocol provides more aggressive caching strategies
with the following design goals:

o  Compatibility with a large range of server semantics.

o  Provide the same caching benefits as previous versions of the NFS
   protocol when unable to provide the more aggressive model.

o  Requirements for aggressive caching are organized so that a large
   portion of the benefit can be obtained even when not all of the
   requirements can be met.

The appropriate requirements for the server are discussed in later
sections in which specific forms of caching are covered (see the
section "Open Delegation").

6.2.  Delegation and Callbacks

Recallable delegation of server responsibilities for a file to a
client improves performance by avoiding repeated requests to the
server in the absence of inter-client conflict.
With the use of a "callback" RPC from server to client, a server
recalls delegated responsibilities when another client engages in
sharing of a delegated file.

A delegation is passed from the server to the client, specifying the
object of the delegation and the type of delegation.  There are
different types of delegations, but each type contains a stateid to be
used to represent the delegation when performing operations that
depend on the delegation.  This stateid is similar to those associated
with locks and share reservations but differs in that the stateid for
a delegation is associated with a clientid and may be used on behalf
of all the open_owners for the given client.  A delegation is made to
the client as a whole and not to any specific process or thread of
control within it.

Because callback RPCs may not work in all environments (due to
firewalls, for example), correct protocol operation does not depend on
them.  Preliminary testing of callback functionality by means of a
CB_NULL procedure determines whether callbacks can be supported.  The
CB_NULL procedure checks the continuity of the callback path.  A
server makes a preliminary assessment of callback availability to a
given client and avoids delegating responsibilities until it has
determined that callbacks are supported.  Because the granting of a
delegation is always conditional upon the absence of conflicting
access, clients must not assume that a delegation will be granted, and
they must always be prepared for OPENs to be processed without any
delegations being granted.

Once granted, a delegation behaves in most ways like a lock.  There is
an associated lease that is subject to renewal together with all of
the other leases held by that client.

Unlike locks, an operation by a second client to a delegated file will
cause the server to recall a delegation through a callback.  On
recall, the client holding the delegation must flush modified state
(such as modified data) to the server and return the delegation.  The
conflicting request will not receive a response until the recall is
complete.  The recall is considered complete when the client returns
the delegation or the server times out on the recall and revokes the
delegation as a result of the timeout.  Following the resolution of
the recall, the server has the information necessary to grant or deny
the second client's request.

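The completion rule just described (returned, or timed out and
revoked) can be modeled with a small sketch; the names and the
deadline policy are illustrative assumptions, not protocol elements:

   #include <stdbool.h>
   #include <time.h>

   struct recall_state {
       bool   returned;    /* client has returned the delegation */
       time_t deadline;    /* may be extended while the client is
                              diligently flushing, but stays bounded */
   };

   /* The conflicting request is answered only once this returns true;
    * if the deadline has passed, the server revokes the delegation. */
   static bool recall_complete(const struct recall_state *r)
   {
       return r->returned || time(NULL) > r->deadline;
   }
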
At the time the client receives a delegation recall, it may have
substantial state that needs to be flushed to the server.  Therefore,
the server should allow sufficient time for the delegation to be
returned, since it may involve numerous RPCs to the server.  If the
server is able to determine that the client is diligently flushing
state to the server as a result of the recall, the server may extend
the usual time allowed for a recall.  However, the time allowed for
recall completion should not be unbounded.

An example of this is when responsibility to mediate opens on a given
file is delegated to a client (see the section "Open Delegation").
The server will not know what opens are in effect on the client.
Without this knowledge the server will be unable to determine if the
access and deny state for the file allows any particular open until
the delegation for the file has been returned.

A client failure or a network partition can result in failure to
respond to a recall callback.  In this case, the server will revoke
the delegation, which in turn will render useless any modified state
still on the client.

6.2.1.  Delegation Recovery

There are three situations that delegation recovery must deal with:

o  Client reboot or restart

o  Server reboot or restart

o  Network partition (full or callback-only)

In the event the client reboots or restarts, the failure to renew
leases will result in the revocation of record locks and share
reservations.  Delegations, however, may be treated a bit differently.

There will be situations in which delegations will need to be
re-established after a client reboots or restarts.  The reason for
this is that the client may have file data stored locally, and this
data was associated with the previously held delegations.  The client
will need to re-establish the appropriate file state on the server.

To allow for this type of client recovery, the server MAY extend the
period for delegation recovery beyond the typical lease expiration
period.  This implies that requests from other clients that conflict
with these delegations will need to wait.  Because the normal recall
process may require significant time for the client to flush changed
state to the server, other clients need to be prepared for delays that
occur because of a conflicting delegation.  This longer interval would
increase the window for clients to reboot and consult stable storage
so that the delegations can be reclaimed.  For open delegations, such
delegations are reclaimed using OPEN with a claim type of
CLAIM_DELEGATE_PREV.  (See the sections "Data Caching and Revocation"
and "Operation 18: OPEN" for discussion of open delegation and the
details of OPEN, respectively.)

A server MAY support a claim type of CLAIM_DELEGATE_PREV, but if it
does, it MUST NOT remove delegations upon SETCLIENTID_CONFIRM, and
instead MUST, for a period of time no less than that of the value of
the lease_time attribute, maintain the client's delegations to allow
time for the client to issue CLAIM_DELEGATE_PREV requests.  A server
that supports CLAIM_DELEGATE_PREV MUST support the DELEGPURGE
operation.

When the server reboots or restarts, delegations are reclaimed (using
the OPEN operation with CLAIM_PREVIOUS) in a similar fashion to record
locks and share reservations.  However, there is a slight semantic
difference.  In the normal case, if the server decides that a
delegation should not be granted, it performs the requested action
(e.g. OPEN) without granting any delegation.  For reclaim, the server
grants the delegation, but a special designation is applied so that
the client treats the delegation as having been granted but recalled
by the server (a sketch of this designation follows the list below).
Because of this, the client has the duty to write all modified state
to the server and then return the delegation.  This process of
handling delegation reclaim reconciles three principles of the NFS
version 4 protocol:

o  Upon reclaim, a client reporting resources assigned to it by an
   earlier server instance must be granted those resources.

o  The server has unquestionable authority to determine whether
   delegations are to be granted and, once granted, whether they are
   to be continued.

o  The use of callbacks is not to be depended upon until the client
   has proven its ability to receive them.

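The "granted but recalled" designation might be modeled as in the
sketch below; the claim-type names are the protocol's, while the
structure and logic are illustrative assumptions:

   #include <stdbool.h>

   enum claim_type { CLAIM_NULL, CLAIM_PREVIOUS,
                     CLAIM_DELEGATE_CUR, CLAIM_DELEGATE_PREV };

   struct delegation {
       bool granted;
       bool recall_pending;  /* client must flush modified state and
                                return the delegation */
   };

   static struct delegation reclaim_delegation(enum claim_type claim)
   {
       struct delegation d = { false, false };
       if (claim == CLAIM_PREVIOUS) {
           d.granted        = true;  /* honor prior-instance state */
           d.recall_pending = true;  /* treat as granted-then-recalled */
       }
       return d;
   }
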
When a network partition occurs, delegations are subject to freeing by
the server when the lease renewal period expires.  This is similar to
the behavior for locks and share reservations.  For delegations,
however, the server may extend the period in which conflicting
requests are held off.  Eventually the occurrence of a conflicting
request from another client will cause revocation of the delegation.
A loss of the callback path (e.g. by later network configuration
change) will have the same effect.  A recall request will fail and
revocation of the delegation will result.

A client normally finds out about revocation of a delegation when it
uses a stateid associated with a delegation and receives the error
NFS4ERR_EXPIRED.  It also may find out about delegation revocation
after a client reboot when it attempts to reclaim a delegation and
receives that same error.  Note that in the case of a revoked write
open delegation, there are issues because data may have been modified
by the client whose delegation is revoked and separately by other
clients.  See the section "Revocation Recovery for Write Open
Delegation" for a discussion of such issues.  Note also that when
delegations are revoked, information about the revoked delegation will
be written by the server to stable storage (as described in the
section "Crash Recovery").  This is done to deal with the case in
which a server reboots after revoking a delegation but before the
client holding the revoked delegation is notified about the
revocation.

6.3.  Data Caching

When applications share access to a set of files, they need to be
implemented so as to take account of the possibility of conflicting
access by another application.  This is true whether the applications
in question execute on different clients or reside on the same client.

Share reservations and record locks are the facilities the NFS version
4 protocol provides to allow applications to coordinate access by
providing mutual exclusion facilities.  The NFS version 4 protocol's
data caching must be implemented such that it does not invalidate the
assumptions that those using these facilities depend upon.

6.3.1.  Data Caching and OPENs

In order to avoid invalidating the sharing assumptions that
applications rely on, NFS version 4 clients should not provide cached
data to applications or modify it on behalf of an application when it
would not be valid to obtain or modify that same data via a READ or
WRITE operation.

Furthermore, in the absence of open delegation (see the section "Open
Delegation") two additional rules apply.  Note that these rules are
obeyed in practice by many NFS version 2 and version 3 clients.

o  First, cached data present on a client must be revalidated after
   doing an OPEN.  Revalidating means that the client fetches the
   change attribute from the server, compares it with the cached
   change attribute, and if different, declares the cached data (as
   well as the cached attributes) as invalid.  This is to ensure that
   the data for the OPENed file is still correctly reflected in the
   client's cache.  This validation must be done at least when the
   client's OPEN operation includes DENY=WRITE or BOTH, thus
   terminating a period in which other clients may have had the
   opportunity to open the file with WRITE access.  Clients may choose
   to do the revalidation more often
   (i.e. at OPENs specifying DENY=NONE) to parallel the NFS version 3
   protocol's practice for the benefit of users assuming this degree
   of cache revalidation.

   Since the change attribute is updated for data and metadata
   modifications, some client implementors may be tempted to use the
   time_modify attribute and not change to validate cached data, so
   that metadata changes do not spuriously invalidate clean data.  The
   implementor is cautioned against this approach.  The change
   attribute is guaranteed to change for each update to the file,
   whereas time_modify is guaranteed to change only at the granularity
   of the time_delta attribute.  If the client's data cache validation
   logic uses time_modify and not change, it runs the risk of
   incorrectly marking stale data as valid.

o  Second, modified data must be flushed to the server before closing
   a file OPENed for write.  This is complementary to the first rule.
   If the data is not flushed at CLOSE, the revalidation done after
   the client OPENs a file is unable to achieve its purpose.  The
   other aspect to flushing the data before close is that the data
   must be committed to stable storage, at the server, before the
   CLOSE operation is requested by the client.  In the case of a
   server reboot or restart and a CLOSEd file, it may not be possible
   to retransmit the data to be written to the file.  Hence, this
   requirement.

6.3.2.  Data Caching and File Locking

For those applications that choose to use file locking instead of
share reservations to exclude inconsistent file access, there is an
analogous set of constraints that apply to client-side data caching.
These rules are effective only if the file locking is used in a way
that matches in an equivalent way the actual READ and WRITE operations
executed.  This is as opposed to file locking that is based on pure
convention.  For example, it is possible to manipulate a two-megabyte
file by dividing the file into two one-megabyte regions and protecting
access to the two regions by file locks on bytes zero and one.  A lock
for write on byte zero of the file would represent the right to do
READ and WRITE operations on the first region.  A lock for write on
byte one of the file would represent the right to do READ and WRITE
operations on the second region.  As long as all applications
manipulating the file obey this convention, they will work on a local
filesystem.  However, they may not work with the NFS version 4
protocol unless clients refrain from data caching.

The rules for data caching in the file locking environment are:

o  First, when a client obtains a file lock for a particular region,
   the data cache corresponding to that region (if any cached data
   exists) must be revalidated.  If the change attribute indicates
   that the file may have been updated since the cached data was
   obtained, the client must flush or invalidate the cached data for
   the newly locked region.  A client might choose to invalidate all
   of the non-modified cached data that it has for the file, but the
   only requirement for correct operation is to invalidate all of the
   data in the newly locked region.

o  Second, before releasing a write lock for a region, all modified
   data for that region must be flushed to the server.  The modified
   data must also be written to stable storage.

Note that flushing data to the server and the invalidation of cached
data must reflect the actual byte ranges locked or unlocked.
Rounding these up or down to reflect client cache block boundaries
will cause problems if not carefully done.  For example, writing a
modified block when only half of that block is within an area being
unlocked may cause invalid modification to the region outside the
unlocked area.  This, in turn, may be part of a region locked by
another client.  Clients can avoid this situation by synchronously
performing portions of write operations that overlap that portion
(initial or final) that is not a full block.  Similarly, invalidating
a locked area which is not an integral number of full buffer blocks
would require the client to read one or two partial blocks from the
server if the revalidation procedure shows that the data which the
client possesses may not be valid.

The data that is written to the server as a prerequisite to the
unlocking of a region must be written, at the server, to stable
storage.  The client may accomplish this either with synchronous
writes or by following asynchronous writes with a COMMIT operation.
This is required because retransmission of the modified data after a
server reboot might conflict with a lock held by another client.

A client implementation may choose to accommodate applications which
use record locking in non-standard ways (e.g. using a record lock as a
global semaphore) by flushing to the server more data upon a LOCKU
than is covered by the locked range.  This may include modified data
within files other than the one for which the unlocks are being done.
In such cases, the client must not interfere with applications whose
READs and WRITEs are being done only within the bounds of record locks
which the application holds.  For example, an application locks a
single byte of a file and proceeds to write that single byte.  A
client that chose to handle a LOCKU by flushing all modified data to
the server could validly write that single byte in response to an
unrelated unlock.  However, it would not be valid to write the entire
block in which that single written byte was located, since it includes
an area that is not locked and might be locked by another client.
Client implementations can avoid this problem by dividing files with
modified data into those for which all modifications are done to areas
covered by an appropriate record lock and those for which there are
modifications not covered by a record lock.  Any writes done for the
former class of files must not include areas not locked and thus not
modified on the client.

6.3.3.  Data Caching and Mandatory File Locking

Client-side data caching needs to respect mandatory file locking when
it is in effect.  The presence of mandatory file locking for a given
file is indicated when the client gets back NFS4ERR_LOCKED from a READ
or WRITE on a file for which it has an appropriate share reservation.
When mandatory locking is in effect for a file, the client must check
for an appropriate file lock for data being read or written.  If a
lock exists for the range being read or written, the client may
satisfy the request using the client's validated cache.  If an
appropriate file lock is not held for the range of the read or write,
the read or write request must not be satisfied by the client's cache
and the request must be sent to the server for processing.  When a
read or write request partially overlaps a locked region, the request
should be subdivided into multiple pieces with each region (locked or
not) treated appropriately.

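The subdivision step can be sketched as follows.  This illustrative
helper splits a request into at most three pieces (unlocked prefix,
locked middle, unlocked suffix), each of which would then be served
from cache or sent to the server according to the rules above:

   #include <stdint.h>

   struct io_range { uint64_t off, len; };

   /* Split "io" against "lock"; returns the number of pieces written
    * to out[] (at most 3). */
   static int subdivide(struct io_range io, struct io_range lock,
                        struct io_range out[3])
   {
       uint64_t io_end   = io.off + io.len;
       uint64_t lock_end = lock.off + lock.len;
       int n = 0;

       if (io.off < lock.off) {                  /* unlocked prefix */
           uint64_t end = io_end < lock.off ? io_end : lock.off;
           out[n++] = (struct io_range){ io.off, end - io.off };
       }
       uint64_t mid_s = io.off > lock.off ? io.off : lock.off;
       uint64_t mid_e = io_end < lock_end ? io_end : lock_end;
       if (mid_s < mid_e)                        /* locked middle */
           out[n++] = (struct io_range){ mid_s, mid_e - mid_s };
       if (io_end > lock_end) {                  /* unlocked suffix */
           uint64_t start = io.off > lock_end ? io.off : lock_end;
           out[n++] = (struct io_range){ start, io_end - start };
       }
       return n;
   }
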
6.3.4.  Data Caching and File Identity

When clients cache data, the file data needs to be organized according
to the filesystem object to which the data belongs.  For NFS version 3
clients, the typical practice has been to assume for the purpose of
caching that distinct filehandles represent distinct filesystem
objects.  The client then has the choice to organize and maintain the
data cache on this basis.

In the NFS version 4 protocol, there is now the possibility of
significant deviations from a "one filehandle per object" model,
because a filehandle may be constructed on the basis of the object's
pathname.  Therefore, clients need a reliable method to determine if
two filehandles designate the same filesystem object.  If clients were
simply to assume that all distinct filehandles denote distinct objects
and proceed to do data caching on this basis, caching inconsistencies
would arise between the distinct client-side objects which mapped to
the same server-side object.

By providing a method to differentiate filehandles, the NFS version 4
protocol alleviates a potential functional regression in comparison
with the NFS version 3 protocol.  Without this method, caching
inconsistencies within the same client could occur, a problem not
present in previous versions of the NFS protocol.  Note that it is
possible to have such inconsistencies with applications executing on
multiple clients, but that is not the issue being addressed here.

For the purposes of data caching, the following steps allow an NFS
version 4 client to determine whether two distinct filehandles denote
the same server-side object:

o  If GETATTR directed to two filehandles returns different values of
   the fsid attribute, then the filehandles represent distinct
   objects.

o  If GETATTR for any file with an fsid that matches the fsid of the
   two filehandles in question returns a unique_handles attribute with
   a value of TRUE, then the two objects are distinct.

o  If GETATTR directed to the two filehandles does not return the
   fileid attribute for both of the handles, then it cannot be
   determined whether the two objects are the same.  Therefore,
   operations which depend on that knowledge (e.g. client-side data
   caching) cannot be done reliably.

o  If GETATTR directed to the two filehandles returns different values
   for the fileid attribute, then they are distinct objects.

o  Otherwise they are the same object.

6.4.  Open Delegation

When a file is being OPENed, the server may delegate further handling
of opens and closes for that file to the opening client.  Any such
delegation is recallable, since the circumstances that allowed for the
delegation are subject to change.  In particular, if the server
receives a conflicting OPEN from another client, it must recall the
delegation before deciding whether the OPEN from the other client may
be granted.  Making a delegation is up to the server, and clients
should not assume that any particular OPEN either will or will not
result in an open delegation.  The following is a typical set of
conditions that servers might use in deciding whether an OPEN should
be delegated:

o  The client must be able to respond to the server's callback
   requests.  The server will use the CB_NULL procedure for a test of
   callback ability.

o  The client must have responded properly to previous recalls.
o  There must be no current open conflicting with the requested
   delegation.

o  There should be no current delegation that conflicts with the
   delegation being requested.

o  The probability of future conflicting open requests should be low
   based on the recent history of the file.

o  There must be no server-specific semantics of OPEN/CLOSE that would
   make the required handling incompatible with the prescribed
   handling that the delegated client would apply (see below).

There are two types of open delegations, read and write.  A read open
delegation allows a client to handle, on its own, requests to open a
file for reading that do not deny read access to others.  Multiple
read open delegations may be outstanding simultaneously and do not
conflict.  A write open delegation allows the client to handle, on its
own, all opens.  Only one write open delegation may exist for a given
file at a given time, and it is inconsistent with any read open
delegations.

When a client has a read open delegation, it may not make any changes
to the contents or attributes of the file, but it is assured that no
other client may do so.  When a client has a write open delegation, it
may modify the file data since no other client will be accessing the
file's data.  The client holding a write delegation may only affect
file attributes which are intimately connected with the file data:
size, time_modify, change.

When a client has an open delegation, it does not send OPENs or CLOSEs
to the server but updates the appropriate status internally.  For a
read open delegation, opens that cannot be handled locally (opens for
write or that deny read access) must be sent to the server.

When an open delegation is made, the response to the OPEN contains an
open delegation structure which specifies the following:

o  the type of delegation (read or write)

o  space limitation information to control flushing of data on close
   (write open delegation only; see the section "Open Delegation and
   Data Caching")

o  an nfsace4 specifying read and write permissions

o  a stateid to represent the delegation for READ and WRITE

The delegation stateid is separate and distinct from the stateid for
the OPEN proper.  The standard stateid, unlike the delegation stateid,
is associated with a particular lock_owner and will continue to be
valid after the delegation is recalled and the file remains open.

When a request internal to the client is made to open a file and an
open delegation is in effect, it will be accepted or rejected solely
on the basis of the following conditions.  Any requirement for other
checks to be made by the delegate should result in open delegation
being denied so that the checks can be made by the server itself.

o  The access and deny bits for the request and the file as described
   in the section "Share Reservations".

o  The read and write permissions as determined below.

The nfsace4 passed with the delegation can be used to avoid frequent
ACCESS calls.  The permission check should be as follows:

o  If the nfsace4 indicates that the open may be done, then it should
   be granted without reference to the server.

o  If the nfsace4 indicates that the open may not be done, then an
   ACCESS request must be sent to the server to obtain the definitive
   answer.

The server may return an nfsace4 that is more restrictive than the
actual ACL of the file.  This includes an nfsace4 that specifies
denial of all access.

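The two-step check above can be sketched as follows; the helper
functions are assumptions standing in for local ACL evaluation and the
ACCESS round trip.  Note that the cached nfsace4 can only
short-circuit a grant; any denial falls through to the server, which
remains authoritative:

   #include <stdbool.h>
   #include <stdint.h>

   struct nfsace4;   /* opaque here */

   extern bool ace_permits(const struct nfsace4 *ace, uint32_t access);
   extern bool access_op(uint32_t access);   /* ACCESS round trip */

   static bool delegated_open_permitted(const struct nfsace4 *ace,
                                        uint32_t access)
   {
       if (ace_permits(ace, access))
           return true;           /* grant locally, no server call */
       return access_op(access);  /* definitive answer from server */
   }
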
Note that some common practices, such as mapping the traditional user
"root" to the user "nobody", may make it incorrect to return the
actual ACL of the file in the delegation response.

The use of delegation together with various other forms of caching
creates the possibility that no server authentication will ever be
performed for a given user since all of the user's requests might be
satisfied locally.  Where the client is depending on the server for
authentication, the client should be sure authentication occurs for
each user by use of the ACCESS operation.  This should be the case
even if an ACCESS operation would not be required otherwise.  As
mentioned before, the server may enforce frequent authentication by
returning an nfsace4 denying all access with every open delegation.

6.4.1.  Open Delegation and Data Caching

OPEN delegation allows much of the message overhead associated with
the opening and closing of files to be eliminated.  An open when an
open delegation is in effect does not require that a validation
message be sent to the server.  The continued endurance of the "read
open delegation" provides a guarantee that no OPEN for write and thus
no write has occurred.  Similarly, when closing a file opened for
write and if write open delegation is in effect, the data written does
not have to be flushed to the server until the open delegation is
recalled.  The continued endurance of the open delegation provides a
guarantee that no open and thus no read or write has been done by
another client.

For the purposes of open delegation, READs and WRITEs done without an
OPEN are treated as the functional equivalents of a corresponding type
of OPEN.  This refers to the READs and WRITEs that use the special
stateids consisting of all zero bits or all one bits.  Therefore,
READs or WRITEs with a special stateid done by another client will
force the server to recall a write open delegation.  A WRITE with a
special stateid done by another client will force a recall of read
open delegations.

With delegations, a client is able to avoid writing data to the server
when the CLOSE of a file is serviced.  The file close system call is
the usual point at which the client is notified of a lack of stable
storage for the modified file data generated by the application.  At
the close, file data is written to the server and, through normal
accounting, the server is able to determine if the available
filesystem space for the data has been exceeded (i.e. the server
returns NFS4ERR_NOSPC or NFS4ERR_DQUOT).  This accounting includes
quotas.  The introduction of delegations requires that an alternative
method be in place for the same type of communication to occur between
client and server.

In the delegation response, the server provides either the limit of
the size of the file or the number of modified blocks and associated
block size.  The server must ensure that the client will be able to
flush data to the server of a size equal to that provided in the
original delegation.  The server must make this assurance for all
outstanding delegations.  Therefore, the server must be careful in its
management of available space for new or modified data, taking into
account available filesystem space and any applicable quotas.  The
server can recall delegations as a result of managing the available
filesystem space.

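Client-side enforcement of the space limitation might look like the
following sketch, which expresses the limit in bytes for simplicity
(a delegation may instead carry a modified-block count and block
size); all names are illustrative:

   #include <stdbool.h>
   #include <stdint.h>

   struct write_delegation {
       uint64_t space_limit;  /* bytes the server guaranteed to accept */
       uint64_t dirty_bytes;  /* modified data currently cached */
   };

   /* Returns true if nbytes more may be cached dirty; false means the
    * client should write through (or flush) to stay within the limit. */
   static bool may_buffer(struct write_delegation *d, uint64_t nbytes)
   {
       if (d->dirty_bytes + nbytes > d->space_limit)
           return false;
       d->dirty_bytes += nbytes;
       return true;
   }
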
The client should abide by the server's stated space limits for
delegations.  If the client exceeds the stated limits for the
delegation, the server's behavior is undefined.

Based on server conditions (quotas or available filesystem space), the
server may grant write open delegations with very restrictive space
limitations.  The limitations may be defined in a way that will always
force modified data to be flushed to the server on close.

With respect to authentication, flushing modified data to the server
after a CLOSE has occurred may be problematic.  For example, the user
of the application may have logged off the client, and unexpired
authentication credentials may not be present.  In this case, the
client may need to take special care to ensure that local unexpired
credentials will in fact be available.  This may be accomplished by
tracking the expiration time of credentials and flushing data well in
advance of their expiration, or by making private copies of
credentials to assure their availability when needed.

6.4.2.  Open Delegation and File Locks

When a client holds a write open delegation, lock operations are
performed locally.  This includes those required for mandatory file
locking.  This can be done since the delegation implies that there can
be no conflicting locks.  Similarly, all of the revalidations that
would normally be associated with obtaining locks, and the flushing of
data associated with the releasing of locks, need not be done.

When a client holds a read open delegation, lock operations are not
performed locally.  All lock operations, including those requesting
non-exclusive locks, are sent to the server for resolution.

6.4.3.  Handling of CB_GETATTR

The server needs to employ special handling for a GETATTR where the
target is a file that has a write open delegation in effect.  The
reason for this is that the client holding the write delegation may
have modified the data, and the server needs to reflect this change to
the second client that submitted the GETATTR.  Therefore, the client
holding the write delegation needs to be interrogated.  The server
will use the CB_GETATTR operation.  The only attributes that the
server can reliably query via CB_GETATTR are size and change.

Since CB_GETATTR is being used to satisfy another client's GETATTR
request, the server only needs to know if the client holding the
delegation has a modified version of the file.  If the client's copy
of the delegated file is not modified (data or size), the server can
satisfy the second client's GETATTR request from the attributes stored
locally at the server.  If the file is modified, the server only needs
to know about this modified state.  If the server determines that the
file is currently modified, it will respond to the second client's
GETATTR as if the file had been modified locally at the server.

Since the form of the change attribute is determined by the server and
is opaque to the client, the client and server need to agree on a
method of communicating the modified state of the file.  For the size
attribute, the client will report its current view of the file size.
For the change attribute, the handling is more involved.

For the client, the following steps will be taken when receiving a
write delegation:

o  The value of the change attribute will be obtained from the server
   and cached.  Let this value be represented by c.
While the change attribute is opaque to the client in the sense that it has no idea what units of time, if any, the server is counting change with, it is not opaque in that the client has to treat it as an unsigned integer, and the server has to be able to see the results of the client's changes to that integer. Therefore, the server MUST encode the change attribute in network order when sending it to the client. The client MUST decode it from network order to its native order when receiving it, and the client MUST encode it in network order when sending it to the server. For this reason, change is defined as an unsigned integer rather than an opaque array of octets.

For the server, the following steps will be taken when providing a write delegation:

o Upon providing a write delegation, the server will cache a copy of the change attribute in the data structure it uses to record the delegation. Let this value be represented by sc.

o When a second client sends a GETATTR operation on the same file to the server, the server obtains the change attribute from the first client. Let this value be cc.

o If the value cc is equal to sc, the file is not modified and the server returns the current values for change, time_metadata, and time_modify (for example) to the second client.

o If the value cc is NOT equal to sc, the file is currently modified at the first client and most likely will be modified at the server at a future time. The server then uses its current time to construct attribute values for time_metadata and time_modify. A new value of sc, which we will call nsc, is computed by the server, such that nsc >= sc + 1. The server then returns the constructed time_metadata, time_modify, and nsc values to the requester. The server replaces sc in the delegation record with nsc. To prevent time_modify, time_metadata, and change from appearing to go backward (which would happen if the client holding the delegation fails to write its modified data to the server before the delegation is revoked or returned), the server SHOULD update the file's metadata record with the constructed attribute values. For performance reasons, committing the constructed attribute values to stable storage is OPTIONAL.

As discussed earlier in this section, the client MAY return the same cc value on subsequent CB_GETATTR calls, even if the file was modified in the client's cache yet again between successive CB_GETATTR calls.
Therefore, the server must assume that the file has been modified yet again, and MUST take care to ensure that the new nsc it constructs and returns is greater than the previous nsc it returned. An example implementation's delegation record would satisfy this mandate by including a boolean field (let us call it "modified") that is set to false when the delegation is granted, and an sc value set at the time of grant to the change attribute value. The modified field would be set to true the first time cc != sc, and would stay true until the delegation is returned or revoked. The processing for constructing nsc, time_modify, and time_metadata would use this pseudo code:

   if (!modified) {
       do CB_GETATTR for change and size;

       if (cc != sc)
           modified = TRUE;
   } else {
       do CB_GETATTR for size;
   }

   if (modified) {
       sc = sc + 1;
       time_modify = time_metadata = current_time;
       update sc, time_modify, time_metadata into file's metadata;
   }

   return to client (that sent GETATTR) the attributes it requested,
   but make sure size comes from what CB_GETATTR returned.  Do not
   update the file's metadata with the client's modified size.

In the case that the file attribute size is different from the server's current value, the server treats this as a modification regardless of the value of the change attribute retrieved via CB_GETATTR and responds to the second client as in the last step.

This methodology resolves issues of clock differences between client and server and other scenarios where the use of CB_GETATTR breaks down.

It should be noted that the server is under no obligation to use CB_GETATTR and therefore the server MAY simply recall the delegation to avoid its use.
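One possible C rendering of the example delegation record and the pseudo code above follows. The structure layout and the cb_getattr() helper are illustrative assumptions, not part of the protocol.

   #include <stdbool.h>
   #include <stdint.h>
   #include <time.h>

   /* Illustrative delegation record, per the example in the text. */
   struct deleg_rec {
           bool     modified;  /* false at grant time            */
           uint64_t sc;        /* change attribute at grant time */
   };

   /* Hypothetical callback wrapper; fetches change and/or size from
    * the client holding the write delegation via CB_GETATTR.
    */
   extern void cb_getattr(uint64_t *cc, uint64_t *size);

   /* Construct the change value (nsc) and times returned to the
    * second client's GETATTR; "size" comes from CB_GETATTR and is
    * NOT written into the file's metadata.
    */
   static void build_getattr_reply(struct deleg_rec *d, uint64_t *nsc,
                                   time_t *time_modify,
                                   time_t *time_metadata, uint64_t *size)
   {
           uint64_t cc;

           if (!d->modified) {
                   cb_getattr(&cc, size);
                   if (cc != d->sc)
                           d->modified = true;
           } else {
                   cb_getattr(NULL, size);   /* size only */
           }

           if (d->modified) {
                   d->sc += 1;               /* ensures nsc >= sc + 1 */
                   *time_modify = *time_metadata = time(NULL);
                   /* update sc and the constructed times in the
                    * file's metadata here; commit to stable storage
                    * is OPTIONAL per the text.
                    */
           }
           *nsc = d->sc;
   }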
6.4.4. Recall of Open Delegation

The following events necessitate recall of an open delegation:

o Potentially conflicting OPEN request (or READ/WRITE done with "special" stateid)

o SETATTR issued by another client

o REMOVE request for the file

o RENAME request for the file as either source or target of the RENAME

Whether a RENAME of a directory in the path leading to the file results in recall of an open delegation depends on the semantics of the server filesystem. If that filesystem denies such RENAMEs when a file is open, the recall must be performed to determine whether the file in question is, in fact, open.

In addition to the situations above, the server may choose to recall open delegations at any time if resource constraints make it advisable to do so. Clients should always be prepared for the possibility of recall.

When a client receives a recall for an open delegation, it needs to update state on the server before returning the delegation. These same updates must be done whenever a client chooses to return a delegation voluntarily. The following items of state need to be dealt with:

o If the file associated with the delegation is no longer open and no previous CLOSE operation has been sent to the server, a CLOSE operation must be sent to the server.

o If a file has other open references at the client, then OPEN operations must be sent to the server. The appropriate stateids will be provided by the server for subsequent use by the client since the delegation stateid will no longer be valid. These OPEN requests are done with the claim type of CLAIM_DELEGATE_CUR. This allows the presentation of the delegation stateid so that the client can establish the appropriate rights to perform the OPEN. (See the section "Operation 18: OPEN" for details.)

o If there are granted file locks, the corresponding LOCK operations need to be performed. This applies to the write open delegation case only.

o For a write open delegation, if at the time of recall the file is not open for write, all modified data for the file must be flushed to the server. If the delegation had not existed, the client would have done this data flush before the CLOSE operation.

o For a write open delegation when a file is still open at the time of recall, any modified data for the file needs to be flushed to the server.

o With the write open delegation in place, it is possible that the file was truncated during the duration of the delegation. For example, the truncation could have occurred as a result of an OPEN UNCHECKED with a size attribute value of zero. Therefore, if a truncation of the file has occurred and this operation has not been propagated to the server, the truncation must occur before any modified data is written to the server.

In the case of write open delegation, file locking imposes some additional requirements. To precisely maintain the associated invariant, it is required to flush any modified data in any region for which a write lock was released while the write delegation was in effect. However, because the write open delegation implies no other locking by other clients, a simpler implementation is to flush all modified data for the file (as described just above) if any write lock has been released while the write open delegation was in effect.

An implementation need not wait until delegation recall (or deciding to voluntarily return a delegation) to perform any of the above actions, if implementation considerations (e.g. resource availability constraints) make that desirable. Generally, however, the fact that the actual open state of the file may continue to change makes it not worthwhile to send information about opens and closes to the server, except as part of delegation return. Only in the case of closing the open that resulted in obtaining the delegation would clients be likely to do this early, since, in that case, the close once done will not be undone. Regardless of the client's choices on scheduling these actions, all must be performed before the delegation is returned, including (when applicable) the close that corresponds to the open that resulted in the delegation. These actions can be performed either in previous requests or in previous operations in the same COMPOUND request; an example sequence is sketched below.
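As an illustration, a client returning a write delegation for a file that is still open, with locally granted locks and unflushed modified data, might issue a COMPOUND along the following lines. The operation order is the point here; argument details are elided and the exact composition is the client's choice.

   PUTFH        (the file's filehandle)
   OPEN         (claim type CLAIM_DELEGATE_CUR, delegation stateid)
   LOCK         (for each lock granted locally; write delegation only)
   WRITE        (modified data, using the returned stateid)
   COMMIT       (if the preceding writes were unstable)
   DELEGRETURN  (delegation stateid)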
6.4.5. Clients that Fail to Honor Delegation Recalls

A client may fail to respond to a recall for various reasons, such as a failure of the callback path from server to the client. The client may be unaware of a failure in the callback path. This lack of awareness could result in the client finding out long after the failure that its delegation has been revoked, and another client has modified the data for which the client had a delegation. This is especially a problem for the client that held a write delegation.

The server also has a dilemma in that the client that fails to respond to the recall might also be sending other NFS requests, including those that renew the lease before the lease expires. Without returning an error for those lease-renewing operations, the server leads the client to believe that the delegation it has is in force.

This difficulty is solved by the following rules:

o When the callback path is down, the server MUST NOT revoke the delegation if one of the following occurs:

   * The client has issued a RENEW operation and the server has returned an NFS4ERR_CB_PATH_DOWN error. The server MUST renew the lease for any record locks and share reservations the client has that the server has known about (as opposed to those locks and share reservations the client has established but not yet sent to the server, due to the delegation). The server SHOULD give the client a reasonable time to return its delegations to the server before revoking the client's delegations.

   * The client has not issued a RENEW operation for some period of time after the server attempted to recall the delegation. This period of time MUST NOT be less than the value of the lease_time attribute.

o When the client holds a delegation, it cannot rely on operations, except for RENEW, that take a stateid, to renew delegation leases across callback path failures. The client that wants to keep delegations in force across callback path failures must use RENEW to do so.
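A minimal sketch of a client renewal policy honoring these rules follows. The helper names are hypothetical; only the policy of using an explicit RENEW while delegations are held, and returning delegations promptly on NFS4ERR_CB_PATH_DOWN, comes from the text.

   #include <stdbool.h>

   /* Hypothetical client helpers. */
   extern bool client_holds_delegations(void);
   extern int  nfs4_renew(void);              /* issues RENEW         */
   extern void return_all_delegations(void);  /* flush + DELEGRETURN  */

   #define NFS4ERR_CB_PATH_DOWN 10048  /* per the NFSv4 error list */

   static void keep_lease_alive(void)
   {
           /* While delegations are held, rely on RENEW and nothing
            * else: other stateid-bearing operations do not renew
            * delegation leases across callback path failures.
            */
           if (client_holds_delegations()) {
                   if (nfs4_renew() == NFS4ERR_CB_PATH_DOWN) {
                           /* The lease (locks, shares) was still
                            * renewed, but delegations must be
                            * returned before the server revokes them.
                            */
                           return_all_delegations();
                   }
           }
   }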
6.4.6. Delegation Revocation

At the point a delegation is revoked, if there are associated opens on the client, the applications holding these opens need to be notified. This notification usually occurs by returning errors for READ/WRITE operations or when a close is attempted for the open file.

If no opens exist for the file at the point the delegation is revoked, then notification of the revocation is unnecessary. However, if there is modified data present at the client for the file, the user of the application should be notified. Unfortunately, it may not be possible to notify the user since active applications may not be present at the client. See the section "Revocation Recovery for Write Open Delegation" for additional details.

6.5. Data Caching and Revocation

When locks and delegations are revoked, the assumptions upon which successful caching depends are no longer guaranteed. For any locks or share reservations that have been revoked, the corresponding owner needs to be notified. This notification includes applications with a file open that has a corresponding delegation which has been revoked. Cached data associated with the revocation must be removed from the client. In the case of modified data existing in the client's cache, that data must be removed from the client without it being written to the server. As mentioned, the assumptions made by the client are no longer valid at the point when a lock or delegation has been revoked. For example, another client may have been granted a conflicting lock after the revocation of the lock at the first client. Therefore, the data within the lock range may have been modified by the other client. Obviously, the first client is unable to guarantee to the application what has occurred to the file in the case of revocation.

Notification to a lock owner will in many cases consist of simply returning an error on the next and all subsequent READs/WRITEs to the open file or on the close. Where the methods available to a client make such notification impossible because errors for certain operations may not be returned, more drastic action such as signals or process termination may be appropriate. The justification for this is that an invariant on which an application depends may be violated. Depending on how errors are typically treated for the client operating environment, further levels of notification including logging, console messages, and GUI pop-ups may be appropriate.

6.5.1. Revocation Recovery for Write Open Delegation

Revocation recovery for a write open delegation poses the special issue of modified data in the client cache while the file is not open. In this situation, any client which does not flush modified data to the server on each close must ensure that the user receives appropriate notification of the failure as a result of the revocation. Since such situations may require human action to correct problems, notification schemes in which the appropriate user or administrator is notified may be necessary. Logging and console messages are typical examples.

If there is modified data on the client, it must not be flushed normally to the server. A client may attempt to provide a copy of the file data as modified during the delegation under a different name in the filesystem name space to ease recovery. Note that when the client can determine that the file has not been modified by any other client, or when the client has a complete cached copy of the file in question, such a saved copy of the client's view of the file may be of particular value for recovery. In other cases, recovery using a copy of the file based partially on the client's cached data and partially on the server's copy as modified by other clients will be anything but straightforward, so clients may avoid saving file contents in these situations or mark the results specially to warn users of possible problems.

Saving of such modified data in delegation revocation situations may be limited to files of a certain size or might be used only when sufficient disk space is available within the target filesystem. Such saving may also be restricted to situations when the client has sufficient buffering resources to keep the cached copy available until it is properly stored to the target filesystem.

6.6. Attribute Caching

The attributes discussed in this section do not include named attributes. Individual named attributes are analogous to files, and caching of the data for these needs to be handled just as data caching is for ordinary files. Similarly, LOOKUP results from an OPENATTR directory are to be cached on the same basis as any other pathnames, and similarly for directory contents.

Clients may cache file attributes obtained from the server and use them to avoid subsequent GETATTR requests. Such caching is write through in that modification to file attributes is always done by means of requests to the server and should not be done locally and cached. The exception to this are modifications to attributes that are intimately connected with data caching. Therefore, extending a file by writing data to the local data cache is reflected immediately in the size as seen on the client without this change being immediately reflected on the server. Normally such changes are not propagated directly to the server, but when the modified data is flushed to the server, analogous attribute changes are made on the server.
When open delegation is in effect, the modified attributes may be returned to the server in the response to a CB_RECALL call.

The result of local caching of attributes is that the attribute caches maintained on individual clients will not be coherent. Changes made in one order on the server may be seen in a different order on one client and in a third order on a different client.

The typical filesystem application programming interfaces do not provide means to atomically modify or interrogate attributes for multiple files at the same time. The following rules provide an environment where the potential incoherences mentioned above can be reasonably managed. These rules are derived from the practice of previous NFS protocols.

o All attributes for a given file (per-fsid attributes excepted) are cached as a unit at the client so that no non-serializability can arise within the context of a single file.

o An upper time boundary is maintained on how long a client cache entry can be kept without being refreshed from the server.

o When operations are performed that change attributes at the server, the updated attribute set is requested as part of the containing RPC. This includes directory operations that update attributes indirectly. This is accomplished by following the modifying operation with a GETATTR operation and then using the results of the GETATTR to update the client's cached attributes.

Note that if the full set of attributes to be cached is requested by READDIR, the results can be cached by the client on the same basis as attributes obtained via GETATTR.

A client may validate its cached version of attributes for a file by fetching just the change and time_access attributes and assuming that if the change attribute has the same value as it did when the attributes were cached, then no attributes other than time_access have changed. The reason why time_access is also fetched is that many servers operate in environments where the operation that updates change does not update time_access. For example, POSIX file semantics do not update access time when a file is modified by the write system call. Therefore, the client that wants a current time_access value should fetch it with change during the attribute cache validation processing and update its cached time_access.
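A sketch of this revalidation step follows, with illustrative names; the cache record layout and the GETATTR wrapper are assumptions made for the example.

   #include <stdbool.h>
   #include <stdint.h>
   #include <time.h>

   /* Illustrative cached-attribute record for one file. */
   struct attr_cache {
           uint64_t change;       /* change attribute when cached */
           time_t   time_access;
           time_t   expiry;       /* upper bound on cache lifetime */
           /* ... remaining cached attributes ... */
   };

   /* Hypothetical wrapper issuing GETATTR for change + time_access. */
   extern int getattr_change_access(uint64_t *change, time_t *atime);

   /* Returns true if the cached attributes are still valid. */
   static bool revalidate_attrs(struct attr_cache *ac,
                                time_t staleness_bound)
   {
           uint64_t change;
           time_t   atime;

           if (getattr_change_access(&change, &atime) != 0)
                   return false;

           ac->time_access = atime;   /* always refresh time_access */
           ac->expiry = time(NULL) + staleness_bound;

           /* An unchanged change attribute means nothing but
            * time_access has moved.
            */
           return change == ac->change;
   }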
The client may maintain a cache of modified attributes for those attributes intimately connected with data of modified regular files (size, time_modify, and change). Other than those three attributes, the client MUST NOT maintain a cache of modified attributes. Instead, attribute changes are immediately sent to the server.

In some operating environments, the equivalent to time_access is expected to be implicitly updated by each read of the content of the file object. If an NFS client is caching the content of a file object, whether it is a regular file, directory, or symbolic link, the client SHOULD NOT update the time_access attribute (via SETATTR or a small READ or READDIR request) on the server with each read that is satisfied from cache. The reason is that this can defeat the performance benefits of caching content, especially since an explicit SETATTR of time_access may alter the change attribute on the server. If the change attribute changes, clients that are caching the content will think the content has changed, and will re-read unmodified data from the server.

Nor is the client encouraged to maintain a modified version of time_access in its cache, since this would mean that the client will either eventually have to write the access time to the server with bad performance effects, or it would never update the server's time_access, thereby resulting in a situation where an application that caches access time between a close and open of the same file observes the access time oscillating between the past and present. The time_access attribute always means the time of last access to a file by a read that was satisfied by the server. This way clients will tend to see only time_access changes that go forward in time.

6.7. Data and Metadata Caching and Memory Mapped Files

Some operating environments include the capability for an application to map a file's content into the application's address space. Each time the application accesses a memory location that corresponds to a block that has not been loaded into the address space, a page fault occurs and the file is read (or if the block does not exist in the file, the block is allocated and then instantiated in the application's address space).

As long as each memory mapped access to the file requires a page fault, the relevant attributes of the file that are used to detect access and modification (time_access, time_metadata, time_modify, and change) will be updated. However, in many operating environments, when page faults are not required these attributes will not be updated on reads or updates to the file via memory access (regardless of whether the file is a local file or is being accessed remotely). A client or server MAY fail to update attributes of a file that is being accessed via memory mapped I/O. This has several implications:

o If there is an application on the server that has memory mapped a file that a client is also accessing, the client may not be able to get a consistent value of the change attribute to determine whether its cache is stale or not. A server that knows that the file is memory mapped could always pessimistically return updated values for change so as to force the application to always get the most up to date data and metadata for the file. However, due to the negative performance implications of this, such behavior is OPTIONAL.

o If the memory mapped file is not being modified on the server, and instead is just being read by an application via the memory mapped interface, the client will not see an updated time_access attribute. However, in many operating environments, neither will any process running on the server. Thus NFS clients are at no disadvantage with respect to local processes.

o If there is another client that is memory mapping the file, and if that client is holding a write delegation, the same set of issues as discussed in the previous two bullet items apply. So, when a server does a CB_GETATTR to a file that the client has modified in its cache, the response from CB_GETATTR will not necessarily be accurate. As discussed earlier, the client's obligation is to report that the file has been modified since the delegation was granted, not whether it has been modified again between successive CB_GETATTR calls, and the server MUST assume that any file the client has modified in cache has been modified again between successive CB_GETATTR calls.
Depending on the nature of the client's memory management system, meeting even this weak obligation may not be possible. A client MAY return stale information in CB_GETATTR whenever the file is memory mapped.

o The mixture of memory mapping and file locking on the same file is problematic. Consider the following scenario, where the page size on each client is 8192 bytes.

   * Client A memory maps the first page (8192 bytes) of file X

   * Client B memory maps the first page (8192 bytes) of file X

   * Client A write locks the first 4096 bytes

   * Client B write locks the second 4096 bytes

   * Client A, via a STORE instruction, modifies part of its locked region.

   * Simultaneously to client A, client B issues a STORE on part of its locked region.

Here the challenge is for each client to resynchronize to get a correct view of the first page. In many operating environments, the virtual memory management systems on each client only know a page is modified, not that a subset of the page corresponding to the respective lock regions has been modified. So it is not possible for each client to do the right thing, which is to write to the server only that portion of the page that is locked. For example, if client A simply writes out the page, and then client B writes out the page, client A's data is lost. (A sketch of this problematic pattern appears after the list of permitted behaviors below.)

Moreover, if mandatory locking is enabled on the file, then we have a different problem. When clients A and B issue the STORE instructions, the resulting page faults require a record lock on the entire page. Each client then tries to extend their locked range to the entire page, which results in a deadlock. Communicating the NFS4ERR_DEADLOCK error to a STORE instruction is difficult at best.

If a client is locking the entire memory mapped file, there is no problem with advisory or mandatory record locking, at least until the client unlocks a region in the middle of the file.

Given the above issues, the following are permitted:

o Clients and servers MAY deny memory mapping a file for which they know record locks exist.

o Clients and servers MAY deny a record lock on a file they know is memory mapped.

o A client MAY deny memory mapping a file that it knows requires mandatory locking for I/O. If mandatory locking is enabled after the file is opened and mapped, the client MAY deny the application further access to its mapped file.
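The problematic pattern from the scenario above can be written down directly. A sketch using POSIX interfaces follows; error handling is omitted and the 8192-byte page size is taken from the scenario, so this is illustrative only.

   #include <fcntl.h>
   #include <sys/mman.h>
   #include <sys/types.h>
   #include <unistd.h>

   /* Each of two clients runs this against the same NFS file,
    * client A with lock_off 0 and client B with lock_off 4096.
    * Both lock disjoint 4096-byte ranges, yet both dirty the same
    * 8192-byte page through the mapping, so whichever client's
    * page is written back last wins and the other's data is lost.
    */
   static void store_under_lock(int fd, off_t lock_off)
   {
           struct flock fl = {
                   .l_type   = F_WRLCK,
                   .l_whence = SEEK_SET,
                   .l_start  = lock_off,
                   .l_len    = 4096,
           };
           char *page;

           fcntl(fd, F_SETLKW, &fl);

           /* Map the first page (8192 bytes on these clients). */
           page = mmap(NULL, 8192, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);

           page[lock_off] = 1;  /* the STORE instruction in the text */

           munmap(page, 8192);
   }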
6.8. Name Caching

The results of LOOKUP and READDIR operations may be cached to avoid the cost of subsequent LOOKUP operations. Just as in the case of attribute caching, inconsistencies may arise among the various client caches. To mitigate the effects of these inconsistencies, and given the context of typical filesystem APIs, an upper time boundary is maintained on how long a client name cache entry can be kept without verifying that the entry has not been made invalid by a directory change operation performed by another client.

When a client is not making changes to a directory for which there exist name cache entries, the client needs to periodically fetch attributes for that directory to ensure that it is not being modified. After determining that no modification has occurred, the expiration time for the associated name cache entries may be updated to be the current time plus the name cache staleness bound.

When a client is making changes to a given directory, it needs to determine whether there have been changes made to the directory by other clients. It does this by using the change attribute as reported before and after the directory operation in the associated change_info4 value returned for the operation. The server is able to communicate to the client whether the change_info4 data is provided atomically with respect to the directory operation. If the change values are provided atomically, the client is then able to compare the pre-operation change value with the change value in the client's name cache. If the comparison indicates that the directory was updated by another client, the name cache associated with the modified directory is purged from the client. If the comparison indicates no modification, the name cache can be updated on the client to reflect the directory operation and the associated timeout extended. The post-operation change value needs to be saved as the basis for future change_info4 comparisons.

As demonstrated by the scenario above, name caching requires that the client revalidate name cache data by inspecting the change attribute of a directory at the point when the name cache item was cached. This requires that the server update the change attribute for directories when the contents of the corresponding directory is modified. For a client to use the change_info4 information appropriately and correctly, the server must report the pre and post operation change attribute values atomically. When the server is unable to report the before and after values atomically with respect to the directory operation, the server must indicate that fact in the change_info4 return value. When the information is not atomically reported, the client should not assume that other clients have not changed the directory.

6.9. Directory Caching

The results of READDIR operations may be used to avoid subsequent READDIR operations. Just as in the cases of attribute and name caching, inconsistencies may arise among the various client caches. To mitigate the effects of these inconsistencies, and given the context of typical filesystem APIs, the following rules should be followed:

o Cached READDIR information for a directory which is not obtained in a single READDIR operation must always be a consistent snapshot of directory contents. This is determined by using a GETATTR before the first READDIR and after the last READDIR that contributes to the cache.

o An upper time boundary is maintained to indicate the length of time a directory cache entry is considered valid before the client must revalidate the cached information.

The revalidation technique parallels that discussed in the case of name caching. When the client is not changing the directory in question, checking the change attribute of the directory with GETATTR is adequate. The lifetime of the cache entry can be extended at these checkpoints. When a client is modifying the directory, the client needs to use the change_info4 data to determine whether there are other clients modifying the directory. If it is determined that no other client modifications are occurring, the client may update its directory cache to reflect its own changes.

As demonstrated previously, directory caching requires that the client revalidate directory cache data by inspecting the change attribute of a directory at the point when the directory was cached. This requires that the server update the change attribute for directories when the contents of the corresponding directory is modified. For a client to use the change_info4 information appropriately and correctly, the server must report the pre and post operation change attribute values atomically. When the server is unable to report the before and after values atomically with respect to the directory operation, the server must indicate that fact in the change_info4 return value. When the information is not atomically reported, the client should not assume that other clients have not changed the directory.
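For reference, the change_info4 structure on which this revalidation relies can be rendered in C as follows (the NFSv4.0 XDR is authoritative); the cache_still_valid() helper is an illustrative sketch of the comparison described above.

   #include <stdbool.h>
   #include <stdint.h>

   typedef uint64_t changeid4;

   /* C rendering of change_info4 as returned by directory-modifying
    * operations; the protocol XDR is authoritative.
    */
   struct change_info4 {
           bool      atomic;   /* before/after captured atomically?    */
           changeid4 before;   /* directory change value pre-operation */
           changeid4 after;    /* directory change value post-operation */
   };

   /* Name/directory cache decision after a directory-modifying
    * operation; cached_change is the client's saved change value.
    */
   static bool cache_still_valid(const struct change_info4 *ci,
                                 changeid4 cached_change)
   {
           /* Without atomicity, assume other clients may have raced. */
           if (!ci->atomic)
                   return false;
           return ci->before == cached_change;
   }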
7. Security Negotiation

The NFSv4.0 specification contains three oversights and ambiguities with respect to the SECINFO operation.

First, it is impossible for the client to use the SECINFO operation to determine the correct security triple for accessing a parent directory. This is because SECINFO takes as arguments the current filehandle and a component name. However, NFSv4.0 uses the LOOKUPP operation to get the parent directory of the current filehandle. If the client uses the wrong security when issuing the LOOKUPP, and gets back an NFS4ERR_WRONGSEC error, SECINFO is useless to the client. The client is left guessing which security the server will accept. This defeats the purpose of SECINFO, which was to provide an efficient method of negotiating security.

Second, there is ambiguity as to what the server should do when it is passed a LOOKUP operation such that the server restricts access to the current filehandle with one security triple, and access to the component with a different triple, and the remote procedure call uses one of the two security triples. Should the server allow the LOOKUP?

Third, there is a problem as to what the client must do (or can do) whenever the server returns NFS4ERR_WRONGSEC in response to a PUTFH operation. The NFSv4.0 specification says that the client should issue a SECINFO using the parent filehandle and the component name of the filehandle that PUTFH was issued with. This may not be convenient for the client.

This document resolves the above three issues in the context of NFSv4.1.

8. Clarification of Security Negotiation in NFSv4.1

This section attempts to clarify NFSv4.1 security negotiation issues. Unless noted otherwise, for any mention of PUTFH in this section, the reader should interpret it as applying to PUTROOTFH and PUTPUBFH in addition to PUTFH.

8.1. PUTFH + LOOKUP

The server implementation may decide whether to impose any restrictions on export security administration. There are at least three approaches (Sc is the flavor set of the child export, Sp that of the parent):

a) Sc <= Sp (<= for subset)

b) Sc ^ Sp != {} (^ for intersection, {} for the empty set)

c) free form

To support b (when the client chooses a flavor that is not a member of Sp) and c, PUTFH must NOT return NFS4ERR_WRONGSEC in case of a security mismatch. Instead, it should be returned from the LOOKUP that follows. Since the above guideline does not contradict a, it should be followed in general.
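A sketch of the resulting server-side decision follows, treating flavor sets abstractly as bitmaps. The types and helpers are illustrative assumptions; only the rule that the check is deferred from PUTFH to the following LOOKUP comes from the text.

   #include <stdbool.h>
   #include <stdint.h>

   /* Illustrative flavor-set type: a bitmap of RPC security
    * flavor numbers (all well below 64).
    */
   typedef uint64_t flavor_set;

   static bool flavor_member(flavor_set s, uint32_t flavor)
   {
           return (s >> flavor) & 1;
   }

   /* Per the guideline above: PUTFH itself never fails with
    * NFS4ERR_WRONGSEC; the check happens at the LOOKUP, which
    * fails only if the request's flavor is not in the child
    * export's set Sc.  This single rule accommodates all three
    * policies (a), (b), and (c).
    */
   static bool lookup_wrongsec(flavor_set Sc, uint32_t req_flavor)
   {
           return !flavor_member(Sc, req_flavor);
   }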
8.2. PUTFH + LOOKUPP

Since SECINFO only works its way down, there is no way LOOKUPP can return NFS4ERR_WRONGSEC without the server implementing SECINFO_NO_NAME. SECINFO_NO_NAME solves this issue because, via style "parent", it works in the opposite direction from SECINFO (the component name is implicit in this case).

8.3. PUTFH + SECINFO

This case should be treated specially. A security-sensitive client should be allowed to choose a strong flavor when querying a server to determine a file object's permitted security flavors. The security flavor chosen by the client does not have to be included in the flavor list of the export. Of course, the server has to be configured for whatever flavor the client selects; otherwise the request will fail at RPC authentication.

In theory, there is no connection between the security flavor used by SECINFO and those supported by the export. But in practice, the client may start looking for strong flavors from those supported by the export, followed by those in the mandatory set.

8.4. PUTFH + Anything Else

PUTFH must return NFS4ERR_WRONGSEC in case of a security mismatch. This is the most straightforward approach without having to add NFS4ERR_WRONGSEC to every other operation. PUTFH + SECINFO_NO_NAME (style "current_fh") is needed for the client to recover from NFS4ERR_WRONGSEC.

9. NFSv4.1 Sessions

9.1. Sessions Background

9.1.1. Introduction to Sessions

This draft proposes extensions to NFS version 4 [RFC3530] enabling it to support sessions and endpoint management, and to support operation atop RDMA-capable RPC over transports such as iWARP. [RDMAP, DDP] These extensions enable support for exactly-once semantics by NFSv4 servers, multipathing and trunking of transport connections, and enhanced security. The ability to operate over RDMA enables greatly enhanced performance. Operation over existing TCP is enhanced as well. While discussed here with respect to IETF-chartered transports, the proposed protocol is intended to function over other standards, such as Infiniband. [IB]

The following are the major aspects of this proposal:

o Changes are proposed within the framework of NFSv4 minor versioning. RPC, XDR, and the NFSv4 procedures and operations are preserved. The proposed extension functions equally well over existing transports and RDMA, and interoperates transparently with existing implementations, both at the local programmatic interface and over the wire.

o An explicit session is introduced to NFSv4, and new operations are added to support it. The session allows for enhanced trunking, failover and recovery, and authentication efficiency, along with necessary support for RDMA. The session is implemented as operations within NFSv4 COMPOUND and does not impact layering or interoperability with existing NFSv4 implementations. The NFSv4 callback channel is dynamically associated and is connected by the client and not the server, enhancing security and operation through firewalls. In fact, the callback channel will be enabled to share the same connection as the operations channel.

o An enhanced RPC layer enables NFSv4 operation atop RDMA. The session assists RDMA-mode connection, and additional facilities are provided for managing RDMA resources at both NFSv4 server and client. Existing NFSv4 operations continue to function as before, though certain size limits are negotiated. A companion draft to this document, "RDMA Transport for ONC RPC" [RPCRDMA], is to be referenced for details of RPC RDMA support.

o Support for exactly-once semantics ("EOS") is enabled by the new session facilities, by providing to the server a way to bound the size of the duplicate request cache for a single client, and to manage its persistent storage.
Block Diagram

   +-----------------+-------------------------------------+
   |      NFSv4      |     NFSv4 + session extensions      |
   +-----------------+------+----------------+-------------+
   |      Operations        |     Session    |             |
   +------------------------+----------------+             |
   |                 RPC/XDR                 |             |
   +-------------------------------+---------+             |
   |        Stream Transport       |     RDMA Transport    |
   +-------------------------------+-----------------------+

9.1.2. Motivation

NFS version 4 [RFC3530] has been granted "Proposed Standard" status. The NFSv4 protocol was developed along several design points, important among them: effective operation over wide-area networks, including the Internet itself; strong security integrated into the protocol; extensive cross-platform interoperability including integrated locking semantics compatible with multiple operating systems; and protocol extensibility.

The NFS version 4 protocol, however, does not provide support for certain important transport aspects. For example, the protocol does not address response caching, which is required to provide correctness for retried client requests across a network partition, nor does it provide an interoperable way to support trunking and multipathing of connections. This leads to inefficiencies, especially where trunking and multipathing are concerned, and presents additional difficulties in supporting RDMA fabrics, in which endpoints may require dedicated or specialized resources.

Sessions can be employed to unify NFS-level constructs such as the clientid with transport-level constructs such as transport endpoints. Each transport endpoint draws on resources via its membership in a session. Resource management can be more strictly maintained, leading to greater server efficiency in implementing the protocol. The enhanced operation over a session affords an opportunity to the server to implement a highly reliable duplicate request cache, and thereby export exactly-once semantics.

NFSv4 advances the state of high-performance local sharing, by virtue of its integrated security, locking, and delegation, and its excellent coverage of the sharing semantics of multiple operating systems. It is precisely this environment where exactly-once semantics become a fundamental requirement.

Additionally, efforts to standardize a set of protocols for Remote Direct Memory Access, RDMA, over the Internet Protocol Suite have made significant progress. RDMA is a general solution to the problem of CPU overhead incurred due to data copies, primarily at the receiver. Substantial research has addressed this and has borne out the efficacy of the approach. An overview of this is the RDDP Problem Statement document [RDDPPS].

Numerous upper layer protocols achieve extremely high bandwidth and low overhead through the use of RDMA. Products from a wide variety of vendors employ RDMA to advantage, and prototypes have demonstrated the effectiveness of many more. Here, we are concerned specifically with NFS and NFS-style upper layer protocols; examples from Network Appliance [DAFS, DCK+03], Fujitsu Prime Software Technologies [FJNFS, FJDAFS] and Harvard University [KM02] are all relevant. By layering a session binding for NFS version 4 directly atop a standard RDMA transport, a greatly enhanced level of performance and transparency can be supported on a wide variety of operating system platforms.
These combined capabilities alter the landscape between local filesystems and network attached storage, enable a new level of performance, and lead new classes of application to take advantage of NFS.

9.1.3. Problem Statement

Two issues drive the current proposal: correctness and performance. Both are instances of "raising the bar" for NFS, whereby the desire to use NFS in new classes of applications can be accommodated by providing the basic features to make such use feasible. Such applications include tightly coupled sharing environments such as cluster computing, high performance computing (HPC) and information processing such as databases. These trends are explored in depth in [NFSPS].

The first issue, correctness, exemplified among the attributes of local filesystems, is support for exactly-once semantics. Such semantics have not been reliably available with NFS. Server-based duplicate request caches [CJ89] help, but do not reliably provide strict correctness. For the type of application which is expected to make extensive use of the high-performance RDMA-enabled environment, the reliable provision of such semantics is a fundamental requirement. Introduction of a session to NFSv4 will address these issues.

With higher performance and enhanced semantics comes the problem of enabling advanced endpoint management, for example high-speed trunking, multipathing and failover. These characteristics enable availability and performance. RFC3530 presents some issues in permitting a single clientid to access a server over multiple connections.

A second issue encountered in common by NFS implementations is the CPU overhead required to implement the protocol. Primary among the sources of this overhead is the movement of data from NFS protocol messages to its eventual destination in user buffers or aligned kernel buffers. The data copies consume system bus bandwidth and CPU time, reducing the available system capacity for applications. [RDDPPS] Achieving zero-copy with NFS has to date required sophisticated, "header cracking" hardware and/or extensive platform-specific virtual memory mapping tricks.

Combined in this way, NFSv4, RDMA and the emerging high-speed network fabrics will enable delivery of performance which matches that of the fastest local filesystems, preserving the key existing local filesystem semantics, while enhancing them by providing network filesystem sharing semantics.

RDMA implementations generally have other interesting properties, such as hardware-assisted protocol access, and support for user space access to I/O. RDMA is compelling here for another reason: hardware-offloaded networking support in itself does not avoid data copies without resorting to implementing part of the NFS protocol in the NIC. Support of RDMA by NFS enables the highest performance at the architecture level rather than by implementation; this enables ubiquitous and interoperable solutions.

By providing file access performance equivalent to that of local file systems, NFSv4 over RDMA will enable applications running on a set of client machines to interact through an NFSv4 file system, just as applications running on a single machine might interact through a local file system. This raises the issue of whether additional protocol enhancements to enable such interaction would be desirable and what such enhancements would be.
This is a complicated issue which the working group needs to address, and it will not be further discussed in this document.

9.1.4. NFSv4 Session Extension Characteristics

This draft will present a solution based upon minor versioning of NFSv4. It will introduce a session to collect transport endpoints and resources such as reply caching, which in turn enables enhancements such as trunking, failover and recovery. It will describe use of RDMA by employing support within an underlying RPC layer [RPCRDMA]. Most importantly, it will focus on making the best possible use of an RDMA transport.

These extensions are proposed as elements of a new minor revision of NFS version 4. In this draft, NFS version 4 will be referred to generically as "NFSv4" when describing properties common to all minor versions. When referring specifically to properties of the original, minor version 0 protocol, "NFSv4.0" will be used, and changes proposed here for minor version 1 will be referred to as "NFSv4.1". This draft proposes only changes which are strictly upward-compatible with existing RPC and NFS Application Programming Interfaces (APIs).

9.2. Transport Issues

The Transport Issues section of the document explores the details of utilizing the various supported transports.

9.2.1. Session Model

The first and most evident issue in supporting diverse transports is how to provide for their differences. This draft proposes introducing an explicit session. A session introduces minimal protocol requirements, and provides for a highly useful and convenient way to manage numerous endpoint-related issues. The session is a local construct; it represents a named, higher-layer object to which connections can refer, and encapsulates properties important to each associated client.

A session is a dynamically created, long-lived server object created by a client, used over time from one or more transport connections. Its function is to maintain the server's state relative to the connection(s) belonging to a client instance. This state is entirely independent of the connection itself. The session in effect becomes the object representing an active client on a connection or set of connections.

Clients may create multiple sessions for a single clientid, and may wish to do so for optimization of transport resources, buffers, or server behavior. A session could be created by the client to represent a single mount point, for separate read and write "channels", or for any number of other client-selected parameters.

The session enables several things immediately. Clients may disconnect and reconnect (voluntarily or not) without loss of context at the server. (Of course, locks, delegations and related associations require special handling, and generally expire in the extended absence of an open connection.) Clients may connect multiple transport endpoints to this common state. The endpoints may have all the same attributes, for instance when trunked on multiple physical network links for bandwidth aggregation or path failover. Or, the endpoints can have specific, special purpose attributes such as callback channels.

The NFSv4 specification does not provide for any form of flow control; instead it relies on the windowing provided by TCP to throttle requests. This unfortunately does not work with RDMA, which in general provides no operation flow control and will terminate a connection in error when limits are exceeded.
Limits are therefore exchanged when a session is created. These limits then provide the maxima within which each session's connections must operate, and the connections are managed within these limits as described in [RPCRDMA]. The limits may also be modified dynamically at the server's choosing by manipulating certain parameters present in each NFSv4.1 request.

The presence of a maximum request limit on the session bounds the requirements of the duplicate request cache. This can be used to advantage by a server, which can accurately determine any storage needs, enabling it to maintain duplicate request cache persistence and to provide reliable exactly-once semantics.

Finally, given adequate connection-oriented transport security semantics, authentication and authorization may be cached on a per-session basis, enabling greater efficiency in the issuing and processing of requests on both client and server. A proposal for transparent, server-driven implementation of this in NFSv4 has been made. [CCM] The existence of the session greatly facilitates the implementation of this approach. This is discussed in detail in the Authentication Efficiencies section later in this draft.

9.2.2. Connection State

In RFC3530, the combination of a connected transport endpoint and a clientid forms the basis of connection state. While this has been made to be workable with certain limitations, there are difficulties in correct and robust implementation. The NFSv4.0 protocol must provide a server-initiated connection for the callback channel, and must carefully specify the persistence of client state at the server in the face of transport interruptions. The server has only the client's transport address binding (the IP 4-tuple) to identify the client RPC transaction stream and to use as a lookup tag on the duplicate request cache. (A useful overview of this is in [RW96].) If the server listens on multiple addresses, and the client connects to more than one, it must employ different clientids on each, negating its ability to aggregate bandwidth and redundancy. In effect, each transport connection is used as the server's representation of client state. But transport connections are potentially fragile and transitory.

In this proposal, a session identifier is assigned by the server upon initial session negotiation on each connection. This identifier is used to associate additional connections, to renegotiate after a reconnect, to provide an abstraction for the various session properties, and to address the duplicate request cache. No transport-specific information is used in the duplicate request cache implementation of an NFSv4.1 server, nor in fact is the RPC XID itself. The session identifier is unique within the server's scope and may be subject to certain server policies such as being bounded in time.

It is envisioned that the primary transport model will be connection oriented. Connection orientation brings with it certain potential optimizations, such as caching of per-connection properties, which are easily leveraged through the generality of the session. However, it is possible that in the future, other transport models could be accommodated below the session abstraction.

9.2.3. NFSv4 Channels, Sessions and Connections

There are at least two types of NFSv4 channels: the "operations" channel used for ordinary requests from client to server, and the "back" channel, used for callback requests from server to client.
As mentioned above, different NFSv4 operations on these channels can lead to different resource needs. For example, server callback operations (CB_RECALL) are specific, small messages which flow from server to client at arbitrary times, while data transfers such as read and write have very different sizes and asymmetric behaviors. It is sometimes impractical for the RDMA peers (NFSv4 client and NFSv4 server) to post buffers for these various operations on a single connection. Commingling of requests with responses at the client receive queue is particularly troublesome, due both to the need to manage both solicited and unsolicited completions, and to provision buffers for both purposes. Due to the lack of any ordering of callback requests versus response arrivals, without any other mechanisms, the client would be forced to allocate all buffers sized to the worst case.

The callback requests are likely to be handled by a different task context from that handling the responses. Significant demultiplexing and thread management may be required if both are received on the same queue. However, if callbacks are relatively rare (perhaps due to client access patterns), many of these difficulties can be minimized.

Also, the client may wish to perform trunking of operations channel requests for performance reasons, or multipathing for availability. This proposal permits both, as well as many other session and connection possibilities, by permitting each operation to carry session membership information and to share session (and clientid) state in order to draw upon the appropriate resources. For example, reads and writes may be assigned to specific, optimized connections, or sorted and separated by any or all of size, idempotency, etc.

To address the problems described above, this proposal allows multiple sessions to share a clientid, as well as multiple connections to share a session.

Single Connection model:

                        NFSv4.1 Session
                       /               \
           Operations_Channel    [Back_Channel]
                       \               /
                         Connection
                             |

Multi-connection trunked model (2 operations channels shown):

                        NFSv4.1 Session
                       /               \
           Operations_Channels   [Back_Channel]
              |          |              |
          Connection Connection   [Connection]
              |          |              |

Multi-connection split-use model (2 mounts shown):

                        NFSv4.1 Session
                       /               \
                  (/home)        (/usr/local - readonly)
                   /    \                  |
      Operations_Channel [Back_Channel]    |
              |               |     Operations_Channel
          Connection    [Connection]       |
              |               |        Connection
                                           |

In this way, implementation as well as resource management may be optimized. Each session will have its own response caching and buffering, and each connection or channel will have its own transport resources, as appropriate. Clients which do not require certain behaviors may optimize such resources away completely, by using specific sessions and not even creating the additional channels and connections.

9.2.4. Reconnection, Trunking and Failover

Reconnection after failure references stored state on the server associated with lease recovery during the grace period. The session provides a convenient handle for storing and managing information regarding the client's previous state on a per-connection basis, e.g. to be used upon reconnection. Reconnection to a previously existing session, and its stored resources, is covered in the "Connection Models" section below.
One important aspect of reconnection is that of RPC library support. Traditionally, an Upper Layer RPC-based Protocol such as NFS leaves all transport knowledge to the RPC layer implementation below it. This allows NFS to operate over a wide variety of transports and has proven to be a highly successful approach. The session, however, introduces an abstraction which is, in a way, "between" RPC and NFSv4.1. It is important that the session abstraction not have ramifications within the RPC layer.

One such issue arises within the reconnection logic of RPC. Previously, an explicit session binding operation, which established session context for each new connection, was explored. This however required that the session binding also be performed during reconnect, which in turn required an RPC request. This additional request requires new RPC semantics, both in its implementation and in the fact that a new request is inserted into the RPC stream. Also, the binding of a connection to a session required the upper layer to become "aware" of connections, something the RPC layer architecturally abstracts away. Therefore the session binding is not handled in connection scope but instead is explicitly carried in each request.

For Reliability, Availability and Serviceability (RAS) issues such as bandwidth aggregation and multipathing, clients frequently seek to make multiple connections through multiple logical or physical channels. The session is a convenient point to aggregate and manage these resources.

9.2.5. Server Duplicate Request Cache

Server duplicate request caches, while not a part of an NFS protocol, have become a standard, even required, part of any NFS implementation. First described in [CJ89], the duplicate request cache was initially found to reduce work at the server by avoiding duplicate processing for retransmitted requests. A second, and in the long run more important, benefit was improved correctness, as the cache avoided certain destructive non-idempotent requests from being reinvoked.

However, such caches do not provide correctness guarantees; they cannot be managed in a reliable, persistent fashion. The reason is understandable - their storage requirement is unbounded due to the lack of any such bound in the NFS protocol, and they are dependent on transport addresses for request matching.

As proposed in this draft, the presence of maximum request count limits and negotiated maximum sizes allows the size and duration of the cache to be bounded, and, coupled with a long-lived session identifier, enables its persistent storage on a per-session basis. This provides a single unified mechanism which provides the following guarantees required in the NFSv4 specification, while extending them to all requests, rather than limiting them only to a subset of state-related requests:

   "It is critical the server maintain the last response sent to the
   client to provide a more reliable cache of duplicate non-idempotent
   requests than that of the traditional cache described in [CJ89]..."
   [RFC3530]

The maximum request count limit is the count of active operations, which bounds the number of entries in the cache. Constraining the size of operations additionally serves to limit the required storage to the product of the current maximum request count and the maximum response size. This storage requirement enables server-side efficiencies.
This storage requirement enables server-side efficiencies.

Session negotiation also allows the server to maintain other session-related state. An NFSv4.1 client invoking the session destroy operation will cause the server to denegotiate (close) the session, allowing the server to deallocate cache entries. Clients can potentially specify that such caches not be kept for appropriate types of sessions (for example, read-only sessions). This can enable more efficient server operation, resulting in improved response times, and more efficient sizing of buffers and response caches.

Similarly, it is important for the client to explicitly learn whether the server is able to implement reliable semantics. Knowledge of whether these semantics are in force is critical for a highly reliable client, one which must provide transactional integrity guarantees. When clients request that the semantics be enabled for a given session, the session reply must inform the client whether the mode is in fact enabled. In this way the client can confidently proceed with operations without having to implement consistency facilities of its own.

9.3. Session Initialization and Transfer Models

Session initialization issues, and the data transfer models relevant to both TCP and RDMA, are discussed in this section.

9.3.1. Session Negotiation

The following parameters are exchanged between client and server at session creation time. Their values allow the server to properly size resources allocated in order to service the client's requests, and provide the server with a way to communicate limits to the client for proper and optimal operation. They are exchanged prior to all session-related activity, over any transport type. Discussion of their use is found in their descriptions as well as throughout this section.

Maximum Requests

   The client's desired maximum number of concurrent requests is
   passed, in order to allow the server to size its reply cache
   storage. The server may modify the client's requested limit
   downward (or upward) to match its local policy and/or resources.
   Over RDMA-capable RPC transports, the per-request management of
   low-level transport message credits is handled within the RPC
   layer. [RPCRDMA]

Maximum Request/Response Sizes

   The maximum request and response sizes are exchanged in order to
   permit allocation of appropriately sized buffers and request cache
   entries. The sizes must allow for certain protocol minima,
   permitting the receipt of maximally sized operations (e.g. a
   RENAME request, which contains two name strings). Note that the
   maximum request/response sizes cover the entire request or
   response message, and not simply the data payload as with the
   traditional NFS maximum read or write sizes. Also note that the
   server implementation may not, and in fact probably does not,
   require reply cache entries to be sized as large as the maximum
   response. The server may reduce the client's requested sizes.

Inline Padding/Alignment

   The server can inform the client of any padding which can be used
   to deliver NFSv4 inline WRITE payloads into aligned buffers. Such
   alignment can be used to avoid data copy operations at the server
   for both TCP and inline RDMA transfers. For RDMA, the client
   informs the server in each operation when padding has been
   applied. [RPCRDMA]

Transport Attributes

   A placeholder for transport-specific attributes is provided, with
   a format to be determined.
Possible examples of information to be passed in this parameter include transport security attributes to be used on the connection, RDMA-specific attributes, legacy "private data" as used on existing RDMA fabrics, transport Quality of Service attributes, etc. This information is to be passed to the peer's transport layer by local means which are currently outside the scope of this draft; however, one attribute is provided in the RDMA case:

RDMA Read Resources

   RDMA implementations must explicitly provision resources to
   support RDMA Read requests from connected peers. These values must
   be explicitly specified, to provide adequate resources for
   matching the peer's expected needs and the connection's delay-
   bandwidth parameters. The client provides its chosen value to the
   server in the initial session creation; the value must be provided
   for each client RDMA endpoint. The values are asymmetric and
   should be set to zero at the server in order to conserve RDMA
   resources, since clients do not issue RDMA Read operations in this
   proposal. The result is communicated in the session response, to
   permit matching of values across the connection. The value may not
   be changed for the duration of the session, although a new value
   may be requested as part of a new session.

9.3.2. RDMA Requirements

A complete discussion of the operation of RPC-based protocols atop RDMA transports is in [RPCRDMA]. Where RDMA is considered, this proposal assumes the use of such a layering; it addresses only the upper layer issues relevant to making best use of RPC/RDMA.

A connection-oriented (reliable sequenced) RDMA transport will be required. There are several reasons for this. First, this model most closely reflects the general NFSv4 requirement of long-lived and congestion-controlled transports. Second, to operate correctly over either an unreliable or unsequenced RDMA transport, or both, would require significant complexity in the implementation and protocol, not appropriate for a strict minor version. For example, retransmission on connected endpoints is explicitly disallowed in the current NFSv4 draft; it would again be required with these alternate transport characteristics. Third, the proposal assumes a specific RDMA ordering semantic, which presents the same set of ordering and reliability issues to the RDMA layer over such transports.

The RDMA implementation provides for making connections to other RDMA-capable peers. In the case of the current proposals before the RDDP working group, these RDMA connections are preceded by a "streaming" phase, where ordinary TCP (or NFS) traffic might flow. However, this is not assumed here, and sizes and other parameters are explicitly exchanged upon a session entering RDMA mode.

9.3.3. RDMA Connection Resources

On transport endpoints which support automatic RDMA mode, that is, endpoints which are created in the RDMA-enabled state, a single, preposted buffer must initially be provided by both peers, and the client session negotiation must be the first exchange. On transport endpoints supporting dynamic negotiation, a more sophisticated negotiation is possible, but is not discussed in the current draft.
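The provisioning obligation that follows from these negotiated values can be illustrated with a short sketch; the endpoint argument and the post_recv callback stand in for provider-specific interfaces and are assumptions of the example:

   #include <stdlib.h>

   /* Prepost one receive buffer per granted credit, each large enough
    * for a maximally sized message, as negotiated at session creation.
    * post_recv represents whatever call the RDMA provider offers. */
   int prepost_receives(void *endpoint,
                        int (*post_recv)(void *ep, void *buf, size_t len),
                        unsigned int credits, size_t bufsize)
   {
       for (unsigned int i = 0; i < credits; i++) {
           void *buf = malloc(bufsize);
           if (buf == NULL || post_recv(endpoint, buf, bufsize) != 0) {
               free(buf);
               return -1;   /* caller tears down the connection */
           }
       }
       return 0;
   }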
RDMA imposes several requirements on upper layer consumers. Registration of memory and the need to post buffers of a specific size and number for receive operations are a primary consideration.

Registration of memory can be a relatively high-overhead operation, since it requires pinning of buffers, assignment of attributes (e.g. readable/writable), and initialization of hardware translation. Preregistration is desirable to reduce overhead. These registrations are specific to hardware interfaces and even to RDMA connection endpoints; therefore, negotiation of their limits is desirable to manage resources effectively.

Following the basic registration, these buffers must be posted by the RPC layer to handle receives. These buffers remain in use by the RPC/NFSv4 implementation; their size and number must be known to the remote peer in order to avoid RDMA errors which would cause a fatal error on the RDMA connection.

The session provides a natural way for the server to manage resource allocation to each client rather than to each transport connection itself. This enables considerable flexibility in the administration of transport endpoints.

9.3.4. TCP and RDMA Inline Transfer Model

The basic transfer model for both TCP and RDMA is referred to as "inline". For TCP, this is the only transfer model supported, since TCP carries both the RPC header and data together in the data stream. For RDMA, the RDMA Send transfer model is used for all NFS requests and replies, but data is optionally carried by RDMA Writes or RDMA Reads. Use of Sends is required to ensure consistency of data and to deliver completion notifications. The pure-Send method is typically used where the data payload is small, or where for whatever reason target memory for RDMA is not available.

   Inline message exchange

      Client                                  Server
             : Request                       :
      Send   : ---------------------------->  : untagged
             :                               : buffer
             : Response                      :
    untagged : <----------------------------  : Send
      buffer :                               :

      Client                                  Server
             : Read request                  :
      Send   : ---------------------------->  : untagged
             :                               : buffer
             : Read response with data       :
    untagged : <----------------------------  : Send
      buffer :                               :

      Client                                  Server
             : Write request with data       :
      Send   : ---------------------------->  : untagged
             :                               : buffer
             : Write response                :
    untagged : <----------------------------  : Send
      buffer :                               :

Responses must be sent to the client on the same connection on which the request was sent. It is important that the server not assume any specific client implementation, in particular whether connections within a session share any state at the client. This is also important to preserve ordering of RDMA operations, and especially RDMA consistency. Additionally, it ensures that the RPC RDMA layer makes no requirement of the RDMA provider to open its memory registration handles (Steering Tags) beyond the scope of a single RDMA connection. This is an important security consideration.

Two values must be known to each peer prior to issuing Sends: the maximum number of sends which may be posted, and their maximum size. These values are referred to, respectively, as the message credits and the maximum message size. While the message credits might vary dynamically over the duration of the session, the maximum message size does not. The server must commit to preserving this number of duplicate request cache entries, and to preparing a number of receive buffers equal to or greater than its currently advertised credit value, each of the advertised size.
These commitments ensure that sufficient transport resources are allocated to receive the full advertised limits. Note that the server must post the maximum number of session requests to each client operations channel. The client is not required to spread its requests in any particular fashion across connections within a session. If the client wishes, it may create multiple sessions, each with a single or small number of operations channels, to provide the server with this resource advantage. Or, over RDMA, the server may employ a "shared receive queue". The server can in any case protect its resources by restricting the client's request credits.

While tempting to consider, it is not possible to use the TCP window as an RDMA operation flow control mechanism. First, to do so would violate layering, requiring both senders to be aware of the existing TCP outbound window at all times. Second, since requests are of variable size, the TCP window can hold a widely variable number of them, and since it cannot be reduced without actually receiving data, the receiver cannot limit the sender. Third, any middlebox interposing on the connection would wreck any possible scheme. [MIDTAX]

In this proposal, maximum request count limits are exchanged at the session level to allow correct provisioning of receive buffers by transports. When operating over TCP or another similar transport, request limits and sizes are still employed in NFSv4.1, but instead of being required for correctness, they provide the basis for efficient server implementation of the duplicate request cache. The limits are chosen based upon the expected needs and capabilities of the client and server, and are in fact arbitrary. Sizes may be specified by the client as zero (requesting the server's preferred or optimal value), and request limits may be chosen in proportion to the client's capabilities. For example, a limit of 1000 allows 1000 requests to be in progress, which may generally be far more than adequate to keep local networks and servers fully utilized.

Both client and server have independent sizes and buffering, but over RDMA fabrics client credits are easily managed by posting a receive buffer prior to sending each request. A given buffer may not be completed by its corresponding reply, however, since responses from NFSv4 servers arrive in arbitrary order. When an operations channel is also used for callbacks, the client must account for callback requests by posting additional buffers. Note that implementation-specific facilities such as a shared receive queue may also allow optimization of these allocations.

When a session is created, the client requests a preferred buffer size, and the server provides its answer. The server posts all buffers of at least this size. The client must comply by not sending requests greater than this size. It is recommended that server implementations do all they can to accommodate a useful range of possible client requests. There is a provision in [RPCRDMA] to allow the sending of client requests which exceed the server's receive buffer size, but it requires the server to "pull" the client's request as a "read chunk" via RDMA Read. This introduces at least one additional network roundtrip, plus other overhead such as registering memory for RDMA Read at the client and additional RDMA operations at the server, and is to be avoided.
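A client-side guard implementing this advice might be sketched as follows; the names are illustrative, and the fallback is the [RPCRDMA] read-chunk mechanism described above:

   #include <stddef.h>

   enum xfer_method { XFER_INLINE, XFER_READ_CHUNK };

   /* Staying at or under the negotiated maximum keeps a request to a
    * single inline Send; exceeding it forces the server to pull the
    * request via RDMA Read (an extra round trip), which this section
    * recommends avoiding. */
   static enum xfer_method choose_request_method(size_t encoded_len,
                                                 size_t max_req_size)
   {
       return encoded_len <= max_req_size ? XFER_INLINE : XFER_READ_CHUNK;
   }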
An issue therefore arises when considering the NFSv4 COMPOUND procedures. Since an arbitrary number (and total size) of operations can be specified in a single COMPOUND procedure, its size is effectively unbounded. This cannot be supported by RDMA Sends, and therefore this size negotiation places a restriction on the construction and maximum size of both COMPOUND requests and responses. If a COMPOUND results in a reply at the server that is larger than can be sent in an RDMA Send to the client, then the COMPOUND must terminate, and the operation which causes the overflow will provide a TOOSMALL error status result.

9.3.5. RDMA Direct Transfer Model

Placement of data by explicitly tagged RDMA operations is referred to as "direct" transfer. This method is typically used where the data payload is relatively large; that is, when RDMA setup has been performed prior to the operation, or when any overhead for setting up and performing the transfer is regained by avoiding the overhead of processing an ordinary receive.

The client advertises RDMA buffers in this proposed model, not the server. This means the "XDR Decoding with Read Chunks" described in [RPCRDMA] is not employed by NFSv4.1 replies; instead, all results transferred via RDMA to the client employ "XDR Decoding with Write Chunks". There are several reasons for this.

First, it allows for a correct and secure mode of transfer. The client may advertise specific memory buffers only during specific times, and may revoke access when it pleases. The server is not required to expose copies of local file buffers for individual clients, or to lock or copy them for each client access.

Second, client credits based on fixed-size request buffers are easily managed on the server, but for the server, additional management of buffers for client RDMA Reads is not well-bounded. For example, the client may not perform these RDMA Read operations in a timely fashion; the server would therefore have to protect itself against denial-of-service on these resources.

Third, it reduces network traffic, since buffer exposure outside the scope and duration of a single request/response exchange necessitates additional memory management exchanges.

There are costs associated with this decision. Primary among them is the need for the server to employ RDMA Read for operations such as large WRITEs. The RDMA Read operation is a two-way exchange at the RDMA layer, which incurs additional overhead relative to RDMA Write. Additionally, RDMA Read requires resources at the data source (the client in this proposal) to maintain state and to generate replies. These costs are overcome through use of pipelining with credits, with sufficient RDMA Read resources negotiated at session initiation, and appropriate use of RDMA for writes by the client - for example, only for transfers above a certain size.

A description of which NFSv4 operation results are eligible for data transfer via RDMA Write is in [NFSDDP]. There are only two such operations: READ and READLINK. When XDR encoding these requests on an RDMA transport, the NFSv4.1 client must insert the appropriate xdr_write_list entries to indicate to the server whether the results should be transferred via RDMA or inline with a Send. As described in [NFSDDP], a zero-length write chunk is used to indicate an inline result. In this way, it is unnecessary to create new operations for RDMA-mode versions of READ and READLINK.
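The following sketch shows how a client might fill in such an entry. The segment layout reflects the steering tag/length/offset three-tuple discussed in this section; the names and the size-threshold heuristic are assumptions of the example, and the normative encoding is defined by [RPCRDMA] and [NFSDDP]:

   #include <stdint.h>
   #include <stddef.h>

   /* One RDMA segment "three-tuple" as carried in an xdr_write_list
    * entry: steering tag, length, and target offset. */
   struct rdma_segment {
       uint32_t stag;
       uint32_t length;
       uint64_t offset;
   };

   /* Build the write-list entry for a READ or READLINK request.
    * A zero-length segment requests an inline result, per [NFSDDP].
    * The threshold is a client heuristic, not a protocol rule. */
   static struct rdma_segment
   make_read_result_chunk(uint32_t stag, uint64_t offset,
                          size_t expected_len, size_t rdma_threshold)
   {
       struct rdma_segment seg = { 0, 0, 0 };
       if (expected_len >= rdma_threshold) {
           seg.stag   = stag;
           seg.length = (uint32_t)expected_len;
           seg.offset = offset;
       }
       return seg;
   }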
Another tool to avoid creation of new, RDMA-mode operations is the Reply Chunk [RPCRDMA], which is used by RPC in RDMA mode to return large replies via RDMA as if they were inline. Reply chunks are used for operations such as READDIR, which returns large amounts of information, but in many small XDR segments. Reply chunks are offered by the client, and the server can use them in preference to inline transfer. Reply chunks are transparent to upper layers such as NFSv4.

In the very rare cases where another NFSv4.1 operation requires larger buffers than were negotiated when the session was created (for example, extraordinarily large RENAMEs), the underlying RPC layer may support the use of "Message as an RDMA Read Chunk" and "RDMA Write of Long Replies" as described in [RPCRDMA]. No additional support is required in the NFSv4.1 client for this. The client should be certain that its requested buffer sizes are not so small as to make this a frequent occurrence, however.

All operations are initiated by a Send, and are completed with a Send. This is exactly as in conventional NFSv4, but under RDMA it has a significant purpose: RDMA operations are not complete, that is, guaranteed consistent, at the data sink until followed by a successful Send completion (i.e. a receive). These events provide a natural opportunity for the initiator (client) to enable and later disable RDMA access to the memory which is the target of each operation, in order to provide for consistent and secure operation.

The RDMAP Send with Invalidate operation may be worth employing in this respect, as it relieves the client of certain overhead in this case. A "onetime" boolean advisory to each RDMA region might become a hint to the server that the client will use the three-tuple for only one NFSv4 operation. For a transport such as iWARP, the server can assist the client in invalidating the three-tuple by performing a Send with Solicited Event and Invalidate. The server may ignore this hint, in which case the client must perform a local invalidate after receiving the indication from the server that the NFSv4 operation is complete. This may be considered in a future version of this draft and [NFSDDP].

In a trusted environment, it may be desirable for the client to persistently enable RDMA access by the server. Such a model is desirable for the highest level of efficiency and lowest overhead.

   RDMA message exchanges

      Client                                    Server
             : Direct Read Request             :
      Send   : ------------------------------>  : untagged
             :                                 : buffer
             : Segment                         :
      tagged : <------------------------------  : RDMA Write
      buffer :                                 :
             :                                 :
             : [Segment]                       :
      tagged : <------------------------------  : [RDMA Write]
      buffer :                                 :
             :                                 :
             : Direct Read Response            :
    untagged : <------------------------------  : Send (w/Inv.)
      buffer :                                 :

      Client                                    Server
             : Direct Write Request            :
      Send   : ------------------------------>  : untagged
             :                                 : buffer
             : Segment                         :
      tagged : v------------------------------  : RDMA Read
      buffer : +------------------------------> :
             :                                 :
             : [Segment]                       :
      tagged : v------------------------------  : [RDMA Read]
      buffer : +------------------------------> :
             :                                 :
             : Direct Write Response           :
    untagged : <------------------------------  : Send (w/Inv.)
      buffer :                                 :

9.4. Connection Models

There are three scenarios in which to discuss the connection model.
Each will be discussed individually, after describing the common case encountered at initial connection establishment. After a successful connection, the first request proceeds, in the case of a new client association, to initial session creation, and then optionally to session callback channel binding, prior to regular operation.

Commonly, each new client "mount" will be the action which drives creation of a new session. However, there are any number of other approaches. Clients may choose to share a single connection and session among all their mount points. Or, clients may support trunking, where additional connections are created, but all within a single session. Alternatively, the client may choose to create multiple sessions, each tuned to the buffering and reliability needs of the mount point. For example, a readonly mount can sharply reduce its write buffering and impose no requirement on the server to support reliable duplicate request caching.

Similarly, the client can choose among several strategies for clientid usage. Sessions can share a single clientid, or create new clientids as the client deems appropriate. For kernel-based clients which service multiple authenticated users, a single clientid shared across all mount points is generally the most appropriate and flexible approach. For example, all the client's file operations may wish to share locking state, with the local client kernel taking responsibility for arbitrating access locally. For clients choosing to support other authentication models, for example userspace implementations, a new clientid is indicated. Through use of session create options, both models are supported at the client's choice.

Since the session is explicitly created and destroyed by the client, and each client is uniquely identified, the server may be specifically instructed to discard unneeded persistent state. For this reason, it is possible for a server to retain any previous state indefinitely, and to place its destruction under administrative control. Or, a server may choose to retain state for some configurable period, provided that the period meets other NFSv4 requirements such as lease reclamation time, etc. However, since discarding this state at the server may affect the correctness of the server as seen by the client across network partitioning, such discarding of state should be done only in a conservative manner.

Each client request to the server carries a new SEQUENCE operation within each COMPOUND, which provides the session context. This session context then governs the request control, duplicate request caching, and other persistent parameters managed by the server for a session.

9.4.1. TCP Connection Model

The following is a schematic diagram of the NFSv4.1 protocol exchanges leading up to normal operation on a TCP stream.

   Client                                          Server
   TCPmode : Create Clientid(nfs_client_id4)     : TCPmode
           : ------------------------------>     :
           :                                     :
           : Clientid reply(clientid, ...)       :
           : <------------------------------     :
           :                                     :
           : Create Session(clientid, size S,    :
           :                maxreq N, STREAM, ...):
           : ------------------------------>     :
           :                                     :
           : Session reply(sessionid, size S',   :
           :                maxreq N')           :
           : <------------------------------     :
           :                                     :
           : ------------------------------>     :
           : <------------------------------     :
           :                                     :

No net additional exchange is added to the initial negotiation by this proposal. In the NFSv4.1 exchange, CREATECLIENTID replaces SETCLIENTID (eliding the callback "clientaddr4" addressing) and CREATESESSION subsumes the function of SETCLIENTID_CONFIRM, as described elsewhere in this document. Callback channel binding is optional, as in NFSv4.0.

Note that the STREAM transport type is shown above, but since the transport mode remains unchanged and transport attributes are not necessarily exchanged, DEFAULT could also be passed.

9.4.2. Negotiated RDMA Connection Model

One possible design which has been considered is to have a "negotiated" RDMA connection model, supported via use of a session bind operation as a required first step. However, due to issues mentioned earlier, this proved problematic. This section remains as a reminder of that fact; it is possible such a mode could still be supported.

It is not considered critical that this be supported, for two reasons. One, the session persistence provides a way for the server to remember important session parameters, such as sizes and maximum request counts. These values can be used to restore the endpoint prior to making the first reply. Two, there are currently no critical RDMA parameters to set in the endpoint at the server side of the connection. RDMA Read resources, which are in general not settable after entering RDMA mode, are set only at the client - the originator of the connection. Therefore, as long as the RDMA provider supports an automatic RDMA connection mode, no further support is required from the NFSv4.1 protocol for reconnection.

Note that the client, when reconnecting, must provide at least as many RDMA Read resources to its local queue for the benefit of the server as it used when negotiating the session. If this value is no longer appropriate, the client should resynchronize its session state, destroy the existing session, and start over with the more appropriate values.

9.4.3. Automatic RDMA Connection Model

The following is a schematic diagram of the NFSv4.1 protocol exchanges performed on an RDMA connection.

   Client                                          Server
   RDMAmode :                                     : RDMAmode
            :                                     :
   Prepost  :                                     : Prepost
   receive  :                                     : receive
            :                                     :
            : Create Clientid(nfs_client_id4)     :
            : ------------------------------>     :
            :                                     : Prepost
            : Clientid reply(clientid, ...)       : receive
   Prepost  : <------------------------------     :
   receive  :                                     :
            : Create Session(clientid, size S,    :
            :                maxreq N, RDMA ...)  :
            : ------------------------------>     :
            :                                     : Prepost <=N'
            : Session reply(sessionid, size S',   : receives of
            :                maxreq N')           : size S'
            : <------------------------------     :
            :                                     :
            : ------------------------------>     :
            : <------------------------------     :
            :                                     :

9.5. Buffer Management, Transfer, Flow Control

Inline operations in NFSv4.1 behave effectively the same as TCP sends. Procedure results are passed in a single message, and its completion at the client signals the receiving process to inspect the message.

RDMA operations are performed solely by the server in this proposal, as described in the previous "RDMA Direct Transfer Model" section.
Since server RDMA operations do not result in a completion at the client, and due to ordering rules in RDMA transports, after all required RDMA operations are complete, a Send (Send with Solicited Event for iWARP) containing the procedure results is performed from server to client. This Send operation will result in a completion which will signal the client to inspect the message.

In the case of client read-type NFSv4 operations, the server will have issued RDMA Writes to transfer the resulting data into client-advertised buffers. The subsequent Send operation performs two necessary functions: finalizing any active or pending DMA at the client, and signaling the client to inspect the message.

In the case of client write-type NFSv4 operations, the server will have issued RDMA Reads to fetch the data from the client-advertised buffers. No data consistency issues arise at the client, but the completion of the transfer must be acknowledged, again by a Send from server to client.

In either case, the client advertises buffers for direct (RDMA style) operations. The client may desire certain advertisement limits, and may wish the server to perform remote invalidation on its behalf when the server has completed its RDMA. This may be considered in a future version of this draft. In the absence of remote invalidation, the client may perform its own, local invalidation after the operation completes. This invalidation should occur prior to any RPCSEC_GSS integrity checking, since a validly remotely accessible buffer can possibly be modified by the peer. However, once invalidation is complete and the contents' integrity has been checked, the contents are locally secure.

Credit updates over RDMA transports are supported at the RPC layer as described in [RPCRDMA]. In each request, the client requests a desired number of credits to be made available to the connection on which it sends the request. The client must not send more requests than the number which the server has previously advertised; in the case of the first request, only one may be sent. If the client exceeds its credit limit, the connection may close with a fatal RDMA error.

The server then executes the request, and replies with an updated credit count accompanying its results. Since replies are sequenced by their RDMA Send order, the most recent results always reflect the server's limit. In this way the client will always know the maximum number of requests it may safely post.

Because the client requests an arbitrary credit count in each request, it is relatively easy for the client to request more, or fewer, credits to match its expected need. A client that discovered itself frequently queuing outgoing requests due to lack of server credits might increase its requested credits proportionately in response. Or, a client might have a simple, configurable number. The protocol also provides a per-operation "maxslot" exchange to assist in dynamic adjustment at the session level, described in a later section.

Occasionally, a server may wish to reduce the total number of credits it offers a certain client on a connection. This could be encountered if a client were found to be consuming its credits slowly, or not at all. A client might notice this itself, and reduce its requested credits in advance, for instance requesting only the count of operations it currently has queued, plus a few as a base for starting up again.
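For illustration, one such client heuristic might look as follows; the constants and names are arbitrary choices of this sketch, not protocol values:

   /* Credit count to request in the next RPC: roughly what is queued
    * and in flight, with a small floor so that an idle connection can
    * restart promptly. */
   static unsigned int request_credits(unsigned int queued,
                                       unsigned int in_flight)
   {
       const unsigned int floor_credits = 8;
       unsigned int want = queued + in_flight;
       return want > floor_credits ? want : floor_credits;
   }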
Such mechanisms can, however, be potentially complicated and are implementation-defined. The protocol does not require them.

Because of the way in which RDMA fabrics function, it is not possible for the server (or client back channel) to cancel outstanding receive operations. Therefore, effectively only one credit can be withdrawn per receive completion. The server (or client back channel) would simply not replenish a receive operation when replying. The server can still reduce the available credit advertisement in its replies to the target value it desires, as a hint to the client that its credit target is lower and that it should expect it to be reduced accordingly. Even if it were possible to cancel outstanding receives, the server could not safely do so, since the client may have already sent requests in expectation of the previous limit.

This brings out an interesting scenario similar to the client reconnect discussed earlier in "Connection Models". How does the server reduce the credits of an inactive client? One approach is for the server to simply close such a connection and require the client to reconnect at a new credit limit. This is acceptable, if inefficient, when the connection setup time is short and where the server supports persistent session semantics.

A better approach is to provide a back channel request to return the operations channel credits. The server may request the client to return some number of credits; the client must comply by performing operations on the operations channel, provided of course that the request does not drop the client's credit count to zero (in which case the connection would deadlock). If the client finds that it has no requests with which to consume the credits it was previously granted, it must send zero-length Send RDMA operations, or NULL NFSv4 operations, in order to return the resources to the server. If the client fails to comply in a timely fashion, the server can recover the resources by breaking the connection.

While in principle the back channel credits could be subject to a similar resource adjustment, in practice this is not an issue, since the back channel is used purely for control and is expected to be statically provisioned.

It is important to note that in addition to maximum request counts, the sizes of buffers are negotiated per-session. This permits the most efficient allocation of resources on both peers. There is an important requirement on reconnection: the maximum sizes in effect at reconnect must be at least as large as those previously used, to allow recovery, since any replies that are replayed from the server's duplicate request cache must be able to be received into client buffers. In the case where a client has received replies to all its retried requests (and therefore received all its expected responses), the client may disconnect and reconnect with different buffers at will, since no cache replay will be required.
If the client retransmitted a request, the additional credit consumed on the server might lead to RDMA connection failure unless the client accounted for it and decreased its available credit, leading to wasted resources.

RDMA credits present a new issue to the duplicate request cache in NFSv4.1. The request cache may be used when a connection within a session is lost, such as after the client reconnects. Credit information is a dynamic property of the connection, and stale values must not be replayed from the cache. This implies that the request cache contents must not be blindly used when replies are issued from it, and that credit information appropriate to the channel must be refreshed by the RPC layer.

Finally, RDMA fabrics do not guarantee that the memory handles (Steering Tags) within each RDMA three-tuple are valid in a scope outside that of a single connection. Therefore, handles used by the direct operations become invalid after connection loss. The server must ensure that any RDMA operations which must be replayed from the request cache use the newly provided handle(s) from the most recent request.

9.7. The Back Channel

The NFSv4 callback operations present a significant resource problem for the RDMA-enabled client. Clearly, callbacks must be negotiated in the way credits are for the ordinary operations channel, for requests flowing from client to server. But for callbacks to arrive on the same RDMA endpoint as operation replies would require dedicating additional resources, and specialized demultiplexing and event handling. Moreover, callbacks may not require RDMA service at all (they do not normally carry substantial data payloads). It is highly desirable to streamline this critical path via a second communications channel. The session callback channel binding facility is designed for exactly such a situation, by dynamically associating a new connected endpoint with the session, and separately negotiating sizes and counts for active callback channel operations. The binding operation is firewall-friendly since it does not require the server to initiate the connection.

This same method serves as well for ordinary TCP connection mode. It is expected that all NFSv4.1 clients may make use of the session facility to streamline their design.

The back channel functions exactly the same as the operations channel, except that no RDMA operations are required to perform transfers; instead, the sizes are required to be sufficiently large to carry all data inline, and of course the client and server reverse their roles with respect to which is in control of credit management. The same rules apply for all transfers, with the server being required to flow control its callback requests.

The back channel is optional. If not bound on a given session, the server must not issue callback operations to the client. This in turn implies that such a client must never put itself in the situation where the server will need to do so, lest the client lose its connection by force, or its operation be incorrect. For the same reason, if a back channel is bound, the client is subject to revocation of its delegations if the back channel is lost. Any connection loss should be corrected by the client as soon as possible.
This can be convenient for the NFSv4.1 client; if the client expects to make no use of back channel facilities such as delegations, then there is no need to create it. This may save significant resources and complexity at the client.

For these reasons, if the client wishes to use the back channel, that channel must be bound first, before using the operations channel. In this way, the server will not find itself in a position where it will send callbacks on the operations channel when the client is not prepared for them.

There is one special case, in which the back channel is bound to the operations channel's connection. This configuration would normally be used over a TCP stream connection to exactly implement the NFSv4.0 behavior, but over RDMA would require complex resource and event management at both sides of the connection. For this reason, the server is not required to accept such a bind request on an RDMA connection, though doing so is recommended.

9.8. COMPOUND Sizing Issues

Very large responses may pose duplicate request cache issues. Since servers will want to bound the storage required for such a cache, the unlimited size of response data in COMPOUND may be troublesome. If COMPOUND is used in all its generality, then the inclusion of certain non-idempotent operations within a single COMPOUND request may render the entire request non-idempotent. (For example, a single COMPOUND request which read a file or symbolic link, then removed it, would be obliged to cache the data in order to allow identical replay.) Thus, many requests might include operations that return an arbitrary amount of data.

It is not satisfactory for the server to reject COMPOUNDs at will with NFS4ERR_RESOURCE when they pose such difficulties for the server, as this results in serious interoperability problems. Instead, any such limits must be explicitly exposed as attributes of the session, ensuring that the server can explicitly support any duplicate request cache needs at all times.

9.9. Data Alignment

A negotiated data alignment enables certain scatter/gather optimizations. A facility for this is supported by [RPCRDMA]. Where NFS file data is the payload, specific optimizations become highly attractive.

Header padding is requested by each peer at session initiation, and may be zero (no padding). Padding leverages the useful property that RDMA receives preserve alignment of data, even when they are placed into anonymous (untagged) buffers. If requested, client inline writes will insert appropriate pad bytes within the request header to align the data payload on the specified boundary. The client is encouraged to be optimistic and simply pad all WRITEs within the RPC layer to the negotiated size, in the expectation that the server can use them efficiently.

It is highly recommended that clients offer to pad headers to an appropriate size. Most servers can make good use of such padding, which allows them to chain receive buffers in such a way that any data carried by client requests will be placed into appropriate buffers at the server, ready for filesystem processing. The receiver's RPC layer encounters no overhead from skipping over pad bytes, and the RDMA layer's high performance makes the insertion and transmission of padding on the sender a significant optimization.
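The pad computation itself is simple; a sketch, with illustrative names:

   #include <stddef.h>

   /* Pad the variable-length RPC/NFS header so that the WRITE payload
    * begins on the negotiated alignment boundary.  A boundary of zero
    * means padding was not negotiated. */
   static size_t pad_bytes_needed(size_t header_len, size_t boundary)
   {
       if (boundary == 0)
           return 0;
       return (boundary - (header_len % boundary)) % boundary;
   }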
In this way, the need for servers to perform RDMA Read to satisfy all but the largest client writes is obviated. An added benefit is the reduction of message roundtrips on the network - a potentially good trade, where latency is present.

The value to choose for padding is subject to a number of criteria. A primary source of variable-length data in the RPC header is the authentication information, the form of which is client-determined, possibly in response to server specification. The contents of COMPOUNDs, sizes of strings such as those passed to RENAME, etc. all go into the determination of a maximal NFSv4 request size and therefore minimal buffer size. The client must select its offered value carefully, so as not to overburden the server, and vice versa. The payoff of an appropriate padding value is higher performance.

   Sender gather:
       |RPC Request|Pad bytes|Length| -> |User data...|
       \------+---------------------/      \
              \                             \
               \                             \
     Receiver scatter:                        \-----------+- ...
       /-----+----------------\               \           \
       |RPC Request|Pad|Length|   ->  |FS buffer|->|FS buffer|->...

In the above case, the server may recycle unused buffers to the next posted receive if unused by the actual received request, or may pass the now-complete buffers by reference for normal write processing. For a server which can make use of it, this removes any need for data copies of incoming data, without resorting to complicated end-to-end buffer advertisement and management. This includes most kernel-based and integrated server designs, among many others. The client may perform similar optimizations, if desired.

Padding is negotiated by the session creation operation, and subsequently used by the RPC RDMA layer, as described in [RPCRDMA].

9.10. NFSv4 Integration

The following section discusses the integration of the proposed RDMA extensions with NFSv4.0.

9.10.1. Minor Versioning

Minor versioning is the existing facility to extend the NFSv4 protocol, and this proposal takes that approach. Minor versioning of NFSv4 is relatively restrictive, and allows for tightly limited changes only. In particular, it does not permit adding new "procedures" (it permits adding only new "operations"). Interoperability concerns make it impossible to consider additional layering to be a minor revision. This somewhat limits the changes that can be proposed when considering extensions.

To support the duplicate request cache integrated with sessions and request control, it is desirable to tag each request with an identifier to be called a Slotid. This identifier must be passed by NFSv4 when running atop any transport, including traditional TCP. Therefore it is not desirable to add the Slotid to a new RPC transport, even though such a transport is indicated for support of RDMA. This draft and [RPCRDMA] do not propose such an approach. Instead, this proposal conforms to the requirements of NFSv4 minor versioning, through the use of a new operation within NFSv4 COMPOUND procedures, as detailed below.

If sessions are in use for a given clientid, this same clientid cannot be used for non-session NFSv4 operation, including NFSv4.0. Because the server will have allocated session-specific state to the active clientid, it would be an unnecessary burden on the server implementor to support and account for additional, non-session traffic, in addition to being of no benefit. Therefore this proposal prohibits a single clientid from doing this.
Nevertheless, employing a new clientid for such traffic is supported.

9.10.2. Slot Identifiers and Server Duplicate Request Cache

The presence of deterministic maximum request limits on a session enables in-progress requests to be assigned unique values with useful properties.

The RPC layer provides a transaction ID (xid), which, while required to be unique, is not especially convenient for tracking requests. The transaction ID is only meaningful to the issuer (client); it cannot be interpreted at the server except to test for equality with previously issued requests. Because RPC operations may be completed by the server in any order, many transaction IDs may be outstanding at any time. The client may therefore perform a computationally expensive lookup operation in the process of demultiplexing each reply.

In the proposal, there is a limit to the number of active requests. This immediately enables a convenient, computationally efficient index for each request, which is designated as a Slot Identifier, or slotid.

When the client issues a new request, it selects a slotid in the range 0..N-1, where N is the server's current "totalrequests" limit granted the client on the session over which the request is to be issued. The slotid must be unused by any of the requests which the client has already active on the session. "Unused" here means the client has no outstanding request for that slotid. Because the slotid is always an integer in the range 0..N-1, client implementations can use the slotid from a server response to efficiently match responses with outstanding requests, for example by using the slotid to index into an outstanding request array. This can be used to avoid expensive hashing and lookup functions in the performance-critical receive path.

The sequenceid, which accompanies the slotid in each request, enables a second important check at the server: the server must be able to determine efficiently whether a request using a certain slotid is a retransmit or a new, never-before-seen request. It is not feasible for the client to assert that it is retransmitting to implement this, because for any given request the client cannot know whether the server has seen it unless the server actually replies. Of course, if the client has seen the server's reply, the client would not retransmit!

The sequenceid must increase monotonically for each new transmit of a given slotid, and must remain unchanged for any retransmission. The server must in turn compare each newly received request's sequenceid with the last one previously received for that slotid, to see if the new request is:

o  A new request, in which the sequenceid is greater than that
   previously seen in the slot (accounting for sequence wraparound).
   The server proceeds to execute the new request.

o  A retransmitted request, in which the sequenceid is equal to that
   last seen in the slot. Note that this request may be either
   complete, or in progress. The server performs replay processing in
   these cases.

o  A misordered duplicate, in which the sequenceid is less than that
   previously seen in the slot. The server must drop the incoming
   request, which may imply dropping the connection if the transport
   is reliable, as dictated by section 3.1.1 of [RFC3530].

This last condition is possible on any connection, not just unreliable, unordered transports.
It can be caused by, among other things, delayed traffic on abandoned TCP connections which are not yet closed at the server, or by pathological client implementations. Therefore, the server may wish to harden itself against certain repeated occurrences of this, as it would for retransmissions in [RFC3530].

It is recommended, though not necessary for protocol correctness, that the client simply increment the sequenceid by one for each new request on each slotid. This reduces the wraparound window to a minimum, and is useful for tracing and avoidance of possible implementation errors. The client may however, for implementation-specific reasons, choose a different algorithm. For example, it might maintain a single sequence space for all slots in the session - e.g. employing the RPC XID itself. The sequenceid, in any case, is never interpreted by the server for anything but to test by comparison with previously seen values.

The server may thereby use the slotid, in conjunction with the sessionid and sequenceid, within the SEQUENCE portion of the request to maintain its duplicate request cache (DRC) for the session, as opposed to the traditional approach of ONC RPC applications that use the XID along with certain transport information [RW96]. Unlike the XID, the slotid is always within a specific range; this has two implications. The first implication is that for a given session, the server need only cache the results of a limited number of COMPOUND requests. The second implication derives from the first: unlike XID-indexed DRCs, the slotid DRC by its nature cannot be overflowed. Through use of the sequenceid to identify retransmitted requests, it is notable that the server does not need to actually cache the request itself, reducing the storage requirements of the DRC further. These new facilities make it practical to maintain all the required entries for an effective DRC.

The slotid and sequenceid therefore take over the traditional role of the port number in the server DRC implementation, and the session replaces the IP address. This approach is considerably more portable and completely robust - it is not subject to the frequent reassignment of ports as clients reconnect over IP networks. In addition, the RPC XID is not used in the reply cache, enhancing robustness of the cache in the face of any rapid reuse of XIDs by the client.

It is required to encode the slotid information into each request in a way that does not violate the minor versioning rules of the NFSv4.0 specification. This is accomplished here by encoding it in a control operation within each NFSv4.1 COMPOUND and CB_COMPOUND procedure. The operation easily piggybacks within existing messages. The implementation section of this document describes the specific proposal.

In general, the receipt of a new sequenced request arriving on any valid slot is an indication that the previous DRC contents of that slot may be discarded. In order to further assist the server in slot management, the client is required to use the lowest available slot when issuing a new request. In this way, the server may be able to retire additional entries. A minimal sketch of the per-slot check described above follows.
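In this sketch, the type and field names are assumptions of the example, and the wraparound handling presumes the recommended increment-by-one discipline:

   #include <stdint.h>

   enum seq_disposition { SEQ_NEW, SEQ_REPLAY, SEQ_MISORDERED };

   /* Server-side duplicate request cache entry for one slot. */
   struct drc_slot {
       uint32_t last_seqid;     /* sequenceid most recently accepted */
       void    *cached_reply;   /* reply to return on replay */
   };

   /* Classify an arriving { slotid, sequenceid } pair against the
    * slot's cached state; unsigned arithmetic tolerates wraparound. */
   static enum seq_disposition
   classify_request(const struct drc_slot *slot, uint32_t seqid)
   {
       if (seqid == slot->last_seqid)
           return SEQ_REPLAY;                  /* retransmission */
       if ((int32_t)(seqid - slot->last_seqid) > 0)
           return SEQ_NEW;                     /* execute, then cache */
       return SEQ_MISORDERED;                  /* drop the request */
   }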
However, in the case where the server is actively adjusting its granted maximum request count to the client, it may not be able to use receipt of the slotid to retire cache entries. The slotid used in an incoming request may not reflect the server's current idea of the client's session limit, because the request may have been sent from the client before the update was received. Therefore, in the downward adjustment case, the server may have to retain a number of duplicate request cache entries at least as large as the old value, until operation sequencing rules allow it to infer that the client has seen its reply.

The SEQUENCE (and CB_SEQUENCE) operation also carries a "maxslot" value which conveys additional client slot usage information. The client must always provide its highest-numbered outstanding slot value in the maxslot argument, and the server may reply with a new recognized value. The client should in all cases provide the most conservative value possible, although it can be increased somewhat above the actual instantaneous usage to maintain some minimum or optimal level. This provides a way for the client to yield unused request slots back to the server, which in turn can use the information to reallocate resources. Obviously, maxslot can never be zero, or the session would deadlock.

The server also provides a target maxslot value to the client, which is an indication to the client of the maxslot the server wishes the client to be using. This permits the server to withdraw (or add) resources from a client that has been found not to be using them, in order to more fairly share resources among a varying level of demand from other clients. The client must always comply with the server's value updates, since they indicate newly established hard limits on the client's access to session resources. However, because of request pipelining, the client may have active requests in flight reflecting prior values; therefore, the server must not immediately require the client to comply.

It is worthwhile to note that Sprite RPC [BW87] defined a "channel" which in some ways is similar to the slotid proposed here. Sprite RPC used channels to implement parallel request processing and request/response cache retirement.

9.10.3. Resolving server callback races with sessions

It is possible for server callbacks to arrive at the client before the reply from related forward channel operations. For example, a client may have been granted a delegation to a file it has opened, but the reply to the OPEN (informing the client of the granting of the delegation) may be delayed in the network. If a conflicting operation arrives at the server, it will recall the delegation using the callback channel, which may be on a different TCP connection, perhaps even a different network. If the callback request arrives before the related reply, the client may reply to the server with an error.

The presence of a session between client and server can be used to alleviate this issue. When a session is in place, each client request is uniquely identified by its { slotid, sequenceid } pair. By the rules under which slot entries (duplicate request cache entries) are retired, the server has knowledge whether the client has "seen" each of the server's replies. The server can therefore provide sufficient information to the client to allow it to disambiguate between an erroneous or conflicting callback and a race condition.

To implement this, the CB_SEQUENCE operation which begins each server callback may optionally carry a related { slotid, sequenceid } identifier.
If the client finds this identifier to be currently outstanding (i.e. the server's reply has not been seen by the client), it can determine that the callback has raced the reply, and act accordingly.

The client must not simply wait forever for the expected server reply to arrive on any of the session's operations channels, because it is possible that it will be delayed indefinitely. However, it should endeavor to wait for a period of time, and if the time expires it can provide a more meaningful error such as NFS4ERR_DELAY.

[[Comment.4: We need to consider the clients' options here, and describe them... NFS4ERR_DELAY has been discussed as a legal reply to CB_RECALL?]]

There are other scenarios under which callbacks may race replies, among them pNFS layout recalls, described in Section 14.5.3. [[Comment.5: fill in the blanks w/others, etc...]]

Therefore, for each client operation which might result in some sort of server callback, the server should "remember" the { slotid, sequenceid } pair of the client request until the slotid retirement rules allow the server to determine that the client has, in fact, seen the server's reply. During this time, any recalls of the associated object should carry these identifiers, for the benefit of the client. After this time, it is not necessary for the server to provide this information in related callbacks, since it is certain that a race condition can no longer occur.

9.10.4. COMPOUND and CB_COMPOUND

Support for per-operation control can be piggybacked onto NFSv4 COMPOUNDs with full transparency, by placing such facilities into their own, new operation, and placing this operation first in each COMPOUND under the new NFSv4 minor protocol revision. The contents of the operation would then apply to the entire COMPOUND.

Recall that the NFSv4 minor revision is contained within the COMPOUND header, encoded prior to the COMPOUNDed operations. By simply requiring that the new operation always be contained in NFSv4 minor COMPOUNDs, the control protocol can piggyback perfectly with each request and response. In this way, the NFSv4 RDMA Extensions may stay in compliance with the minor versioning requirements specified in section 10 of [RFC3530].

Referring to section 13.1 of the same document, the proposed session-enabled COMPOUND and CB_COMPOUND have the form:

   +-----+--------------+-----------+------------+-----------+----
   | tag | minorversion |  numops   | control op | op + args | ...
   |     |    (== 1)    | (limited) |   + args   |           |
   +-----+--------------+-----------+------------+-----------+----

and the reply's structure is:

   +------------+-----+--------+-------------------------------+--//
   |last status | tag | numres | status + control op + results | //
   +------------+-----+--------+-------------------------------+--//

       //-----------------------+----
       // status + op + results | ...
       //-----------------------+----

The single control operation within each NFSv4.1 COMPOUND defines the context and operational session parameters which govern that COMPOUND request and reply. Placing it first in the COMPOUND encoding is required in order to allow its processing before other operations in the COMPOUND.
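To illustrate the layout, the following C rendering shows the arguments such a control operation might carry, based on the fields discussed in this section (sessionid, slotid, sequenceid, and maxslot). The struct and field names are illustrative assumptions; the normative XDR appears in the implementation section of this document.

   #include <stdint.h>

   /* Illustrative arguments of the control (SEQUENCE) operation that
    * leads every NFSv4.1 COMPOUND; the real encoding is XDR. */
   struct sequence_args {
       unsigned char sessionid[16];  /* session of this request */
       uint32_t      slotid;         /* DRC slot, in 0..maxrequests-1 */
       uint32_t      sequenceid;     /* bumped per new use of the slot */
       uint32_t      maxslot;        /* highest slot currently in use */
   };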
Traditional XDR implementations frequently use generated unmarshaling code to convert objects to local form, incurring a data copy in the process (in addition to subjecting the caller to recursive calls, etc.). Often, such conversions are carried out even when no size or byte order conversion is necessary.

It is recommended that implementations pay close attention to the details of memory referencing in such code. It is far more efficient to inspect data in place, using native facilities to deal with word size and byte order conversion into registers or local variables, rather than formally (and blindly) performing the operation via fetch, reallocate and store. Of particular concern is the result of the READDIR operation, in which such encoding abounds.
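The following fragment sketches the in-place style of decoding being recommended here. The item layout assumed (a 4-byte length followed by opaque bytes, padded to a 4-byte boundary as XDR requires) is a simplification for illustration, not the actual READDIR4 entry encoding.

   #include <arpa/inet.h>   /* ntohl() */
   #include <stdint.h>

   /* Walk length-prefixed XDR items directly in the receive buffer,
    * converting byte order in a register rather than unmarshaling
    * each item into a freshly allocated local structure.            */
   static void scan_items(const uint32_t *p, const uint32_t *end)
   {
       while (p < end) {
           uint32_t len = ntohl(*p++);          /* convert in register */
           const char *bytes = (const char *)p; /* inspect in place    */
           (void)bytes;                         /* ...use (bytes, len) */
           p += (len + 3) / 4;                  /* skip data + padding */
       }
   }

A decoder of this style never copies the opaque data; it is well suited to buffers that remain resident in RDMA-registered memory.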
9.10.6. Effect of Sessions on Existing Operations

The use of a session replaces the use of the SETCLIENTID and SETCLIENTID_CONFIRM operations, and allows certain simplifications of the RENEW and callback addressing mechanisms in the base protocol.

The cb_program and cb_location which are obtained by the server in SETCLIENTID must not be used by the server, because the NFSv4.1 client performs callback channel designation with BIND_BACKCHANNEL. Therefore the SETCLIENTID and SETCLIENTID_CONFIRM operations become obsolete when sessions are in use, and a server should return an error to any NFSv4.1 client which issues either operation.

Another favorable result of the session is that the server is able to avoid requiring the client to perform OPEN_CONFIRM operations. The existence of a reliable and effective DRC means that the server is able to determine whether an OPEN request carrying a previously known open_owner from a client is or is not a retransmission. This eliminates the server's reason for requesting OPEN_CONFIRM: the server can simply replace any previous information on this open_owner. Client OPEN operations are therefore streamlined, reducing overhead and latency by avoiding the additional OPEN_CONFIRM exchange.

Since the session carries the client liveness indication with it implicitly, any request on a session associated with a given client will renew that client's leases. Therefore the RENEW operation is made unnecessary when a session is present, as any request (including a SEQUENCE operation with or without additional NFSv4 operations) performs its function. It is possible (though this proposal does not make any recommendation) that the RENEW operation could be made obsolete.

An interesting issue arises, however, if an error occurs on such a SEQUENCE operation. If the SEQUENCE operation fails, perhaps due to an invalid slotid or other non-renewal-based issue, the server may or may not have performed the renewal. In this case, the state of any renewal is undefined, and the client should make no assumption that it has been performed. In practice, this should not occur, but even if it did, the client would be expected to perform some sort of recovery resulting in a new, successful SEQUENCE operation, assuring the client that the renewal took place.

9.10.7. Authentication Efficiencies

NFSv4 requires the use of the RPCSEC_GSS ONC RPC security flavor [RFC2203] to provide authentication, integrity, and privacy via cryptography. The server dictates to the client the use of RPCSEC_GSS, the service (authentication, integrity, or privacy), and the specific GSS-API security mechanism that each remote procedure call and result will use.

If the connection's integrity is protected by a means other than RPCSEC_GSS, such as IPsec, then the use of RPCSEC_GSS's integrity service is nearly redundant (see the Security Considerations section for more explanation of why it is "nearly" and not completely redundant). Likewise, if the connection's privacy is protected by additional means, then the use of both RPCSEC_GSS's integrity and privacy services is nearly redundant.

Connection protection schemes, such as IPsec, are more likely to be implemented in hardware than upper layer protocols like RPCSEC_GSS. Hardware-based cryptography at the IPsec layer will be more efficient than software-based cryptography at the RPCSEC_GSS layer.

When transport integrity can be obtained, it is possible for server and client to downgrade their per-operation authentication after an appropriate exchange. This downgrade can in fact be complete enough to establish security mechanisms that have zero cryptographic overhead, effectively relying on the underlying integrity and privacy services provided by the transport.

Based on the above observations, a new GSS-API mechanism, called the Channel Conjunction Mechanism [CCM], is being defined. The CCM works by creating a GSS-API security context using as input a cookie that the initiator and target have previously agreed to be a handle for a GSS-API context created earlier over another GSS-API mechanism. NFSv4.1 clients and servers should support CCM, and they must use as the cookie the handle from a successful RPCSEC_GSS context creation over a non-CCM mechanism (such as Kerberos V5). The value of the cookie will be equal to the handle field of the rpc_gss_init_res structure from the RPCSEC_GSS specification. The [CCM] Draft provides further discussion and examples.

9.11. Sessions Security Considerations

NFSv4 minor version 1 retains all existing NFSv4 security; all security considerations present in NFSv4.0 apply to it equally. Security considerations of any underlying RDMA transport are additionally important, all the more so due to the emerging nature of such transports. Examining these issues is outside the scope of this draft.

When protecting a connection with RPCSEC_GSS, all data in each request and response (whether transferred inline or via RDMA) continues to receive this protection over RDMA fabrics [RPCRDMA]. However, when performing data transfers via RDMA, RPCSEC_GSS protection of the data transfer portion works against the efficiency which RDMA is typically employed to achieve. This is because such data is normally managed solely by the RDMA fabric, and intentionally is not touched by software. Therefore, when employing RPCSEC_GSS under CCM, and where integrity protection has been "downgraded", the cooperation of the RDMA transport provider is critical to maintain any integrity and privacy otherwise in place for the session. The means by which the local RPCSEC_GSS implementation is integrated with the RDMA data protection facilities are outside the scope of this draft.
It is logical to use the same GSS context on a session's callback channel as that used on its operations channel(s), particularly when the connection is shared by both. The client must indicate to the server:

- what security flavor(s) to use in the callback. A special callback flavor might be defined for this.

- if the flavor is RPCSEC_GSS, then the client must have previously created an RPCSEC_GSS session with the server. The client offers to the server the opaque handle<> value from the rpc_gss_init_res structure, the window size of RPCSEC_GSS sequence numbers, and an opaque gss_cb_handle.

This exchange can be performed as part of session and clientid creation, and the issue warrants careful analysis before being specified.

If the NFS client wishes to maintain full control over RPCSEC_GSS protection, it may still perform its transfer operations using either the inline or RDMA transfer model, or of course employ traditional TCP stream operation. In the RDMA inline case, header padding is recommended to optimize behavior at the server. At the client, close attention should be paid to the implementation of RPCSEC_GSS processing to minimize memory referencing and especially copying. These are well-advised in any case!

The proposed session callback channel binding improves security over that provided by NFSv4 for the callback channel. The connection is client-initiated, and subject to the same firewall and routing checks as the operations channel. The connection cannot be hijacked by an attacker who connects to the client port prior to the intended server. The connection is set up by the client with its desired attributes, such as optionally securing with IPsec or similar. The binding is fully authenticated before being activated.

9.11.1. Authentication

Proper authentication of the principal which issues any session and clientid in the proposed NFSv4.1 operations exactly follows the analogous requirement on client identifiers in NFSv4.0. It must not be possible for a client to impersonate another by guessing its session identifiers for NFSv4.1 operations, nor to bind a callback channel to an existing session. To protect against this, NFSv4.0 requires appropriate authentication and matching of the principal used. This is discussed in Section 16, Security Considerations, of [RFC3530]. The same requirement applies to the NFSv4.1 session identifier here.

Going beyond NFSv4.0, the presence of a session associated with any clientid may also be used to enhance NFSv4.1 security with respect to client impersonation. In NFSv4.0, there are many operations which carry no clientid, including in particular those which employ a stateid argument. A rogue client which wished to carry out a denial of service attack on another client could perform CLOSE, DELEGRETURN, etc. operations with that client's current filehandle, sequenceid and stateid, after having obtained them through eavesdropping or another approach. Locking and open downgrade operations could be similarly attacked.

When an NFSv4.1 session is in place for any clientid, countermeasures are easily applied through use of authentication by the server. Because the sessionid is present in each request within a session, the server may verify that the clientid is in fact originating from a principal with the appropriate authenticated credentials, that the sessionid belongs to the clientid, and that the stateid is valid in these contexts. This is in general not possible with the affected operations in NFSv4.0, because the clientid is not present in those requests.
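The checks just described might be organized as follows. This is a minimal server-side sketch; the types and function names are assumptions of the illustration, not protocol-defined structures.

   #include <stdbool.h>
   #include <stdint.h>

   /* Illustrative identifiers; real sessionids, clientids, and
    * principals are protocol- and implementation-defined.           */
   typedef uint64_t sessionid_t;
   typedef uint64_t clientid_t;
   typedef uint64_t principal_t;

   struct session {
       sessionid_t id;
       clientid_t  clientid;
       principal_t principal;   /* authenticated at session creation */
   };

   /* Countermeasure sketch: every request carries a sessionid, so a
    * stateid-bearing operation can be tied back to an authenticated
    * principal before the server acts on it.                        */
   static bool request_authorized(const struct session *s,
                                  principal_t rq_principal,
                                  clientid_t stateid_owner)
   {
       if (rq_principal != s->principal)   /* credentials must match  */
           return false;
       if (stateid_owner != s->clientid)   /* stateid must belong to  */
           return false;                   /* the session's clientid  */
       return true;
   }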
In the event that authentication information is not available in the incoming request, for example after a reconnection when the security was previously downgraded using CCM, the server must require the client to re-establish authentication, so that the server may validate the other client-provided context prior to executing any operation. The sessionid present in the newly retransmitted request, combined with the retransmission detection enabled by the NFSv4.1 duplicate request cache, provides a convenient and reliable context for the server to use for this contingency.

The server should take care to protect itself against denial of service attacks in the creation of sessions and clientids. Clients which connect and create sessions, only to disconnect and never use them, may leave significant state behind. (The same issue applies to NFSv4.0 with clients which may perform SETCLIENTID, then never perform SETCLIENTID_CONFIRM.) Careful authentication coupled with resource checks is highly recommended.

10. Multi-server Name Space

NFSv4.1 supports attributes that allow a namespace to extend beyond the boundaries of a single server. Use of such multi-server namespaces is optional, and for many purposes, single-server namespaces are perfectly acceptable. Use of multi-server namespaces can provide many advantages, however, by separating a file system's logical position in a name space from the (possibly changing) logistical and administrative considerations that result in particular file systems being located on particular servers.

10.1. Location attributes

NFSv4 contains recommended attributes that allow file systems on one server to be associated with one or more instances of that file system on other servers. These attributes specify such file systems by giving a server name (either a DNS name or an IP address) together with the path of that filesystem within that server's single-server name space.

The fs_locations_info recommended attribute allows specification of one or more file system locations where the data corresponding to a given file system may be found. This attribute provides to the client, in addition to information about file system locations, extensive information about the various file system choices (e.g. priority for use, writability, currency, etc.) as well as information to help the client efficiently effect as seamless a transition as possible among multiple file system instances, when and if that should be necessary.

The fs_locations recommended attribute is inherited from NFSv4.0 and only allows specification of the file system locations where the data corresponding to a given file system may be found. Servers should make this attribute available whenever fs_locations_info is supported, but client use of fs_locations_info is to be preferred.

10.2. File System Presence or Absence

A given location in an NFSv4 namespace (typically but not necessarily a multi-server namespace) can have a number of file system locations associated with it (via the fs_locations or fs_locations_info attribute). There may also be an actual current file system at that location, accessible via normal namespace operations (e.g. LOOKUP).
In this case, the file system is said to be "present" at that position in the namespace, and clients will typically use it, reserving use of additional locations specified via the location-related attributes for situations in which the principal location is no longer available.

When there is no actual filesystem at the namespace location in question, the file system is said to be "absent". An absent file system contains no files or directories other than the root, and any reference to it, except to access a small set of attributes useful in determining alternate locations, will result in an error, NFS4ERR_MOVED. Note that if the server ever returns NFS4ERR_MOVED (i.e. file systems may be absent), it MUST support the fs_locations attribute and SHOULD support the fs_locations_info and fs_absent attributes.

While the error name suggests that we have a case of a file system which once was present and has only later become absent, this is only one possibility. A position in the namespace may be permanently absent, with the file system(s) designated by the location attributes being the only realization. The name NFS4ERR_MOVED reflects an earlier, more limited conception of its function, but this error will be returned whenever the referenced file system is absent, whether it has moved or not.

Except in the case of GETATTR-type operations (to be discussed later), when the current filehandle at the start of an operation is within an absent file system, that operation is not performed and the error NFS4ERR_MOVED is returned, to indicate that the filesystem is absent on the current server.

Because a GETFH cannot succeed if the current filehandle is within an absent file system, filehandles within an absent filesystem cannot be transferred to the client. When a client does have filehandles within an absent file system, it is the result of obtaining them when the file system was present, and having the file system become absent subsequently.

It should be noted that because the check for the current filehandle being within an absent filesystem happens at the start of every operation, operations which change the current filehandle so that it is within an absent filesystem will not result in an error. This allows such combinations as PUTFH-GETATTR and LOOKUP-GETATTR to be used to get attribute information, particularly location attribute information, as discussed below.

The recommended file system attribute fs_absent can be used to interrogate the present/absent status of a given file system.

10.3. Getting Attributes for an Absent File System

When a file system is absent, most attributes are not available, but it is necessary to allow the client access to the small set of attributes that are available, and most particularly those that give information about the correct current locations for this file system: fs_locations and fs_locations_info.

10.3.1. GETATTR Within an Absent File System

As mentioned above, an exception is made for GETATTR in that attributes may be obtained for a filehandle within an absent file system. This exception only applies if the attribute mask contains at least one attribute bit that indicates the client is interested in a result regarding an absent file system: fs_locations, fs_locations_info, or fs_absent. If none of these attributes is requested, GETATTR will result in an NFS4ERR_MOVED error.
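A server's check for this exception amounts to a simple mask test, as the following sketch illustrates. The bit assignments and names here are hypothetical; the real attribute numbers are fixed by the protocol's attribute bitmap definitions.

   #include <stdbool.h>
   #include <stdint.h>

   /* Hypothetical bit positions, for illustration only. */
   #define ATTR_FS_LOCATIONS       (1u << 0)
   #define ATTR_FS_LOCATIONS_INFO  (1u << 1)
   #define ATTR_FS_ABSENT          (1u << 2)

   #define ABSENT_FS_OK_MASK \
       (ATTR_FS_LOCATIONS | ATTR_FS_LOCATIONS_INFO | ATTR_FS_ABSENT)

   /* GETATTR on an absent file system proceeds only if at least one
    * location-related attribute was requested; otherwise the server
    * returns NFS4ERR_MOVED.                                          */
   static bool getattr_allowed_on_absent(uint32_t requested_mask)
   {
       return (requested_mask & ABSENT_FS_OK_MASK) != 0;
   }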
When a GETATTR is done on an absent file system, the set of supported attributes is very limited. Many attributes, including those that are normally mandatory, will not be available on an absent file system. In addition to the attributes mentioned above (fs_locations, fs_locations_info, fs_absent), the following attributes SHOULD be available on absent file systems; in the case of recommended attributes, they should be available at least to the same degree that they are available on present file systems.

change: This attribute is useful for absent file systems and can be helpful in summarizing to the client when any of the location-related attributes changes.

fsid: This attribute should be provided so that the client can determine file system boundaries, including, in particular, the boundary between present and absent file systems.

mounted_on_fileid: For objects at the top of an absent file system this attribute needs to be available. Since the fileid is one which is within the present parent file system, there should be no need to reference the absent file system to provide this information.

Other attributes SHOULD NOT be made available for absent file systems, even when it is possible to provide them. The server should not assume that more information is always better and should avoid gratuitously providing additional information.

When a GETATTR operation includes a bit mask for one of the attributes fs_locations, fs_locations_info, or fs_absent, but where the bit mask includes attributes which are not supported, GETATTR will not return an error, but will return the mask of the actual attributes supported with the results.

Handling of VERIFY/NVERIFY is similar to GETATTR in that if the attribute mask does not include fs_locations, fs_locations_info, or fs_absent, the error NFS4ERR_MOVED will result. It differs in that any appearance in the attribute mask of an attribute not supported for an absent file system (and note that this will include some normally mandatory attributes) will also cause an NFS4ERR_MOVED result.

10.3.2. READDIR and Absent File Systems

A READDIR performed when the current filehandle is within an absent file system will result in an NFS4ERR_MOVED error, since, unlike the case of GETATTR, no such exception is made for READDIR.

Attributes for an absent file system may be fetched via a READDIR for a directory in a present file system, when that directory contains the root directories of one or more absent filesystems. In this case, the handling is as follows:

o If the attribute set requested includes one of the attributes fs_locations, fs_locations_info, or fs_absent, then fetching of attributes proceeds normally and no NFS4ERR_MOVED indication is returned, even when the rdattr_error attribute is requested.

o If the attribute set requested does not include one of the attributes fs_locations, fs_locations_info, or fs_absent, then if the rdattr_error attribute is requested, each directory entry for the root of an absent file system will report NFS4ERR_MOVED as the value of the rdattr_error attribute.

o If the attribute set requested does not include any of the attributes fs_locations, fs_locations_info, fs_absent, or rdattr_error, then the occurrence of the root of an absent file system within the directory will result in the READDIR failing with an NFS4ERR_MOVED error.
o The unavailability of an attribute because of a file system's absence, even one that is ordinarily mandatory, does not result in any error indication. The set of attributes returned for the root directory of the absent filesystem in that case is simply restricted to those actually available.

10.4. Uses of Location Information

The location-bearing attributes (fs_locations and fs_locations_info) provide, together with the possibility of absent filesystems, a number of important facilities in providing reliable, manageable, and scalable data access.

When a file system is present, these attributes can provide alternative locations, to be used to access the same data, in the event that server failures, communications problems, or other difficulties make continued access to the current file system impossible or otherwise impractical. Provision of such alternate locations is referred to as "replication", although there are cases in which replicated sets of data are not in fact present, and the replicas are instead different paths to the same data.

When a file system is present and becomes absent, clients can be given the opportunity to have continued access to their data, at an alternate location. In this case, a continued attempt to use the data in the now-absent file system will result in an NFS4ERR_MOVED error, and at that point the successor locations (typically only one, but multiple choices are possible) can be fetched and used to continue access. Transfer of the file system contents to the new location is referred to as "migration", but it should be kept in mind that there are cases in which this term is used, as with "replication", when there is no actual data migration per se.

Where a file system was not previously present, specification of file system location provides a means by which file systems located on one server can be associated with a name space defined by another server, thus allowing a general multi-server namespace facility. Designation of such a location, in place of an absent filesystem, is called "referral".

10.4.1. File System Replication

The fs_locations and fs_locations_info attributes provide alternative locations, to be used to access data in place of the current file system. On first access to a filesystem, the client should obtain the value of the set of alternate locations by interrogating the fs_locations or fs_locations_info attribute, with the latter being preferred.

In the event that server failures, communications problems, or other difficulties make continued access to the current file system impossible or otherwise impractical, the client can use the alternate locations as a way to get continued access to its data.

The alternate locations may be physical replicas of the (typically read-only) file system data, or they may reflect alternate paths to the same server or provide for the use of various forms of server clustering in which multiple servers provide alternate ways of accessing the same physical file system. How these different modes of file system transition are represented within the fs_locations and fs_locations_info attributes and how the client deals with file system transition issues will be discussed in detail below.
10.4.2. File System Migration

When a file system is present and becomes absent, clients can be given the opportunity to have continued access to their data, at an alternate location, as specified by the fs_locations or fs_locations_info attribute. Typically, a client will be accessing the file system in question, get an NFS4ERR_MOVED error, and then use the fs_locations or fs_locations_info attribute to determine the new location of the data. When fs_locations_info is used, additional information will be available which will define the nature of the client's handling of the transition to a new server.

Such migration can be helpful in providing load balancing or general resource reallocation. The protocol does not specify how the filesystem will be moved between servers. It is anticipated that a number of different server-to-server transfer mechanisms might be used, with the choice left to the server implementor. The NFSv4.1 protocol specifies the method used to communicate the migration event between client and server.

The new location may be an alternate communication path to the same server, or, in the case of various forms of server clustering, another server providing access to the same physical file system. The client's responsibilities in dealing with this transition depend on the specific nature of the new access path and how and whether data was in fact migrated. These issues will be discussed in detail below.

Although a single successor location is typical, multiple locations may be provided, together with information that allows priority among the choices to be indicated, via information in the fs_locations_info attribute. Where suitable clustering mechanisms make it possible to provide multiple identical file systems or paths to them, this allows the client the opportunity to deal with any resource or communications issues that might limit data availability.

10.4.3. Referrals

Referrals provide a way of placing a file system in a location essentially without respect to its physical location on a given server. This allows a single server or a set of servers to present a multi-server namespace that encompasses filesystems located on multiple servers. Some likely uses of this include establishment of site-wide or organization-wide namespaces, or even knitting such together into a truly global namespace.

Referrals occur when a client determines, upon first referencing a position in the current namespace, that it is part of a new file system and that that file system is absent. When this occurs, typically by receiving the error NFS4ERR_MOVED, the actual location or locations of the file system can be determined by fetching the fs_locations or fs_locations_info attribute.

Use of multi-server namespaces is enabled by NFSv4 but is not required. The use of multi-server namespaces and their scope will depend on the applications used and on system administration preferences.

Multi-server namespaces can be established by a single server providing a large set of referrals to all of the included filesystems. Alternatively, a single multi-server namespace may be administratively segmented with separate referral file systems (on separate servers) for each separately-administered section of the name space. Any segment or the top-level referral file system may use replicated referral file systems for higher availability.
10.5. Additional Client-side Considerations

When clients make use of servers that implement referrals and migration, care should be taken so that a user who mounts a given filesystem that includes a referral or a relocated filesystem continues to see a coherent picture of that user-side filesystem, despite the fact that it contains a number of server-side filesystems which may be on different servers.

One important issue is upward navigation from the root of a server-side filesystem to its parent (specified as ".." in UNIX). The client needs to determine when it hits an fsid root going up the filetree. When it is at such a point and needs to ascend to the parent, it must do so locally instead of sending a LOOKUPP call to the server. The LOOKUPP would normally return the ancestor of the target filesystem on the target server, which may not be part of the space that the client mounted.

Another issue concerns refresh of referral locations. When referrals are used extensively, they may change as server configurations change. It is expected that clients will cache information related to traversing referrals so that future client-side requests are resolved locally without server communication. This is usually rooted in client-side name lookup caching. Clients should periodically purge this data for referral points in order to detect changes in location information. When the change attribute changes for directories that hold referral entries, or for the referral entries themselves, clients should consider any associated cached referral information to be out of date.

10.6. Effecting File System Transitions

Transitions between file system instances, whether due to switching between replicas upon server unavailability or in response to a server-initiated migration event, are best dealt with together. Even though the prototypical use cases of replication and migration contain distinctive sets of features, when all possibilities for these operations are considered, the underlying unity of these operations, from the client's point of view, is clear, even though for the server pragmatic considerations will normally force different implementation strategies for planned and unplanned transitions.

A number of methods are possible for servers to replicate data and to track client state in order to allow clients to transition between file system instances with a minimum of disruption. Such methods vary from those that use inter-server clustering techniques to limit the changes seen by the client, to those that are less aggressive, use more standard methods of replicating data, and impose a greater burden on the client to adapt to the transition.

The NFSv4.1 protocol does not impose choices on clients and servers with regard to that spectrum of transition methods. In fact, there are many valid choices, depending on client and application requirements and their interaction with server implementation choices. The NFSv4.1 protocol does define the specific choices that can be made, how these choices are communicated to the client, and how the client is to deal with any discontinuities.

In the sections below, references will be made to various possible server implementation choices as a way of illustrating the transition scenarios that clients may deal with. The intent here is not to define or limit server implementations but rather to illustrate the range of issues that clients may face.
In the discussion below, references will be made to a file system having a particular property, or to two file systems (typically the source and destination) belonging to a common class of any of several types. Two file systems that belong to such a class share some important aspect of file system behavior that clients may depend upon, when present, to easily effect a seamless transition between file system instances. Conversely, where the file systems do not belong to such a common class, the client has to deal with various sorts of implementation discontinuities which may cause performance or other issues in effecting a transition.

Where the fs_locations_info attribute is available, such file system classification data will be made directly available to the client. See Section 10.10 for details. When only fs_locations is available, default assumptions with regard to such classifications have to be inferred. See Section 10.9 for details.

In cases in which one server is expected to accept opaque values from the client that originated from another server, it is a wise implementation practice for the servers to encode the "opaque" values in network byte order. If this is done, servers acting as replicas or receiving migrated filesystems will be able to parse values like stateids, directory cookies, filehandles, etc. even if their native byte order is different from that of other servers cooperating in the replication and migration of the filesystem.

10.6.1. Transparent File System Transitions

Discussion of transition possibilities will start at the most transparent end of the spectrum. When there are multiple paths to a single server, and network problems force another path to be used, or when a path is to be put out of service, a replication or migration event may occur without any real replication or migration. Nevertheless, such events fit within the same general framework in that there is a transition between file system locations, communicated just as other, less transparent transitions are communicated.

There are cases of transparent transitions that may happen independent of location information, in that a specific host name may map to several IP addresses, allowing session trunking to provide alternate paths. In other cases, however, multiple addresses may have separate location entries for specific file systems, to preferentially direct traffic for those file systems to certain server addresses, subject to planned or unplanned changes corresponding to a nominal replication or migration event.

The specific details of the transition depend on file system equivalence class information (as provided by the fs_locations_info and fs_locations attributes).

o Where the old and new filesystems belong to the same _endpoint_ class, the transition consists of creating a new connection which is associated with the existing session to the old server endpoint. Where a connection cannot be associated with the existing session, the target server must be able to recognize the sessionid as invalid and force creation of a new session or a new clientid.

o Where the old and new filesystems do not belong to the same _endpoint_ class, but do belong to the same _server_ class, the transition consists of creating a new session, associated with the existing clientid.
Where the clientid is stale, the target server must be able to recognize the clientid as no longer valid and force creation of a new clientid.

In either of the above cases, the file system may be shown as belonging to the same _sharing_ class, allowing the alternate session or connection to be established in advance and used either to accelerate the file system transition when necessary (avoiding connection latency), or to provide higher performance by actively using multiple paths simultaneously.

When two file systems belong to the same _endpoint_ class or _sharing_ class, many transition issues are eliminated, and any information indicating otherwise is ignored as erroneous. In all such transparent transition cases, the following apply:

o Filehandles stay the same if persistent; if volatile, they are subject to expiration only if they would be in the absence of a file system transition.

o Fileid values do not change across the transition.

o The file system will have the same fsid in both the old and new locations.

o Change attribute values are consistent across the transition and do not have to be refetched. When change attributes indicate that a cached object is still valid, it can remain cached.

o Session, client, and state identifiers retain their validity across the transition, except where their staleness is recognized and reported by the new server. Except where such staleness requires it, no lock reclamation is needed.

o Write verifiers are presumed to retain their validity and can be presented to COMMIT, with the expectation that if COMMIT on the new server accepts them as valid, then that server has all of the data unstably written to the original server and has committed it to stable storage as requested.

10.6.2. Filehandles and File System Transitions

There are a number of ways in which filehandles can be handled across a file system transition. These can be divided into two broad classes depending upon whether the two file systems across which the transition happens share sufficient state to effect some sort of continuity of filesystem handling.

When there is no such co-operation in filehandle assignment, the two file systems are reported as being in different _handle_ classes. In this case, all filehandles are assumed to expire as part of the file system transition. Note that this behavior does not depend on the fh_expire_type attribute and supersedes the specification of the FH4_VOL_MIGRATION bit, which only affects behavior when fs_locations_info is not available.

When there is co-operation in filehandle assignment, the two file systems are reported as being in the same _handle_ class. In this case, persistent filehandles remain valid after the file system transition, while volatile filehandles (excluding those which are only volatile due to the FH4_VOL_MIGRATION bit) are subject to expiration on the target server.
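The filehandle rule above reduces to a small decision on the client, as the following sketch shows. The field and function names are assumptions of this illustration, not protocol definitions.

   #include <stdbool.h>

   struct fh_info {
       bool persistent;          /* else volatile                       */
       bool vol_migration_only;  /* volatile only via FH4_VOL_MIGRATION */
   };

   /* Returns true if the filehandle can be retained across the
    * transition; false if it must be treated as expired (different
    * _handle_ class) or as subject to expiration on the target
    * server (other volatile handles within the same class).          */
   static bool fh_survives_transition(const struct fh_info *fh,
                                      bool same_handle_class)
   {
       if (!same_handle_class)
           return false;   /* all filehandles assumed to expire        */
       return fh->persistent || fh->vol_migration_only;
   }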
10.6.3. Fileids and File System Transitions

In NFSv4.0, the issue of continuity of fileids in the event of a file system transition was not addressed. The general expectation had been that in situations in which the two filesystem instances are created by a single vendor using some sort of filesystem image copy, fileids will be consistent across the transition, while in the analogous multi-vendor transitions they will not. This poses difficulties, especially for the client without special knowledge of the transition mechanisms adopted by the server.

It is important to note that while clients themselves may have no trouble with a fileid changing as a result of a file system transition event, applications do typically have access to the fileid (e.g. via stat), and the result of this is that an application may work perfectly well if there is no filesystem instance transition or if any such transition is among instances created by a single vendor, yet be unable to deal with the situation in which a multi-vendor transition occurs at the wrong time.

Providing the same fileids in a multi-vendor (multiple server vendors) environment has generally been held to be quite difficult. While there is work to be done, it needs to be pointed out that this difficulty is partly self-imposed. Servers have typically identified fileid with inode number, i.e. with a quantity used to find the file in question. This identification poses special difficulties for migration of an fs between vendors where assigning the same index to a given file may not be possible. Note here that a fileid is not required to be useful for finding the file in question, only to be unique within the given fs. Servers prepared to accept a fileid as a single piece of metadata and store it apart from the value used to index the file information can relatively easily maintain a fileid value across a migration event, allowing a truly transparent migration.

In any case, where servers can provide continuity of fileids, they should do so, and the client should be able to find out that such continuity is available and take appropriate action. Information about the continuity (or lack thereof) of fileids across a file system transition is represented by specifying whether the file systems in question are of the same _fileid_ class.

10.6.4. Fsids and File System Transitions

Since fsids are only unique on a per-server basis, it is to be expected that they will change during a file system transition. Clients should not make the fsids received from the server visible to applications, since they may not be globally unique, and because they may change during a file system transition event. Applications are best served if they are isolated from such transitions to the extent possible.

10.6.5. The Change Attribute and File System Transitions

Since the change attribute is defined as a server-specific one, change attributes fetched from one server are normally presumed to be invalid on another server. Such a presumption is troublesome since it would invalidate all cached change attributes, requiring refetching. Even more disruptive, the absence of any assured continuity for the change attribute means that even if the same value is retrieved on refetch, no conclusions can be drawn as to whether the object in question has changed. The identical change attribute could be merely an artifact of a modified file with a different change attribute construction algorithm, with that new algorithm just happening to result in an identical change value.

When the two file systems have consistent change attribute formats, and this fact is communicated to the client by reporting them as in the same _change_ class, the client may assume continuity of change attribute construction and handle this situation just as it would be handled without any filesystem transition.
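In client caching terms, the _change_ class gates whether a cached change attribute means anything at all after a transition. A minimal sketch, with names assumed for illustration:

   #include <stdbool.h>
   #include <stdint.h>

   typedef uint64_t changeid4;   /* the change attribute's value type */

   /* After a transition, a cached change attribute is meaningful only
    * if the source and destination were reported as members of the
    * same _change_ class; otherwise the cached value must be
    * discarded, since even an identical refetched value proves
    * nothing about whether the object changed.                       */
   static bool cached_object_still_valid(bool same_change_class,
                                         changeid4 cached,
                                         changeid4 fetched)
   {
       if (!same_change_class)
           return false;
       return cached == fetched;   /* ordinary change-attribute check */
   }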
10.6.6. Lock State and File System Transitions

In a file system transition, the two file systems may have co-operated in state management. When this is the case, and the two file systems belong to the same _state_ class, the two file systems will have compatible state environments. In the case of migration, the servers involved in the migration of a filesystem SHOULD transfer all server state from the original to the new server. When this is done, it must be done in a way that is transparent to the client. With replication, such a degree of common state is typically not the case. Clients, however, should use the information provided by the fs_locations_info attribute, when it is available, to determine whether such sharing is in effect, and depend on the defaults only if that attribute is not available.

This state transfer will reduce disruption to the client when a file system transition occurs. If the servers are successful in transferring all state, the client will continue to use stateids assigned by the original server. Therefore the new server must recognize these stateids as valid. This holds true for the clientid as well. Since responsibility for an entire filesystem is transferred with such an event, there is no possibility that conflicts will arise on the new server as a result of the transfer of locks.

As part of the transfer of information between servers, leases would be transferred as well. The leases being transferred to the new server will typically have a different expiration time from those for the same client, previously on the old server. To maintain the property that all leases on a given server for a given client expire at the same time, the server should advance the expiration time to the later of the leases being transferred or the leases already present. This allows the client to maintain lease renewal of both classes without special effort.

When the two servers belong to the same _state_ class, it does not necessarily mean that when dealing with the transition, the client will not have to reclaim state. However, it does mean that the client may proceed using its current clientid and stateids, just as if there had been no file system transition event, and only reclaim state when an NFS4ERR_STALE_CLIENTID or NFS4ERR_STALE_STATEID error is received.

File systems co-operating in state management may actually share state or simply divide the id space so as to recognize (and reject as stale) each other's state and client ids. Servers which do share state may not do so under all conditions or at all times. The requirement for the server is that if it cannot be sure in accepting an id that it reflects the locks the client was given, it must treat all associated state as stale and report it as such to the client.

When two file systems belong to different _state_ classes, the client must establish new state on the destination, reclaiming it if possible.
In this case, old stateids and clientids should not be presented to the new server, since there is no assurance that they will not conflict with ids valid on that server.

In either case, when actual locks are not known to be maintained, the destination server may establish a grace period specific to the given file system, with non-reclaim locks being rejected for that file system, even though normal locks are being granted for other file systems. Clients should not infer the absence of a grace period for file systems being transitioned to a server from responses to requests for other file systems.

In the case of lock reclamation for a given file system after a file system transition, edge conditions can arise similar to those for reclaim after server reboot (although in the case of the planned state transfer associated with migration, these can be avoided by securely recording lock state as part of state migration). Where the destination server cannot guarantee that locks will not be incorrectly granted, the destination server should not establish a file-system-specific grace period.

In place of a file-system-specific version of RECLAIM_COMPLETE, servers may assume that an attempt to obtain a new lock, other than by reclaim, indicates the end of the client's attempt to reclaim locks for that file system. [NOTE: The alternative would be to adapt RECLAIM_COMPLETE to this task].

Information about client identity may be propagated between servers in the form of nfs_client_id4 and associated verifiers, under the assumption that the client presents the same values to all the servers with which it deals. [NOTE: This contradicts what is currently said about SETCLIENTID, and interacts with the issue of what sessions should do about this.]

Servers are encouraged to provide facilities to allow locks to be reclaimed on the new server after a file system transition. Often, however, in cases in which the two file systems are not of the same _state_ class, such facilities may not be available, and the client should be prepared to re-obtain locks, even though it is possible that the client may have its LOCK or OPEN request denied due to a conflicting lock. In some environments, such as the transition between read-only file systems, such denial of locks should not pose large difficulties in practice. When an attempt to re-establish a lock on a new server is denied, the client should treat the situation as if its original lock had been revoked. In all cases in which the lock is granted, the client cannot assume that no conflicting lock could have been granted in the interim. Where change attribute continuity is present, the client may check the change attribute to look for unwanted file modifications. Where even this is not available, and the file system is not read-only, a client may reasonably treat all pending locks as having been revoked.

10.6.6.1. Leases and File System Transitions

In the case of lease renewal, the client may not be submitting requests for a filesystem that has been transferred to another server. This can occur because of the lease renewal mechanism: the client renews leases for all filesystems when submitting a request to any one filesystem at the server. In order for the client to schedule renewal of leases that may have been relocated to the new server, the client must find out about lease relocation before those leases expire.
To accomplish this, all operations which renew leases for a client (i.e. OPEN, CLOSE, READ, WRITE, RENEW, LOCK, LOCKT, LOCKU) will return the error NFS4ERR_LEASE_MOVED if responsibility for any of the leases to be renewed has been transferred to a new server. This condition will continue until the client receives an NFS4ERR_MOVED error and the server receives the subsequent GETATTR for the fs_locations or fs_locations_info attribute for an access to each filesystem for which a lease has been moved to a new server. [ISSUE: There is a conflict between this and the idea in the sessions text that we can have every op in the session implicitly renew the lease. This needs to be dealt with. D. Noveck will create an issue in the issue tracker.]

When a client receives an NFS4ERR_LEASE_MOVED error, it should perform an operation on each filesystem associated with the server in question. When the client receives an NFS4ERR_MOVED error, the client can follow the normal process to obtain the new server information (through the fs_locations and fs_locations_info attributes) and perform renewal of those leases on the new server, unless information in the fs_locations_info attribute shows that no state could have been transferred. If the server has not had state transferred to it transparently, the client will receive either NFS4ERR_STALE_CLIENTID or NFS4ERR_STALE_STATEID from the new server, as described above, and the client can then recover state information as it does in the event of server failure.

10.6.6.2. Transitions and the Lease_time Attribute

In order that the client may appropriately manage its leases in the case of a file system transition, the destination server must establish proper values for the lease_time attribute.

When state is transferred transparently, that state should include the correct value of the lease_time attribute. The lease_time attribute on the destination server must never be less than that on the source, since this would result in premature expiration of leases granted by the source server. Upon transitions in which state is transferred transparently, the client is under no obligation to re-fetch the lease_time attribute and may continue to use the value previously fetched (on the source server).

If state has not been transferred transparently (either because the file systems are shown as being in different _state_ classes or because the client sees a real or simulated server reboot), the client should fetch the value of lease_time on the new (i.e. destination) server, and use it for subsequent locking requests. However, the server must respect a grace period at least as long as the lease_time on the source server, in order to ensure that clients have ample time to reclaim their locks before potentially conflicting non-reclaimed locks are granted.

10.6.7. Write Verifiers and File System Transitions

In a file system transition, the two file systems may be clustered in the handling of unstably written data. When this is the case, and the two file systems belong to the same _verifier_ class, valid verifiers from one system may be recognized by the other and superfluous writes avoided. There is no requirement that all valid verifiers be recognized, but it cannot be the case that a verifier is recognized as valid when it is not. [NOTE: We need to resolve the issue of proper verifier scope].

When two file systems belong to different _verifier_ classes, the client must assume that all unstable writes in existence at the time of the file system transition have been lost, since there is no way the old verifier can be recognized as valid (or not) on the target server.
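The client-side consequence of the _verifier_ class can be captured in a small check, sketched below. The names are assumptions of this illustration, and verifier4, an 8-byte opaque value in the protocol, is modeled as an integer purely for brevity.

   #include <stdbool.h>
   #include <stdint.h>

   typedef uint64_t verifier4;   /* simplified; really 8 opaque bytes */

   /* Across a transition, unstable writes may be committed on the
    * target only when the two file systems share a _verifier_ class;
    * otherwise the client must treat those writes as lost and
    * re-send the data.                                               */
   static bool must_resend_unstable(bool same_verifier_class,
                                    verifier4 write_vf,
                                    verifier4 commit_vf)
   {
       if (!same_verifier_class)
           return true;                /* old verifier unverifiable   */
       return write_vf != commit_vf;   /* normal verifier mismatch    */
   }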
10.7. Effecting File System Referrals

Referrals are effected when an absent file system is encountered, and one or more alternate locations are made available by the fs_locations or fs_locations_info attributes. The client will typically get an NFS4ERR_MOVED error, fetch the appropriate location information, and proceed to access the file system on a different server, even though it retains its logical position within the original namespace.

The examples given in the sections below are somewhat artificial in that an actual client will not typically do a multi-component lookup, but will have cached information regarding the upper levels of the name hierarchy. However, these examples are chosen to make the required behavior clear and easy to put within the scope of a small number of requests, without getting unduly into details of how specific clients might choose to cache things.

10.7.1. Referral Example (LOOKUP)

Let us suppose that the following COMPOUND is issued in an environment in which /src/linux/2.7/latest is absent from the target server. This may be for a number of reasons. It may be the case that the file system has moved, or it may be the case that the target server is functioning mainly, or solely, to refer clients to the servers on which various file systems are located.

o PUTROOTFH

o LOOKUP "src"

o LOOKUP "linux"

o LOOKUP "2.7"

o LOOKUP "latest"

o GETFH

o GETATTR fsid,fileid,size,ctime

Under the given circumstances, the following will be the result.

o PUTROOTFH --> NFS_OK. The current fh is now the root of the pseudo-fs.

o LOOKUP "src" --> NFS_OK. The current fh is for /src and is within the pseudo-fs.

o LOOKUP "linux" --> NFS_OK. The current fh is for /src/linux and is within the pseudo-fs.

o LOOKUP "2.7" --> NFS_OK. The current fh is for /src/linux/2.7 and is within the pseudo-fs.

o LOOKUP "latest" --> NFS_OK. The current fh is for /src/linux/2.7/latest and is within a new, absent fs, but ... the client will never see the value of that fh.

o GETFH --> NFS4ERR_MOVED. Fails because the current fh is in an absent fs at the start of the operation and the spec makes no exception for GETFH.

o GETATTR fsid,fileid,size,ctime. Not executed because the failure of the GETFH stops processing of the COMPOUND.

Given the failure of the GETFH, the client has the job of determining the root of the absent file system and where to find that file system, i.e. the server and path relative to that server's root fh. Note here that in this example, the client did not obtain filehandles and attribute information (e.g. fsid) for the intermediate directories, so it cannot be sure where the absent file system starts. It could be the case, for example, that /src/linux/2.7 is the root of the moved filesystem and that the reason that the lookup of "latest" succeeded is that the filesystem was not absent on that op but was moved between the last LOOKUP and the GETFH (since COMPOUND is not atomic).
Even if we had the fsids for all of the intermediate directories, we would have no way of knowing that /src/linux/2.7/latest was the root of a new fs, since we don't yet have its fsid.

In order to get the necessary information, let us re-issue the chain of LOOKUPs with GETFHs and GETATTRs, to at least get the fsids so we can be sure where the appropriate fs boundaries are. The client could choose to get fs_locations_info at the same time, but in most cases the client will have a good guess as to where the fs boundaries are (because of where NFS4ERR_MOVED was and was not received), making fetching of fs_locations_info unnecessary.

OP01: PUTROOTFH --> NFS_OK

- Current fh is root of pseudo-fs.

OP02: GETATTR(fsid) --> NFS_OK

- Just for completeness. Normally, clients will know the fsid of the pseudo-fs as soon as they establish communication with a server.

OP03: LOOKUP "src" --> NFS_OK

OP04: GETATTR(fsid) --> NFS_OK

- Get current fsid to see where fs boundaries are. The fsid will be that for the pseudo-fs in this example, so no boundary.

OP05: GETFH --> NFS_OK

- Current fh is for /src and is within pseudo-fs.

OP06: LOOKUP "linux" --> NFS_OK

- Current fh is for /src/linux and is within pseudo-fs.

OP07: GETATTR(fsid) --> NFS_OK

- Get current fsid to see where fs boundaries are. The fsid will be that for the pseudo-fs in this example, so no boundary.

OP08: GETFH --> NFS_OK

- Current fh is for /src/linux and is within pseudo-fs.

OP09: LOOKUP "2.7" --> NFS_OK

- Current fh is for /src/linux/2.7 and is within pseudo-fs.

OP10: GETATTR(fsid) --> NFS_OK

- Get current fsid to see where fs boundaries are. The fsid will be that for the pseudo-fs in this example, so no boundary.

OP11: GETFH --> NFS_OK

- Current fh is for /src/linux/2.7 and is within pseudo-fs.

OP12: LOOKUP "latest" --> NFS_OK

- Current fh is for /src/linux/2.7/latest and is within a new, absent fs, but ...

- The client will never see the value of that fh.

OP13: GETATTR(fsid, fs_locations_info) --> NFS_OK

- We are getting the fsid to know where the fs boundaries are. Note that the fsid we are given will not necessarily be preserved at the new location. That fsid might be different, and in fact the fsid we have for this fs might be a valid fsid of a different fs on that new server.

- In this particular case, we are pretty sure anyway that what has moved is /src/linux/2.7/latest rather than /src/linux/2.7, since we have the fsid of the latter and it is that of the pseudo-fs, which presumably cannot move. However, in other examples, we might not have this kind of information to rely on (e.g. /src/linux/2.7 might be a non-pseudo filesystem separate from /src/linux/2.7/latest), so we need to have another reliable source of information on the boundary of the fs which is moved. If, for example, the filesystem "/src/linux" had moved, we would have a case of migration rather than referral, and once the boundaries of the migrated filesystem were clear we could fetch fs_locations_info.

- We are fetching fs_locations_info because the fact that we got an NFS4ERR_MOVED at this point means that it is most likely that this is a referral, and we need the destination. Even if it is the case that "/src/linux/2.7" is a filesystem which has migrated, we will still need the location information for that file system.
OP14: GETFH --> NFS4ERR_MOVED

- Fails because the current fh is in an absent fs at the start of the operation and the spec makes no exception for GETFH. Note that this has the happy consequence that we don't have to worry about the volatility or lack thereof of the fh. If the root of the fs on the new location is a persistent fh, then we can assume that this fh, which we never saw, is a persistent fh which, if we could see it, would exactly match the new fh. At least, there is no evidence to disprove that. On the other hand, if we find a volatile root at the new location, then the filehandle which we never saw must have been volatile, or at least nobody can prove otherwise.

Given the above, the client knows where the root of the absent file system is, by noting where the change of fsid occurred. The fs_locations_info attribute also gives the client the actual location of the absent file system, so that the referral can proceed. The server gives the client the bare minimum of information about the absent file system so that there will be very little scope for problems of conflict between information sent by the referring server and information of the file system's home. No filehandles and very few attributes are present on the referring server, and the client can treat those it receives as basically transient information with the function of enabling the referral.

10.7.2. Referral Example (READDIR)

Another context in which a client may encounter referrals is when it does a READDIR on a directory in which some of the sub-directories are the roots of absent file systems.

Suppose such a directory is read as follows:

o  PUTROOTFH

o  LOOKUP "src"

o  LOOKUP "linux"

o  LOOKUP "2.7"

o  READDIR (fsid, size, ctime, mounted_on_fileid)

In this case, because rdattr_error is not requested, fs_locations_info is not requested, and some of the attributes cannot be provided, the result will be an NFS4ERR_MOVED error on the READDIR, with the detailed results as follows:

o  PUTROOTFH --> NFS_OK. The current fh is at the root of the pseudo-fs.

o  LOOKUP "src" --> NFS_OK. The current fh is for /src and is within the pseudo-fs.

o  LOOKUP "linux" --> NFS_OK. The current fh is for /src/linux and is within the pseudo-fs.

o  LOOKUP "2.7" --> NFS_OK. The current fh is for /src/linux/2.7 and is within the pseudo-fs.

o  READDIR (fsid, size, ctime, mounted_on_fileid) --> NFS4ERR_MOVED. Note that the same error would have been returned if /src/linux/2.7 had migrated, when in fact it is because the directory contains the root of an absent fs.

So now suppose that we reissue with rdattr_error:

o  PUTROOTFH

o  LOOKUP "src"

o  LOOKUP "linux"

o  LOOKUP "2.7"

o  READDIR (rdattr_error, fsid, size, ctime, mounted_on_fileid)

The results will be:

o  PUTROOTFH --> NFS_OK. The current fh is at the root of the pseudo-fs.

o  LOOKUP "src" --> NFS_OK. The current fh is for /src and is within the pseudo-fs.

o  LOOKUP "linux" --> NFS_OK. The current fh is for /src/linux and is within the pseudo-fs.

o  LOOKUP "2.7" --> NFS_OK. The current fh is for /src/linux/2.7 and is within the pseudo-fs.

o  READDIR (rdattr_error, fsid, size, ctime, mounted_on_fileid) --> NFS_OK. The attributes for "latest" will only contain rdattr_error, whose value will be NFS4ERR_MOVED, together with an fsid value and a value for mounted_on_fileid.
So suppose we do another READDIR to get fs_locations_info (although we could have used a GETATTR directly, as in the previous section):

o  PUTROOTFH

o  LOOKUP "src"

o  LOOKUP "linux"

o  LOOKUP "2.7"

o  READDIR (rdattr_error, fs_locations_info, mounted_on_fileid, fsid, size, ctime)

The results would be:

o  PUTROOTFH --> NFS_OK. The current fh is at the root of the pseudo-fs.

o  LOOKUP "src" --> NFS_OK. The current fh is for /src and is within the pseudo-fs.

o  LOOKUP "linux" --> NFS_OK. The current fh is for /src/linux and is within the pseudo-fs.

o  LOOKUP "2.7" --> NFS_OK. The current fh is for /src/linux/2.7 and is within the pseudo-fs.

o  READDIR (rdattr_error, fs_locations_info, mounted_on_fileid, fsid, size, ctime) --> NFS_OK. The attributes will be as shown below.

The attributes for "latest" will only contain:

o  rdattr_error (value: NFS4ERR_MOVED)

o  fs_locations_info

o  mounted_on_fileid (value: unique fileid within referring fs)

o  fsid (value: unique value within referring server)

The attribute entry for "latest" will not contain size or ctime.

10.8. The Attribute fs_absent

In order to provide the client with information about whether the current file system is present or absent, the fs_absent attribute may be interrogated. As noted above, this attribute, when supported, may be requested of absent filesystems without causing NFS4ERR_MOVED to be returned, and it should always be available. Servers are strongly urged to support this attribute on all filesystems if they support it on any filesystem.

10.9. The Attribute fs_locations

The fs_locations attribute is structured in the following way:

   struct fs_location {
           utf8str_cis    server<>;
           pathname4      rootpath;
   };

   struct fs_locations {
           pathname4      fs_root;
           fs_location    locations<>;
   };

The fs_location struct is used to represent the location of a filesystem by providing a server name and the path to the root of the file system within that server's namespace. When a set of servers have corresponding file systems at the same path within their namespaces, an array of server names may be provided. An entry in the server array is a UTF-8 string and represents one of a traditional DNS host name, an IPv4 address, or an IPv6 address. It is not a requirement that all servers that share the same rootpath be listed in one fs_location struct. The array of server names is provided for convenience. Servers that share the same rootpath may also be listed in separate fs_location entries in the fs_locations attribute.

The fs_locations struct and attribute contains an array of such locations. Since the namespace of each server may be constructed differently, the "fs_root" field is provided. The path represented by fs_root represents the location of the filesystem in the current server's namespace, i.e. that of the server from which the fs_locations attribute was obtained. The fs_root path is meant to aid the client by clearly referencing the root of the file system whose locations are being reported, no matter what object within the current file system the current filehandle designates.

As an example, suppose there is a replicated filesystem located at two servers (servA and servB). At servA, the filesystem is located at path "/a/b/c". At servB, the filesystem is located at path "/x/y/z".
If the client were to obtain the fs_locations value for the directory at "/a/b/c/d", it might not necessarily know that the filesystem's root is located in servA's namespace at "/a/b/c". When the client switches to servB, it will need to determine that the directory it first referenced at servA is now represented by the path "/x/y/z/d" on servB.

To facilitate this, the fs_locations attribute provided by servA would have an fs_root value of "/a/b/c" and two entries in fs_locations. One entry in fs_locations will be for itself (servA) and the other will be for servB with a path of "/x/y/z". With this information, the client is able to substitute "/x/y/z" for the "/a/b/c" at the beginning of its access path and construct "/x/y/z/d" to use for the new server.
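The prefix substitution just described can be implemented directly. The following C++ fragment is a minimal sketch under stated assumptions: the function name ReplaceFsRoot is illustrative (not part of the protocol), and paths are treated as strings for brevity even though pathname4 is an array of components.

   #include <string>
   #include <optional>

   // Minimal sketch: rewrite a path for use on a new server by
   // substituting the alternate location's rootpath for the fs_root
   // prefix reported by the current server, e.g. fs_root "/a/b/c",
   // new root "/x/y/z", path "/a/b/c/d" -> "/x/y/z/d".
   std::optional<std::string>
   ReplaceFsRoot(const std::string& path,
                 const std::string& fsRoot,
                 const std::string& newRoot)
   {
       // The path must lie within the file system rooted at fsRoot.
       if (path.compare(0, fsRoot.size(), fsRoot) != 0)
           return std::nullopt;
       // The match must end at a pathname component boundary.
       if (path.size() > fsRoot.size() && path[fsRoot.size()] != '/')
           return std::nullopt;
       return newRoot + path.substr(fsRoot.size());
   }

For the example above, ReplaceFsRoot("/a/b/c/d", "/a/b/c", "/x/y/z") yields "/x/y/z/d".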
Since the fs_locations attribute lacks information defining various attributes of the various file system choices presented, it should only be interrogated and used when fs_locations_info is not available. When fs_locations is used, information about the specific locations should be assumed based on the following rules.

The following rules are general and apply irrespective of the context.

o  When a DNS server name maps to multiple IP addresses, they should be considered identical, i.e. of the same _endpoint_ class.

o  Except in the case of servers sharing an _endpoint_ class, all listed servers should be considered as of the same _handle_ class if and only if the current fh_expire_type attribute does not include the FH4_VOL_MIGRATION bit. Note that in the case of referral, filehandle issues do not apply since there can be no filehandles known within the current file system, nor is there any access to the fh_expire_type attribute on the referring (absent) file system.

o  Except in the case of servers sharing an _endpoint_ class, all listed servers should be considered as of the same _fileid_ class if and only if the fh_expire_type attribute indicates persistent filehandles and does not include the FH4_VOL_MIGRATION bit. Note that in the case of referral, fileid issues do not apply since there can be no fileids known within the referring (absent) file system, nor is there any access to the fh_expire_type attribute.

o  Except in the case of servers sharing an _endpoint_ class, all listed servers should be considered as of different _change_ classes.

For other class assignments, handling of file system transitions depends on the reason for the transition:

o  When the transition is due to migration, the target should be treated as being of the same _state_ and _verifier_ class as the source.

o  When the transition is due to failover to another replica, the target should be treated as being of a different _state_ and _verifier_ class from the source.

The specific choices reflect typical implementation patterns for failover and controlled migration, respectively. Since other choices are possible and useful, this information is better obtained by using fs_locations_info.

See the section "Security Considerations" for a discussion of the recommendations for the security flavor to be used by any GETATTR operation that requests the "fs_locations" attribute.

10.10. The Attribute fs_locations_info

The fs_locations_info attribute is intended as a more functional replacement for fs_locations, which will continue to exist and be supported. Clients can use it to get a more complete set of information about alternative file system locations.

When the server does not support fs_locations_info, fs_locations can be used to get a subset of the information. A server which supports fs_locations_info MUST support fs_locations as well.

There are several sorts of additional information present in fs_locations_info that aren't available in fs_locations:

o  Attribute continuity information to allow a client to select a location which meets the transparency requirements of the applications accessing the data and to take advantage of optimizations that server guarantees as to attribute continuity may provide (e.g. change attribute).

o  Filesystem identity information which indicates when multiple replicas, from the client's point of view, correspond to the same target filesystem, allowing them to be used interchangeably, without disruption, as multiple paths to the same thing.

o  Information which will bear on the suitability of various replicas, depending on the use that the client intends. For example, many applications need an absolutely up-to-date copy (e.g. those that write), while others may only need access to the most up-to-date copy reasonably available.

o  Server-derived preference information for replicas, which can be used to implement load-balancing while giving the client the entire fs list to be used in case the primary fails.

The fs_locations_info attribute consists of a root pathname (just like fs_locations), together with an array of locations4_item structures.

   struct locations4_server {
           int32_t        currency;
           uint32_t       info<>;
           utf8str_cis    server;
   };

   const LIBX_GFLAGS      = 0;
   const LIBX_TFLAGS      = 1;

   const LIBX_CLSHARE     = 2;
   const LIBX_CLSERVER    = 3;
   const LIBX_CLENDPOINT  = 4;
   const LIBX_CLHANDLE    = 5;
   const LIBX_CLFILEID    = 6;
   const LIBX_CLVERIFIER  = 7;
   const LIBX_CLSTATE     = 8;

   const LIBX_READRANK    = 9;
   const LIBX_WRITERANK   = 10;
   const LIBX_READORDER   = 11;
   const LIBX_WRITEORDER  = 12;

   const LIGF_WRITABLE    = 0x01;
   const LIGF_CUR_REQ     = 0x02;
   const LIGF_ABSENT      = 0x04;
   const LIGF_GOING       = 0x08;

   const LITF_RDMA        = 0x01;

   struct locations4_item {
           locations4_server    entries<>;
           pathname4            rootpath;
   };

   struct locations4_info {
           pathname4          fs_root;
           locations4_item    items<>;
           int32_t            valid_for;
   };

(The valid_for field, described later in this section, is included here to match the prose description of the locations4_info contents.)

The fs_locations_info attribute is structured similarly to the fs_locations attribute. A top-level structure (fs_locations or locations4_info) contains the entire attribute, including the root pathname of the fs and an array of lower-level structures that define replicas that share a common root path on their respective servers. The lower-level structures in turn (fs_location or locations4_item) contain a specific pathname and information on one or more individual server replicas. For that last, lowest-level information, fs_locations has a server name in the form of utf8str_cis, while fs_locations_info has a locations4_server structure that contains per-server-replica information in addition to the server name.

The locations4_server structure consists of the following items:

o  An indication of file system up-to-date-ness (currency) in terms of approximate seconds before the present. A negative value indicates that the server is unable to give any reasonably useful value here. A zero indicates that the filesystem is the actual writable data or a reliably coherent and fully up-to-date copy.
Positive values indicate how out-of-date this copy can normally be before it is considered for update. Such a value is not a guarantee that such updates will always be performed on the required schedule, but instead serves as a hint about how far behind the most up-to-date copy of the data this copy would normally be expected to be.

o  A counted array of 32-bit words containing various sorts of data about the particular file system instance. This data includes general flags, transport capability flags, file system equivalence class information, and selection priority information. The encoding will be discussed below.

o  The server string. For the case of the replica currently being accessed (via GETATTR), a null string may be used to indicate the current address being used for the RPC call.

Data within the info array is in the form of 8-bit data items, even though that array is, from XDR's point of view, an array of 32-bit integers. This definition was chosen because:

o  The kinds of data in the info array (flags, file system classes, and priorities among a set of file systems representing the same data) are such that eight bits provides a quite acceptable range of values. Even where there might be more than 256 such file system instances, having more than 256 distinct classes or priorities is unlikely.

o  XDR does not have any means to declare an 8-bit data type, other than an ASCII string, and using 32-bit data types would lead to significant space inefficiency.

o  Explicit definition of the various specific data items within XDR would limit expandability in that any extension within a subsequent minor version would require yet another attribute, leading to specification and implementation clumsiness.

o  Such explicit definitions would also make it impossible to propose standards-track extensions apart from a full minor version.

Each successive 8-bit field within this array is designated by a constant byte-index as defined above. More significant bit fields within a single word have successive indices, with a transition to the next word following the most significant 8-bit field in each word. The set of info data is subject to expansion in a future minor version, or in a standards-track RFC, within the context of a single minor version. The server SHOULD NOT send, and the client MUST NOT use, indices within the info array that are not defined in standards-track RFCs.

The following fragment of C++ code (with Doxygen-style comments) illustrates how data items within the info array can be found using a byte-index such as specified by the constants beginning with "LIBX_". The associated InfoArray object is assumed to be initialized with "Length" containing the XDR-specified length in terms of 32-bit words and "Data" containing the array of words encoded by the "info<>" specification.

   class InfoArray {
   private:
       uint32_t    Length;
       uint32_t    Data[];
   public:
       uint8_t     GetValue(int byteIndex);
   };

   /// @brief Get the value of a locations4_server info value
   ///
   /// This method obtains the specific info value given a
   /// byte index defined in the NFSv4.1 spec or another
   /// later standards-track document.
   ///
   /// @param[in] byteIndex The byte index identifying the
   ///                      item requested.
   /// @returns The value of the requested item.
   uint8_t InfoArray::GetValue(int byteIndex)
   {
       int wordIndex = byteIndex / 4;
       int byteWithinWord = byteIndex % 4;

       // Undefined indices yield zero, which is treated as
       // non-matching for class values.
       if (wordIndex >= Length) {
           return (0);
       }
       uint32_t ourWord = Data[wordIndex];
       return ((ourWord >> (byteWithinWord * 8)) & 0xff);
   }

The info array contains within it:

o  Two 8-bit flag fields, one devoted to general file-system characteristics and a second reserved for transport-related capabilities.

o  Seven 8-bit class values which define various file system equivalence classes as explained below.

o  Four 8-bit priority values which govern file system selection as explained below.

The general file system characteristics flag (at byte index LIBX_GFLAGS) has the following bits defined within it:

o  LIGF_WRITABLE indicates that this fs target is writable, allowing it to be selected by clients which may need to write on this filesystem. When the current filesystem instance is writable, then any other filesystem to which the client might switch must incorporate within its data any committed write made on the current filesystem instance. See the section on the verifier class for issues related to uncommitted writes. While there is no harm in not setting this flag for a filesystem that turns out to be writable, turning the flag on for a read-only filesystem can cause problems for clients who select a migration or replication target based on it and then find themselves unable to write.

o  LIGF_CUR_REQ indicates that this replica is the one on which the request is being made. Only a single server entry may have this flag set and, in the case of a referral, no entry will have it.

o  LIGF_ABSENT indicates that this entry corresponds to an absent filesystem replica. It can only be set if LIGF_CUR_REQ is set. When both such bits are set, it indicates that a filesystem instance is not usable, but that the information in the entry can be used to determine the sorts of continuity available when switching from this replica to other possible replicas. Since this bit can only be true if LIGF_CUR_REQ is true, the value could be determined using the fs_absent attribute, but the information is also made available here for the convenience of the client. An entry with this bit, since it represents a true filesystem (albeit absent), does not appear in the event of a referral, but only where a filesystem has been accessed at this location and has subsequently been migrated.

o  LIGF_GOING indicates that a replica, while still available, should not be used further. The client, if using it, should make an orderly transfer to another filesystem instance as expeditiously as possible. It is expected that file systems going out of service will be announced as LIGF_GOING some time before the actual loss of service, and that the valid_for value will be sufficiently small to allow clients to detect and act on scheduled events while large enough that the cost of the requests to fetch the fs_locations_info values will not be excessive. Values on the order of ten minutes seem reasonable.

The transport-flag field (at byte index LIBX_TFLAGS) contains the following bits related to the transport capabilities of the specific file system.

o  LITF_RDMA indicates that this file system provides NFSv4.1 file system access using an RDMA-capable transport.
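For illustration only, the following fragment shows how a client might use the InfoArray class above to interpret a replica's flag fields. The helper names are hypothetical; the numeric constants are the byte indices and flag values defined earlier.

   // Hypothetical helpers built on InfoArray::GetValue(); the
   // numeric constants correspond to LIBX_GFLAGS, LIBX_TFLAGS,
   // LIGF_WRITABLE, LIGF_GOING, and LITF_RDMA as defined above.
   bool IsWritableReplica(InfoArray& info)
   {
       uint8_t gflags = info.GetValue(0 /* LIBX_GFLAGS */);
       return (gflags & 0x01 /* LIGF_WRITABLE */) != 0;
   }

   bool ShouldMigrateAway(InfoArray& info)
   {
       uint8_t gflags = info.GetValue(0 /* LIBX_GFLAGS */);
       return (gflags & 0x08 /* LIGF_GOING */) != 0;
   }

   bool SupportsRdma(InfoArray& info)
   {
       uint8_t tflags = info.GetValue(1 /* LIBX_TFLAGS */);
       return (tflags & 0x01 /* LITF_RDMA */) != 0;
   }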
Attribute continuity and filesystem identity information are expressed by defining equivalence relations on the sets of file systems presented to the client. Each such relation is expressed as a set of file system equivalence classes. For each relation, a file system has an 8-bit class number. Two file systems belong to the same class if both have identical non-zero class numbers. Zero is treated as non-matching. Most often, the relevant question for the client will be whether a given replica is identical-with/continuous-to the current one in a given respect, but the information should also be available as to whether two other replicas match in that respect as well.

The following fields specify the file system's class numbers for the equivalence relations used in determining the nature of file system transitions. See Section 10.6 for details about how this information is to be used.

o  The field with byte-index LIBX_CLSHARE defines the sharing class for the file system.

o  The field with byte-index LIBX_CLSERVER defines the server class for the file system.

o  The field with byte-index LIBX_CLENDPOINT defines the endpoint class for the file system.

o  The field with byte-index LIBX_CLHANDLE defines the handle class for the file system.

o  The field with byte-index LIBX_CLFILEID defines the fileid class for the file system.

o  The field with byte-index LIBX_CLVERIFIER defines the verifier class for the file system.

o  The field with byte-index LIBX_CLSTATE defines the state class for the file system.

Server-specified preference information is also provided via 8-bit values within the info array. The values provide a rank and an order (see below), with separate values specifiable for the cases of read-only and writable file systems. These values are compared for different file systems to establish the server-specified preference, with lower values indicating "more preferred".

Rank is used to express a strict server-imposed ordering on clients, with lower values indicating "more preferred". Clients should attempt to use all replicas with a given rank before they use one with a higher rank; only if all of those file systems are unavailable should the client proceed to those of a higher rank.

Within a rank, the order value is used to specify the server's preference to guide the client's selection when the client's own preferences are not controlling, with lower values of order indicating "more preferred". If replicas are approximately equal in all respects, clients should defer to the order specified by the server. When clients look at server latency as part of their selection, they are free to use this criterion, but it is suggested that when latency differences are not significant, the server-specified order should guide selection.

o  The field at byte index LIBX_READRANK gives the rank value to be used for read-only access.

o  The field at byte index LIBX_READORDER gives the order value to be used for read-only access.

o  The field at byte index LIBX_WRITERANK gives the rank value to be used for writable access.

o  The field at byte index LIBX_WRITEORDER gives the order value to be used for writable access.

Depending on the potential need for write access by a given client, one of the pairs of rank and order values is used. The read rank and order should only be used if the client knows that only reading will ever be done, or if it is prepared to switch to a different replica in the event that any write access capability is required in the future.
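The class-matching and preference rules above can be restated as a short sketch. The structure and function names below are illustrative only; the comparison logic follows the rank-then-order rule just described.

   #include <cstdint>

   // Two file systems are in the same class for a given relation
   // only if both class numbers are identical and non-zero.
   bool SameClass(uint8_t classA, uint8_t classB)
   {
       return classA != 0 && classA == classB;
   }

   // Hypothetical per-replica view of the preference bytes pulled
   // out of the info array (LIBX_READRANK, LIBX_READORDER,
   // LIBX_WRITERANK, LIBX_WRITEORDER).
   struct ReplicaPref {
       uint8_t readRank, readOrder;
       uint8_t writeRank, writeOrder;
   };

   // Returns true if replica a is preferred over replica b for the
   // given access need: rank is compared first, and order breaks
   // ties within a rank; lower values mean "more preferred".
   bool Preferred(const ReplicaPref& a, const ReplicaPref& b,
                  bool needWrite)
   {
       uint8_t aRank = needWrite ? a.writeRank : a.readRank;
       uint8_t bRank = needWrite ? b.writeRank : b.readRank;
       if (aRank != bRank)
           return aRank < bRank;
       uint8_t aOrder = needWrite ? a.writeOrder : a.readOrder;
       uint8_t bOrder = needWrite ? b.writeOrder : b.readOrder;
       return aOrder < bOrder;
   }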
The locations4_info structure, encoding the fs_locations_info attribute, contains the following:

o  The fs_root field, which contains the pathname of the root of the current filesystem on the current server, just as it does in the fs_locations structure.

o  An array of locations4_item structures, which contain information about replicas of the current filesystem. Where the current filesystem is actually present, or has been present, i.e. this is not a referral situation, one of the locations4_item structures will contain a locations4_server entry for the current server. This entry will have LIGF_ABSENT set if the current filesystem is absent, i.e. normal access to it will return NFS4ERR_MOVED.

o  The valid_for field, which specifies a time for which it is reasonable for a client to use the fs_locations_info attribute without refetch. The valid_for value does not provide a guarantee of validity, since servers can unexpectedly go out of service or become inaccessible for any number of reasons. Clients are well-advised to refetch this information for an actively accessed filesystem every valid_for seconds. This is particularly important when filesystem replicas may go out of service in a controlled way using the LIGF_GOING flag to communicate an ongoing change. The server should set valid_for to a value which allows well-behaved clients to notice the LIGF_GOING flag and make an orderly switch before the loss of service becomes effective. If this value is zero, then no refetch interval is appropriate and the client need not refetch this data on any particular schedule. In the event of a transition to a new filesystem instance, a new value of the fs_locations_info attribute will be fetched at the destination, and it is to be expected that this may have a different valid_for value, which the client should then use in the same fashion as the previous value.

As noted above, the fs_locations_info attribute, when supported, may be requested of absent filesystems without causing NFS4ERR_MOVED to be returned, and it is generally expected that it will be available for both present and absent filesystems, even if only a single locations4_server entry is present, designating the current (present) filesystem, or two locations4_server entries, designating the current (and now previous) location of an absent filesystem and its successor location. Servers are strongly urged to support this attribute on all filesystems if they support it on any filesystem.

10.11. The Attribute fs_status

In an environment in which multiple copies of the same basic set of data are available, information regarding the particular source of such data and the relationships among different copies can be very helpful in providing consistent data to applications.

   enum status4_type {
           STATUS4_FIXED = 1,
           STATUS4_UPDATED = 2,
           STATUS4_VERSIONED = 3,
           STATUS4_WRITABLE = 4,
           STATUS4_ABSENT = 5
   };

   struct fs4_status {
           status4_type    type;
           utf8str_cs      source;
           utf8str_cs      current;
           int32_t         age;
           nfstime4        version;
   };

The type value indicates the kind of filesystem image represented.
This is of particular importance when using the version values to determine the appropriate succession of filesystem images. Five types are distinguished:

o  STATUS4_FIXED, which indicates a read-only image in the sense that it will never change. The possibility is allowed that, as a result of migration or switch to a different image, changed data can be accessed, but within the confines of this instance no change is allowed. The client can use this fact to cache aggressively.

o  STATUS4_UPDATED, which indicates an image that cannot be updated by the user writing to it but may be changed exogenously, typically because it is a periodically updated copy of another writable filesystem somewhere else.

o  STATUS4_VERSIONED, which indicates that the image, like the STATUS4_UPDATED case, is updated exogenously, but it provides a guarantee that the server will carefully update the associated version value so that the client may, if it chooses, protect itself from a situation in which it reads data from one version of the filesystem and then later reads data from an earlier version of the same filesystem. See below for a discussion of how this can be done.

o  STATUS4_WRITABLE, which indicates that the filesystem is an actual writable one. The client need not, of course, actually write to the filesystem, but once it does, it should not accept a transition to anything other than a writable instance of that same filesystem.

o  STATUS4_ABSENT, which indicates that the information is the last valid information for a filesystem which is no longer present.

The opaque strings source and current provide a way of presenting information about the source of the filesystem image being presented. It is not intended that the client do anything with this information other than make it available to administrative tools. It is intended that this information be helpful when researching possible problems with a filesystem image that might arise when it is unclear if the correct image is being accessed and, if not, how that image came to be made. This kind of debugging information will be helpful, if, as seems likely, copies of filesystems are made in many different ways (e.g. simple user-level copies, filesystem-level point-in-time copies, cloning of the underlying storage), under a variety of administrative arrangements. In such environments, determining how a given set of data was constructed can be very helpful in resolving problems.

The opaque string 'source' is used to indicate the source of a given filesystem, with the expectation that tools capable of creating a filesystem image propagate this information when that is possible. It is understood that this may not always be possible, since a user-level copy may be thought of as creating a new data set and the tools used may have no mechanism to propagate this data. When a filesystem is initially created, data regarding how, where, and by whom it was created can be put in this attribute in a human-readable string form, so that it will be available when propagated to subsequent copies of this data.

The opaque string 'current' should provide whatever information is available about the source of the current copy: the tool that created it, any relevant parameters to that tool, the time at which the copy was done, the user making the change, the server on which the change was made, etc. All information should be in a human-readable string form.
The age provides an indication of how out-of-date the file system currently is with respect to its ultimate data source (in the case of cascading data updates). This complements the currency field of locations4_server (see Section 10.10) in the following way: the information in locations4_server.currency gives a bound for how out of date the data in a file system might typically get, while the age gives a bound on how out of date that data actually is. Negative values imply that no information is available. A zero means that this data is known to be current. A positive value means that this data is known to be no older than that number of seconds with respect to the ultimate data source.

The version field provides a version identification, in the form of a time value, such that successive versions always have later time values. When the filesystem type is anything other than STATUS4_VERSIONED, the server may provide such a value, but there is no guarantee as to its validity, and clients will not use it except to provide additional information to add to 'source' and 'current'.

When the type is STATUS4_VERSIONED, servers should provide a value of version which progresses monotonically whenever any new version of the data is established. This allows the client, if reliable image progression is important to it, to fetch this attribute as part of each COMPOUND where data or metadata from the filesystem is used.

When it is important to the client to make sure that only valid successor images are accepted, it must make sure that it does not read data or metadata from the filesystem without updating its sense of the current state of the image. This is to avoid the possibility that the fs_status which the client holds will be one for an earlier image, leading the client to accept a new filesystem instance which is later than that image but still earlier than the updated data read by the client. In order to do this reliably, it must do a GETATTR of fs_status that follows any interrogation of data or metadata within the filesystem in question. Often this is most conveniently done by appending such a GETATTR after all other operations that reference a given filesystem. When errors occur between reading filesystem data and performing such a GETATTR, care must be exercised to make sure that the data in question is not used before obtaining the proper fs_status value. In this connection, when an OPEN is done within such a versioned filesystem and the associated GETATTR of fs_status is not successfully completed, the open file in question must not be accessed until that fs_status is fetched.

The procedure above will ensure that before using any data from the filesystem the client has in hand a newly-fetched current version of the filesystem image. Multiple values from multiple requests in flight can be resolved by assembling them into the required partial order (the elements should form a total order within it) and using the last. The client may then, when switching among filesystem instances, decline to use an instance which is not of type STATUS4_VERSIONED or whose version field is earlier than the last one obtained from the predecessor filesystem instance.
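A client that cares about reliable image progression might encode the acceptance test described above roughly as follows. The type and field names mirror the fs4_status XDR, but the helper names and the structure of the check are illustrative only.

   #include <cstdint>

   enum status4_type {
       STATUS4_FIXED = 1, STATUS4_UPDATED = 2, STATUS4_VERSIONED = 3,
       STATUS4_WRITABLE = 4, STATUS4_ABSENT = 5
   };

   struct nfstime4 { int64_t seconds; uint32_t nseconds; };

   static bool Earlier(const nfstime4& a, const nfstime4& b)
   {
       return a.seconds < b.seconds ||
              (a.seconds == b.seconds && a.nseconds < b.nseconds);
   }

   // Accept a candidate filesystem instance only if it is versioned
   // and its version is not earlier than the last version obtained
   // from the predecessor instance.
   bool AcceptSuccessor(status4_type type, const nfstime4& version,
                        const nfstime4& lastVersion)
   {
       return type == STATUS4_VERSIONED &&
              !Earlier(version, lastVersion);
   }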
11. Directory Delegations

11.1. Introduction to Directory Delegations

The major addition to NFS version 4 in the area of caching is the ability of the server to delegate certain responsibilities to the client. When the server grants a delegation for a file to a client, the client receives certain semantics with respect to the sharing of that file with other clients. At OPEN, the server may provide the client either a read or a write delegation for the file. If the client is granted a read delegation, it is assured that no other client has the ability to write to the file for the duration of the delegation. If the client is granted a write delegation, the client is assured that no other client has read or write access to the file. This reduces network traffic and server load by allowing the client to perform certain operations on local file data, and it can also provide stronger consistency for the local data.

Directory caching for the NFS version 4 protocol is similar to that of previous versions. Clients typically cache directory information for a duration determined by the client. At the end of a predefined timeout, the client will query the server to see if the directory has been updated. By caching attributes, clients reduce the number of GETATTR calls made to the server to validate attributes. Furthermore, frequently accessed files and directories, such as the current working directory, have their attributes cached on the client so that some NFS operations can be performed without having to make an RPC call. By caching name and inode information about most recently looked-up entries in a DNLC (Directory Name Lookup Cache), clients do not need to send LOOKUP calls to the server every time these files are accessed.

This caching approach works reasonably well at reducing network traffic in many environments. However, it does not address environments where there are numerous queries for files that do not exist. In these cases of "misses", the client must make RPC calls to the server in order to provide reasonable application semantics and promptly detect the creation of new directory entries. An example of high miss activity is compilation in software development environments. The current behavior of NFS limits its potential scalability and wide-area sharing effectiveness in these types of environments. Other distributed stateful filesystem architectures such as AFS and DFS have proven that adding state around directory contents can greatly reduce network traffic in high-miss environments.

Delegation of directory contents is proposed as an extension for NFSv4. Such an extension would provide traffic reduction benefits similar to those of file delegations. By allowing clients to cache directory contents (in a read-only fashion) while being notified of changes, the client can avoid making frequent requests to interrogate the contents of slowly-changing directories, reducing network traffic and improving client performance.

These extensions allow improved namespace cache consistency to be achieved through delegations and synchronous recalls alone, without asking for notifications. In addition, if time-based consistency is sufficient, asynchronous notifications can provide performance benefits for the client, and possibly the server, under some common operating conditions such as slowly-changing and/or very large directories.

11.2. Directory Delegation Design (in brief)

A new operation GET_DIR_DELEGATION is used by the client to ask for a directory delegation. The delegation covers directory attributes and all entries in the directory. If either of these changes, the delegation will be recalled synchronously.
The operation causing the recall will have to wait until the recall is complete. Changes to the attributes of individual directory entries will not cause the delegation to be recalled.

In addition to asking for delegations, a client can also ask for notifications for certain events. These events include changes to directory attributes and/or its contents. If a client asks for notification for a certain event, the server will notify the client when that event occurs. This will not result in the delegation being recalled for that client. The notifications are asynchronous and provide a way of avoiding recalls in situations where a directory is changing enough that the pure recall model may not be effective, while still allowing the client to get substantial benefit. In the absence of notifications, once the delegation is recalled the client has to refresh its directory cache, which might not be very efficient for very large directories.

The delegation is read-only, and the client may not make changes to the directory other than by performing NFSv4 operations that modify the directory or the associated file attributes, so that the server has knowledge of these changes. If a client holding the delegation makes any such changes to the directory, the delegation will not be recalled; instead, in order to keep the client namespace in sync with the server, the server will notify the client holding the delegation of the changes made as a result. This avoids subsequent GETATTR or READDIR calls to the server.

Delegations can be recalled by the server at any time. Normally, the server will recall the delegation when the directory changes in a way that is not covered by the notification, or when the directory changes and notifications have not been requested. Also, if the server notices that handing out a delegation for a directory is causing too many notifications to be sent out, it may decide not to hand out delegations for that directory or to recall existing delegations. If another client removes the directory for which a delegation has been granted, the server will recall the delegation.

Both the notification and recall operations need a callback path to exist between the client and server. If the callback path does not exist, then a delegation cannot be granted. Note that with the session extensions [talpey] this should not be an issue. In the absence of sessions, the server will have to establish a callback path to the client to send callbacks.

11.3. Recommended Attributes in support of Directory Delegations

dir_notif_delay - notification delays on directory attributes

dir_entry_notif_delay - notification delays on child attributes

These attributes allow the client and server to negotiate the frequency of notifications sent due to changes in attributes. These attributes are returned as part of a GETATTR call on the directory. The dir_notif_delay value covers all attribute changes to the directory, and the dir_entry_notif_delay value covers all attribute changes to any child in the directory. These attributes are per-directory.

The client needs to get these values by doing a GETATTR on the directory for which it wants notifications. However, these attributes are only required when the client is interested in getting attribute notifications. For all other types of notifications and for delegation requests without notifications, these attributes are not required.

When the client calls the GET_DIR_DELEGATION operation and asks for attribute change notifications, it should request notification delays that are no less than the values in the server-provided attributes. If the client requests smaller delays, the server should not commit to sending notifications for that change event.
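A client might combine its own desired delay with the server's advertised value along the following lines. This is a sketch only, with illustrative helper names, assuming the nfstime4 attribute values have already been fetched via GETATTR.

   #include <cstdint>

   struct nfstime4 { int64_t seconds; uint32_t nseconds; };

   static bool Less(const nfstime4& a, const nfstime4& b)
   {
       return a.seconds < b.seconds ||
              (a.seconds == b.seconds && a.nseconds < b.nseconds);
   }

   // Choose the notification delay to request in GET_DIR_DELEGATION:
   // the client's desired delay, raised to the server's advertised
   // value (e.g. dir_notif_delay) so that the server can commit to
   // sending the notifications.
   nfstime4 ChooseNotifDelay(const nfstime4& desired,
                             const nfstime4& serverMin)
   {
       return Less(desired, serverMin) ? serverMin : desired;
   }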
A value of zero for these attributes means the server will send the notification as soon as the change occurs. It is not recommended to set this value to zero, since that can put a heavy burden on the server. nfstime4 values that compute to negative values are illegal.

By granting a request for notifications, the server commits to delaying notifications to that client by no more than the notification delay which the client requested.

11.4. Delegation Recall

The server will recall the directory delegation by sending a callback to the client. It will use the same callback procedure as used for recalling file delegations. The server will recall the delegation when the directory changes in a way that is not covered by the notification. However, the server will not recall the delegation if attributes of an entry within the directory change. Also, if the server notices that handing out a delegation for a directory is causing too many notifications to be sent out, it may decide not to hand out a delegation for that directory. If another client tries to remove the directory for which a delegation has been granted, the server will recall the delegation.

The server will recall the delegation by sending a CB_RECALL callback to the client. If the recall is done because of a directory-changing event, the request making that change will need to wait while the client returns the delegation.

11.5. Delegation Recovery

Crash recovery has two main goals: avoiding the necessity of breaking application guarantees with respect to locked files, and delivery of updates cached at the client. Neither of these applies to directories protected by read delegations and notifications. Thus, the client is required to establish a new delegation on a server or client reboot.

12. Introduction

The NFSv4 protocol [6] specifies the interaction between a client that accesses files and a server that provides access to files and is responsible for coordinating access by multiple clients. As described in the pNFS problem statement, this requires that all access to a set of files exported by a single NFSv4 server be performed by that server; at high data rates, the server may become a bottleneck.

The parallel NFS (pNFS) extensions to NFSv4 allow data accesses to bypass this bottleneck by permitting direct client access to the storage devices containing the file data. When file data for a single NFSv4 server is stored on multiple and/or higher-throughput storage devices (by comparison to the server's throughput capability), the result can be significantly better file access performance.
The relationship among multiple clients, a single server, and multiple storage devices for pNFS (server and clients have access to all storage devices) is shown in this diagram:

       +-----------+
       |+-----------+                              +-----------+
       ||+-----------+                             |           |
       |||           |        NFSv4 + pNFS         |           |
       +||  Clients  |<--------------------------->|  Server   |
        +|           |                             |           |
         +-----------+                             |           |
              |||                                  +-----------+
              |||                                       |
              |||                                       |
              ||| Storage        +-----------+          |
              ||| Protocol       |+-----------+         |
              ||+----------------||+-----------+ Control|
              |+-----------------|||           | Protocol|
              +------------------+||  Storage  |--------+
                                  +|  Devices  |
                                   +-----------+

                                Figure 67

In this structure, the responsibility for coordination of file access by multiple clients is shared among the server, clients, and storage devices. This is in contrast to NFSv4 without pNFS extensions, in which this is primarily the server's responsibility, some of which can be delegated to clients under strictly specified conditions.

The pNFS extension to NFSv4 takes the form of new operations that manage data location information called a "layout". Layouts are managed in a similar fashion to NFSv4 data delegations (e.g., both are recallable and revocable). However, they are distinct abstractions and are manipulated with new operations. When a client holds a layout, it has rights to access the data directly using the location information in the layout.

There are new attributes that describe general layout characteristics. However, much of the required information cannot be managed solely within the attribute framework, because it will need to have a strictly limited term of validity, subject to invalidation by the server. This requires the use of new operations to obtain, return, recall, and modify layouts, in addition to new attributes.

This document specifies both the NFSv4 extensions required to distribute file access coordination between the server and its clients and an NFSv4 file storage protocol that may be used to access data stored on NFSv4 storage devices. Storage protocols used to access a variety of other storage devices are deliberately not specified here. These might include:

o  Block/volume protocols such as iSCSI ([12]) and FCP ([13]). The block/volume protocol support can be independent of the addressing structure of the block/volume protocol used, allowing more than one protocol to access the same file data and enabling extensibility to other block/volume protocols.

o  Object protocols such as OSD over iSCSI or Fibre Channel [14].

o  Other storage protocols, including PVFS and other file systems that are in use in HPC environments.

pNFS is designed to accommodate these protocols and to be extensible to new classes of storage protocols that may be of interest.

The distribution of file access coordination between the server and its clients increases the level of responsibility placed on clients. Clients are already responsible for ensuring that suitable access checks are made to cached data and that attributes are suitably propagated to the server. Generally, a misbehaving client that hosts only a single user can only impact files accessible to that single user. Misbehavior by a client hosting multiple users may impact files accessible to all of its users. NFSv4 delegations increase the level of client responsibility, as a client that carries out actions requiring a delegation without obtaining that delegation will cause its user(s) to see unexpected and/or incorrect behavior.
Some uses of pNFS extend the responsibility of clients beyond delegations. In some configurations, the storage devices cannot perform fine-grained access checks to ensure that clients are only performing accesses within the bounds permitted to them by the pNFS operations with the server (e.g., the checks may only be possible at file system granularity rather than file granularity). In situations where this added responsibility placed on clients creates unacceptable security risks, pNFS configurations in which storage devices cannot perform fine-grained access checks SHOULD NOT be used. All pNFS server implementations MUST support NFSv4 access to any file accessible via pNFS in order to provide an interoperable means of file access in such situations. See Section 15 on Security for further discussion.

Finally, there are issues about how layouts interact with the existing NFSv4 abstractions of data delegations and byte-range locking. These issues, and others, are also discussed here.

13. General Definitions

This protocol extension partitions the NFSv4 file system protocol into two parts, the control path and the data path. The control path is implemented by the extended (p)NFSv4 server. When the file system being exported by (p)NFSv4 uses storage devices that are visible to clients over the network, the data path may be implemented by direct communication between the extended (p)NFSv4 file system client and the storage devices. This leads to a few new terms used to describe the protocol extension and some clarifications of existing terms.

13.1. Metadata Server

A pNFS "server" or "metadata server" is a server as defined by RFC 3530 [6], which additionally provides support for the pNFS minor extension. When using the pNFS NFSv4 minor extension, the metadata server may hold only the metadata associated with a file, while the data can be stored on the storage devices. However, as in NFSv4, data may also be written through the metadata server. Note: directory data is always accessed through the metadata server.

13.2. Client

A pNFS "client" is a client as defined by RFC 3530 [6], with the addition of support for the pNFS minor extension server protocol and for at least one storage protocol for performing I/O directly to storage devices.

13.3. Storage Device

This is a device, or server, that controls the file's data, but leaves other metadata management up to the metadata server. A storage device could be another NFS server, an Object Storage Device (OSD), or a block device accessed over a SAN (e.g., either a Fibre Channel or an iSCSI SAN). The goal of this extension is to allow direct communication between clients and storage devices.

13.4. Storage Protocol

This is the protocol between the pNFS client and the storage device used to access the file data. The following three types have been described: file protocols (e.g., NFSv4), object protocols (e.g., OSD), and block/volume protocols (e.g., based on SCSI block commands). These protocols are in turn realizable over a variety of transport stacks. We anticipate there will be variations on these storage protocols, including new protocols that are unknown at this time or experimental in nature. The details of the storage protocols will be described in other documents so that pNFS clients can be written to use these storage protocols.
Use of NFSv4 itself as a file-based storage protocol is described in Section 16.

13.5. Control Protocol

This is a protocol used by the exported file system between the server and storage devices. Specification of such protocols is outside the scope of this draft. Such control protocols would be used to control such activities as the allocation and deallocation of storage and the management of state required by the storage devices to perform client access control. The control protocol should not be confused with protocols used to manage LUNs in a SAN and other sysadmin kinds of tasks.

While the pNFS protocol allows for any control protocol, in practice the control protocol is closely related to the storage protocol. For example, if the storage devices are NFS servers, then the protocol between the pNFS metadata server and the storage devices is likely to involve NFS operations. Similarly, when object storage devices are used, the pNFS metadata server will likely use iSCSI/OSD commands to manipulate storage. However, this document does not mandate any particular control protocol. Instead, it just describes the requirements on the control protocol for maintaining attributes like modify time, the change attribute, and the end-of-file position.

13.6. Metadata

This is information about a file, like its name, its owner, where it is stored, and so forth. The information is managed by the exported file system server (metadata server). Metadata also includes lower-level information like block addresses and indirect block pointers. Depending on the storage protocol, block-level metadata may be managed by the metadata server or may instead be managed by Object Storage Devices or other servers acting as a storage device.

13.7. Layout

A layout defines how a file's data is organized on one or more storage devices. There are many possible layout types. They vary in the storage protocol used to access the data and in the aggregation scheme that lays out the file data on the underlying storage devices. Layouts are described in more detail below.

14. pNFS protocol semantics

This section describes the semantics of the pNFS protocol extension to NFSv4; this is the protocol between the client and the metadata server.

14.1. Definitions

This sub-section defines a number of terms necessary for describing layouts and their semantics. In addition, it more precisely defines how layouts are identified and how they can be composed of smaller-granularity layout segments.

14.1.1. Layout Types

A layout describes the mapping of a file's data to the storage devices that hold the data. A layout is said to belong to a specific "layout type" (see Section 1.2.17 for its RPC definition). The layout type allows for variants to handle different storage protocols (e.g., block/volume [11], object [10], and file (Section 16) layout types). A metadata server, along with its control protocol, must support at least one layout type. A private sub-range of the layout type namespace is also defined. Values from the private layout type range can be used for internal testing or experimentation.

As an example, a file layout type could be an array of tuples (e.g., deviceID, file_handle), along with a definition of how the data is stored across the devices (e.g., striping). A block/volume layout might be an array of tuples that store deviceID and block extent information, along with information about block size and the file offset of the first block.
An object layout might be an array of tuples identifying the objects, together with an additional structure (i.e., the aggregation map) that defines how the logical byte sequence of the file data is serialized into the different objects. Note that the actual layouts are more complex than these simple expository examples.

This document defines an NFSv4 file layout type using a stripe-based aggregation scheme (see Section 16). Adjunct specifications are being drafted that precisely define other layout formats (e.g., block/volume [11] and object [10] layouts) to allow interoperability among clients and metadata servers.

14.1.2. Layout Iomode

The iomode indicates to the metadata server the client's intent to perform either READs (only) or a mixture of I/O possibly containing WRITEs as well as READs (i.e., READ/WRITE). For certain layout types, it is useful for a client to specify this intent at LAYOUTGET time. E.g., for block/volume-based protocols, block allocation could occur when a READ/WRITE iomode is specified. A special LAYOUTIOMODE_ANY iomode is defined and can only be used for LAYOUTRETURN and LAYOUTRECALL, not for LAYOUTGET. It specifies that layouts pertaining to both READ and RW iomodes are being returned or recalled, respectively.

A storage device may validate I/O with regard to the iomode; this is dependent upon storage device implementation. Thus, if the client's layout iomode differs from the I/O being performed, the storage device may reject the client's I/O with an error indicating that a new layout with the correct I/O mode should be fetched. E.g., if a client gets a layout with a READ iomode and performs a WRITE to a storage device, the storage device is allowed to reject that WRITE.

The iomode does not conflict with OPEN share modes or lock requests; open mode checks and lock enforcement are always performed, and are logically separate from the pNFS layout level. As well, open modes and locks are the preferred method for restricting user access to data files. E.g., an OPEN of read, deny-write does not conflict with a LAYOUTGET containing an iomode of READ/WRITE performed by another client. Applications that depend on writing into the same file concurrently may use byte-range locking to serialize their accesses.

14.1.3. Layout Segments

Until this point, layouts have been defined in a fairly vague manner. A layout is more precisely identified by the following tuple: <clientID, FH, layout type>; the FH refers to the FH of the file on the metadata server. Note that layouts describe a file, not a byte-range of a file.

Since a layout that describes an entire file may be very large, there is a desire to manage layouts in smaller chunks that correspond to byte-ranges of the file. For example, the entire layout need not be returned, recalled, or committed. These chunks are called "layout segments" and are further identified by the byte-range they represent. Layout operations require the identification of the layout segment (i.e., clientID, FH, layout type, and byte-range), as well as the iomode. This structure allows clients and metadata servers to aggregate the results of layout operations into a singly maintained layout.

It is important to define when layout segments overlap and/or conflict with each other. For a layout segment to overlap another layout segment, both segments must be of the same layout type, correspond to the same filehandle, and have the same iomode; in addition, the byte-ranges of the segments must overlap. Layout segments conflict when they overlap and differ in the content of the layout (i.e., the storage device/file mapping parameters differ). Note that differing iomodes do not lead to conflicting layouts. It is permissible for layout segments with different iomodes, pertaining to the same byte range, to be held by the same client.
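The overlap and conflict definitions can be restated as predicates. The following sketch is illustrative only; the structure is not the protocol's XDR, and the opaque content field stands in for the type-specific storage device/file mapping parameters.

   #include <cstdint>
   #include <vector>

   struct LayoutSegment {
       uint64_t clientId;
       std::vector<uint8_t> fh;       // FH of the file on the
                                      // metadata server
       uint32_t layoutType;
       uint32_t iomode;               // READ or READ/WRITE
       uint64_t offset, length;       // byte-range covered
       std::vector<uint8_t> content;  // opaque device/file mapping
   };

   static bool RangesOverlap(const LayoutSegment& a,
                             const LayoutSegment& b)
   {
       return a.offset < b.offset + b.length &&
              b.offset < a.offset + a.length;
   }

   // Segments overlap when layout type, filehandle, and iomode all
   // match and their byte-ranges intersect.
   bool Overlaps(const LayoutSegment& a, const LayoutSegment& b)
   {
       return a.layoutType == b.layoutType && a.fh == b.fh &&
              a.iomode == b.iomode && RangesOverlap(a, b);
   }

   // Overlapping segments conflict when the layout content (the
   // storage device/file mapping parameters) differs.
   bool Conflicts(const LayoutSegment& a, const LayoutSegment& b)
   {
       return Overlaps(a, b) && a.content != b.content;
   }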
Layout Shepler Expires December 22, 2006 [Page 212] Internet-Draft NFSv4 Minor Version 1 June 2006 segments conflict, when they overlap and differ in the content of the layout (i.e., the storage device/file mapping parameters differ). Note, differing iomodes do not lead to conflicting layouts. It is permissible for layout segments with different iomodes, pertaining to the same byte range, to be held by the same client. 14.1.4. Device IDs The "deviceID" is a short name for a storage device. In practice, a significant amount of information may be required to fully identify a storage device. Instead of embedding all that information in a layout, a level of indirection is used. Layouts embed device IDs, and a new operation (GETDEVICEINFO) is used to retrieve the complete identity information about the storage device according to its layout type. For example, the identity of a file server or object server could be an IP address and port. The identity of a block device could be a volume label. Due to multipath connectivity in a SAN environment, agreement on a volume label is considered the reliable way to locate a particular storage device. The device ID is qualified by the layout type and unique per file system (FSID). This allows different layout drivers to generate device IDs without the need for co-ordination. In addition to GETDEVICEINFO, another operation, GETDEVICELIST, has been added to allow clients to fetch the mappings of multiple storage devices attached to a metadata server. Clients cannot expect the mapping between device ID and storage device address to persist across server reboots, hence a client MUST fetch new mappings on startup or upon detection of a metadata server reboot unless it can revalidate its existing mappings. Not all layout types support such revalidation, and the means of doing so is layout specific. If data are reorganized from a storage device with a given device ID to a different storage device (i.e., if the mapping between storage device and data changes), the layout describing the data MUST be recalled rather than assigning the new storage device to the old device ID. 14.1.5. Aggregation Schemes Aggregation schemes can describe layouts like simple one-to-one mapping, concatenation, and striping. A general aggregation scheme allows nested maps so that more complex layouts can be compactly described. The canonical aggregation type for this extension is striping, which allows a client to access storage devices in parallel. Even a one-to-one mapping is useful for a file server that wishes to distribute its load among a set of other file servers. Shepler Expires December 22, 2006 [Page 213] Internet-Draft NFSv4 Minor Version 1 June 2006 14.2. Guarantees Provided by Layouts Layouts delegate to the client the ability to access data out of band. The layout guarantees the holder that the layout will be recalled when the state encapsulated by the layout becomes invalid (e.g., through some operation that directly or indirectly modifies the layout) or, possibly, when a conflicting layout is requested, as determined by the layout's iomode. When a layout is recalled, and then returned by the client, the client retains the ability to access file data with normal NFSv4 I/O operations through the metadata server. Only the right to do I/O out-of-band is affected. Holding a layout does not guarantee that a user of the layout has the rights to access the data represented by the layout. 
All user access rights MUST be obtained through the appropriate open, lock, and access operations (i.e., those that would be used in the absence of pNFS). However, if a valid layout for a file is not held by the client, the storage device should reject all I/Os to that file's byte range that originate from that client. In summary, layouts and ordinary file access controls are independent. The act of modifying a file for which a layout is held does not necessarily conflict with the holding of the layout that describes the file being modified. However, with certain layout types (e.g., block/volume layouts), the layout's iomode must agree with the type of I/O being performed.

Depending upon the layout type and storage protocol in use, storage device access permissions may be granted by LAYOUTGET and may be encoded within the type specific layout. If access permissions are encoded within the layout, the metadata server must recall the layout when those permissions become invalid for any reason; for example when a file becomes unwritable or inaccessible to a client. Note, clients are still required to perform the appropriate access operations as described above (e.g., open and lock ops). The degree to which it is possible for the client to circumvent these access operations must be clearly addressed by the individual layout type documents, as well as the consequences of doing so. In addition, these documents must be clear about the requirements and non-requirements for the checking performed by the server.

If the pNFS metadata server supports mandatory byte range locks then byte range locks must behave as specified by the NFSv4 protocol, as observed by users of files. If a storage device is unable to restrict access by a pNFS client who does not hold a required mandatory byte range lock, then the metadata server must not grant layouts to a client, for that storage device, that permit any access that conflicts with a mandatory byte range lock held by another client. In this scenario, it is also necessary for the metadata server to ensure that byte range locks are not granted to a client if

Shepler Expires December 22, 2006 [Page 214]

Internet-Draft NFSv4 Minor Version 1 June 2006

any other client holds a conflicting layout; in this case all conflicting layouts must be recalled and returned before the lock request can be granted. This requires the pNFS server to understand the capabilities of its storage devices.

14.3. Getting a Layout

A client obtains a layout through a new operation, LAYOUTGET. The metadata server will give out layouts of a particular type (e.g., block/volume, object, or file) and aggregation as requested by the client. The client selects an appropriate layout type that the server supports and that the client is prepared to use. The layout returned to the client may not line up exactly with the requested byte range. A field within the LAYOUTGET request, "minlength", specifies the minimum overlap that MUST exist between the requested layout and the layout returned by the metadata server. The "minlength" field should specify a size of at least one. A metadata server may give out multiple overlapping, non-conflicting layout segments to the same client in response to a LAYOUTGET.

There is no implied ordering between getting a layout and performing a file OPEN. For example, a layout may first be retrieved by placing a LAYOUTGET operation in the same compound as the initial file OPEN. Once the layout has been retrieved, it can be held across multiple OPEN and CLOSE sequences.
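Since the returned layout need not line up exactly with the requested byte range, a client has to check the returned segment against its request. The following is a minimal sketch of one reasonable reading of the "minlength" rule described above; the function and field names are hypothetical, not from the protocol XDR.

   def satisfies_minlength(req_offset, req_minlength,
                           seg_offset, seg_length):
       # The returned segment must overlap the requested range by at
       # least "minlength" bytes starting at the requested offset.
       seg_end = seg_offset + seg_length
       return (seg_offset <= req_offset
               and req_offset + req_minlength <= seg_end)

   # E.g., a request at offset 0 with minlength 4096 is satisfied by
   # a returned segment covering bytes 0 through 65535:
   assert satisfies_minlength(0, 4096, 0, 65536)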
The storage protocol used by the client to access the data on the storage device is determined by the layout's type. The client needs to select a "layout driver" that understands how to interpret and use that layout. The API used by the client to talk to its drivers is outside the scope of the pNFS extension. The storage protocol between the client's layout driver and the actual storage is covered by other protocol specifications such as iSCSI (block storage), OSD (object storage) or NFS (file storage).

Although the metadata server is in control of the layout for a file, the pNFS client can provide hints to the server when a file is opened or created about preferred layout type and aggregation scheme. The pNFS extension introduces a LAYOUT_HINT attribute that the client can set at creation time to provide a hint to the server for new files. It is suggested that this attribute be set as one of the initial attributes to OPEN when creating a new file. Setting this attribute separately, after the file has been created, could make it difficult, or impossible, for the server implementation to comply.

Shepler Expires December 22, 2006 [Page 215]

Internet-Draft NFSv4 Minor Version 1 June 2006

14.4. Committing a Layout

Due to the nature of the protocol, the file attributes and data location mapping (e.g., which offsets store data vs. store holes) that exist on the metadata storage device may become inconsistent in relation to the data stored on the storage devices; e.g., when WRITEs occur before a layout has been committed (e.g., between a LAYOUTGET and a LAYOUTCOMMIT). Thus, it is necessary to occasionally re-sync this state and make it visible to other clients through the metadata server.

The LAYOUTCOMMIT operation is responsible for committing a modified layout segment to the metadata server. Note: the data should be written and committed to the appropriate storage devices before the LAYOUTCOMMIT occurs. Note, if the data is being written asynchronously through the metadata server, a COMMIT to the metadata server is required to sync the data and make it visible on the storage devices (see Section 14.6 for more details). The scope of this operation depends on the storage protocol in use. For block/volume-based layouts, it may require updating the block list that comprises the file and committing this layout to stable storage. For file layouts, it requires some synchronization of attributes between the metadata and storage devices (i.e., mainly the size attribute; EOF).

It is important to note that the level of synchronization is from the point of view of the client who issued the LAYOUTCOMMIT. The updated state on the metadata server need only reflect the state as of the client's last operation previous to the LAYOUTCOMMIT; it need not reflect a globally synchronized state (e.g., other clients may be performing, or may have performed, I/O since the client's last operation and the LAYOUTCOMMIT). The control protocol is free to synchronize the attributes before it receives a LAYOUTCOMMIT; however, upon successful completion of a LAYOUTCOMMIT, state that exists on the metadata server that describes the file MUST be in sync with the state existing on the storage devices that comprise that file as of the issuing client's last operation. Thus, a client that queries the size of a file between a WRITE to a storage device and the LAYOUTCOMMIT may observe a size that does not reflect the actual data written.
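The ordering described above can be summarized in a short, purely illustrative sketch; layout_write, commit_on_storage, and layoutcommit are hypothetical stand-ins for the client's layout driver I/O, the storage-device commit, and the LAYOUTCOMMIT operation.

   def layout_write(segment, offset, length): pass   # stand-in
   def commit_on_storage(segment): pass              # stand-in
   def layoutcommit(segment): pass                   # stand-in

   def flush_modified_segment(segment, dirty_ranges):
       # 1. WRITE the dirty data to the storage devices.
       for (offset, length) in dirty_ranges:
           layout_write(segment, offset, length)
       # 2. Commit the data on the storage devices first...
       commit_on_storage(segment)
       # 3. ...and only then send LAYOUTCOMMIT, making the modified
       #    layout segment and attributes visible via the metadata
       #    server.
       layoutcommit(segment)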
14.4.1. LAYOUTCOMMIT and mtime/atime/change

The change attribute and the modify/access times may be updated, by the server, at LAYOUTCOMMIT time, since, for some layout types, the change attribute and atime/mtime cannot be updated by the appropriate I/O operation performed at a storage device. The arguments to LAYOUTCOMMIT allow the client to provide suggested access and modify time values to the server. Again, depending upon

Shepler Expires December 22, 2006 [Page 216]

Internet-Draft NFSv4 Minor Version 1 June 2006

the layout type, these client-provided values may or may not be used. The server should sanity check the client-provided values before they are used. For example, the server should ensure that time does not flow backwards. According to the NFSv4 specification, the client always has the option to set these attributes through an explicit SETATTR operation.

As mentioned, for some layout protocols the change attribute and mtime/atime may be updated at or after the time the I/O occurred (e.g., if the storage device is able to communicate these attributes to the metadata server). If, upon receiving a LAYOUTCOMMIT, the server implementation is able to determine that the file did not change since the last time the change attribute was updated (e.g., no WRITEs or over-writes occurred), the implementation need not update the change attribute; file-based protocols may have enough state to make this determination or may update the change attribute upon each file modification. This also applies for mtime and atime; if the server implementation is able to determine that the file has not been modified since the last mtime update, the server need not update mtime at LAYOUTCOMMIT time. Once LAYOUTCOMMIT completes, the new change attribute and mtime/atime should be visible if that file was modified since the latest previous LAYOUTCOMMIT or LAYOUTGET.

14.4.2. LAYOUTCOMMIT and size

The file's size may be updated at LAYOUTCOMMIT time as well. The LAYOUTCOMMIT operation contains an argument ("last_write_offset") that indicates the highest byte offset written but not yet committed via LAYOUTCOMMIT. Note: this argument is switched on a boolean value indicating whether or not a previous write occurred. If the switch is false, no "last_write_offset" is given; a "last_write_offset" specifying an offset of 0 means that byte 0 was the highest byte written. The metadata server may do one of the following:

1. It may update the file's size based on the last write offset. However, to the extent possible, the metadata server should sanity check any value to which the file's size is going to be set. E.g., it must not truncate the file based on the client presenting a smaller last write offset than the file's current size.

2. If it has sufficient other knowledge of file size (e.g., by querying the storage devices through the control protocol), it may ignore the client provided argument and use the query-derived value.

Shepler Expires December 22, 2006 [Page 217]

Internet-Draft NFSv4 Minor Version 1 June 2006

3. It may use the last write offset as a hint, subject to correction when other information is available as above.

The method chosen to update the file's size will depend on the storage device's and/or the control protocol's implementation. For example, if the storage devices are block devices with no knowledge of file size, the metadata server must rely on the client to set the size appropriately. A new size flag and length are also returned in the results of a LAYOUTCOMMIT.
This union indicates whether a new size was set, and to what length it was set. If a new size is set as a result of LAYOUTCOMMIT, then the metadata server must reply with the new size. As well, if the size is updated, the metadata server in conjunction with the control protocol SHOULD ensure that the new size is reflected by the storage devices immediately upon return of the LAYOUTCOMMIT operation; e.g., a READ up to the new file size should succeed on the storage devices (assuming no intervening truncations). Again, if the client wants to explicitly zero-extend or truncate a file, SETATTR must be used; it need not be used when simply writing past EOF. Since client layout holders may be unaware of changes made to the file's size, through LAYOUTCOMMIT or SETATTR, by other clients, an additional callback/notification has been added for pNFS. CB_SIZECHANGED is a notification that the metadata server sends to layout holders to notify them of a change in file size. This is preferred over issuing CB_LAYOUTRECALL to each of the layout holders. 14.4.3. LAYOUTCOMMIT and layoutupdate The LAYOUTCOMMIT operation contains a "layoutupdate" argument. This argument is a layout type specific structure. The structure can be used to pass arbitrary layout type specific information from the client to the metadata server at LAYOUTCOMMIT time. For example, if using a block/volume layout, the client can indicate to the metadata server which reserved or allocated blocks it used and which it did not. The "layoutupdate" structure need not be the same structure as the layout returned by LAYOUTGET. The structure is defined by the layout type and is opaque to LAYOUTCOMMIT. 14.5. Recalling a Layout 14.5.1. Basic Operation Since a layout protects a client's access to a file via a direct client-storage-device path, a layout need only be recalled when it is semantically unable to serve this function. Typically, this occurs when the layout no longer encapsulates the true location of the file over the byte range it represents. Any operation or action (e.g., Shepler Expires December 22, 2006 [Page 218] Internet-Draft NFSv4 Minor Version 1 June 2006 server driven restriping or load balancing) that changes the layout will result in a recall of the layout. A layout is recalled by the CB_LAYOUTRECALL callback operation (see Section 29). This callback can either recall a layout segment identified by a byte range, or all the layouts associated with a file system (FSID). However, there is no single operation to return all layouts associated with an FSID; multiple layout segments may be returned in a single compound operation. Section 14.5.3 discusses sequencing issues surrounding the getting, returning, and recalling of layouts. The iomode is also specified when recalling a layout or layout segment. Generally, the iomode in the recall request must match the layout, or segment, being returned; e.g., a recall with an iomode of RW should cause the client to only return RW layout segments (not R segments). However, a special LAYOUTIOMODE_ANY enumeration is defined to enable recalling a layout of any type (i.e., the client must return both read-only and read/write layouts). A REMOVE operation may cause the metadata server to recall the layout to prevent the client from accessing a non-existent file and to reclaim state stored on the client. Since a REMOVE may be delayed until the last close of the file has occurred, the recall may also be delayed until this time. 
As well, once the file has been removed, after the last reference, the client SHOULD no longer be able to perform I/O using the layout (e.g., with file-based layouts an error such as ESTALE could be returned). Although, the pNFS extension does not alter the caching capabilities of clients, or their semantics, it recognizes that some clients may perform more aggressive write-behind caching to optimize the benefits provided by pNFS. However, write-behind caching may impact the latency in returning a layout in response to a CB_LAYOUTRECALL; just as caching impacts DELEGRETURN with regards to data delegations. Client implementations should limit the amount of dirty data they have outstanding at any one time. Server implementations may fence clients from performing direct I/O to the storage devices if they perceive that the client is taking too long to return a layout once recalled. A server may be able to monitor client progress by watching client I/Os or by observing LAYOUTRETURNs of sub-portions of the recalled layout. The server can also limit the amount of dirty data to be flushed to storage devices by limiting the byte ranges covered in the layouts it gives out. Once a layout has been returned, the client MUST NOT issue I/Os to the storage devices for the file, byte range, and iomode represented by the returned layout. If a client does issue an I/O to a storage device for which it does not hold a layout, the storage device SHOULD reject the I/O. Shepler Expires December 22, 2006 [Page 219] Internet-Draft NFSv4 Minor Version 1 June 2006 14.5.2. Recall Callback Robustness For simplicity, the discussion thus far has assumed that pNFS client state for a file exactly matches the pNFS server state for that file and client regarding layout ranges and permissions. This assumption leads to the implicit assumption that any callback results in a LAYOUTRETURN or set of LAYOUTRETURNs that exactly match the range in the callback, since both client and server agree about the state being maintained. However, it can be useful if this assumption does not always hold. For example: o It may be useful for clients to be able to discard layout information without calling LAYOUTRETURN. If conflicts that require callbacks are very rare, and a server can use a multi-file callback to recover per-client resources (e.g., via a FSID recall, or a multi-file recall within a single compound), the result may be significantly less client-server pNFS traffic. o It may be similarly useful for servers to enhance information about what layout ranges are held by a client beyond what a client actually holds. In the extreme, a server could manage conflicts on a per-file basis, only issuing whole-file callbacks even though clients may request and be granted sub-file ranges. o As well, the synchronized state assumption is not robust to minor errors. A more robust design would allow for divergence between client and server and the ability to recover. It is vital that a client not assign itself layout permissions beyond what the server has granted and that the server not forget layout permissions that have been granted in order to avoid errors. On the other hand, if a server believes that a client holds a layout segment that the client does not know about, it's useful for the client to be able to issue the LAYOUTRETURN that the server is expecting in response to a recall. 
Thus, in light of the above, it is useful for a server to be able to issue callbacks for layout ranges it has not granted to a client, and for a client to return ranges it does not hold. A pNFS client must always return layout segments that comprise the full range specified by the recall. Note, the full recalled layout range need not be returned as part of a single operation, but may be returned in segments. This allows the client to stage the flushing of dirty data, layout commits, and returns. Also, it indicates to the metadata server that the client is making progress. In order to ensure client/server convergence on the layout state, the final LAYOUTRETURN operation in a sequence of returns for a particular recall, SHOULD specify the entire range being recalled, Shepler Expires December 22, 2006 [Page 220] Internet-Draft NFSv4 Minor Version 1 June 2006 even if layout segments pertaining to partial ranges were previously returned. In addition, if the client holds no layout segment that overlaps the range being recalled, the client should return the NFS4ERR_NOMATCHING_LAYOUT error code. This allows the server to update its view of the client's layout state. 14.5.3. Recall/Return Sequencing As with other stateful operations, pNFS requires the correct sequencing of layout operations. This proposal assumes that sessions will precede or accompany pNFS into NFSv4.x and thus, pNFS will require the use of sessions. If the sessions proposal does not precede pNFS, then this proposal needs to be modified to provide for the correct sequencing of pNFS layout operations. Also, this specification is reliant on the sessions protocol to provide the correct sequencing between regular operations and callbacks. It is the server's responsibility to avoid inconsistencies regarding the layouts it hands out and the client's responsibility to properly serialize its layout requests. One critical issue with operation sequencing concerns callbacks. The protocol must defend against races between the reply to a LAYOUTGET operation and a subsequent CB_LAYOUTRECALL. It MUST NOT be possible for a client to process the CB_LAYOUTRECALL for a layout that it has not received in a reply message to a LAYOUTGET. The callback races section (Section 9.10.3) describes the sessions mechanism for allowing the client to detect such situations in order to not process such a CB_LAYOUTRECALL. The LAYOUTGET operation is in this case the dependent operation which the server should reference in any layout recall, if it remains active in the server's slot table. 14.5.3.1. Client Side Considerations Consider a pNFS client that has issued a LAYOUTGET and then receives an overlapping recall callback for the same file. There are two possibilities, which in the absence of a session, the client cannot distinguish when the callback arrives: 1. The server processed the LAYOUTGET before issuing the recall, so the LAYOUTGET response is in flight, and must be waited for because it may be carrying layout info that will need to be returned to deal with the recall callback. 2. The server issued the callback before receiving the LAYOUTGET. The server will not respond to the LAYOUTGET until the recall callback is processed. Shepler Expires December 22, 2006 [Page 221] Internet-Draft NFSv4 Minor Version 1 June 2006 This can cause deadlock, as the client must wait for the LAYOUTGET response before processing the recall in the first case, but that response will not arrive until after the recall is processed in the second case. 
In the presence of a session, the server will provide the client with the { slotid, sequenceid } of any earlier LAYOUTGET which remains unconfirmed at the server by the session slot usage rules. This allows the client to disambiguate between the two cases: in case 1, the server will provide the reference, whereas in case 2 it will not (because there is no dependent client operation). Therefore, the action at the client will only require waiting in the case that the client has not yet seen the server's earlier reply to the LAYOUTGET.

Without the session, this deadlock can be avoided by adhering to the following requirements:

o A LAYOUTGET MUST be rejected with an error (i.e., NFS4ERR_RECALLCONFLICT) if there's an overlapping outstanding recall callback to the same client

o When processing a recall, the client MUST wait for a response to all conflicting outstanding LAYOUTGETs before performing any RETURN that could be affected by any such response.

o The client SHOULD wait for responses to all operations required to complete a recall before sending any LAYOUTGETs that would conflict with the recall because the server is likely to return errors for them.

Now the client can wait for the LAYOUTGET response, as it will be received in both cases.

14.5.3.2. Server Side Considerations

Consider a related situation from the pNFS server's point of view. The server has issued a recall callback and receives an overlapping LAYOUTGET for the same file before the LAYOUTRETURN(s) that respond to the recall callback. Again, there are two cases:

1. The client issued the LAYOUTGET before processing the recall callback.

2. The client issued the LAYOUTGET after processing the recall callback, but it arrived before the LAYOUTRETURN that completed that processing.

The simplest approach is to always reject the overlapping LAYOUTGET. The client has two ways to avoid this result - it can issue the

Shepler Expires December 22, 2006 [Page 222]

Internet-Draft NFSv4 Minor Version 1 June 2006

LAYOUTGET as a subsequent element of a COMPOUND containing the LAYOUTRETURN that completes the recall callback, or it can wait for the response to that LAYOUTRETURN.

There is little a session can do to disambiguate between these two cases, because both operations are independent of one another. They are simply asynchronous events which crossed. The situation can even occur if the session is configured to use a single connection for both operations and callbacks. This leads to a more general problem; in the absence of a callback, if a client issues concurrent overlapping LAYOUTGET and LAYOUTRETURN operations, it is possible for the server to process them in either order. Again, a client must take the appropriate precautions in serializing its actions. [ASIDE: HighRoad forbids a client from doing this, as the per-file layout stateid will cause one of the two operations to be rejected with a stale layout stateid. This approach is simpler and produces better results by comparison to allowing concurrent operations, at least for this sort of conflict case, because server execution of operations in an order not anticipated by the client may produce results that are not useful to the client (e.g., if a LAYOUTRETURN is followed by a concurrent overlapping LAYOUTGET, but executed in the other order, the client will not retain layout extents for the overlapping range).]
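A small client-side sketch of the two avoidance options described above; the operation constructors and send_compound are illustrative stand-ins for building and sending an NFSv4 COMPOUND.

   def LAYOUTRETURN(rng): return ("LAYOUTRETURN", rng)
   def LAYOUTGET(rng): return ("LAYOUTGET", rng)
   def send_compound(ops): return ops   # stand-in for the RPC

   def return_and_reacquire(recalled_range, wanted_range):
       # Option 1: one COMPOUND, so the server processes the
       # LAYOUTRETURN that completes the recall strictly before the
       # overlapping LAYOUTGET.
       return send_compound([LAYOUTRETURN(recalled_range),
                             LAYOUTGET(wanted_range)])
       # Option 2 (not shown): send the LAYOUTRETURN alone and wait
       # for its reply before issuing the LAYOUTGET.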
14.6. Metadata Server Write Propagation

Asynchronous writes written through the metadata server may be propagated lazily to the storage devices. For data written asynchronously through the metadata server, a client performing a read at the appropriate storage device is not guaranteed to see the newly written data until a COMMIT occurs at the metadata server. While the write is pending, reads to the storage device can give out either the old data, the new data, or a mixture thereof. After either a synchronous write completes, or a COMMIT is received (for asynchronously written data), the metadata server must ensure that storage devices give out the new data and that the data has been written to stable storage. If the server implements its storage in any way such that it cannot obey these constraints, then it must recall the layouts to prevent reads being done that cannot be handled correctly.

14.7. Crash Recovery

Crash recovery is complicated due to the distributed nature of the pNFS protocol. In general, crash recovery for layouts is similar to

Shepler Expires December 22, 2006 [Page 223]

Internet-Draft NFSv4 Minor Version 1 June 2006

crash recovery for delegations in the base NFSv4 protocol. However, the client's ability to perform I/O without contacting the metadata server introduces subtleties that must be handled correctly if file system corruption is to be avoided.

14.7.1. Leases

The layout lease period plays a critical role in crash recovery. Depending on the capabilities of the storage protocol, it is crucial that the client is able to maintain an accurate layout lease timer to ensure that I/Os are not issued to storage devices after expiration of the layout lease period. In order for the client to do so, it must know which operations renew a lease.

14.7.1.1. Lease Renewal

The current NFSv4 specification allows for implicit lease renewals to occur upon receiving an I/O. However, due to the distributed pNFS architecture, implicit lease renewals are limited to operations performed at the metadata server; this includes I/O performed through the metadata server. So, a client must not assume that READ and WRITE I/O to storage devices implicitly renew lease state.

If sessions are required for pNFS, as has been suggested, then the SEQUENCE operation is to be used to explicitly renew leases. It is proposed that the SEQUENCE operation be extended to return all the specific information that RENEW does, but not as an error, as RENEW returns it. When using sessions, beginning each compound with the SEQUENCE operation allows renewals to be performed without an additional operation and without an additional request. Again, the client must not rely on any operation to the storage devices to renew a lease. Using the SEQUENCE operation for renewals simplifies the client's perception of lease renewal.

14.7.1.2. Client Lease Timer

Depending on the storage protocol and layout type in use, it may be crucial that the client not issue I/Os to storage devices if the corresponding layout's lease has expired. Doing so may lead to file system corruption if the layout has been given out and used by another client. In order to prevent this, the client must maintain an accurate lease timer for all layouts held. RFC3530 has the following to say regarding the maintenance of a client lease timer:

...the client must track operations which will renew the lease period.
Using the time that each such request was sent and the time that the corresponding reply was received, the client should bound the time that the corresponding renewal could have occurred

Shepler Expires December 22, 2006 [Page 224]

Internet-Draft NFSv4 Minor Version 1 June 2006

on the server and thus determine if it is possible that a lease period expiration could have occurred.

To be conservative, the client should start its lease timer based on the time that it issued the operation to the metadata server, rather than based on the time of the response.

It is also necessary to take propagation delay into account when requesting a renewal of the lease:

...the client should subtract it from lease times (e.g., if the client estimates the one-way propagation delay as 200 msec, then it can assume that the lease is already 200 msec old when it gets it). In addition, it will take another 200 msec to get a response back to the server. So the client must send a lock renewal or write data back to the server 400 msec before the lease would expire.

Thus, the client must be aware of the one-way propagation delay and should issue renewals well in advance of lease expiration. Clients, to the extent possible, should try not to issue I/Os that may extend past the lease expiration time period. However, since this is not always possible, the storage protocol must be able to protect against the effects of in-flight I/Os, as is discussed later.
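This guidance can be captured in a small conservative timer. A minimal sketch, assuming the client knows (or estimates) the one-way propagation delay; the class and method names are illustrative.

   import time

   class LayoutLeaseTimer:
       def __init__(self, lease_period, one_way_delay):
           self.lease_period = lease_period   # seconds, from server
           self.delay = one_way_delay         # estimate, e.g. 0.2
           self.last_renewal_sent = 0.0

       def renewed(self, send_time):
           # Conservative: base the timer on when the renewing
           # operation was *sent*, not when its reply arrived.
           self.last_renewal_sent = send_time

       def renewal_deadline(self):
           # Issue the next renewal at least one round trip early
           # (e.g., 400 msec early for a 200 msec one-way delay).
           return (self.last_renewal_sent + self.lease_period
                   - 2 * self.delay)

       def may_issue_storage_io(self, now=None):
           # Never issue I/O to the storage devices past the
           # (conservatively computed) lease expiration.
           now = time.time() if now is None else now
           return now < self.last_renewal_sent + self.lease_period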
14.7.2. Client Recovery

Client recovery for layouts works in much the same way as NFSv4 client recovery works for other lock/delegation state. When an NFSv4 client reboots, it will lose all information about the layouts that it previously owned. There are two methods by which the server can reclaim these resources and allow otherwise conflicting layouts to be provided to other clients.

The first is through the expiry of the client's lease. If the client recovery time is longer than the lease period, the client's lease will expire and the server will know that state may be released. For layouts, the server may release the state immediately upon lease expiry, or it may allow the layout to persist awaiting possible lease revival, as long as there are no conflicting requests.

On the other hand, the client may recover in less time than it takes for the lease period to expire. In such a case, the client will contact the server through the standard SETCLIENTID protocol. The server will find that the client's id matches the id of the previous client invocation, but that the verifier is different. The server uses this as a signal to release all the state associated with the client's previous invocation.

Shepler Expires December 22, 2006 [Page 225]

Internet-Draft NFSv4 Minor Version 1 June 2006

14.7.3. Metadata Server Recovery

The server recovery case is slightly more complex. In general, the recovery process again follows the standard NFSv4 recovery model: the client will discover that the metadata server has rebooted when it receives an unexpected STALE_STATEID or STALE_CLIENTID reply from the server; it will then proceed to try to reclaim its previous delegations during the server's recovery grace period. However, layouts have a slightly different mechanism for reclaim. The problem is that a client which uses LAYOUTGET to reclaim a layout might not get the same layout it had previously. The range might be different or it might get the same range but the content of the layout might be different. For example, if using a block/volume-based layout, the blocks provisionally assigned by the layout might be different, in which case the client will have to write the corresponding blocks again.

Instead of reclaiming a layout with LAYOUTGET, a client can attempt to commit data written before the file server crash by setting a reclaim bit on the LAYOUTCOMMIT operation. This should only be done for data that the client has already written using a layout obtained before the server restart. For data still dirty in the client memory, the client should get a new layout segment after the server's grace period has elapsed. Alternatively, the client can write that data through the metadata server using the standard NFSv4 WRITE.

In the case that the client has written dirty data to a provisionally allocated region of the layout, but was unable to commit the layout changes for this data before the server rebooted, the client may be unable to reliably re-read the data from the data storage devices in order to write it again via the metadata server. In this case the client needs to inform the metadata server that the layout has changed, before the server has completed its recovery grace period and starts allowing updates to the file system. For this purpose, the LAYOUTCOMMIT operation contains a "reclaim" field. During the metadata server's recovery grace period (and only during the recovery grace period) the client may send a LAYOUTCOMMIT request with the "reclaim" field set to "true". This indicates that the client is attempting to commit changes to the file layout that occurred prior to the reboot of the metadata server. The "layoutupdate" field of the request must contain the portion of the layout that the client held prior to the metadata server reboot which covers the outstanding writes. The metadata server is free to apply consistency checks on the layout update provided by the client, and reject the request if the checks fail. If the checks do not fail, then the server MUST commit the changes to the file layout contained in the "layoutupdate" field of the LAYOUTCOMMIT request, ensuring that the client's outstanding writes are not lost.

Shepler Expires December 22, 2006 [Page 226]

Internet-Draft NFSv4 Minor Version 1 June 2006

During the recovery grace period the metadata server should apply the standard approach to handling WRITE and LAYOUTGET requests. That is, if the server can reliably determine that servicing such a request will not conflict with an impending LAYOUTCOMMIT reclaim request, it may choose to service the request. If the server is unable to offer this guarantee, it MUST reject the request with status NFS4ERR_GRACE. For a metadata server to provide simple, valid handling during the grace period with respect to pNFS layouts, the easiest method is to simply reject all non-reclaim pNFS requests and WRITE operations by returning the NFS4ERR_GRACE error. However, depending on the storage protocol and server implementation, the server may be able to determine that a particular request is safe. For example, a server may save provisional allocation mappings for each file to stable storage, and use this information during the recovery grace period to determine that a WRITE request is safe. Under such circumstances, the WRITE request MAY be serviced. To re-iterate, for a server to allow non-reclaim pNFS requests and WRITE operations to be serviced during the recovery grace period, it MUST determine that the request will not conflict with any subsequent LAYOUTCOMMIT with a reclaim request.
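The server-side grace-period policy just described amounts to the following sketch; the helper names are hypothetical (NFS4ERR_GRACE is the standard NFSv4 error value 10013).

   NFS4ERR_GRACE = 10013

   def is_reclaim_layoutcommit(request):
       return getattr(request, "reclaim", False)

   def provably_safe(request):
       # E.g., provisional allocation maps saved to stable storage
       # show no conflict with any impending reclaim.  Returning
       # False here is the simplest valid policy: reject everything
       # that is not a reclaim.
       return False

   def service(request):
       return "NFS4_OK"   # stand-in for normal processing

   def grace_period_disposition(request):
       if is_reclaim_layoutcommit(request):
           # Reclaim LAYOUTCOMMITs are the point of the grace
           # period; service them (after consistency checks).
           return service(request)
       if provably_safe(request):
           return service(request)
       # Otherwise the server MUST reject with NFS4ERR_GRACE.
       return NFS4ERR_GRACE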
There is an important safety concern associated with layouts that does not come into play in the standard NFSv4 case. If a standard NFSv4 client makes use of a stale delegation while reading, the consequence could be to deliver stale data to an application. If writing, using a stale delegation or a stale stateid for an open or lock would result in the rejection of the client's write with the appropriate stale stateid error.

However, the pNFS layout enables the client to directly access the file system storage; if this access is not properly managed by the NFSv4 server the client can potentially corrupt the file system data or metadata. Thus, it is vitally important that the client discover that the metadata server has rebooted, and that the client stops using stale layouts before the metadata server gives them away to other clients. To ensure this, the client must be implemented so that layouts are never used to access the storage after the client's lease timer has expired. It is crucial that clients have precise knowledge of the lease periods of their layouts. For specific details on lease renewal and client lease timers, see Section 14.7.1.

The prohibition on using stale layouts applies to all layout related accesses, especially the flushing of dirty data to the storage devices. If the client's lease timer expires because the client could not contact the server for any reason, the client MUST immediately stop using the layout until the server can be contacted and the layout can be officially recovered or reclaimed. However,

Shepler Expires December 22, 2006 [Page 227]

Internet-Draft NFSv4 Minor Version 1 June 2006

this is only part of the solution. It is also necessary to deal with the consequences of I/Os already in flight.

The issue of the effects of I/Os started before lease expiration and possibly continuing through lease expiration is the responsibility of the data storage protocol and as such is layout type specific. There are two approaches the data storage protocol can take. The protocol may adopt a global solution which prevents all I/Os from being executed after the lease expiration and thus is safe against a client who issues I/Os after lease expiration. This is the preferred solution and the solution used by NFSv4 file based layouts (see Section 16.6); as well, the object storage device protocol allows storage to fence clients after lease expiration. Alternatively, the storage protocol may rely on proper client operation and only deal with the effects of lingering I/Os. These solutions may impact the client layout-driver, the metadata server layout-driver, and the control protocol.

14.7.4. Storage Device Recovery

Storage device crash recovery is mostly dependent upon the layout type in use. However, there are a few general techniques a client can use if it discovers a storage device has crashed while holding asynchronously written, non-committed, data. First and foremost, it is important to realize that the client is the only one who has the information necessary to recover asynchronously written data, since it holds the dirty data and most probably nobody else does. Second, the best solution is for the client to err on the side of caution and attempt to re-write the dirty data through another path.

The client, rather than hold the asynchronously written data indefinitely, is encouraged to make sure that the data is written by using other paths to that data.
The client may write the data to the metadata server, either synchronously or asynchronously with a subsequent COMMIT. Once it does this, there is no need to wait for the original storage device. In the event that the data range to be committed is transferred to a different storage device, as indicated in a new layout, the client may write to that storage device. Once the data has been committed at that storage device, either through a synchronous write or through a commit to that storage device (e.g., through the NFSv4 COMMIT operation for the NFSv4 file layout), the client should consider the transfer of responsibility for the data to the new server as strong evidence that this is the intended and most effective method for the client to get the data written. In either case, once the write is on stable storage (through either the storage device or metadata server), there is no need to continue either attempting to commit or attempting to synchronously write the data to the original storage device or wait Shepler Expires December 22, 2006 [Page 228] Internet-Draft NFSv4 Minor Version 1 June 2006 for that storage device to become available. That storage device may never be visible to the client again. This approach does have a "lingering write" problem, similar to regular NFSv4. Suppose a WRITE is issued to a storage device for which no response is received. The client breaks the connection, trying to re-establish a new one, and gets a recall of the layout. The client issues the I/O for the dirty data through an alternative path, for example, through the metadata server and it succeeds. The client then goes on to perform additional writes that all succeed. If at some time later, the original write to the storage device succeeds, data inconsistency could result. The same problem can occur in regular NFSv4. For example, a WRITE is held in a switch for some period of time while other writes are issued and replied to, if the original WRITE finally succeeds, the same issues can occur. However, this is solved by sessions in NFSv4.x. 15. Security Considerations The pNFS extension partitions the NFSv4 file system protocol into two parts, the control path and the data path (i.e., storage protocol). The control path contains all the new operations described by this extension; all existing NFSv4 security mechanisms and features apply to the control path. The combination of components in a pNFS system (see Figure 67) is required to preserve the security properties of NFSv4 with respect to an entity accessing data via a client, including security countermeasures to defend against threats that NFSv4 provides defenses for in environments where these threats are considered significant. In some cases, the security countermeasures for connections to storage devices may take the form of physical isolation or a recommendation not to use pNFS in an environment. For example, it is currently infeasible to provide confidentiality protection for some storage device access protocols to protect against eavesdropping; in environments where eavesdropping on such protocols is of sufficient concern to require countermeasures, physical isolation of the communication channel (e.g., via direct connection from client(s) to storage device(s)) and/or a decision to forego use of pNFS (e.g., and fall back to NFSv4) may be appropriate courses of action. 
In full generality where communication with storage devices is subject to the same threats as client-server communication, the protocols used for that communication need to provide security mechanisms comparable to those available via RPCSEC_GSS for NFSv4. Many situations in which pNFS is likely to be used will not be subject to the overall threat profile for which NFSv4 is required to

Shepler Expires December 22, 2006 [Page 229]

Internet-Draft NFSv4 Minor Version 1 June 2006

provide countermeasures.

pNFS implementations MUST NOT remove NFSv4's access controls. The combination of clients, storage devices, and the server are responsible for ensuring that all client to storage device file data access respects NFSv4 ACLs and file open modes. This entails performing both of these checks on every access in the client, the storage device, or both. If a pNFS configuration performs these checks only in the client, the risk of a misbehaving client obtaining unauthorized access is an important consideration in determining when it is appropriate to use such a pNFS configuration. Such configurations SHOULD NOT be used when client-only access checks do not provide sufficient assurance that NFSv4 access control is being applied correctly.

The following subsections describe security considerations specifically applicable to each of the three major storage device protocol types supported for pNFS.

[Requiring strict equivalence to NFSv4 security mechanisms is the wrong approach. Will need to lay down a set of statements that each protocol has to make starting with access check location/properties.]

15.1. File Layout Security

An NFSv4 file layout type is defined in Section 16; see Section 16.7 for additional security considerations and details. In summary, the NFSv4 file layout type requires that all I/O access checks MUST be performed by the storage devices, as defined by the NFSv4 specification. If another file layout type is being used, additional access checks may be required. But in all cases, the access control performed by the storage devices must be at least as strict as that specified by the NFSv4 protocol.

15.2. Object Layout Security

The object storage protocol MUST implement the security aspects described in version 1 of the T10 OSD protocol definition [14]. The remainder of this section gives an overview of the security mechanism described in that standard. The goal is to give the reader a basic understanding of the object security model. Any discrepancies between this text and the actual standard are obviously to be resolved in favor of the OSD standard.

The object storage protocol relies on a cryptographically secure capability to control accesses at the object storage devices. Capabilities are generated by the metadata server, returned to the client, and used by the client as described below to authenticate

Shepler Expires December 22, 2006 [Page 230]

Internet-Draft NFSv4 Minor Version 1 June 2006

their requests to the Object Storage Device (OSD). Capabilities therefore achieve the required access and open mode checking. They allow the file server to define and check a policy (e.g., open mode) and the OSD to check and enforce that policy without knowing the details (e.g., user IDs and ACLs). Since capabilities are tied to layouts, and since they are used to enforce access control, the server should recall layouts and revoke capabilities when the file ACL or mode changes in order to signal the clients.
Each capability is specific to a particular object, an operation on that object, a byte range within the object, and has an explicit expiration time. The capabilities are signed with a secret key that is shared by the object storage devices (OSD) and the metadata managers. Clients do not have device keys so they are unable to forge capabilities. The following sketch of the algorithm should help the reader understand the basic model.

LAYOUTGET returns {CapKey = MAC<SecretKey>(CapArgs), CapArgs}

The client uses CapKey to sign all the requests it issues for that object using the respective CapArgs. In other words, the CapArgs appears in the request to the storage device, and that request is signed with the CapKey as follows:

ReqMAC = MAC<CapKey>(Req, NonceIn)

The following is sent to the OSD: {CapArgs, Req, NonceIn, ReqMAC}. The OSD uses the SecretKey it shares with the metadata server to compare the ReqMAC the client sent with a locally computed MAC<MAC<SecretKey>(CapArgs)>(Req, NonceIn) and if they match the OSD assumes that the capabilities came from an authentic metadata server and allows access to the object, as allowed by the CapArgs. Therefore, if the server LAYOUTGET reply, holding CapKey and CapArgs, is snooped by another client, it can be used to generate valid OSD requests (within the CapArgs access restriction).

To provide the required privacy requirements for the capabilities returned by LAYOUTGET, the GSS-API can be used, e.g., by using a session key known to the file server and to the client to encrypt the whole layout or parts of it. Two general ways to provide privacy in the absence of GSS-API that are independent of NFSv4 are either an isolated network such as a VLAN or a secure channel provided by IPsec.

Shepler Expires December 22, 2006 [Page 231]

Internet-Draft NFSv4 Minor Version 1 June 2006

15.3. Block/Volume Layout Security

As typically used, block/volume protocols rely on clients to enforce file access checks since the storage devices are generally unaware of the files they are storing and in particular are unaware of which blocks belong to which file. In such environments, the physical addresses of blocks are exported to pNFS clients via layouts. An alternative method of block/volume protocol use is for the storage devices to export virtualized block addresses, which do reflect the files to which blocks belong. These virtual block addresses are exported to pNFS clients via layouts. This allows the storage device to make appropriate access checks, while mapping virtual block addresses to physical block addresses.

In environments where access control is important and client-only access checks provide insufficient assurance of access control enforcement (e.g., there is concern about a malicious or malfunctioning client skipping the access checks) and where physical block addresses are exported to clients, the storage devices will generally be unable to compensate for these client deficiencies. In such threat environments, block/volume protocols SHOULD NOT be used with pNFS, unless the storage device is able to implement the appropriate access checks, via use of virtualized block addresses, or other means. NFSv4 without pNFS or pNFS with a different type of storage protocol would be a more suitable means to access files in such environments. Storage-device/protocol-specific methods (e.g., LUN masking/mapping) may be available to prevent malicious or high-risk clients from directly accessing storage devices.
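Returning to the capability scheme sketched in Section 15.2, the exchange can be made concrete with a short runnable sketch. HMAC-SHA1 stands in for the MAC, and all field contents below are invented for illustration; the normative algorithm is defined by the T10 OSD standard.

   import hashlib, hmac, os

   def MAC(key, *parts):
       m = hmac.new(key, digestmod=hashlib.sha1)
       for p in parts:
           m.update(p)
       return m.digest()

   secret_key = os.urandom(20)   # shared by the MDS and OSDs only

   # Metadata server, at LAYOUTGET time:
   cap_args = b"obj=17 op=WRITE range=0-65535 expiry=1700000000"
   cap_key = MAC(secret_key, cap_args)   # CapKey = MAC<SecretKey>(CapArgs)

   # Client, per request (it never sees secret_key):
   req = b"WRITE obj=17 off=0 len=4096"
   nonce = os.urandom(8)
   req_mac = MAC(cap_key, req, nonce)    # ReqMAC = MAC<CapKey>(Req, NonceIn)
   message = (cap_args, req, nonce, req_mac)   # sent to the OSD

   # OSD, on receipt: rederive CapKey from CapArgs and verify.
   expected = MAC(MAC(secret_key, cap_args), req, nonce)
   assert hmac.compare_digest(req_mac, expected)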
16. The NFSv4 File Layout Type

This section describes the semantics and format of NFSv4 file-based layouts.

16.1. File Striping and Data Access

The file layout type describes a method for striping data across multiple devices. The data for each stripe unit is stored within an NFSv4 file located on a particular storage device. The structures used to describe the stripe layout are as follows:

Shepler Expires December 22, 2006 [Page 232]

Internet-Draft NFSv4 Minor Version 1 June 2006

   enum stripetype4 {
       STRIPE_SPARSE = 1,
       STRIPE_DENSE = 2
   };

   struct nfsv4_file_layouthint {
       stripetype4 stripe_type;
       length4     stripe_unit;
       uint32_t    stripe_width;
   };

   struct nfsv4_file_layout {      /* Per data stripe */
       pnfs_deviceid4 dev_id<>;
       nfs_fh4        fh;
   };

   struct nfsv4_file_layouttype4 { /* Per file */
       stripetype4       stripe_type;
       bool              commit_through_mds;
       length4           stripe_unit;
       length4           file_size;
       nfsv4_file_layout dev_list<>;
   };

The file layout specifies an ordered array of <dev_id, fh> tuples, as well as the stripe size, type of stripe layout (discussed a little later), and the file's current size as of LAYOUTGET time. The filehandle, "fh", identifies the file on a storage device identified by "dev_id", that holds a particular stripe of the file. The "dev_id" array can be used for multipathing and is discussed further in Section 16.1.3. The stripe width is determined by the stripe unit size multiplied by the number of devices in the dev_list. The stripe held by a <dev_id, fh> tuple is determined by that tuple's position within the device list, "dev_list".

For example, consider a dev_list consisting of the following pairs:

   <(1,0x12), (2,0x13), (1,0x15)> and stripe_unit = 32KB

The stripe width is 32KB * 3 devices = 96KB. The first entry specifies that device 1, in the data file with filehandle 0x12, holds the first 32KB of data (and every 32KB stripe beginning where the file's offset % 96KB == 0). Devices may be repeated multiple times within the device list array; this is shown where storage device 1 holds both the first and third stripe of data. Filehandles can only be repeated if a sparse stripe type is used. Data is striped across the devices in the order listed in the device list array in increments of the stripe size. A data

Shepler Expires December 22, 2006 [Page 233]

Internet-Draft NFSv4 Minor Version 1 June 2006

file stored on a storage device MUST map to a single file as defined by the metadata server; i.e., data from two files as viewed by the metadata server MUST NOT be stored within the same data file on any storage device.

The "stripe_type" field specifies how the data is laid out within the data file on a storage device. It allows for two different data layouts: sparse and dense (or packed). The stripe type determines the calculation that must be made to map the client visible file offset to the offset within the data file located on the storage device.

The layout hint structure is described in more detail in Section 3.15. It is used, by the client, as the FILE_LAYOUT_HINT attribute, to specify the type of layout to be used for a newly created file.
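The worked example above can be checked with a short sketch, using the index calculation given in Section 16.1.1 below; the helper name is an illustrative invention.

   dev_list = [(1, 0x12), (2, 0x13), (1, 0x15)]   # <dev_id, fh> tuples
   stripe_unit = 32 * 1024
   stripe_width = stripe_unit * len(dev_list)      # 96KB

   def entry_for_offset(file_offset):
       # The stripe held by an entry is determined by its position
       # within dev_list.
       idx = (file_offset // stripe_unit) % len(dev_list)
       return dev_list[idx]

   assert entry_for_offset(0) == (1, 0x12)           # first 32KB
   assert entry_for_offset(96 * 1024) == (1, 0x12)   # offset % 96KB == 0
   assert entry_for_offset(64 * 1024) == (1, 0x15)   # third stripe unit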
16.1.1. Sparse and Dense Storage Device Data Layouts

The stripe_type field allows for two storage device data file representations. Example sparse and dense storage device data layouts are illustrated below:

   Sparse file-layout (stripe_unit = 4KB)
   ------------------

   Is represented by the following file layout on the storage
   devices:

   Offset   ID:0     ID:1     ID:2
   0        +--+     +--+     +--+        +--+  indicates a
            |//|     |  |     |  |        |//|  stripe that
   4KB      +--+     +--+     +--+        +--+  contains data
            |  |     |//|     |  |
   8KB      +--+     +--+     +--+
            |  |     |  |     |//|
   12KB     +--+     +--+     +--+
            |//|     |  |     |  |
   16KB     +--+     +--+     +--+
            |  |     |//|     |  |
            +--+     +--+     +--+

The sparse file-layout has holes for the byte ranges not exported by that storage device. This allows clients to access data using the real offset into the file, regardless of the storage device's position within the stripe. However, if a client writes to one of the holes (e.g., offset 4-12KB on device 1), then an error MUST be returned by the storage device. This requires that the storage device have knowledge of the layout for each file.

Shepler Expires December 22, 2006 [Page 234]

Internet-Draft NFSv4 Minor Version 1 June 2006

When using a sparse layout, the offset into the storage device data file is the same as the offset into the main file.

   Dense/packed file-layout (stripe_unit = 4KB)
   ------------------------

   Is represented by the following file layout on the storage
   devices:

   Offset   ID:0     ID:1     ID:2
   0        +--+     +--+     +--+
            |//|     |//|     |//|
   4KB      +--+     +--+     +--+
            |//|     |//|     |//|
   8KB      +--+     +--+     +--+
            |//|     |//|     |//|
   12KB     +--+     +--+     +--+
            |//|     |//|     |//|
   16KB     +--+     +--+     +--+
            |//|     |//|     |//|
            +--+     +--+     +--+

The dense or packed file-layout does not leave holes on the storage devices. Each stripe unit is spread across the storage devices. As such, the storage devices need not know the file's layout since the client is allowed to write to any offset.

The calculation to determine the byte offset within the data file for dense storage device layouts is:

   stripe_width = stripe_unit * N; where N = |dev_list|

   dev_offset = floor(file_offset / stripe_width) * stripe_unit +
                file_offset % stripe_unit

Regardless of the storage device data file layout, the calculation to determine the index into the device array is the same:

   dev_idx = floor(file_offset / stripe_unit) mod N

Section 16.5 describes the semantics for dealing with reads to holes within the striped file. This is of particular concern, since each individual component stripe file (i.e., the component of the striped file that lives on a particular storage device) may be of different length. Thus, clients may experience 'short' reads when reading off the end of one of these component files.
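The calculations above translate directly into code; a minimal sketch, with hypothetical function names:

   def dev_offset(file_offset, stripe_unit, n_devs, dense):
       if not dense:
           # Sparse layout: the data file offset equals the main
           # file offset.
           return file_offset
       stripe_width = stripe_unit * n_devs
       return ((file_offset // stripe_width) * stripe_unit
               + file_offset % stripe_unit)

   def dev_idx(file_offset, stripe_unit, n_devs):
       # Same calculation for both layouts.
       return (file_offset // stripe_unit) % n_devs

   # E.g., with stripe_unit = 4KB and 3 devices, file offset 20KB
   # maps to device (20KB/4KB) mod 3 = 2, and to dense data-file
   # offset floor(20KB/12KB)*4KB + 20KB%4KB = 4KB.
   assert dev_idx(20480, 4096, 3) == 2
   assert dev_offset(20480, 4096, 3, dense=True) == 4096
   assert dev_offset(20480, 4096, 3, dense=False) == 20480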
16.1.2.  Metadata and Storage Device Roles

   In many cases, the metadata server and the storage device will be
   separate pieces of physical hardware.  The specification text is
   written as if that were always the case.  However, the same
   physical hardware can be used to implement both a metadata server
   and a storage device.  In that case, the specification text's
   references to these two entities are to be understood as referring
   to the same physical hardware implementing two distinct roles, and
   it is important that it be clearly understood on behalf of which
   role the hardware is executing at any given time.

   Two sub-cases can be distinguished.  In the first sub-case, the
   same physical hardware implements both a metadata server and a
   data server, and each role is addressed through a distinct network
   interface (e.g., the IP addresses for the metadata server and
   storage device are distinct).  As long as the storage device
   address obtained from the layout (using the device ID therein to
   look up the appropriate address) is distinct from the metadata
   server's address, it is always clear, for any given request, to
   which role it is directed, based on the destination IP address.
   However, even though the metadata server and storage device are
   distinct from one client's point of view, the roles may be
   reversed from another client's point of view.  For example, in the
   cluster file system model, a metadata server to one client may be
   a storage device to another client.  Thus, it is safer to always
   mark the filehandle so that operations addressed to storage
   devices can be distinguished.

   The second sub-case is where both the metadata server and the
   storage device have the same network address.  This requires the
   distinction as to which role each request is directed at to be
   made on another basis.  Since the network address is the same, the
   request is understood as being directed at one role or the other
   based on the first current filehandle value set for the request.
   If the first current filehandle is one derived from a layout
   (i.e., it is specified within the layout; it is recommended that
   such filehandles be distinguishable), then the request is to be
   considered as executed by a storage device.  Otherwise, the
   operation is to be understood as executed by the metadata server.

   If a current filehandle is set that is inconsistent with the role
   to which the request is directed, then the error NFS4ERR_BADHANDLE
   should result.  For example, if a request is directed at the
   storage device because the first current filehandle is from a
   layout, any attempt to set the current filehandle to a value not
   from a layout should be rejected.  Similarly, if the first current
   filehandle was a value not from a layout, a subsequent attempt to
   set the current filehandle to a value obtained from a layout
   should be rejected.

16.1.3.  Device Multipathing

   The NFSv4 file layout supports multipathing to 'equivalent'
   devices.  Device-level multipathing is primarily of use in the
   case of a data server failure: it allows the client to switch to
   another storage device that is exporting the same data stripe,
   without having to contact the metadata server for a new layout.

   To support device multipathing, an array of device IDs is encoded
   within the data stripe portion of the file's layout.  This array
   represents an ordered list of devices where the first element has
   the highest priority.  Each device in the list MUST be
   'equivalent' to every other device in the list, and each device
   must be attempted in the order specified.

   Equivalent devices MUST export the same system image (e.g., the
   stateids and filehandles that they use are the same) and must
   provide the same consistency guarantees.  Two equivalent storage
   devices must also have sufficient connections to the storage, such
   that writing to one storage device is equivalent to writing to
   another; this applies to reading as well.  Also, if multiple
   copies of the same data exist, reading from one must provide
   access to all existing copies.  As such, it is unlikely that
   multipathing will provide additional benefit in the case of an I/O
   error.

   [NOTE: the error cases in which a client is expected to attempt an
   equivalent storage device should be specified.]
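   The ordered, highest-priority-first semantics above suggest a
   simple failover loop on the client.  The following non-normative C
   sketch illustrates one way a client might walk the dev_id array on
   a connection failure; the try_read_from_device() stub and the
   device ID values are hypothetical.

      #include <stdint.h>
      #include <stddef.h>
      #include <stdio.h>

      typedef uint64_t pnfs_deviceid4;

      /* Hypothetical transport stub: pretend only device 7 is
       * reachable. */
      static int try_read_from_device(pnfs_deviceid4 dev,
                                      uint64_t dev_off,
                                      void *buf, size_t len)
      {
          (void)dev_off; (void)buf; (void)len;
          return dev == 7 ? 0 : -1;
      }

      /* Attempt the I/O against each equivalent device in the
       * priority order given by the layout, without asking the
       * metadata server for a new layout. */
      static int multipath_read(const pnfs_deviceid4 *ids, size_t n,
                                uint64_t dev_off, void *buf,
                                size_t len)
      {
          for (size_t i = 0; i < n; i++)
              if (try_read_from_device(ids[i], dev_off,
                                       buf, len) == 0)
                  return (int)i;  /* index of the device that worked */
          return -1;              /* all paths failed: contact MDS   */
      }

      int main(void)
      {
          pnfs_deviceid4 ids[] = { 3, 7, 9 };  /* equivalent devices */
          char buf[16];
          printf("served by dev_id index %d\n",
                 multipath_read(ids, 3, 0, buf, sizeof(buf)));
          return 0;
      }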
16.1.4.  Operations Issued to Storage Devices

   Clients MUST use the filehandle described within the layout when
   accessing data on the storage devices.  When using the layout's
   filehandle, the client MUST only issue READ, WRITE, PUTFH, COMMIT,
   and NULL operations to the storage device associated with that
   filehandle.  If a client issues an operation other than those
   specified above, using the filehandle and storage device listed in
   the client's layout, that storage device SHOULD return an error to
   the client.  The client MUST follow the instructions implied by
   the layout (i.e., which filehandles to use on which devices).  As
   described in Section 14.2, a client MUST NOT issue I/Os to storage
   devices for which it does not hold a valid layout.  The storage
   devices may reject such requests.

   GETATTR and SETATTR MUST be directed to the metadata server.  In
   the case of a SETATTR of the size attribute, the control protocol
   is responsible for propagating size updates/truncations to the
   storage devices.  In the case of extending WRITEs to the storage
   devices, the new size must be visible on the metadata server once
   a LAYOUTCOMMIT has completed (see Section 14.4.2).  Section 16.5
   describes the mechanism by which the client is to handle storage
   device files that do not reflect the metadata server's size.

16.1.5.  COMMIT Through Metadata Server

   The commit_through_mds flag in the file layout indicates the
   metadata server's preferred way for the client to perform COMMIT.
   If this flag is true, the client SHOULD send COMMIT to the
   metadata server instead of sending it to the same data server to
   which the associated WRITEs were sent.  In order to maintain the
   current NFSv4 commit and recovery model, all the data servers MUST
   return a common verifier for all WRITEs in a given file layout.
   The value of the write verifier MUST be changed at the metadata
   server, or at any data server that is referenced in the layout,
   whenever there is a server event that can possibly lead to loss of
   uncommitted data.  The scope of the verifier can be for a file or
   for the entire pNFS server.  It might be more difficult for the
   server to maintain the verifier at the file level, but the benefit
   is that only events that impact a given file will require recovery
   action.

   The single COMMIT to the metadata server will return a verifier,
   and the client should compare it to all the verifiers from the
   WRITEs, treating the COMMIT as failed if any of the verifiers
   mismatch.  If the COMMIT to the metadata server fails, the client
   should reissue WRITEs for all the dirty data in the file.  The
   client should treat dirty data with a mismatched verifier as a
   WRITE failure and try to recover by reissuing the WRITEs to the
   original data server, or by using another path to that data if the
   layout has not been recalled.  Other options the client has are to
   obtain a new layout or simply to rewrite the data through the
   metadata server.

   If the commit_through_mds flag is false, the client should not
   send COMMIT to the metadata server.  Although it is valid to send
   COMMIT to the metadata server, it should be used only to commit
   data that was written through the metadata server.  See also
   Section 14.7.4, "Storage Device Recovery", for recovery options.
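   A non-normative sketch of the client-side verifier comparison
   described above; the dirty-data bookkeeping and the
   reissue_write() helper are hypothetical, and only the comparison
   logic is of interest.

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      #define NFS4_VERIFIER_SIZE 8

      struct dirty_range {                 /* one uncommitted WRITE */
          uint64_t offset, length;
          unsigned char verf[NFS4_VERIFIER_SIZE]; /* from WRITE reply */
      };

      /* Hypothetical recovery hook: reissue one WRITE. */
      static void reissue_write(const struct dirty_range *r)
      {
          printf("reissuing WRITE at %llu..+%llu\n",
                 (unsigned long long)r->offset,
                 (unsigned long long)r->length);
      }

      /* After the single COMMIT to the metadata server returns
       * commit_verf, any WRITE whose verifier mismatches is treated
       * as a WRITE failure and reissued. */
      void check_commit(const unsigned char *commit_verf,
                        const struct dirty_range *w, size_t n)
      {
          for (size_t i = 0; i < n; i++)
              if (memcmp(w[i].verf, commit_verf,
                         NFS4_VERIFIER_SIZE) != 0)
                  reissue_write(&w[i]);
      }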
16.2.  Global Stateid Requirements

   Note that there are no stateids embedded within the layout.  The
   client MUST use the stateid representing open or lock state, as
   returned by an earlier metadata operation (e.g., OPEN, LOCK), or a
   special stateid, to perform I/O on the storage devices, as in
   regular NFSv4.  Special stateid usage for I/O is subject to the
   NFSv4 protocol specification.  The stateid used for I/O MUST have
   the same effect and be subject to the same validation on a storage
   device as it would if the I/O were being performed on the metadata
   server itself in the absence of pNFS.  This has the implication
   that stateids are globally valid on both the metadata and storage
   devices.  This requires the metadata server to propagate changes
   in lock and open state to the storage devices, so that the storage
   devices can validate I/O accesses.  This is discussed further in
   Section 16.4.  Depending on when stateids are propagated, the
   existence of a valid stateid on the storage device may act as
   proof of a valid layout.

   [NOTE: a number of proposals have been made that have the
   possibility of limiting the amount of validation performed by the
   storage device.  If any of these proposals are accepted or obtain
   consensus, the global stateid requirement can be revisited.]

16.3.  The Layout Iomode

   The layout iomode need not be used by the metadata server when
   servicing NFSv4 file-based layouts, although in some circumstances
   it may be useful.  For example, if the server implementation
   supports reading from read-only replicas or mirrors, it would be
   useful for the server to return a layout enabling the client to do
   so.  As such, the client should set the iomode based on its intent
   to read or write the data.  The client may default to an iomode of
   READ/WRITE (LAYOUTIOMODE_RW).

   The iomode need not be checked by the storage devices when clients
   perform I/O.  However, the storage devices SHOULD still validate
   that the client holds a valid layout and return an error if the
   client does not.

16.4.  Storage Device State Propagation

   The metadata server, which handles lock and open-mode state
   changes as well as ACLs, may not be collocated with the storage
   devices where I/O accesses are validated.  As such, the server
   implementation MUST take care of propagating changes of this state
   to the storage devices.  Once the propagation to the storage
   devices is complete, the full effect of those changes must be in
   effect at the storage devices.  However, some state changes need
   not be propagated immediately, although all changes SHOULD be
   propagated promptly.  These state propagations have an impact on
   the design of the control protocol, even though the control
   protocol is outside of the scope of this specification.  Immediate
   propagation refers to the synchronous propagation of state from
   the metadata server to the storage device(s); the propagation must
   be complete before returning to the client.
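   The control protocol is out of scope here, but as a purely
   illustrative, non-normative C sketch, immediate propagation
   amounts to the metadata server withholding its reply until every
   storage device has acknowledged the state change; the
   push_state_sync() hook and types are hypothetical.

      #include <stddef.h>

      struct state_change { int kind; /* e.g., lock or ACL change */ };

      /* Hypothetical control-protocol hook: push one state change
       * to one storage device and wait for its acknowledgement. */
      extern int push_state_sync(int device_id,
                                 const struct state_change *c);

      /* Immediate (synchronous) propagation: the MDS must not reply
       * to the client until every storage device has acknowledged
       * the change. */
      int apply_with_immediate_propagation(const int *devices,
                                           size_t n,
                                           const struct state_change *c)
      {
          for (size_t i = 0; i < n; i++) {
              int err = push_state_sync(devices[i], c);
              if (err)
                  return err;  /* change is not yet in effect */
          }
          return 0;            /* safe to reply to the client */
      }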
16.4.1.  Lock State Propagation

   Mandatory locks MUST be made effective at the storage devices
   before the request that establishes them returns to the caller.
   Thus, mandatory lock state MUST be synchronously propagated to the
   storage devices.  On the other hand, since advisory lock state is
   not used for checking I/O accesses at the storage devices, there
   is no semantic reason for propagating advisory lock state to them.
   However, since all lock, unlock, open downgrade, and open upgrade
   operations affect the sequence ID stored within the stateid, the
   stateid changes, which may cause difficulty if this state is not
   propagated.

   Thus, when a client uses a stateid on a storage device for I/O
   with a newer sequence number than the one the storage device has,
   the storage device should query the metadata server and get any
   pending updates to that stateid.  This allows stateid sequence
   number changes to be propagated lazily, on demand.

   [NOTE: With the reliance on the sessions protocol, there is no
   real need for the sequence ID portion of the stateid to be
   validated on I/O accesses.  It is proposed that the sequence ID
   checking be obsoleted.]

   Since updates to advisory locks neither confer nor remove
   privileges, these changes need not be propagated immediately, and
   may not need to be propagated promptly.  The updates to advisory
   locks need only be propagated when the storage device needs to
   resolve a question about a stateid.  In fact, if byte-range
   locking is not mandatory (i.e., is advisory), clients are advised
   not to use lock-based stateids for I/O at all.  The stateids
   returned by OPEN are sufficient and eliminate overhead for this
   kind of state propagation.

16.4.2.  Open-mode Validation

   Open-mode validation MUST be performed against the open mode(s)
   held by the storage devices.  However, the server implementation
   may not always require the immediate propagation of changes.
   Reductions in access because of CLOSEs or DOWNGRADEs do not have
   to be propagated immediately, but SHOULD be propagated promptly;
   whereas changes due to revocation MUST be propagated immediately.
   On the other hand, changes that expand access (e.g., new OPENs and
   upgrades) do not have to be propagated immediately, but the
   storage device SHOULD NOT reject a request because of mode issues
   without making sure that the upgrade is not in flight.
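   A non-normative sketch of the open-mode check above, from the
   storage device's point of view; the mode flags and the
   query_mds_open_mode() hook are hypothetical.  The point is that an
   apparent violation is re-checked with the metadata server before
   being rejected, in case an upgrade has not yet been propagated.

      #include <stdbool.h>

      #define MODE_READ  0x1
      #define MODE_WRITE 0x2

      /* Hypothetical control-protocol hook: fetch the authoritative
       * open mode for this stateid from the metadata server. */
      extern unsigned query_mds_open_mode(const void *stateid);

      /* Validate an I/O against the locally cached open mode; on an
       * apparent violation, consult the MDS in case an OPEN upgrade
       * is in flight.  A false result maps to NFS4ERR_OPENMODE. */
      bool io_permitted(unsigned cached_mode, unsigned needed_mode,
                        const void *stateid)
      {
          if ((cached_mode & needed_mode) == needed_mode)
              return true;             /* cached state allows it */
          unsigned fresh = query_mds_open_mode(stateid);
          return (fresh & needed_mode) == needed_mode;
      }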
16.4.3.  File Attributes

   Since the SETATTR operation has the ability to modify state that
   is visible on both the metadata and storage devices (e.g., the
   size), care must be taken to ensure that the resultant state
   across the set of storage devices is consistent, especially when
   truncating or growing the file.

   As described earlier, the LAYOUTCOMMIT operation is used to ensure
   that the metadata is synced with changes made to the storage
   devices.  For the file-based protocol, it is necessary to re-sync
   state such as the size attribute and the setting of mtime/atime.
   See Section 14.4 for a full description of the semantics regarding
   LAYOUTCOMMIT and attribute synchronization.  It should be noted
   that, by using a file-based layout type, it is possible to
   synchronize this state before LAYOUTCOMMIT occurs.  For example,
   the control protocol can be used to query the attributes present
   on the storage devices.

   Any changes to file attributes that control authorization or
   access, as reflected by ACCESS calls or READs and WRITEs on the
   metadata server, MUST be propagated to the storage devices for
   enforcement on READ and WRITE I/O calls.  If the changes made on
   the metadata server result in more restrictive access permissions
   for any user, those changes MUST be propagated to the storage
   devices synchronously.  Recall that the NFSv4 protocol [6]
   specifies that:

      ...since the NFS version 4 protocol does not impose any
      requirement that READs and WRITEs issued for an open file have
      the same credentials as the OPEN itself, the server still must
      do appropriate access checking on the READs and WRITEs
      themselves.

   This also includes changes to ACLs.  The propagation of access
   right changes due to changes in ACLs may be asynchronous only if
   the server implementation is able to determine that the updated
   ACL is not more restrictive for any user specified in the old ACL.
   Due to the relative infrequency of ACL updates, it is suggested
   that all changes be propagated synchronously.

   [NOTE: it has been suggested that the NFSv4 specification is in
   error with regard to allowing principals other than those used for
   OPEN to be used for file I/O.  If changes within a minor version
   alter the behavior of NFSv4 with regard to OPEN principals and
   stateids, some access control checking at the storage device can
   be made less expensive.  pNFS should be altered to take full
   advantage of these changes.]

16.5.  Storage Device Component File Size

   A potential problem exists when a component data file on a
   particular storage device is grown past EOF; the problem exists
   for both dense and sparse layouts.  Imagine the following
   scenario: a client creates a new file (size == 0) and writes to
   byte 128KB; the client then seeks to the beginning of the file and
   reads byte 100.  The client should receive 0s back as a result of
   the read.  However, if the read falls on a storage device other
   than the one to which the client originally wrote, the storage
   device servicing the READ may still believe that the file's size
   is 0 and return no data with the EOF flag set.  The storage device
   can only return 0s if it knows that the file's size has been
   extended.  This would require the immediate propagation of the
   file's size to all storage devices, which is potentially very
   costly.  Instead, another approach is outlined below.

   First, the file's size is returned within the layout by LAYOUTGET.
   This size must reflect the latest size at the metadata server as
   set by the most recent of either the last LAYOUTCOMMIT or SETATTR;
   however, it may be more recent.

   Second, if a client performs a read that is returned short (i.e.,
   the read is fully within the file's size, but the storage device
   indicates EOF and returns partial or no data), the client must
   assume that it has read a hole and substitute 0s for the data not
   read, up until its known local file size.  If a client extends the
   file, it must update its local file size.

   Third, if the metadata server receives a SETATTR of the size or a
   LAYOUTCOMMIT that alters the file's size, the metadata server must
   send out CB_SIZECHANGED messages with the new size to clients
   holding layouts (it need not send a notification to the client
   that performed the operation that resulted in the size change).
   Upon reception of the CB_SIZECHANGED notification, clients must
   update their local size for that file.  As well, if a new file
   size is returned as a result of LAYOUTCOMMIT, the client must
   update its local file size.
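   The second rule above (substituting zeros for a short read that
   lies within the known file size) can be sketched in a few lines of
   non-normative C; the ds_read() wire call is hypothetical.

      #include <stdbool.h>
      #include <stdint.h>
      #include <string.h>

      /* Hypothetical wire call: READ from the component file on a
       * storage device; sets *eof and returns the bytes obtained. */
      extern size_t ds_read(uint64_t dev_off, void *buf, size_t len,
                            bool *eof);

      /* Client-side handling of a 'short' read: if the request lies
       * within the client's known file size but the storage device
       * reports EOF, the missing bytes are a hole and read as 0s. */
      size_t read_with_hole_fill(uint64_t file_off, uint64_t dev_off,
                                 void *buf, size_t len,
                                 uint64_t known_size)
      {
          bool eof = false;
          size_t got = ds_read(dev_off, buf, len, &eof);
          if (got < len && eof && file_off + len <= known_size) {
              memset((char *)buf + got, 0, len - got); /* zero-fill */
              got = len;
          }
          return got;
      }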
16.6.  Crash Recovery Considerations

   As described in Section 14.7, the layout-type-specific storage
   protocol is responsible for handling the effects of I/Os started
   before lease expiration and extending through lease expiration.
   The NFSv4 file layout type prevents all I/Os from being executed
   after lease expiration, without relying on a precise client lease
   timer and without requiring storage devices to maintain lease
   timers.

   It works as follows.  In the presence of sessions, each compound
   begins with a SEQUENCE operation that contains the "clientID".  On
   the storage device, the clientID can be used to validate that the
   client has a valid layout for the I/O being performed; if it does
   not, the I/O is rejected.  Before the metadata server takes any
   action to invalidate a layout given out by a previous instance, it
   must make sure that all layouts from that previous instance are
   invalidated at the storage devices.

   Note: it is sufficient to invalidate only the stateids associated
   with the layout if special stateids are not being used for I/O at
   the storage devices; otherwise, the layout itself must be
   invalidated.

   This means that a metadata server may not restripe a file until it
   has contacted all of the storage devices to invalidate the layouts
   from the previous instance, nor may it give out locks that
   conflict with locks embodied by the stateids associated with any
   layout from the previous instance without either doing a specific
   invalidation (as it would have to do anyway) or doing a global
   storage device invalidation.

16.7.  Security Considerations

   The NFSv4 file layout type MUST adhere to the security
   considerations outlined in Section 15.  More specifically, storage
   devices must make all of the required access checks on each READ
   or WRITE I/O as determined by the NFSv4 protocol [6].  This
   impacts the control protocol and the propagation of state from the
   metadata server to the storage devices; see Section 16.4 for more
   details.

16.8.  Alternate Approaches

   Two alternate approaches exist for file-based layouts and the
   method used by clients to obtain stateids used for I/O.  Both
   approaches embed stateids within the layout.  However, before
   examining these approaches it is important to understand the
   distinction between clients and owners.  Delegations belong to
   clients, while locks (e.g., record and share reservations) are
   held by owners, which in turn belong to a specific client.  As
   such, delegations can only protect against inter-client conflicts,
   not intra-client conflicts.  Layouts are held by clients and
   SHOULD NOT be associated with state held by owners.  Therefore, if
   stateids used for data access are embedded within a layout, these
   stateids can only act as delegation stateids, protecting against
   inter-client conflicts; stateids pertaining to an owner can not be
   embedded within the layout.  This has the implication that the
   client MUST arbitrate among all intra-client conflicts (e.g.,
   arbitrating among lock requests by different processes) before
   issuing pNFS operations.  Using the stateids stored within the
   layout, storage devices can only arbitrate between clients (not
   owners).

   The first alternate approach is to do away with global stateids
   (stateids returned by OPEN/LOCK that are valid on the metadata
   server and storage devices) and use only stateids embedded within
   the layout.  This approach has the drawback that the stateids used
   for I/O access can not be validated against per-owner state, since
   they are only associated with the client holding the layout.  It
   breaks the semantics of tying a stateid used for I/O to an open
   instance.  This has the implication that clients must arbitrate
   per-owner lock and open requests internally, rather than push the
   work onto the storage devices.  The storage devices can still
   arbitrate and enforce inter-client lock and open state.
   The second approach is a hybrid approach.  This approach allows
   for stateids to be embedded within the layout, but also allows for
   the possibility of global stateids.  If the stateid embedded
   within the layout is a special stateid of all zeros, then the
   stateid referring to the last successful OPEN/LOCK should be used.
   This approach is recommended if it is decided that using NFSv4 as
   a control protocol is required.

   This proposal suggests the global stateid approach due to the
   cleaner semantics it provides regarding the relationship between
   stateids used for I/O and their corresponding open instance or
   lock state.  However, it does have a profound impact on the
   control protocol's implementation and the state propagation that
   is required (as described in Section 16.4).

17.  Layouts and Aggregation

   This section describes several aggregation schemes in a
   semi-formal way to provide context for layout formats.  These
   definitions will be formalized in other protocols.  However, the
   set of understood types is part of this protocol in order to
   provide for basic interoperability.

   The layout descriptions include (deviceID, objectID) tuples that
   identify some storage object on some storage device.  The
   addressing information associated with the deviceID is obtained
   with GETDEVICEINFO.  The interpretation of the objectID depends on
   the storage protocol.  The objectID could be a filehandle for an
   NFSv4 storage device.  It could be an OSD object ID for an object
   server.  The layout for a block device generally includes
   additional block map information to enumerate blocks or extents
   that are part of the layout.

17.1.  Simple Map

   The data is located on a single storage device.  In this case, the
   file server can act as the front end for several storage devices
   and distribute files among them.  Each file is limited in its size
   and performance characteristics by a single storage device.  The
   simple map consists of (deviceID, objectID).

17.2.  Block Extent Map

   The data is located on a LUN in the SAN.  The layout consists of
   an array of (deviceID, blockID, offset, length) tuples.  Each
   entry describes a block extent.

17.3.  Striped Map (RAID 0)

   The data is striped across storage devices.  The parameters of the
   stripe include the number of storage devices (N) and the size of
   each stripe unit (U).  A full stripe of data is N * U bytes.  The
   stripe map consists of an ordered list of (deviceID, objectID)
   tuples and the parameter value for U.  The first stripe unit (the
   first U bytes) is stored on the first (deviceID, objectID), the
   second stripe unit on the second (deviceID, objectID), and so
   forth until the first complete stripe.  The data layout then wraps
   around, so that byte (N*U) of the file is stored on the first
   (deviceID, objectID) in the list, but starting at offset U within
   that object.  The striped layout allows a client to read or write
   to the component objects in parallel to achieve high bandwidth.

   The striped map for a block device would be slightly different.
   The map is an ordered list of (deviceID, blockID, blocksize),
   where the deviceID is rotated among a set of devices to achieve
   striping.

17.4.  Replicated Map

   The file data is replicated on N storage devices.  The map
   consists of N (deviceID, objectID) tuples.  When data is written
   using this map, it should be written to N objects in parallel.
   When data is read, any component object can be used.

   This map type is controversial because it highlights the issues
   with error recovery.  Those issues arise with any scheme that
   employs redundancy.  The handling of errors (e.g., only a subset
   of replicas get updated) is outside the scope of this protocol
   extension.  Instead, it is a function of the storage protocol and
   the metadata control protocol.

17.5.  Concatenated Map

   The map consists of an ordered set of N (deviceID, objectID, size)
   tuples.  Each successive tuple describes the next segment of the
   file.

17.6.  Nested Map

   The nested map is used to compose more complex maps out of simpler
   ones.  The map format is an ordered set of M sub-maps; each
   sub-map applies to a byte range within the file and has its own
   type, such as one of those introduced above.  Any level of nesting
   is allowed in order to build up complex aggregation schemes.
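   To summarize the aggregation schemes above, the following
   non-normative C sketch models them as a tagged union; the field
   and type names are illustrative and are not protocol XDR.

      #include <stdint.h>
      #include <stddef.h>

      typedef uint64_t deviceid_t;
      typedef uint64_t objectid_t;  /* filehandle, OSD object ID... */

      enum map_type { MAP_SIMPLE, MAP_BLOCK_EXTENT, MAP_STRIPED,
                      MAP_REPLICATED, MAP_CONCATENATED, MAP_NESTED };

      struct dev_obj { deviceid_t dev; objectid_t obj; };

      struct layout_map {
          enum map_type type;
          union {
              struct dev_obj simple;                      /* 17.1 */
              struct {                                    /* 17.2 */
                  struct { deviceid_t dev;
                           uint64_t block, off, len; } *ext;
                  size_t n_ext;
              } block;
              struct {                                    /* 17.3 */
                  struct dev_obj *devs; size_t n; uint64_t unit;
              } striped;
              struct {                                    /* 17.4 */
                  struct dev_obj *replicas; size_t n;
              } replicated;
              struct {                                    /* 17.5 */
                  struct { struct dev_obj where;
                           uint64_t size; } *seg;
                  size_t n_seg;
              } concat;
              struct {                                    /* 17.6 */
                  struct { uint64_t off, len;
                           struct layout_map *map; } *sub;
                  size_t n_sub;
              } nested;
          } u;
      };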
18.  Minor Versioning

   To address the requirement of an NFS protocol that can evolve as
   the need arises, the NFS version 4 protocol contains the rules and
   framework to allow for future minor changes or versioning.

   The base assumption with respect to minor versioning is that any
   future accepted minor version must follow the IETF process and be
   documented in a standards track RFC.  Therefore, each minor
   version number will correspond to an RFC.  Minor version zero of
   the NFS version 4 protocol is represented by the base NFSv4
   specification [6].  The COMPOUND procedure supports the encoding
   of the minor version being requested by the client.

   The following items represent the basic rules for the development
   of minor versions.  Note that a future minor version may decide to
   modify or add to the following rules as part of the minor version
   definition.

   1.   Procedures are not added or deleted.

        To maintain the general RPC model, NFS version 4 minor
        versions will not add to or delete procedures from the NFS
        program.

   2.   Minor versions may add operations to the COMPOUND and
        CB_COMPOUND procedures.

        The addition of operations to the COMPOUND and CB_COMPOUND
        procedures does not affect the RPC model.

        *  Minor versions may append attributes to GETATTR4args,
           bitmap4, and GETATTR4res.

           This allows for the expansion of the attribute model to
           allow for future growth or adaptation.

        *  Minor version X must append any new attributes after the
           last documented attribute.

           Since attribute results are specified as an opaque array
           of per-attribute XDR encoded results, the complexity of
           adding new attributes in the midst of the current
           definitions would be too burdensome.

   3.   Minor versions must not modify the structure of an existing
        operation's arguments or results.

        Again, the complexity of handling multiple structure
        definitions for a single operation is too burdensome.  New
        operations should be added instead of modifying existing
        structures for a minor version.

        This rule does not preclude the following adaptations in a
        minor version:

        *  adding bits to flag fields, such as new attributes to
           GETATTR's bitmap4 data type

        *  adding bits to existing attributes, like ACLs that have
           flag words

        *  extending enumerated types (including NFS4ERR_*) with new
           values

   4.   Minor versions may not modify the structure of existing
        attributes.

   5.   Minor versions may not delete operations.

        This prevents the potential reuse of a particular operation
        "slot" in a future minor version.

   6.   Minor versions may not delete attributes.

   7.   Minor versions may not delete flag bits or enumeration
        values.
   8.   Minor versions may declare an operation as mandatory to NOT
        implement.

        Specifying an operation as "mandatory to not implement" is
        equivalent to obsoleting an operation.  For the client, it
        means that the operation should not be sent to the server.
        For the server, an NFS error can be returned as opposed to
        "dropping" the request as an XDR decode error.  This approach
        allows for the obsolescence of an operation while maintaining
        its structure so that a future minor version can reintroduce
        the operation.

   9.   Minor versions may declare attributes mandatory to NOT
        implement.

   10.  Minor versions may declare flag bits or enumeration values as
        mandatory to NOT implement.

   11.  Minor versions may downgrade features from mandatory to
        recommended, or from recommended to optional.

   12.  Minor versions may upgrade features from optional to
        recommended, or from recommended to mandatory.

   13.  A client and server that support minor version X must support
        minor versions 0 (zero) through X-1 as well.

   14.  No new features may be introduced as mandatory in a minor
        version.

        This rule allows for the introduction of new functionality
        and forces the use of implementation experience before
        designating a feature as mandatory.

   15.  A client MUST NOT attempt to use a stateid, filehandle, or
        similar returned object from the COMPOUND procedure with
        minor version X for another COMPOUND procedure with minor
        version Y, where X != Y.
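   As a non-normative illustration of how a server might enforce the
   version-support rules above together with the
   NFS4ERR_MINOR_VERS_MISMATCH behavior described for COMPOUND (see
   Section 21.2), consider the following C sketch; the dispatch
   function and the result bookkeeping are hypothetical.

      #include <stdint.h>

      #define NFS4_OK                      0
      #define NFS4ERR_MINOR_VERS_MISMATCH  10021

      struct compound_res {
          uint32_t status;
          uint32_t n_results;  /* length of the results array */
      };

      /* A server supporting minor versions 0..highest_minor rejects
       * any other minorversion with a zero-length results array. */
      struct compound_res dispatch_compound(uint32_t minorversion,
                                            uint32_t highest_minor)
      {
          struct compound_res res = { NFS4_OK, 0 };
          if (minorversion > highest_minor) {
              res.status    = NFS4ERR_MINOR_VERS_MISMATCH;
              res.n_results = 0;  /* MUST be a zero-length array */
              return res;
          }
          /* ... evaluate operations in order, stopping at the first
           * non-NFS4_OK status ... */
          return res;
      }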
19.  Internationalization

   The primary internationalization (I18N) issue that NFS version 4
   must address concerns file names and other strings as used within
   the protocol.  The choice of string representation must allow
   reasonable name/string access to clients which use various
   languages.  The UTF-8 encoding of the UCS as defined by ISO10646
   [7] allows for this type of access and follows the policy
   described in "IETF Policy on Character Sets and Languages",
   RFC2277 [8].

   [RFC-XXX-stringprep-XXX], otherwise known as "stringprep",
   documents a framework for using Unicode/UTF-8 in networking
   protocols, so as "to increase the likelihood that string input and
   string comparison work in ways that make sense for typical users
   throughout the world."  A protocol must define a profile of
   stringprep "in order to fully specify the processing options."
   The remainder of this Internationalization section defines the NFS
   version 4 stringprep profiles.  Much of the terminology used for
   the remainder of this section comes from stringprep.

   There are three UTF-8 string types defined for NFS version 4:
   utf8str_cs, utf8str_cis, and utf8str_mixed.  Separate profiles are
   defined for each.  Each profile defines the following, as required
   by stringprep:

   o  The intended applicability of the profile

   o  The character repertoire that is the input and output to
      stringprep (which is Unicode 3.2 for the referenced version of
      stringprep)

   o  The mapping tables from stringprep used (as described in
      section 3 of stringprep)

   o  Any additional mapping tables specific to the profile

   o  The Unicode normalization used, if any (as described in section
      4 of stringprep)

   o  The tables from the stringprep listing of characters that are
      prohibited as output (as described in section 5 of stringprep)

   o  The bidirectional string testing used, if any (as described in
      section 6 of stringprep)

   o  Any additional characters that are prohibited as output
      specific to the profile

   Stringprep discusses Unicode characters, whereas NFS version 4
   renders UTF-8 characters.  Since there is a one-to-one mapping
   from UTF-8 to Unicode, wherever the remainder of this document
   refers to Unicode, the reader should assume UTF-8.

   Much of the text for the profiles comes from
   [RFC-XXX-nameprep-XXX].

19.1.  Stringprep Profile for the utf8str_cs Type

   Every use of the utf8str_cs type definition in the NFS version 4
   protocol specification follows the profile named nfs4_cs_prep.

19.1.1.  Intended Applicability of the nfs4_cs_prep Profile

   The utf8str_cs type is a case-sensitive string of UTF-8
   characters.  Its primary use in NFS version 4 is for naming
   components and pathnames.  Components and pathnames are stored on
   the server's filesystem.  Two valid distinct UTF-8 strings might
   be the same after processing via the utf8str_cs profile.  If the
   strings are two names inside a directory, the NFS version 4 server
   will need to either:

   o  disallow the creation of a second name if its post-processed
      form collides with that of an existing name, or

   o  allow the creation of the second name, but arrange so that,
      after post-processing, the second name is different from the
      post-processed form of the first name.

19.1.2.  Character Repertoire of nfs4_cs_prep

   The nfs4_cs_prep profile uses Unicode 3.2, as defined in
   stringprep's Appendix A.1.

19.1.3.  Mapping Used by nfs4_cs_prep

   The nfs4_cs_prep profile specifies mapping using the following
   table from stringprep:

      Table B.1

   Table B.2 is normally not part of the nfs4_cs_prep profile, as it
   is primarily for dealing with case-insensitive comparisons.
   However, if the NFS version 4 file server supports the
   case_insensitive filesystem attribute, and if case_insensitive is
   true, the NFS version 4 server MUST use Table B.2 (in addition to
   Table B.1) when processing utf8str_cs strings, and the NFS version
   4 client MUST assume that Tables B.1 and B.2 are being used.

   If the case_preserving attribute is present and set to false, then
   the NFS version 4 server MUST use Table B.2 to map case when
   processing utf8str_cs strings.  Whether the server maps from lower
   to upper case or from upper to lower case is an implementation
   dependency.
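   The case_insensitive handling above amounts to applying a
   case-folding map before comparison.  The following non-normative C
   sketch shows the idea for the ASCII subset only; a real
   implementation would apply stringprep's Table B.2 over the full
   Unicode repertoire, and the function names are illustrative.

      #include <ctype.h>
      #include <stdbool.h>
      #include <string.h>

      /* Stand-in for stringprep Table B.2 case folding; handles the
       * ASCII subset only. */
      static void casefold_ascii(const char *in, char *out,
                                 size_t outlen)
      {
          size_t i;
          for (i = 0; in[i] != '\0' && i + 1 < outlen; i++)
              out[i] = (char)tolower((unsigned char)in[i]);
          out[i] = '\0';
      }

      /* Compare two component names the way a case-insensitive
       * server would: fold both sides, then compare the
       * post-processed forms. */
      bool names_collide(const char *a, const char *b)
      {
          char fa[256], fb[256];
          casefold_ascii(a, fa, sizeof(fa));
          casefold_ascii(b, fb, sizeof(fb));
          return strcmp(fa, fb) == 0;
      }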
19.1.4.  Normalization Used by nfs4_cs_prep

   The nfs4_cs_prep profile does not specify a normalization form.  A
   later revision of this specification may specify a particular
   normalization form.  Therefore, the server and client can expect
   that they may receive unnormalized characters within protocol
   requests and responses.  If the operating environment requires
   normalization, then the implementation must normalize utf8str_cs
   strings within the protocol before presenting the information to
   an application (at the client) or local filesystem (at the
   server).

19.1.5.  Prohibited Output for nfs4_cs_prep

   The nfs4_cs_prep profile specifies prohibiting the use of the
   following tables from stringprep:

      Table C.3
      Table C.4
      Table C.5
      Table C.6
      Table C.7
      Table C.8
      Table C.9

19.1.6.  Bidirectional Output for nfs4_cs_prep

   The nfs4_cs_prep profile does not specify any checking of
   bidirectional strings.

19.2.  Stringprep Profile for the utf8str_cis Type

   Every use of the utf8str_cis type definition in the NFS version 4
   protocol specification follows the profile named nfs4_cis_prep.

19.2.1.  Intended Applicability of the nfs4_cis_prep Profile

   The utf8str_cis type is a case-insensitive string of UTF-8
   characters.  Its primary use in NFS version 4 is for naming NFS
   servers.

19.2.2.  Character Repertoire of nfs4_cis_prep

   The nfs4_cis_prep profile uses Unicode 3.2, as defined in
   stringprep's Appendix A.1.

19.2.3.  Mapping Used by nfs4_cis_prep

   The nfs4_cis_prep profile specifies mapping using the following
   tables from stringprep:

      Table B.1
      Table B.2

19.2.4.  Normalization Used by nfs4_cis_prep

   The nfs4_cis_prep profile specifies using Unicode normalization
   form KC, as described in stringprep.

19.2.5.  Prohibited Output for nfs4_cis_prep

   The nfs4_cis_prep profile specifies prohibiting the use of the
   following tables from stringprep:

      Table C.1.2
      Table C.2.2
      Table C.3
      Table C.4
      Table C.5
      Table C.6
      Table C.7
      Table C.8
      Table C.9

19.2.6.  Bidirectional Output for nfs4_cis_prep

   The nfs4_cis_prep profile specifies checking bidirectional strings
   as described in stringprep's section 6.

19.3.  Stringprep Profile for the utf8str_mixed Type

   Every use of the utf8str_mixed type definition in the NFS version
   4 protocol specification follows the profile named
   nfs4_mixed_prep.

19.3.1.  Intended Applicability of the nfs4_mixed_prep Profile

   The utf8str_mixed type is a string of UTF-8 characters, with a
   prefix that is case sensitive, a separator equal to '@', and a
   suffix that is a fully qualified domain name.  Its primary use in
   NFS version 4 is for naming principals identified in an Access
   Control Entry.

19.3.2.  Character Repertoire of nfs4_mixed_prep

   The nfs4_mixed_prep profile uses Unicode 3.2, as defined in
   stringprep's Appendix A.1.

19.3.3.  Mapping Used by nfs4_mixed_prep

   For the prefix and the separator of a utf8str_mixed string, the
   nfs4_mixed_prep profile specifies mapping using the following
   table from stringprep:

      Table B.1

   For the suffix of a utf8str_mixed string, the nfs4_mixed_prep
   profile specifies mapping using the following tables from
   stringprep:

      Table B.1
      Table B.2

19.3.4.  Normalization Used by nfs4_mixed_prep

   The nfs4_mixed_prep profile specifies using Unicode normalization
   form KC, as described in stringprep.

19.3.5.  Prohibited Output for nfs4_mixed_prep

   The nfs4_mixed_prep profile specifies prohibiting the use of the
   following tables from stringprep:

      Table C.1.2
      Table C.2.2
      Table C.3
      Table C.4
      Table C.5
      Table C.6
      Table C.7
      Table C.8
      Table C.9

19.3.6.  Bidirectional Output for nfs4_mixed_prep

   The nfs4_mixed_prep profile specifies checking bidirectional
   strings as described in stringprep's section 6.

19.4.  UTF-8 Related Errors

   Where the client sends an invalid UTF-8 string, the server should
   return an NFS4ERR_INVAL (Table 5) error.  This includes cases in
   which inappropriate prefixes are detected and where the count
   includes trailing bytes that do not constitute a full UCS
   character.

   Where the client-supplied string is valid UTF-8 but contains
   characters that are not supported by the server as a value for
   that string (e.g., names containing characters that have more than
   two octets on a filesystem that supports Unicode characters only),
   the server should return an NFS4ERR_BADCHAR (Table 5) error.

   Where a UTF-8 string is used as a file name, and the filesystem,
   while supporting all of the characters within the name, does not
   allow that particular name to be used, the server should return
   the error NFS4ERR_BADNAME (Table 5).  This includes situations in
   which the server filesystem imposes a normalization constraint on
   name strings, but will also include such situations as filesystem
   prohibitions of "." and ".." as file names for certain operations,
   and other such constraints.
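   The NFS4ERR_INVAL cases in Section 19.4 ("inappropriate prefixes"
   and a count that ends mid-character) correspond to simple
   structural checks on the octet sequence.  A non-normative C
   sketch, deliberately incomplete (it does not reject overlong forms
   or surrogate code points):

      #include <stddef.h>

      #define NFS4_OK        0
      #define NFS4ERR_INVAL  22

      /* Minimal well-formedness check for a UTF-8 buffer. */
      int utf8_check(const unsigned char *s, size_t len)
      {
          size_t i = 0;
          while (i < len) {
              size_t need;
              unsigned char c = s[i];
              if      (c < 0x80)           need = 0;  /* ASCII      */
              else if ((c & 0xE0) == 0xC0) need = 1;  /* 2-byte seq */
              else if ((c & 0xF0) == 0xE0) need = 2;  /* 3-byte seq */
              else if ((c & 0xF8) == 0xF0) need = 3;  /* 4-byte seq */
              else return NFS4ERR_INVAL;   /* inappropriate prefix  */
              if (need > len - i - 1)
                  return NFS4ERR_INVAL;    /* truncated character   */
              for (size_t j = 1; j <= need; j++)
                  if ((s[i + j] & 0xC0) != 0x80)
                      return NFS4ERR_INVAL; /* bad continuation     */
              i += need + 1;
          }
          return NFS4_OK;
      }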
20.  Error Definitions

   NFS error numbers are assigned to failed operations within a
   compound request.  A compound request contains a number of NFS
   operations that have their results encoded in sequence in a
   compound reply.  The results of successful operations will consist
   of an NFS4_OK status followed by the encoded results of the
   operation.  If an NFS operation fails, an error status will be
   entered in the reply and the compound request will be terminated.

   Protocol Error Definitions

   +------------------------------+--------+---------------------------+
   | Error                        | Number | Description               |
   +------------------------------+--------+---------------------------+
   | NFS4_OK                      | 0      | Indicates the operation completed successfully. |
   | NFS4ERR_ACCESS               | 13     | Permission denied.  The caller does not have the correct permission to perform the requested operation.  Contrast this with NFS4ERR_PERM, which restricts itself to owner or privileged user permission failures. |
   | NFS4ERR_ATTRNOTSUPP          | 10032  | An attribute specified is not supported by the server.  Does not apply to the GETATTR operation. |
   | NFS4ERR_ADMIN_REVOKED        | 10047  | Due to administrator intervention, the lockowner's record locks, share reservations, and delegations have been revoked by the server. |
   | NFS4ERR_BADCHAR              | 10040  | A UTF-8 string contains a character which is not supported by the server in the context in which it is being used. |
   | NFS4ERR_BAD_COOKIE           | 10003  | READDIR cookie is stale. |
   | NFS4ERR_BADHANDLE            | 10001  | Illegal NFS filehandle.  The filehandle failed internal consistency checks. |
   | NFS4ERR_BADIOMODE            | TBD    | Layout iomode is invalid. |
   | NFS4ERR_BADLAYOUT            | TBD    | Layout specified is invalid. |
   | NFS4ERR_BADNAME              | 10041  | A name string in a request consists of valid UTF-8 characters supported by the server, but the name is not supported by the server as a valid name for the current operation. |
   | NFS4ERR_BADOWNER             | 10039  | An owner, owner_group, or ACL attribute value can not be translated to local representation. |
   | NFS4ERR_BADTYPE              | 10007  | An attempt was made to create an object of a type not supported by the server. |
   | NFS4ERR_BAD_RANGE            | 10042  | The range for a LOCK, LOCKT, or LOCKU operation is not appropriate to the allowable range of offsets for the server. |
   | NFS4ERR_BAD_SEQID            | 10026  | The sequence number in a locking request is neither the next expected number nor the last number processed. |
   | NFS4ERR_BADSESSION           | TBD    | TBD |
   | NFS4ERR_BADSLOT              | TBD    | TBD |
   | NFS4ERR_BAD_STATEID          | 10025  | A stateid generated by the current server instance, but which does not designate any locking state (either current or superseded) for a current lockowner-file pair, was used. |
   | NFS4ERR_BADXDR               | 10036  | The server encountered an XDR decoding error while processing an operation. |
   | NFS4ERR_CLID_INUSE           | 10017  | The SETCLIENTID operation has found that a client id is already in use by another client. |
   | NFS4ERR_DEADLOCK             | 10045  | The server has been able to determine a file locking deadlock condition for a blocking lock request. |
   | NFS4ERR_DELAY                | 10008  | The server initiated the request, but was not able to complete it in a timely fashion.  The client should wait and then try the request with a new RPC transaction ID.  For example, this error should be returned from a server that supports hierarchical storage and receives a request to process a file that has been migrated.  In this case, the server should start the immigration process and respond to the client with this error.  This error may also occur when a necessary delegation recall makes processing a request in a timely fashion impossible. |
   | NFS4ERR_DELEG_ALREADY_WANTED | TBD    | The client has already registered that it wants a delegation. |
   | NFS4ERR_DENIED               | 10010  | An attempt to lock a file is denied.  Since this may be a temporary condition, the client is encouraged to retry the lock request until the lock is accepted. |
   | NFS4ERR_DIRDELEG_UNAVAIL     | TBD    | TBD |
   | NFS4ERR_DQUOT                | 69     | Resource (quota) hard limit exceeded.  The user's resource limit on the server has been exceeded. |
   | NFS4ERR_EXIST                | 17     | File exists.  The file specified already exists. |
   | NFS4ERR_EXPIRED              | 10011  | A lease has expired that is being used in the current operation. |
   | NFS4ERR_FBIG                 | 27     | File too large.  The operation would have caused a file to grow beyond the server's limit. |
   | NFS4ERR_FHEXPIRED            | 10014  | The filehandle provided is volatile and has expired at the server. |
   | NFS4ERR_FILE_OPEN            | 10046  | The operation can not be successfully processed because a file involved in the operation is currently open. |
   | NFS4ERR_GRACE                | 10013  | The server is in its recovery or grace period, which should match the lease period of the server. |
   | NFS4ERR_INVAL                | 22     | Invalid argument or unsupported argument for an operation.  Two examples are attempting a READLINK on an object other than a symbolic link or specifying a value for an enum field that is not defined in the protocol (e.g., nfs_ftype4). |
   | NFS4ERR_IO                   | 5      | I/O error.  A hard error (for example, a disk error) occurred while processing the requested operation. |
   | NFS4ERR_ISDIR                | 21     | Is a directory.  The caller specified a directory in a non-directory operation. |
   | NFS4ERR_LAYOUTTRYLATER       | TBD    | Layouts are temporarily unavailable for the file; the client should retry later. |
   | NFS4ERR_LAYOUTUNAVAILABLE    | TBD    | Layouts are not available for the file or its containing file system. |
   | NFS4ERR_LEASE_MOVED          | 10031  | A lease being renewed is associated with a filesystem that has been migrated to a new server. |
   | NFS4ERR_LOCKED               | 10012  | A read or write operation was attempted on a locked file. |
   | NFS4ERR_LOCK_NOTSUPP         | 10043  | Server does not support atomic upgrade or downgrade of locks. |
   | NFS4ERR_LOCK_RANGE           | 10028  | A lock request is operating on a sub-range of a current lock for the lock owner and the server does not support this type of request. |
   | NFS4ERR_LOCKS_HELD           | 10037  | A CLOSE was attempted and file locks would exist after the CLOSE. |
   | NFS4ERR_MINOR_VERS_MISMATCH  | 10021  | The server has received a request that specifies an unsupported minor version.  The server must return a COMPOUND4res with a zero-length operations result array. |
   | NFS4ERR_MLINK                | 31     | Too many hard links. |
   | NFS4ERR_MOVED                | 10019  | The filesystem which contains the current filehandle object is not present at the server.  It may have been relocated, migrated to another server, or may have never been present.  The client may obtain the new filesystem location by obtaining the "fs_locations" attribute for the current filehandle.  For further discussion, refer to the section "Multi-server Name Space". |
   | NFS4ERR_NAMETOOLONG          | 63     | The filename in an operation was too long. |
   | NFS4ERR_NOENT                | 2      | No such file or directory.  The file or directory name specified does not exist. |
   | NFS4ERR_NOFILEHANDLE         | 10020  | The logical current filehandle value (or, in the case of RESTOREFH, the saved filehandle value) has not been set properly.  This may be a result of a malformed COMPOUND operation (i.e., no PUTFH or PUTROOTFH before an operation that requires the current filehandle be set). |
   | NFS4ERR_NO_GRACE             | 10033  | A reclaim of client state has fallen outside of the grace period of the server.  As a result, the server can not guarantee that conflicting state has not been provided to another client. |
   | NFS4ERR_NOMATCHING_LAYOUT    | TBD    | Client has no matching layout (segment) to return. |
   | NFS4ERR_NOSPC                | 28     | No space left on device.  The operation would have caused the server's filesystem to exceed its limit. |
   | NFS4ERR_NOTDIR               | 20     | Not a directory.  The caller specified a non-directory in a directory operation. |
   | NFS4ERR_NOTEMPTY             | 66     | An attempt was made to remove a directory that was not empty. |
   | NFS4ERR_NOTSUPP              | 10004  | Operation is not supported. |
   | NFS4ERR_NOT_SAME             | 10027  | This error is returned by the VERIFY operation to signify that the attributes compared were not the same as provided in the client's request. |
   | NFS4ERR_NXIO                 | 6      | I/O error.  No such device or address. |
   | NFS4ERR_OLD_STATEID          | 10024  | A stateid which designates the locking state for a lockowner-file at an earlier time was used. |
   | NFS4ERR_OPENMODE             | 10038  | The client attempted a READ, WRITE, LOCK, or SETATTR operation not sanctioned by the stateid passed (e.g., writing to a file opened only for read). |
   | NFS4ERR_OP_ILLEGAL           | 10044  | An illegal operation value has been specified in the argop field of a COMPOUND or CB_COMPOUND procedure. |
   | NFS4ERR_PERM                 | 1      | Not owner.  The operation was not allowed because the caller is either not a privileged user (root) or not the owner of the target of the operation. |
   | NFS4ERR_RECALLCONFLICT       | TBD    | Layout is unavailable due to a conflicting LAYOUTRECALL that is in progress. |
   | NFS4ERR_RECLAIM_BAD          | 10034  | The reclaim provided by the client does not match any of the server's state consistency checks and is bad. |
   | NFS4ERR_RECLAIM_CONFLICT     | 10035  | The reclaim provided by the client has encountered a conflict and can not be provided.  Potentially indicates a misbehaving client. |
   | NFS4ERR_RESOURCE             | 10018  | For the processing of the COMPOUND procedure, the server may exhaust available resources and can not continue processing operations within the COMPOUND procedure.  This error will be returned from the server in those instances of resource exhaustion related to the processing of the COMPOUND procedure. |
   | NFS4ERR_RESTOREFH            | 10030  | The RESTOREFH operation does not have a saved filehandle (identified by SAVEFH) to operate upon. |
   | NFS4ERR_ROFS                 | 30     | Read-only filesystem.  A modifying operation was attempted on a read-only filesystem. |
   | NFS4ERR_SAME                 | 10009  | This error is returned by the NVERIFY operation to signify that the attributes compared were the same as provided in the client's request. |
   | NFS4ERR_SERVERFAULT          | 10006  | An error occurred on the server which does not map to any of the legal NFS version 4 protocol error values.  The client should translate this into an appropriate error.  UNIX clients may choose to translate this to EIO. |
   | NFS4ERR_SHARE_DENIED         | 10015  | An attempt to OPEN a file with a share reservation has failed because of a share conflict. |
   | NFS4ERR_STALE                | 70     | Invalid filehandle.  The filehandle given in the arguments was invalid.  The file referred to by that filehandle no longer exists or access to it has been revoked. |
   | NFS4ERR_STALE_CLIENTID       | 10022  | A clientid not recognized by the server was used in a locking or SETCLIENTID_CONFIRM request. |
   | NFS4ERR_STALE_STATEID        | 10023  | A stateid generated by an earlier server instance was used. |
   | NFS4ERR_SYMLINK              | 10029  | The current filehandle provided for a LOOKUP is not a directory but a symbolic link.  Also used if the final component of the OPEN path is a symbolic link. |
   | NFS4ERR_TOOSMALL             | 10005  | The encoded response to a READDIR request exceeds the size limit set by the initial request. |
   | NFS4ERR_UNKNOWN_LAYOUTTYPE   | TBD    | Layout type is unknown. |
   | NFS4ERR_WRONGSEC             | 10016  | The security mechanism being used by the client for the operation does not match the server's security policy.  The client should change the security mechanism being used and retry the operation. |
   | NFS4ERR_XDEV                 | 18     | Attempt to do an operation between different fsids. |
   | NFS4ERR_                     | TBD    | TBD |
   +------------------------------+--------+---------------------------+

                                  Table 5

21.  NFS version 4.1 Procedures

21.1.  Procedure 0: NULL - No Operation

21.1.1.  SYNOPSIS

21.1.2.  ARGUMENTS

      void;

21.1.3.  RESULTS

      void;

21.1.4.  DESCRIPTION

   Standard NULL procedure.  Void argument, void response.  This
   procedure has no functionality associated with it.  Because of
   this, it is sometimes used to measure the overhead of processing a
   service request.  Therefore, the server should ensure that no
   unnecessary work is done in servicing this procedure.

21.1.5.  ERRORS

   None.

21.2.  Procedure 1: COMPOUND - Compound Operations

21.2.1.  SYNOPSIS

      compoundargs -> compoundres

21.2.2.  ARGUMENTS

      union nfs_argop4 switch (nfs_opnum4 argop) {
          case : ;
          ...
      };

      struct COMPOUND4args {
          utf8str_cs  tag;
          uint32_t    minorversion;
          nfs_argop4  argarray<>;
      };

21.2.3.  RESULTS

      union nfs_resop4 switch (nfs_opnum4 resop) {
          case : ;
          ...
      };

      struct COMPOUND4res {
          nfsstat4    status;
          utf8str_cs  tag;
          nfs_resop4  resarray<>;
      };

21.2.4.  DESCRIPTION

   The COMPOUND procedure is used to combine one or more of the NFS
   operations into a single RPC request.  The main NFS RPC program
   has two main procedures: NULL and COMPOUND.  All other operations
   use the COMPOUND procedure as a wrapper.

   The server interprets each of the operations in turn.
   If an operation is executed by the server and the status of that
   operation is NFS4_OK, then the next operation in the COMPOUND
   procedure is executed.  The server continues this process until
   there are no more operations to be executed or until one of the
   operations has a status value other than NFS4_OK.

   In the processing of the COMPOUND procedure, the server may find
   that it does not have the available resources to execute any or
   all of the operations within the COMPOUND sequence.  In this case,
   the error NFS4ERR_RESOURCE will be returned for the particular
   operation within the COMPOUND procedure where the resource
   exhaustion occurred.  This assumes that all previous operations
   within the COMPOUND sequence have been evaluated successfully.
   The results for all of the evaluated operations must be returned
   to the client.

   The server will generally choose between two methods of decoding
   the client's request.  The first would be the traditional one-pass
   XDR decode.  If there is an XDR decoding error in this case, the
   RPC XDR decode error would be returned.  The second method would
   be to make an initial pass to decode the basic COMPOUND request
   and then to XDR decode the individual operations; the most
   interesting case is the decoding of attributes.  With this method,
   the server may encounter an XDR decode error during the second
   pass; if it does, the server would return the error NFS4ERR_BADXDR
   to signify the decode error.

   The COMPOUND arguments contain a "minorversion" field.  The
   initial and default value for this field is 0 (zero).  This field
   will be used by future minor versions such that the client can
   communicate to the server what minor version is being requested.
   If the server receives a COMPOUND procedure with a minorversion
   field value that it does not support, the server MUST return an
   error of NFS4ERR_MINOR_VERS_MISMATCH and a zero-length resultdata
   array.

   Contained within the COMPOUND results is a "status" field.  If the
   results array length is non-zero, this status must be equivalent
   to the status of the last operation that was executed within the
   COMPOUND procedure.  Therefore, if an operation incurred an error,
   then the "status" value will be the same error value as is being
   returned for the operation that failed.

   Note that operations 0 (zero) and 1 (one) are not defined for the
   COMPOUND procedure.  Operation 2 is not defined but reserved for
   future definition and use with minor versioning.  If the server
   receives an operation array that contains operation 2 and the
   minorversion field has a value of 0 (zero), an error of
   NFS4ERR_OP_ILLEGAL, as described in the next paragraph, is
   returned to the client.  If an operation array contains an
   operation 2 and the minorversion field is non-zero and the server
   does not support the minor version, the server returns an error of
   NFS4ERR_MINOR_VERS_MISMATCH.  Therefore, the
   NFS4ERR_MINOR_VERS_MISMATCH error takes precedence over all other
   errors.

   It is possible that the server receives a request that contains an
   operation that is less than the first legal operation (OP_ACCESS)
   or greater than the last legal operation (OP_RELEASE_LOCKOWNER).
   In this case, the server's response will encode the opcode
   OP_ILLEGAL rather than the illegal opcode of the request.  The
   status field in the ILLEGAL return results will be set to
   NFS4ERR_OP_ILLEGAL.  The COMPOUND procedure's return status will
   also be NFS4ERR_OP_ILLEGAL.
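   The sequential evaluation rule above can be captured in a short,
   non-normative C sketch; the eval_op() evaluator and the result
   bookkeeping are hypothetical.

      #include <stdint.h>
      #include <stddef.h>

      #define NFS4_OK 0

      struct nfs_argop { uint32_t opnum; /* plus per-op arguments */ };
      struct nfs_resop { uint32_t opnum; uint32_t status; };

      /* Hypothetical per-operation evaluator. */
      extern uint32_t eval_op(const struct nfs_argop *op,
                              struct nfs_resop *res);

      /* Operations run in order; processing stops at the first
       * non-NFS4_OK status, the procedure's status equals the status
       * of the last operation evaluated, and results for all
       * evaluated operations are returned. */
      uint32_t eval_compound(const struct nfs_argop *ops, size_t n_ops,
                             struct nfs_resop *results,
                             size_t *n_results)
      {
          uint32_t status = NFS4_OK;
          size_t i;
          for (i = 0; i < n_ops; i++) {
              status = eval_op(&ops[i], &results[i]);
              if (status != NFS4_OK) { i++; break; }
          }
          *n_results = i;
          return status;
      }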
21.2.5.  IMPLEMENTATION

   Since an error of any type may occur after only a portion of the
   operations have been evaluated, the client must be prepared to
   recover from any failure.  If the source of an NFS4ERR_RESOURCE
   error was a complex or lengthy set of operations, it is likely that
   if the number of operations were reduced the server would be able
   to evaluate them successfully.  Therefore, the client is
   responsible for dealing with this type of complexity in recovery.

21.2.6.  ERRORS

   All errors defined in the protocol

22.  NFS version 4.1 Operations

22.1.  Operation 3: ACCESS - Check Access Rights

22.1.1.  SYNOPSIS

     (cfh), accessreq -> supported, accessrights

22.1.2.  ARGUMENTS

     const ACCESS4_READ      = 0x00000001;
     const ACCESS4_LOOKUP    = 0x00000002;
     const ACCESS4_MODIFY    = 0x00000004;
     const ACCESS4_EXTEND    = 0x00000008;
     const ACCESS4_DELETE    = 0x00000010;
     const ACCESS4_EXECUTE   = 0x00000020;

     struct ACCESS4args {
             /* CURRENT_FH: object */
             uint32_t        access;
     };

22.1.3.  RESULTS

     struct ACCESS4resok {
             uint32_t        supported;
             uint32_t        access;
     };

     union ACCESS4res switch (nfsstat4 status) {
      case NFS4_OK:
             ACCESS4resok    resok4;
      default:
             void;
     };

22.1.4.  DESCRIPTION

   ACCESS determines the access rights that a user, as identified by
   the credentials in the RPC request, has with respect to the file
   system object specified by the current filehandle.  The client
   encodes the set of access rights that are to be checked in the bit
   mask "access".  The server checks the permissions encoded in the
   bit mask.  If a status of NFS4_OK is returned, two bit masks are
   included in the response.  The first, "supported", represents the
   access rights that the server can verify reliably.  The second,
   "access", represents the access rights available to the user for
   the filehandle provided.

   Note that the supported field will contain only as many values as
   were originally sent in the arguments.  For example, if the client
   sends an ACCESS operation with only the ACCESS4_READ value set and
   the server supports this value, the server will return only
   ACCESS4_READ even if it could have reliably checked other values.

   The results of this operation are necessarily advisory in nature.
   A return status of NFS4_OK and the appropriate bit set in the bit
   mask does not imply that such access will be allowed to the file
   system object in the future.  This is because access rights can be
   revoked by the server at any time.

   The following access permissions may be requested:

   ACCESS4_READ     Read data from file or read a directory.

   ACCESS4_LOOKUP   Look up a name in a directory (no meaning for
                    non-directory objects).

   ACCESS4_MODIFY   Rewrite existing file data or modify existing
                    directory entries.

   ACCESS4_EXTEND   Write new data or add directory entries.

   ACCESS4_DELETE   Delete an existing directory entry.

   ACCESS4_EXECUTE  Execute file (no meaning for a directory).

   On success, the current filehandle retains its value.
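   As an illustration of how the two masks interact, the sketch below
   shows one way a client might interpret them; the three-way answer
   type is an assumption of the sketch, not part of the protocol.

     /* Sketch of client-side interpretation of ACCESS results. */
     #include <stdint.h>

     #define ACCESS4_READ    0x00000001
     #define ACCESS4_MODIFY  0x00000004

     enum access_answer { ACCESS_ALLOWED, ACCESS_DENIED, ACCESS_UNKNOWN };

     /*
      * A right is known to be granted only if its bit is set in BOTH
      * "supported" and "access".  If the bit is missing from
      * "supported", the server could not verify it and the client
      * must fall back to simply attempting the operation.
      */
     enum access_answer check_right(uint32_t supported, uint32_t access,
                                    uint32_t right)
     {
         if (!(supported & right))
             return ACCESS_UNKNOWN;
         return (access & right) ? ACCESS_ALLOWED : ACCESS_DENIED;
     }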
22.1.5.  IMPLEMENTATION

   In general, it is not sufficient for the client to attempt to
   deduce access permissions by inspecting the uid, gid, and mode
   fields in the file attributes or by attempting to interpret the
   contents of the ACL attribute.  This is because the server may
   perform uid or gid mapping or enforce additional access control
   restrictions.  It is also possible that the server may not be in
   the same ID space as the client.  In these cases (and perhaps
   others), the client cannot reliably perform an access check with
   only current file attributes.

   In the NFS version 2 protocol, the only reliable way to determine
   whether an operation was allowed was to try it and see if it
   succeeded or failed.  Using the ACCESS operation in the NFS version
   4 protocol, the client can ask the server to indicate whether or
   not one or more classes of operations are permitted.  The ACCESS
   operation is provided to allow clients to check before doing a
   series of operations which will result in an access failure.  The
   OPEN operation provides a point where the server can verify access
   to the file object and a method to return that information to the
   client.  The ACCESS operation is still useful for directory
   operations or for use in the case that the UNIX "access" API is
   used on the client.

   The information returned by the server in response to an ACCESS
   call is not permanent.  It was correct at the exact time that the
   server performed the checks, but not necessarily afterwards.  The
   server can revoke access permission at any time.

   The client should use the effective credentials of the user to
   build the authentication information in the ACCESS request used to
   determine access rights.  It is the effective user and group
   credentials that are used in subsequent read and write operations.

   Many implementations do not directly support the ACCESS4_DELETE
   permission.  Operating systems like UNIX will ignore the
   ACCESS4_DELETE bit if set on an access request on a non-directory
   object.  In these systems, delete permission on a file is
   determined by the access permissions on the directory in which the
   file resides, instead of being determined by the permissions of the
   file itself.  Therefore, the mask returned enumerating which access
   rights can be determined will have the ACCESS4_DELETE value set to
   0.  This indicates to the client that the server was unable to
   check that particular access right.  The ACCESS4_DELETE bit in the
   access mask returned will then be ignored by the client.

22.1.6.  ERRORS

   NFS4ERR_ACCESS NFS4ERR_BADHANDLE NFS4ERR_BADXDR NFS4ERR_DELAY
   NFS4ERR_FHEXPIRED NFS4ERR_INVAL NFS4ERR_IO NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE NFS4ERR_RESOURCE NFS4ERR_SERVERFAULT
   NFS4ERR_STALE
22.2.  Operation 4: CLOSE - Close File

22.2.1.  SYNOPSIS

     (cfh), seqid, open_stateid -> open_stateid

22.2.2.  ARGUMENTS

     struct CLOSE4args {
             /* CURRENT_FH: object */
             seqid4          seqid;
             stateid4        open_stateid;
     };

22.2.3.  RESULTS

     union CLOSE4res switch (nfsstat4 status) {
      case NFS4_OK:
             stateid4        open_stateid;
      default:
             void;
     };

22.2.4.  DESCRIPTION

   The CLOSE operation releases share reservations for the regular or
   named attribute file as specified by the current filehandle.  The
   share reservations and other state information released at the
   server as a result of this CLOSE are only associated with the
   supplied stateid.  The sequence id provides for the correct
   ordering.  State associated with other OPENs is not affected.

   If record locks are held, the client SHOULD release all locks
   before issuing a CLOSE.  The server MAY free all outstanding locks
   on CLOSE but some servers may not support the CLOSE of a file that
   still has record locks held.  The server MUST return failure if any
   locks would exist after the CLOSE.

   On success, the current filehandle retains its value.

22.2.5.  IMPLEMENTATION

   Even though CLOSE returns a stateid, this stateid is not useful to
   the client and should be treated as deprecated.  CLOSE "shuts down"
   the state associated with all OPENs for the file by a single
   open_owner.  As noted above, CLOSE will either release all file
   locking state or return an error.  Therefore, the stateid returned
   by CLOSE is not useful for operations that follow.

22.2.6.  ERRORS

   NFS4ERR_ADMIN_REVOKED NFS4ERR_BADHANDLE NFS4ERR_BAD_SEQID
   NFS4ERR_BAD_STATEID NFS4ERR_BADXDR NFS4ERR_DELAY NFS4ERR_EXPIRED
   NFS4ERR_FHEXPIRED NFS4ERR_INVAL NFS4ERR_ISDIR NFS4ERR_LEASE_MOVED
   NFS4ERR_LOCKS_HELD NFS4ERR_MOVED NFS4ERR_NOFILEHANDLE
   NFS4ERR_OLD_STATEID NFS4ERR_RESOURCE NFS4ERR_SERVERFAULT
   NFS4ERR_STALE NFS4ERR_STALE_STATEID

22.3.  Operation 5: COMMIT - Commit Cached Data

22.3.1.  SYNOPSIS

     (cfh), offset, count -> verifier

22.3.2.  ARGUMENTS

     struct COMMIT4args {
             /* CURRENT_FH: file */
             offset4         offset;
             count4          count;
     };

22.3.3.  RESULTS

     struct COMMIT4resok {
             verifier4       writeverf;
     };

     union COMMIT4res switch (nfsstat4 status) {
      case NFS4_OK:
             COMMIT4resok    resok4;
      default:
             void;
     };

22.3.4.  DESCRIPTION

   The COMMIT operation forces or flushes data to stable storage for
   the file specified by the current filehandle.  The flushed data is
   that which was previously written with a WRITE operation which had
   the stable field set to UNSTABLE4.

   The offset specifies the position within the file where the flush
   is to begin.  An offset value of 0 (zero) means to flush data
   starting at the beginning of the file.  The count specifies the
   number of bytes of data to flush.  If count is 0 (zero), a flush
   from offset to the end of the file is done.

   The server returns a write verifier upon successful completion of
   the COMMIT.  The write verifier is used by the client to determine
   if the server has restarted or rebooted between the initial
   WRITE(s) and the COMMIT.  The client does this by comparing the
   write verifier returned from the initial writes and the verifier
   returned by the COMMIT operation.  The server must vary the value
   of the write verifier at each server event or instantiation that
   may lead to a loss of uncommitted data.  Most commonly this occurs
   when the server is rebooted; however, other events at the server
   may result in uncommitted data loss as well.

   On success, the current filehandle retains its value.
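   The offset/count semantics above can be summarized with a short,
   purely illustrative helper; the byte_range type and the file_size
   parameter are assumptions of the sketch, not protocol elements.

     #include <stdint.h>

     struct byte_range {
         uint64_t start;
         uint64_t end;       /* exclusive */
     };

     /* count == 0 means "flush from offset to end of file".
      * Overflow and bounds checking are elided for brevity. */
     struct byte_range commit_range(uint64_t offset, uint64_t count,
                                    uint64_t file_size)
     {
         struct byte_range r;
         r.start = offset;
         r.end   = (count == 0) ? file_size : offset + count;
         return r;
     }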
22.3.5.  IMPLEMENTATION

   The COMMIT operation is similar in operation and semantics to the
   POSIX fsync(2) system call that synchronizes a file's state with
   the disk (file data and metadata is flushed to disk or stable
   storage).  COMMIT performs the same operation for a client,
   flushing any unsynchronized data and metadata on the server to the
   server's disk or stable storage for the specified file.  Like
   fsync(2), it may be that there is some modified data or no modified
   data to synchronize.  The data may have been synchronized by the
   server's normal periodic buffer synchronization activity.  COMMIT
   should return NFS4_OK, unless there has been an unexpected error.

   COMMIT differs from fsync(2) in that it is possible for the client
   to flush a range of the file (most likely triggered by a buffer-
   reclamation scheme on the client before the file has been
   completely written).

   The server implementation of COMMIT is reasonably simple.  If the
   server receives a full file COMMIT request, that is, starting at
   offset 0 with count 0, it should do the equivalent of fsync()'ing
   the file.  Otherwise, it should arrange for the cached data in the
   range specified by offset and count to be flushed to stable
   storage.  In both cases, any metadata associated with the file must
   be flushed to stable storage before returning.  It is not an error
   for there to be nothing to flush on the server.  This means that
   the data and metadata that needed to be flushed have already been
   flushed or lost during the last server failure.

   The client implementation of COMMIT is a little more complex.
   There are two reasons for wanting to commit a client buffer to
   stable storage.  The first is that the client wants to reuse a
   buffer.  In this case, the offset and count of the buffer are sent
   to the server in the COMMIT request.  The server then flushes any
   cached data based on the offset and count, and flushes any metadata
   associated with the file.  It then returns the status of the flush
   and the write verifier.  The other reason for the client to
   generate a COMMIT is for a full file flush, such as may be done at
   close.  In this case, the client would gather all of the buffers
   for this file that contain uncommitted data, do the COMMIT
   operation with an offset of 0 and count of 0, and then free all of
   those buffers.  Any other dirty buffers would be sent to the server
   in the normal fashion.

   After a buffer is written by the client with the stable parameter
   set to UNSTABLE4, the buffer must be considered as modified by the
   client until the buffer has either been flushed via a COMMIT
   operation or written via a WRITE operation with the stable
   parameter set to FILE_SYNC4 or DATA_SYNC4.  This is done to prevent
   the buffer from being freed and reused before the data can be
   flushed to stable storage on the server.

   When a response is returned from either a WRITE or a COMMIT
   operation and it contains a write verifier that is different than
   previously returned by the server, the client will need to
   retransmit all of the buffers containing uncommitted cached data to
   the server.  How this is to be done is up to the implementor.  If
   there is only one buffer of interest, then it should probably be
   sent back over in a WRITE request with the appropriate stable
   parameter.  If there is more than one buffer, it might be
   worthwhile retransmitting all of the buffers in WRITE requests with
   the stable parameter set to UNSTABLE4 and then retransmitting the
   COMMIT operation to flush all of the data on the server to stable
   storage.  The timing of these retransmissions is left to the
   implementor.
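   Since verifier4 is eight octets of opaque data, the client-side
   check described above reduces to a byte comparison, as in the
   following sketch (the buffer bookkeeping around it is left out):

     #include <stdbool.h>
     #include <string.h>

     #define NFS4_VERIFIER_SIZE 8
     typedef unsigned char verifier4[NFS4_VERIFIER_SIZE];

     /* True if all buffers holding uncommitted (UNSTABLE4) data must
      * be retransmitted and committed again. */
     bool verifier_changed(const verifier4 seen_at_write,
                           const verifier4 returned_now)
     {
         return memcmp(seen_at_write, returned_now,
                       NFS4_VERIFIER_SIZE) != 0;
     }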
   The above description applies to page-cache-based systems as well
   as buffer-cache-based systems.  In those systems, the virtual
   memory system will need to be modified instead of the buffer cache.

22.3.6.  ERRORS

   NFS4ERR_ACCESS NFS4ERR_BADHANDLE NFS4ERR_BADXDR NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL NFS4ERR_IO NFS4ERR_ISDIR NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE NFS4ERR_RESOURCE NFS4ERR_ROFS
   NFS4ERR_SERVERFAULT NFS4ERR_STALE

22.4.  Operation 6: CREATE - Create a Non-Regular File Object

22.4.1.  SYNOPSIS

     (cfh), name, type, attrs -> (cfh), change_info, attrs_set

22.4.2.  ARGUMENTS

     union createtype4 switch (nfs_ftype4 type) {
      case NF4LNK:
             linktext4 linkdata;
      case NF4BLK:
      case NF4CHR:
             specdata4 devdata;
      case NF4SOCK:
      case NF4FIFO:
      case NF4DIR:
             void;
     };

     struct CREATE4args {
             /* CURRENT_FH: directory for creation */
             createtype4     objtype;
             component4      objname;
             fattr4          createattrs;
     };

22.4.3.  RESULTS

     struct CREATE4resok {
             change_info4    cinfo;
             bitmap4         attrset;        /* attributes set */
     };

     union CREATE4res switch (nfsstat4 status) {
      case NFS4_OK:
             CREATE4resok    resok4;
      default:
             void;
     };

22.4.4.  DESCRIPTION

   The CREATE operation creates a non-regular file object in a
   directory with a given name.  The OPEN operation MUST be used to
   create a regular file.

   The objname specifies the name for the new object.  The objtype
   determines the type of object to be created: directory, symlink,
   etc.  If an object of the same name already exists in the
   directory, the server will return the error NFS4ERR_EXIST.

   For the directory where the new file object was created, the server
   returns change_info4 information in cinfo.  With the atomic field
   of the change_info4 struct, the server will indicate if the before
   and after change attributes were obtained atomically with respect
   to the file object creation.

   If the objname has a length of 0 (zero), or if objname does not
   obey the UTF-8 definition, the error NFS4ERR_INVAL will be
   returned.

   The current filehandle is replaced by that of the new object.

   The createattrs specifies the initial set of attributes for the
   object.  The set of attributes may include any writable attribute
   valid for the object type.  When the operation is successful, the
   server will return to the client an attribute mask signifying which
   attributes were successfully set for the object.

   If createattrs includes neither the owner attribute nor an ACL with
   an ACE for the owner, and if the server's filesystem both supports
   and requires an owner attribute (or an owner ACE), then the server
   MUST derive the owner (or the owner ACE).  This would typically be
   from the principal indicated in the RPC credentials of the call,
   but the server's operating environment or filesystem semantics may
   dictate other methods of derivation.  Similarly, if createattrs
   includes neither the group attribute nor a group ACE, and if the
   server's filesystem both supports and requires the notion of a
   group attribute (or group ACE), the server MUST derive the group
   attribute (or the corresponding group ACE) for the file.  This
   could be from the RPC call's credentials, such as the group
   principal if the credentials include it (such as with AUTH_SYS),
   from the group identifier associated with the principal in the
   credentials (e.g., POSIX systems have a passwd database that has
   the group identifier for every user identifier), inherited from the
   directory the object is created in, or whatever else the server's
   operating environment or filesystem semantics dictate.  This
   applies to the OPEN operation too.

   Conversely, it is possible that the client will specify in
   createattrs an owner attribute, group attribute, or ACL that the
   principal indicated in the RPC call's credentials does not have
   permission to create files for.  The error to be returned in this
   instance is NFS4ERR_PERM.  This applies to the OPEN operation too.
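   For illustration, the sketch below fills in the objtype arm of
   CREATE4args for a symbolic link.  The C mappings of the XDR types
   are hypothetical rpcgen-style output; NF4LNK's value is taken from
   the base protocol's nfs_ftype4 enumeration.

     /* Hypothetical C mapping of the XDR definitions above. */
     typedef struct { unsigned int len; char *val; } linktext4;

     typedef struct {
         int type;                   /* nfs_ftype4 discriminant */
         union {
             linktext4 linkdata;     /* NF4LNK arm */
             /* specdata4 devdata; other arms elided */
         } u;
     } createtype4;

     #define NF4LNK 5                /* symbolic link */

     /* Build the objtype argument for "CREATE symlink -> target". */
     createtype4 make_symlink_type(char *target, unsigned int target_len)
     {
         createtype4 ct;
         ct.type = NF4LNK;
         ct.u.linkdata.val = target;
         ct.u.linkdata.len = target_len;
         return ct;
     }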
22.4.5.  IMPLEMENTATION

   If the client desires to set attribute values after the create, a
   SETATTR operation can be added to the COMPOUND request so that the
   appropriate attributes will be set.

22.4.6.  ERRORS

   NFS4ERR_ACCESS NFS4ERR_ATTRNOTSUPP NFS4ERR_BADCHAR
   NFS4ERR_BADHANDLE NFS4ERR_BADNAME NFS4ERR_BADOWNER NFS4ERR_BADTYPE
   NFS4ERR_BADXDR NFS4ERR_DELAY NFS4ERR_DQUOT NFS4ERR_EXIST
   NFS4ERR_FHEXPIRED NFS4ERR_INVAL NFS4ERR_IO NFS4ERR_MOVED
   NFS4ERR_NAMETOOLONG NFS4ERR_NOFILEHANDLE NFS4ERR_NOSPC
   NFS4ERR_NOTDIR NFS4ERR_PERM NFS4ERR_RESOURCE NFS4ERR_ROFS
   NFS4ERR_SERVERFAULT NFS4ERR_STALE

22.5.  Operation 7: DELEGPURGE - Purge Delegations Awaiting Recovery

22.5.1.  SYNOPSIS

     clientid ->

22.5.2.  ARGUMENTS

     struct DELEGPURGE4args {
             clientid4       clientid;
     };

22.5.3.  RESULTS

     struct DELEGPURGE4res {
             nfsstat4        status;
     };

22.5.4.  DESCRIPTION

   Purges all of the delegations awaiting recovery for a given client.
   This is useful for clients which do not commit delegation
   information to stable storage, to indicate that conflicting
   requests need not be delayed by the server awaiting recovery of
   delegation information.

   This operation should be used by clients that record delegation
   information on stable storage on the client.  In this case,
   DELEGPURGE should be issued immediately after doing delegation
   recovery on all delegations known to the client.  Doing so will
   notify the server that no additional delegations for the client
   will be recovered, allowing it to free resources and avoid delaying
   other clients who make requests that conflict with the unrecovered
   delegations.

   The set of delegations known to the server and the client may be
   different.  The reason for this is that a client may fail after
   making a request which resulted in delegation but before it
   received the results and committed them to the client's stable
   storage.

   The server MAY support DELEGPURGE, but if it does not, it MUST NOT
   support CLAIM_DELEGATE_PREV.

22.5.5.  ERRORS

   NFS4ERR_BADXDR NFS4ERR_NOTSUPP NFS4ERR_LEASE_MOVED NFS4ERR_MOVED
   NFS4ERR_RESOURCE NFS4ERR_SERVERFAULT NFS4ERR_STALE_CLIENTID

22.6.  Operation 8: DELEGRETURN - Return Delegation

22.6.1.  SYNOPSIS

     (cfh), stateid ->

22.6.2.  ARGUMENTS

     struct DELEGRETURN4args {
             /* CURRENT_FH: delegated file */
             stateid4        stateid;
     };

22.6.3.  RESULTS

     struct DELEGRETURN4res {
             nfsstat4        status;
     };

22.6.4.  DESCRIPTION

   Returns the delegation represented by the current filehandle and
   stateid.

   Delegations may be returned when recalled or voluntarily (i.e.,
   before the server has recalled them).  In either case the client
   must properly propagate state changed under the context of the
   delegation to the server before returning the delegation.

22.6.5.  ERRORS

   NFS4ERR_ADMIN_REVOKED NFS4ERR_BAD_STATEID NFS4ERR_BADXDR
   NFS4ERR_EXPIRED NFS4ERR_INVAL NFS4ERR_LEASE_MOVED NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE NFS4ERR_NOTSUPP NFS4ERR_OLD_STATEID
   NFS4ERR_RESOURCE NFS4ERR_SERVERFAULT NFS4ERR_STALE
   NFS4ERR_STALE_STATEID

22.7.  Operation 9: GETATTR - Get Attributes

22.7.1.  SYNOPSIS

     (cfh), attrbits -> attrbits, attrvals

22.7.2.  ARGUMENTS

     struct GETATTR4args {
             /* CURRENT_FH: directory or file */
             bitmap4         attr_request;
     };

22.7.3.  RESULTS

     struct GETATTR4resok {
             fattr4          obj_attributes;
     };

     union GETATTR4res switch (nfsstat4 status) {
      case NFS4_OK:
             GETATTR4resok   resok4;
      default:
             void;
     };

22.7.4.  DESCRIPTION

   The GETATTR operation will obtain attributes for the filesystem
   object specified by the current filehandle.  The client sets a bit
   in the bitmap argument for each attribute value that it would like
   the server to return.  The server returns an attribute bitmap that
   indicates the attribute values that it was able to return, followed
   by the attribute values ordered lowest attribute number first.

   The server must return a value for each attribute that the client
   requests if the attribute is supported by the server.  If the
   server does not support an attribute or cannot approximate a useful
   value then it must not return the attribute value and must not set
   the attribute bit in the result bitmap.  The server must return an
   error if it supports an attribute but cannot obtain its value.  In
   that case no attribute values will be returned.

   All servers must support the mandatory attributes as specified in
   File Attributes (Section 3).

   On success, the current filehandle retains its value.
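   The bitmap4 type used by attr_request and the result bitmap is a
   variable-length array of 32-bit words in which attribute number n
   occupies bit n mod 32 of word n / 32.  The sketch below shows the
   arithmetic; the fixed word count is an assumption made for brevity.

     #include <stdbool.h>
     #include <stdint.h>

     #define BITMAP4_WORDS 3    /* caller ensures attr < 32 * words */

     typedef struct { uint32_t word[BITMAP4_WORDS]; } bitmap4_t;

     void bitmap4_set(bitmap4_t *bm, unsigned int attr)
     {
         bm->word[attr / 32] |= (uint32_t)1 << (attr % 32);
     }

     bool bitmap4_isset(const bitmap4_t *bm, unsigned int attr)
     {
         return (bm->word[attr / 32] >> (attr % 32)) & 1;
     }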
22.7.5.  IMPLEMENTATION

22.7.6.  ERRORS

   NFS4ERR_ACCESS NFS4ERR_BADHANDLE NFS4ERR_BADXDR NFS4ERR_DELAY
   NFS4ERR_FHEXPIRED NFS4ERR_INVAL NFS4ERR_IO NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE NFS4ERR_RESOURCE NFS4ERR_SERVERFAULT
   NFS4ERR_STALE

22.8.  Operation 10: GETFH - Get Current Filehandle

22.8.1.  SYNOPSIS

     (cfh) -> filehandle

22.8.2.  ARGUMENTS

     /* CURRENT_FH: */
     void;

22.8.3.  RESULTS

     struct GETFH4resok {
             nfs_fh4         object;
     };

     union GETFH4res switch (nfsstat4 status) {
      case NFS4_OK:
             GETFH4resok     resok4;
      default:
             void;
     };

22.8.4.  DESCRIPTION

   This operation returns the current filehandle value.

   On success, the current filehandle retains its value.

22.8.5.  IMPLEMENTATION

   Operations that change the current filehandle, like LOOKUP or
   CREATE, do not automatically return the new filehandle as a result.
   For instance, if a client needs to look up a directory entry and
   obtain its filehandle, then the following request is needed.

     PUTFH  (directory filehandle)
     LOOKUP (entry name)
     GETFH

22.8.6.  ERRORS

   NFS4ERR_BADHANDLE NFS4ERR_FHEXPIRED NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE NFS4ERR_RESOURCE NFS4ERR_SERVERFAULT
   NFS4ERR_STALE

22.9.  Operation 11: LINK - Create Link to a File

22.9.1.  SYNOPSIS

     (sfh), (cfh), newname -> (cfh), change_info

22.9.2.  ARGUMENTS

     struct LINK4args {
             /* SAVED_FH: source object */
             /* CURRENT_FH: target directory */
             component4      newname;
     };

22.9.3.  RESULTS

     struct LINK4resok {
             change_info4    cinfo;
     };

     union LINK4res switch (nfsstat4 status) {
      case NFS4_OK:
             LINK4resok      resok4;
      default:
             void;
     };

22.9.4.  DESCRIPTION

   The LINK operation creates an additional name, newname, for the
   file represented by the saved filehandle, as set by the SAVEFH
   operation, in the directory represented by the current filehandle.
   The existing file and the target directory must reside within the
   same filesystem on the server.  On success, the current filehandle
   will continue to be the target directory.  If an object exists in
   the target directory with the same name as newname, the server must
   return NFS4ERR_EXIST.

   For the target directory, the server returns change_info4
   information in cinfo.  With the atomic field of the change_info4
   struct, the server will indicate if the before and after change
   attributes were obtained atomically with respect to the link
   creation.
   If the newname has a length of 0 (zero), or if newname does not
   obey the UTF-8 definition, the error NFS4ERR_INVAL will be
   returned.

22.9.5.  IMPLEMENTATION

   Changes to any property of the "hard" linked files are reflected in
   all of the linked files.  When a link is made to a file, the
   attributes for the file should have a value for numlinks that is
   one greater than the value before the LINK operation.

   The statement "file and the target directory must reside within the
   same filesystem on the server" means that the fsid fields in the
   attributes for the objects are the same.  If they reside on
   different filesystems, the error NFS4ERR_XDEV is returned.

   On some servers, the filenames "." and ".." are illegal as newname.

   In the case that newname is already linked to the file represented
   by the saved filehandle, the server will return NFS4ERR_EXIST.

   Note that symbolic links are created with the CREATE operation.

22.9.6.  ERRORS

   NFS4ERR_ACCESS NFS4ERR_BADCHAR NFS4ERR_BADHANDLE NFS4ERR_BADNAME
   NFS4ERR_BADXDR NFS4ERR_DELAY NFS4ERR_DQUOT NFS4ERR_EXIST
   NFS4ERR_FHEXPIRED NFS4ERR_FILE_OPEN NFS4ERR_INVAL NFS4ERR_IO
   NFS4ERR_ISDIR NFS4ERR_MLINK NFS4ERR_MOVED NFS4ERR_NAMETOOLONG
   NFS4ERR_NOENT NFS4ERR_NOFILEHANDLE NFS4ERR_NOSPC NFS4ERR_NOTDIR
   NFS4ERR_NOTSUPP NFS4ERR_RESOURCE NFS4ERR_ROFS NFS4ERR_SERVERFAULT
   NFS4ERR_STALE NFS4ERR_WRONGSEC NFS4ERR_XDEV

22.10.  Operation 12: LOCK - Create Lock

22.10.1.  SYNOPSIS

     (cfh) locktype, reclaim, offset, length, locker -> stateid

22.10.2.  ARGUMENTS

     struct open_to_lock_owner4 {
             seqid4          open_seqid;
             stateid4        open_stateid;
             seqid4          lock_seqid;
             lock_owner4     lock_owner;
     };

     struct exist_lock_owner4 {
             stateid4        lock_stateid;
             seqid4          lock_seqid;
     };

     union locker4 switch (bool new_lock_owner) {
      case TRUE:
             open_to_lock_owner4     open_owner;
      case FALSE:
             exist_lock_owner4       lock_owner;
     };

     enum nfs_lock_type4 {
             READ_LT         = 1,
             WRITE_LT        = 2,
             READW_LT        = 3,    /* blocking read */
             WRITEW_LT       = 4     /* blocking write */
     };

     struct LOCK4args {
             /* CURRENT_FH: file */
             nfs_lock_type4  locktype;
             bool            reclaim;
             offset4         offset;
             length4         length;
             locker4         locker;
     };

22.10.3.  RESULTS

     struct LOCK4denied {
             offset4         offset;
             length4         length;
             nfs_lock_type4  locktype;
             lock_owner4     owner;
     };

     struct LOCK4resok {
             stateid4        lock_stateid;
     };

     union LOCK4res switch (nfsstat4 status) {
      case NFS4_OK:
             LOCK4resok      resok4;
      case NFS4ERR_DENIED:
             LOCK4denied     denied;
      default:
             void;
     };

22.10.4.  DESCRIPTION

   The LOCK operation requests a record lock for the byte range
   specified by the offset and length parameters.  The lock type is
   also specified to be one of the nfs_lock_type4s.  If this is a
   reclaim request, the reclaim parameter will be TRUE.

   Bytes in a file may be locked even if those bytes are not currently
   allocated to the file.  To lock the file from a specific offset
   through the end-of-file (no matter how long the file actually is)
   use a length field with all bits set to 1 (one).  If the length is
   zero, or if a length which is not all bits set to one is specified,
   and length when added to the offset exceeds the maximum 64-bit
   unsigned integer value, the error NFS4ERR_INVAL will result.
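   The length checks above can be condensed into a small validation
   routine.  This sketch covers only the 64-bit rules (the optional
   32-bit server restriction is discussed next) and is illustrative.

     #include <stdbool.h>
     #include <stdint.h>

     /* length4 of all bits set to one means "lock to end of file". */
     #define NFS4_LENGTH_EOF UINT64_MAX

     bool lock_range_valid(uint64_t offset, uint64_t length)
     {
         if (length == 0)
             return false;               /* NFS4ERR_INVAL */
         if (length == NFS4_LENGTH_EOF)
             return true;                /* offset through EOF */
         /* offset + length must not exceed the 64-bit range. */
         return length <= UINT64_MAX - offset;
     }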
   Some servers may only support locking for byte offsets that fit
   within 32 bits.  If the client specifies a range that includes a
   byte beyond the last byte offset of the 32-bit range, but does not
   include the last byte offset of the 32-bit range and all of the
   byte offsets beyond it, up to the end of the valid 64-bit range,
   such a 32-bit server MUST return the error NFS4ERR_BAD_RANGE.

   In the case that the lock is denied, the owner, offset, and length
   of a conflicting lock are returned.

   On success, the current filehandle retains its value.

22.10.5.  IMPLEMENTATION

   If the server is unable to determine the exact offset and length of
   the conflicting lock, the same offset and length that were provided
   in the arguments should be returned in the denied results.  The
   File Locking section contains a full description of this and the
   other file locking operations.

   LOCK operations are subject to permission checks and to checks
   against the access type of the associated file.  However, the
   specific rights and modes required for various types of locks
   reflect the semantics of the server-exported filesystem, and are
   not specified by the protocol.  For example, Windows 2000 allows a
   write lock of a file open for READ, while a POSIX-compliant system
   does not.

   When the client makes a lock request that corresponds to a range
   that the lockowner has locked already (with the same or different
   lock type), or to a sub-region of such a range, or to a region
   which includes multiple locks already granted to that lockowner, in
   whole or in part, and the server does not support such locking
   operations (i.e., does not support POSIX locking semantics), the
   server will return the error NFS4ERR_LOCK_RANGE.  In that case, the
   client may return an error, or it may emulate the required
   operations, using only LOCK for ranges that do not include any
   bytes already locked by that lock_owner and LOCKU of locks held by
   that lock_owner (specifying an exactly-matching range and type).
   Similarly, when the client makes a lock request that amounts to
   upgrading (changing from a read lock to a write lock) or
   downgrading (changing from a write lock to a read lock) an existing
   record lock, and the server does not support such a lock, the
   server will return NFS4ERR_LOCK_NOTSUPP.  Such operations may not
   perfectly reflect the required semantics in the face of conflicting
   lock requests from other clients.

   The locker argument specifies the lock_owner that is associated
   with the LOCK request.  The locker4 structure is a switched union
   that indicates whether the lock_owner is known to the server or if
   the lock_owner is new to the server.  In the case that the
   lock_owner is known to the server and has an established
   lock_seqid, the argument is just the lock_owner and lock_seqid.  In
   the case that the lock_owner is not known to the server, the
   argument contains not only the lock_owner and lock_seqid but also
   the open_stateid and open_seqid.  The new lock_owner case covers
   the very first lock done by the lock_owner and offers a method to
   use the established state of the open_stateid to transition to the
   use of the lock_owner.
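   The switched union above might be populated as in the following
   sketch.  The C type layouts are hypothetical rpcgen-style mappings
   of the XDR definitions; only the new-lock_owner arm is shown.

     #include <stdbool.h>
     #include <stdint.h>

     typedef uint32_t seqid4;
     typedef struct { uint32_t seqid; unsigned char other[12]; } stateid4;
     typedef struct { uint64_t clientid; /* opaque owner elided */ } lock_owner4;

     typedef struct {
         bool new_lock_owner;
         union {
             struct {                    /* first LOCK by this owner */
                 seqid4      open_seqid;
                 stateid4    open_stateid;
                 seqid4      lock_seqid;
                 lock_owner4 lock_owner;
             } open_owner;
             struct {                    /* owner already established */
                 stateid4 lock_stateid;
                 seqid4   lock_seqid;
             } lock_owner;
         } u;
     } locker4;

     /* Transition from the open_stateid to a new lock_owner (the
      * very first lock by this lock_owner). */
     locker4 locker_for_first_lock(seqid4 open_seqid,
                                   stateid4 open_stateid,
                                   seqid4 lock_seqid, lock_owner4 owner)
     {
         locker4 l;
         l.new_lock_owner            = true;
         l.u.open_owner.open_seqid   = open_seqid;
         l.u.open_owner.open_stateid = open_stateid;
         l.u.open_owner.lock_seqid   = lock_seqid;
         l.u.open_owner.lock_owner   = owner;
         return l;
     }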
22.10.6.  ERRORS

   NFS4ERR_ACCESS NFS4ERR_ADMIN_REVOKED NFS4ERR_BADHANDLE
   NFS4ERR_BAD_RANGE NFS4ERR_BAD_SEQID NFS4ERR_BAD_STATEID
   NFS4ERR_BADXDR NFS4ERR_DEADLOCK NFS4ERR_DELAY NFS4ERR_DENIED
   NFS4ERR_EXPIRED NFS4ERR_FHEXPIRED NFS4ERR_GRACE NFS4ERR_INVAL
   NFS4ERR_ISDIR NFS4ERR_LEASE_MOVED NFS4ERR_LOCK_NOTSUPP
   NFS4ERR_LOCK_RANGE NFS4ERR_MOVED NFS4ERR_NOFILEHANDLE
   NFS4ERR_NO_GRACE NFS4ERR_OLD_STATEID NFS4ERR_OPENMODE
   NFS4ERR_RECLAIM_BAD NFS4ERR_RECLAIM_CONFLICT NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT NFS4ERR_STALE NFS4ERR_STALE_CLIENTID
   NFS4ERR_STALE_STATEID

22.11.  Operation 13: LOCKT - Test For Lock

22.11.1.  SYNOPSIS

     (cfh) locktype, offset, length, owner ->
         {void, NFS4ERR_DENIED -> owner}

22.11.2.  ARGUMENTS

     struct LOCKT4args {
             /* CURRENT_FH: file */
             nfs_lock_type4  locktype;
             offset4         offset;
             length4         length;
             lock_owner4     owner;
     };

22.11.3.  RESULTS

     struct LOCK4denied {
             offset4         offset;
             length4         length;
             nfs_lock_type4  locktype;
             lock_owner4     owner;
     };

     union LOCKT4res switch (nfsstat4 status) {
      case NFS4ERR_DENIED:
             LOCK4denied     denied;
      case NFS4_OK:
             void;
      default:
             void;
     };

22.11.4.  DESCRIPTION

   The LOCKT operation tests the lock as specified in the arguments.
   If a conflicting lock exists, the owner, offset, length, and type
   of the conflicting lock are returned; if no lock is held, nothing
   other than NFS4_OK is returned.  Lock types READ_LT and READW_LT
   are processed in the same way in that a conflicting lock test is
   done without regard to blocking or non-blocking.  The same is true
   for WRITE_LT and WRITEW_LT.

   The ranges are specified as for LOCK.  The NFS4ERR_INVAL and
   NFS4ERR_BAD_RANGE errors are returned under the same circumstances
   as for LOCK.

   On success, the current filehandle retains its value.

22.11.5.  IMPLEMENTATION

   If the server is unable to determine the exact offset and length of
   the conflicting lock, the same offset and length that were provided
   in the arguments should be returned in the denied results.  The
   File Locking section contains further discussion of the file
   locking mechanisms.

   LOCKT uses a lock_owner4 rather than a stateid4, as is used in
   LOCK, to identify the owner.  This is because the client does not
   have to open the file to test for the existence of a lock, so a
   stateid may not be available.

   The test for conflicting locks should exclude locks for the current
   lockowner.  Note that since such locks are not examined the
   possible existence of overlapping ranges may not affect the results
   of LOCKT.  If the server does examine locks that match the
   lockowner for the purpose of range checking, NFS4ERR_LOCK_RANGE may
   be returned.  In the event that it returns NFS4_OK, clients may do
   a LOCK and receive NFS4ERR_LOCK_RANGE on the LOCK request because
   of the flexibility provided to the server.

22.11.6.  ERRORS

   NFS4ERR_ACCESS NFS4ERR_BADHANDLE NFS4ERR_BAD_RANGE NFS4ERR_BADXDR
   NFS4ERR_DELAY NFS4ERR_DENIED NFS4ERR_FHEXPIRED NFS4ERR_GRACE
   NFS4ERR_INVAL NFS4ERR_ISDIR NFS4ERR_LEASE_MOVED NFS4ERR_LOCK_RANGE
   NFS4ERR_MOVED NFS4ERR_NOFILEHANDLE NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT NFS4ERR_STALE NFS4ERR_STALE_CLIENTID

22.12.  Operation 14: LOCKU - Unlock File

22.12.1.  SYNOPSIS

     (cfh) type, seqid, stateid, offset, length -> stateid

22.12.2.  ARGUMENTS

     struct LOCKU4args {
             /* CURRENT_FH: file */
             nfs_lock_type4  locktype;
             seqid4          seqid;
             stateid4        stateid;
             offset4         offset;
             length4         length;
     };
22.12.3.  RESULTS

     union LOCKU4res switch (nfsstat4 status) {
      case NFS4_OK:
             stateid4        stateid;
      default:
             void;
     };

22.12.4.  DESCRIPTION

   The LOCKU operation unlocks the record lock specified by the
   parameters.  The client may set the locktype field to any value
   that is legal for the nfs_lock_type4 enumerated type, and the
   server MUST accept any legal value for locktype.  Any legal value
   for locktype has no effect on the success or failure of the LOCKU
   operation.

   The ranges are specified as for LOCK.  The NFS4ERR_INVAL and
   NFS4ERR_BAD_RANGE errors are returned under the same circumstances
   as for LOCK.

   On success, the current filehandle retains its value.

22.12.5.  IMPLEMENTATION

   If the area to be unlocked does not correspond exactly to a lock
   actually held by the lockowner, the server may return the error
   NFS4ERR_LOCK_RANGE.  This includes the case in which the area is
   not locked, where the area is a sub-range of the area locked, where
   it overlaps the area locked without matching exactly, or where the
   area specified includes multiple locks held by the lockowner.  In
   all of these cases, allowed by POSIX locking semantics, a client
   receiving this error should, if it desires support for such
   operations, simulate the operation using LOCKU on ranges
   corresponding to locks it actually holds, possibly followed by LOCK
   requests for the sub-ranges not being unlocked.

22.12.6.  ERRORS

   NFS4ERR_ACCESS NFS4ERR_ADMIN_REVOKED NFS4ERR_BADHANDLE
   NFS4ERR_BAD_RANGE NFS4ERR_BAD_SEQID NFS4ERR_BAD_STATEID
   NFS4ERR_BADXDR NFS4ERR_EXPIRED NFS4ERR_FHEXPIRED NFS4ERR_GRACE
   NFS4ERR_INVAL NFS4ERR_ISDIR NFS4ERR_LEASE_MOVED NFS4ERR_LOCK_RANGE
   NFS4ERR_MOVED NFS4ERR_NOFILEHANDLE NFS4ERR_OLD_STATEID
   NFS4ERR_RESOURCE NFS4ERR_SERVERFAULT NFS4ERR_STALE
   NFS4ERR_STALE_STATEID

22.13.  Operation 15: LOOKUP - Lookup Filename

22.13.1.  SYNOPSIS

     (cfh), component -> (cfh)

22.13.2.  ARGUMENTS

     struct LOOKUP4args {
             /* CURRENT_FH: directory */
             component4      objname;
     };

22.13.3.  RESULTS

     struct LOOKUP4res {
             /* CURRENT_FH: object */
             nfsstat4        status;
     };

22.13.4.  DESCRIPTION

   This operation LOOKUPs or finds a filesystem object using the
   directory specified by the current filehandle.  LOOKUP evaluates
   the component and if the object exists the current filehandle is
   replaced with the component's filehandle.

   If the component cannot be evaluated either because it does not
   exist or because the client does not have permission to evaluate
   the component, then an error will be returned and the current
   filehandle will be unchanged.

   If the component is a zero length string or if any component does
   not obey the UTF-8 definition, the error NFS4ERR_INVAL will be
   returned.

22.13.5.  IMPLEMENTATION

   If the client wants to achieve the effect of a multi-component
   lookup, it may construct a COMPOUND request such as (and obtain
   each filehandle):

     PUTFH  (directory filehandle)
     LOOKUP "pub"
     GETFH
     LOOKUP "foo"
     GETFH
     LOOKUP "bar"
     GETFH

   NFS version 4 servers depart from the semantics of previous NFS
   versions in allowing LOOKUP requests to cross mountpoints on the
   server.  The client can detect a mountpoint crossing by comparing
   the fsid attribute of the directory with the fsid attribute of the
   directory looked up.  If the fsids are different then the new
   directory is a server mountpoint.  UNIX clients that detect a
   mountpoint crossing will need to mount the server's filesystem.
   This needs to be done to maintain the file object identity checking
   mechanisms common to UNIX clients.
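   A client-side check for the fsid comparison just described might
   look like the following sketch.  The major/minor pair matches the
   protocol's fsid attribute; everything else is illustrative.

     #include <stdbool.h>
     #include <stdint.h>

     typedef struct { uint64_t major; uint64_t minor; } fsid4;

     /* A LOOKUP crossed a server mountpoint when the looked-up
      * directory's fsid differs from its parent's. */
     bool crossed_mountpoint(fsid4 parent, fsid4 child)
     {
         return parent.major != child.major ||
                parent.minor != child.minor;
     }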
   Servers that limit NFS access to "shares" or "exported" filesystems
   should provide a pseudo-filesystem into which the exported
   filesystems can be integrated, so that clients can browse the
   server's name space.  The client's view of a pseudo filesystem will
   be limited to paths that lead to exported filesystems.

   Note: previous versions of the protocol assigned special semantics
   to the names "." and "..".  NFS version 4 assigns no special
   semantics to these names.  The LOOKUPP operator must be used to
   lookup a parent directory.

   Note that this operation does not follow symbolic links.  The
   client is responsible for all parsing of filenames including
   filenames that are modified by symbolic links encountered during
   the lookup process.

   If the current filehandle supplied is not a directory but a
   symbolic link, the error NFS4ERR_SYMLINK is returned.  For all
   other non-directory file types, the error NFS4ERR_NOTDIR is
   returned.

22.13.6.  ERRORS

   NFS4ERR_ACCESS NFS4ERR_BADCHAR NFS4ERR_BADHANDLE NFS4ERR_BADNAME
   NFS4ERR_BADXDR NFS4ERR_FHEXPIRED NFS4ERR_INVAL NFS4ERR_IO
   NFS4ERR_MOVED NFS4ERR_NAMETOOLONG NFS4ERR_NOENT
   NFS4ERR_NOFILEHANDLE NFS4ERR_NOTDIR NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT NFS4ERR_STALE NFS4ERR_SYMLINK NFS4ERR_WRONGSEC

22.14.  Operation 16: LOOKUPP - Lookup Parent Directory

22.14.1.  SYNOPSIS

     (cfh) -> (cfh)

22.14.2.  ARGUMENTS

     /* CURRENT_FH: object */
     void;

22.14.3.  RESULTS

     struct LOOKUPP4res {
             /* CURRENT_FH: directory */
             nfsstat4        status;
     };

22.14.4.  DESCRIPTION

   The current filehandle is assumed to refer to a regular directory
   or a named attribute directory.  LOOKUPP assigns the filehandle for
   its parent directory to be the current filehandle.  If there is no
   parent directory an NFS4ERR_NOENT error must be returned.
   Therefore, NFS4ERR_NOENT will be returned by the server when the
   current filehandle is at the root or top of the server's file tree.

   As for LOOKUP, LOOKUPP will also cross mountpoints.

   If the current filehandle is not a directory or named attribute
   directory, the error NFS4ERR_NOTDIR is returned.

   If the requester's security flavor does not match that configured
   for the parent directory, then the server SHOULD return
   NFS4ERR_WRONGSEC (a future minor revision of NFSv4 may upgrade this
   to MUST) in the LOOKUPP response.  However, if the server does so,
   it MUST support the new SECINFO_NO_NAME operation, so that the
   client can gracefully determine the correct security flavor.  See
   the discussion of the SECINFO_NO_NAME operation for a description.

22.14.5.  IMPLEMENTATION

22.14.6.  ERRORS

   NFS4ERR_ACCESS NFS4ERR_BADHANDLE NFS4ERR_FHEXPIRED NFS4ERR_IO
   NFS4ERR_MOVED NFS4ERR_NOENT NFS4ERR_NOFILEHANDLE NFS4ERR_NOTDIR
   NFS4ERR_RESOURCE NFS4ERR_SERVERFAULT NFS4ERR_STALE
   NFS4ERR_WRONGSEC

22.15.  Operation 17: NVERIFY - Verify Difference in Attributes

22.15.1.  SYNOPSIS

     (cfh), fattr -> -

22.15.2.  ARGUMENTS

     struct NVERIFY4args {
             /* CURRENT_FH: object */
             fattr4          obj_attributes;
     };

22.15.3.  RESULTS

     struct NVERIFY4res {
             nfsstat4        status;
     };

22.15.4.  DESCRIPTION

   This operation is used to prefix a sequence of operations to be
   performed if one or more attributes have changed on some filesystem
   object.  If all the attributes match then the error NFS4ERR_SAME
   must be returned.
   On success, the current filehandle retains its value.

22.15.5.  IMPLEMENTATION

   This operation is useful as a cache validation operator.  If the
   object to which the attributes belong has changed, then the
   following operations may obtain new data associated with that
   object.  For instance, to check if a file has been changed and
   obtain new data if it has:

     PUTFH   (public)
     LOOKUP  "foobar"
     NVERIFY attrbits attrs
     READ    0 32767

   In the case that a recommended attribute is specified in the
   NVERIFY operation and the server does not support that attribute
   for the filesystem object, the error NFS4ERR_ATTRNOTSUPP is
   returned to the client.

   When the attribute rdattr_error or any write-only attribute (e.g.,
   time_modify_set) is specified, the error NFS4ERR_INVAL is returned
   to the client.

22.15.6.  ERRORS

   NFS4ERR_ACCESS NFS4ERR_ATTRNOTSUPP NFS4ERR_BADCHAR
   NFS4ERR_BADHANDLE NFS4ERR_BADXDR NFS4ERR_DELAY NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL NFS4ERR_IO NFS4ERR_MOVED NFS4ERR_NOFILEHANDLE
   NFS4ERR_RESOURCE NFS4ERR_SAME NFS4ERR_SERVERFAULT NFS4ERR_STALE

22.16.  Operation 18: OPEN - Open a Regular File

22.16.1.  SYNOPSIS

     (cfh), seqid, share_access, share_deny, owner, openhow, claim ->
         (cfh), stateid, cinfo, rflags, open_confirm, attrset,
         delegation

22.16.2.  ARGUMENTS

     const OPEN4_SHARE_ACCESS_READ   = 0x00000001;
     const OPEN4_SHARE_ACCESS_WRITE  = 0x00000002;
     const OPEN4_SHARE_ACCESS_BOTH   = 0x00000003;

     const OPEN4_SHARE_DENY_NONE     = 0x00000000;
     const OPEN4_SHARE_DENY_READ     = 0x00000001;
     const OPEN4_SHARE_DENY_WRITE    = 0x00000002;
     const OPEN4_SHARE_DENY_BOTH     = 0x00000003;

     /* new flags for share_access field of OPEN4args */
     const OPEN4_SHARE_ACCESS_WANT_DELEG_MASK  = 0x1C;
     const OPEN4_SHARE_ACCESS_WANT_READ_DELEG  = 0x04;
     const OPEN4_SHARE_ACCESS_WANT_WRITE_DELEG = 0x08;
     const OPEN4_SHARE_ACCESS_WANT_ANY_DELEG   = 0x0C;
     const OPEN4_SHARE_ACCESS_WANT_NO_DELEG    = 0x10;
     const OPEN4_SHARE_ACCESS_WANT_CANCEL      = 0x14;

     const OPEN4_SHARE_ACCESS_WANT_SIGNAL_DELEG_WHEN_RESRC_AVAIL
             = 0x20;
     const OPEN4_SHARE_ACCESS_WANT_PUSH_DELEG_WHEN_UNCONTENDED
             = 0x40;

     struct OPEN4args {
             seqid4          seqid;
             uint32_t        share_access;
             uint32_t        share_deny;
             open_owner4     owner;
             openflag4       openhow;
             open_claim4     claim;
     };

     enum createmode4 {
             UNCHECKED4      = 0,
             GUARDED4        = 1,
             EXCLUSIVE4      = 2
     };

     union createhow4 switch (createmode4 mode) {
      case UNCHECKED4:
      case GUARDED4:
             fattr4          createattrs;
      case EXCLUSIVE4:
             verifier4       createverf;
     };

     enum opentype4 {
             OPEN4_NOCREATE  = 0,
             OPEN4_CREATE    = 1
     };

     union openflag4 switch (opentype4 opentype) {
      case OPEN4_CREATE:
             createhow4      how;
      default:
             void;
     };

     /* Next definitions used for OPEN delegation */
     enum limit_by4 {
             NFS_LIMIT_SIZE          = 1,
             NFS_LIMIT_BLOCKS        = 2
             /* others as needed */
     };

     struct nfs_modified_limit4 {
             uint32_t        num_blocks;
             uint32_t        bytes_per_block;
     };

     union nfs_space_limit4 switch (limit_by4 limitby) {
      /* limit specified as file size */
      case NFS_LIMIT_SIZE:
             uint64_t        filesize;
      /* limit specified by number of blocks */
      case NFS_LIMIT_BLOCKS:
             nfs_modified_limit4 mod_blocks;
     };

     enum open_delegation_type4 {
             OPEN_DELEGATE_NONE      = 0,
             OPEN_DELEGATE_READ      = 1,
             OPEN_DELEGATE_WRITE     = 2,
             OPEN_DELEGATE_NONE_EXT  = 3     /* new to v4.1 */
     };

     enum open_claim_type4 {
             CLAIM_NULL              = 0,
             CLAIM_PREVIOUS          = 1,
             CLAIM_DELEGATE_CUR      = 2,
             CLAIM_DELEGATE_PREV     = 3,

             /*
              * Like CLAIM_NULL, but object identified
              * by the current filehandle.
              */
             CLAIM_FH                = 4,    /* new to v4.1 */

             /*
              * Like CLAIM_DELEGATE_CUR, but object identified
              * by current filehandle.
              */
             CLAIM_DELEG_CUR_FH      = 5,    /* new to v4.1 */

             /*
              * Like CLAIM_DELEGATE_PREV, but object identified
              * by current filehandle.
              */
             CLAIM_DELEG_PREV_FH     = 6     /* new to v4.1 */
     };
     struct open_claim_delegate_cur4 {
             stateid4        delegate_stateid;
             component4      file;
     };

     union open_claim4 switch (open_claim_type4 claim) {
      /*
       * No special rights to file.  Ordinary OPEN of the
       * specified file.
       */
      case CLAIM_NULL:
             /* CURRENT_FH: directory */
             component4      file;

      /*
       * Right to the file established by an open previous to server
       * reboot.  File identified by filehandle obtained at that time
       * rather than by name.
       */
      case CLAIM_PREVIOUS:
             /* CURRENT_FH: file being reclaimed */
             open_delegation_type4   delegate_type;

      /*
       * Right to file based on a delegation granted by the server.
       * File is specified by name.
       */
      case CLAIM_DELEGATE_CUR:
             /* CURRENT_FH: directory */
             open_claim_delegate_cur4        delegate_cur_info;

      /*
       * Right to file based on a delegation granted to a previous
       * boot instance of the client.  File is specified by name.
       */
      case CLAIM_DELEGATE_PREV:
             /* CURRENT_FH: directory */
             component4      file_delegate_prev;

      /*
       * Like CLAIM_NULL.  No special rights to file.  Ordinary
       * OPEN of the specified file.  File is identified by
       * filehandle.
       */
      case CLAIM_FH:  /* new to v4.1 */
             /* CURRENT_FH: file being opened */
             void;

      /*
       * Like CLAIM_DELEGATE_PREV.  Right to file based on a
       * delegation granted to a previous boot instance of the
       * client.  File is identified by filehandle.
       */
      case CLAIM_DELEG_PREV_FH:  /* new to v4.1 */
             /* CURRENT_FH: file being opened */
             void;

      /*
       * Like CLAIM_DELEGATE_CUR.  Right to file based on
       * a delegation granted by the server.
       * File is identified by filehandle.
       */
      case CLAIM_DELEG_CUR_FH:  /* new to v4.1 */
             /* CURRENT_FH: file being opened */
             stateid4        oc_delegate_stateid;
     };

22.16.3.  RESULTS

     struct open_read_delegation4 {
             stateid4        stateid;     /* Stateid for delegation */
             bool            recall;      /* Pre-recalled flag for
                                             delegations obtained by
                                             reclaim (CLAIM_PREVIOUS) */
             nfsace4         permissions; /* Defines users who don't
                                             need an ACCESS call to
                                             open for read */
     };

     struct open_write_delegation4 {
             stateid4        stateid;     /* Stateid for delegation */
             bool            recall;      /* Pre-recalled flag for
                                             delegations obtained by
                                             reclaim (CLAIM_PREVIOUS) */
             nfs_space_limit4 space_limit; /* Defines condition that
                                             the client must check to
                                             determine whether the
                                             file needs to be flushed
                                             to the server on close */
             nfsace4         permissions; /* Defines users who don't
                                             need an ACCESS call as
                                             part of a delegated
                                             open */
     };
     enum why_no_delegation4 {  /* new to v4.1 */
             WND_NOT_WANTED                  = 0,
             WND_CONTENTION                  = 1,
             WND_RESOURCE                    = 2,
             WND_NOT_SUPP_FTYPE              = 3,
             WND_WRITE_DELEG_NOT_SUPP_FTYPE  = 4,
             WND_NOT_SUPP_UPGRADE            = 5,
             WND_NOT_SUPP_DOWNGRADE          = 6,
             WND_CANCELED                    = 7,
             WND_IS_DIR                      = 8,
             /* not needed if v4.1 sessions are mandatory */
             WND_OPEN_CONFIRM_NEEDED         = 9
     };

     union open_none_delegation4  /* new to v4.1 */
     switch (why_no_delegation4 ond_why) {
      case WND_CONTENTION:
             bool ond_server_will_push_deleg;
      case WND_RESOURCE:
             bool ond_server_will_signal_avail;
      default:
             void;
     };

     union open_delegation4
     switch (open_delegation_type4 delegation_type) {
      case OPEN_DELEGATE_NONE:  /* deprecated in v4.1 */
             void;
      case OPEN_DELEGATE_READ:
             open_read_delegation4   read;
      case OPEN_DELEGATE_WRITE:
             open_write_delegation4  write;
      case OPEN_DELEGATE_NONE_EXT:  /* new to v4.1 */
             open_none_delegation4   od_whynone;
     };

     const OPEN4_RESULT_CONFIRM          = 0x00000002;
     const OPEN4_RESULT_LOCKTYPE_POSIX   = 0x00000004;

     struct OPEN4resok {
             stateid4        stateid;      /* Stateid for open */
             change_info4    cinfo;        /* Directory Change Info */
             uint32_t        rflags;       /* Result flags */
             bitmap4         attrset;      /* attributes on create */
             open_delegation4 delegation;  /* Info on any open
                                              delegation */
     };

     union OPEN4res switch (nfsstat4 status) {
      case NFS4_OK:
             /* CURRENT_FH: opened file */
             OPEN4resok      resok4;
      default:
             void;
     };

22.16.4.  DESCRIPTION

   The OPEN operation creates and/or opens a regular file in a
   directory with the provided name.  If the file does not exist at
   the server and creation is desired, specification of the method of
   creation is provided by the openhow parameter.  The client has the
   choice of three creation methods: UNCHECKED, GUARDED, or EXCLUSIVE.

   If the current filehandle is a named attribute directory, OPEN will
   then create or open a named attribute file.  Note that exclusive
   create of a named attribute is not supported.  If the createmode is
   EXCLUSIVE4 and the current filehandle is a named attribute
   directory, the server will return NFS4ERR_INVAL.

   UNCHECKED means that the file should be created if a file of that
   name does not exist and encountering an existing regular file of
   that name is not an error.  For this type of create, createattrs
   specifies the initial set of attributes for the file.  The set of
   attributes may include any writable attribute valid for regular
   files.  When an UNCHECKED create encounters an existing file, the
   attributes specified by createattrs are not used, except that when
   a size of zero is specified, the existing file is truncated.

   If GUARDED is specified, the server checks for the presence of a
   duplicate object by name before performing the create.  If a
   duplicate exists, an error of NFS4ERR_EXIST is returned as the
   status.  If the object does not exist, the request is performed as
   described for UNCHECKED.  For each of these cases (UNCHECKED and
   GUARDED) where the operation is successful, the server will return
   to the client an attribute mask signifying which attributes were
   successfully set for the object.

   EXCLUSIVE specifies that the server is to follow exclusive creation
   semantics, using the verifier to ensure exclusive creation of the
   target.  The server should check for the presence of a duplicate
   object by name.  If the object does not exist, the server creates
   the object and stores the verifier with the object.  If the object
   does exist and the stored verifier matches the client-provided
   verifier, the server uses the existing object as the newly created
   object.  If the stored verifier does not match, then an error of
   NFS4ERR_EXIST is returned.  No attributes may be provided in this
   case, since the server may use an attribute of the target object to
   store the verifier.  If the server uses an attribute to store the
   exclusive create verifier, it will signify which attribute by
   setting the appropriate bit in the attribute mask that is returned
   in the results.
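   The verifier-matching rule above amounts to the following check on
   the server; object_exists() and lookup_verifier() are hypothetical
   accessors for the server's own storage of the verifier.

     #include <stdbool.h>
     #include <string.h>

     #define NFS4_VERIFIER_SIZE 8
     typedef unsigned char verifier4[NFS4_VERIFIER_SIZE];

     extern bool object_exists(const char *name);
     extern void lookup_verifier(const char *name, verifier4 out);

     /* True if the EXCLUSIVE4 OPEN may succeed: either the object is
      * new, or this is a retransmission carrying the same verifier.
      * Otherwise the server responds with NFS4ERR_EXIST. */
     bool exclusive_create_ok(const char *name,
                              const verifier4 createverf)
     {
         verifier4 stored;

         if (!object_exists(name))
             return true;        /* create object, store createverf */
         lookup_verifier(name, stored);
         return memcmp(stored, createverf, NFS4_VERIFIER_SIZE) == 0;
     }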
   For the target directory, the server returns change_info4
   information in cinfo.  With the atomic field of the change_info4
   struct, the server will indicate if the before and after change
   attributes were obtained atomically with respect to the file
   creation.

   Upon successful creation, the current filehandle is replaced by
   that of the new object.

   The OPEN operation provides for Windows share reservation
   capability with the use of the share_access and share_deny fields
   of the OPEN arguments.  The client specifies at OPEN the required
   share_access and share_deny modes.  For clients that do not
   directly support SHAREs (i.e., UNIX), the expected deny value is
   DENY_NONE.  In the case that there is an existing SHARE reservation
   that conflicts with the OPEN request, the server returns the error
   NFS4ERR_SHARE_DENIED.  For a complete SHARE request, the client
   must provide values for the owner and seqid fields for the OPEN
   argument.  For additional discussion of SHARE semantics see the
   section on 'Share Reservations'.

   In the case that the client is recovering state from a server
   failure, the claim field of the OPEN argument is used to signify
   that the request is meant to reclaim state previously held.

   The "claim" field of the OPEN argument is used to specify the file
   to be opened and the state information which the client claims to
   possess.  There are four basic claim types which cover the various
   situations for an OPEN.  They are as follows:

   +---------------------+---------------------------------------------+
   | open type           | description                                 |
   +---------------------+---------------------------------------------+
   | CLAIM_NULL          |                                             |
   | CLAIM_FH            | For the client, this is a new OPEN request  |
   |                     | and there is no previous state associated   |
   |                     | with the file for the client.  With         |
   |                     | CLAIM_NULL the file is identified by the    |
   |                     | current filehandle and the specified        |
   |                     | component name.  With CLAIM_FH (new to      |
   |                     | v4.1) the file is identified by just the    |
   |                     | current filehandle.                         |
   |                     |                                             |
   | CLAIM_PREVIOUS      | The client is claiming basic OPEN state for |
   |                     | a file that was held previous to a server   |
   |                     | reboot.  Generally used when a server is    |
   |                     | returning persistent filehandles; the       |
   |                     | client may not have the file name to        |
   |                     | reclaim the OPEN.                           |
   |                     |                                             |
   | CLAIM_DELEGATE_CUR  |                                             |
   | CLAIM_DELEG_CUR_FH  | The client is claiming a delegation for     |
   |                     | OPEN as granted by the server.  Generally   |
   |                     | this is done as part of recalling a         |
   |                     | delegation.  With CLAIM_DELEGATE_CUR, the   |
   |                     | file is identified by the current           |
   |                     | filehandle and the specified component      |
   |                     | name.  With CLAIM_DELEG_CUR_FH (new to      |
   |                     | v4.1), the file is identified by just the   |
   |                     | current filehandle.                         |
   |                     |                                             |
   | CLAIM_DELEGATE_PREV |                                             |
   | CLAIM_DELEG_PREV_FH | The client is claiming a delegation granted |
   |                     | to a previous client instance; used after   |
   |                     | the client reboots.  The server MAY support |
   |                     | CLAIM_DELEGATE_PREV or CLAIM_DELEG_PREV_FH. |
   |                     | If it does support either open type,        |
   |                     | SETCLIENTID_CONFIRM MUST NOT remove the     |
   |                     | client's delegation state, and the server   |
   |                     | MUST support the DELEGPURGE operation.      |
   +---------------------+---------------------------------------------+

   For OPEN requests whose claim type is other than CLAIM_PREVIOUS
   (i.e., requests other than those devoted to reclaiming opens after
   a server reboot) that reach the server during its grace or lease
   expiration period, the server returns an error of NFS4ERR_GRACE.

   For any OPEN request, the server may return an open delegation,
   which allows further opens and closes to be handled locally on the
   client as described in the section Open Delegation.  Note that
   whether a delegation is granted is up to the server to decide.  The
   client should never assume that delegation will or will not be
   granted in a particular instance.  It should always be prepared for
   either case.  A partial exception is the reclaim (CLAIM_PREVIOUS)
   case, in which a delegation type is claimed.  In this case,
   delegation will always be granted, although the server may specify
   an immediate recall in the delegation structure.

   The rflags returned by a successful OPEN allow the server to return
   information governing how the open file is to be handled.
   OPEN4_RESULT_CONFIRM indicates that the client MUST execute an
   OPEN_CONFIRM operation before using the open file.
   OPEN4_RESULT_LOCKTYPE_POSIX indicates that the server's file
   locking behavior supports the complete set of POSIX locking
   techniques.  From this, the client can choose how to manage its
   file locking state so as to handle any mismatch in file locking
   management.

   If the component is of zero length, NFS4ERR_INVAL will be returned.
   The component is also subject to the normal UTF-8, character
   support, and name checks.  See the section "UTF-8 Related Errors"
   for further discussion.

   When an OPEN is done and the specified lockowner already has the
   resulting filehandle open, the result is to "OR" together the new
   share and deny status with the existing status.  In this case, only
   a single CLOSE need be done, even though multiple OPENs were
   completed.  When such an OPEN is done, checking of share
   reservations for the new OPEN proceeds normally, with no exception
   for the existing OPEN held by the same lockowner.

   If the underlying filesystem at the server is only accessible in a
   read-only mode and the OPEN request has specified ACCESS_WRITE or
   ACCESS_BOTH, the server will return NFS4ERR_ROFS to indicate a
   read-only filesystem.

   As with the CREATE operation, the server MUST derive the owner,
   owner ACE, group, or group ACE if any of the four attributes are
   required and supported by the server's filesystem.  For an OPEN
   with the EXCLUSIVE4 createmode, the server has no choice, since
   such OPEN calls do not include the createattrs field.  Conversely,
   if createattrs is specified, and includes owner or group (or
   corresponding ACEs) that the principal in the RPC call's
   credentials does not have authorization to create files for, then
   the server may return NFS4ERR_PERM.
NFSv4.1 gives clients more precise control over acquisition of delegations via the following new flags for the share_access field of OPEN4args:

   OPEN4_SHARE_ACCESS_WANT_READ_DELEG
   OPEN4_SHARE_ACCESS_WANT_WRITE_DELEG
   OPEN4_SHARE_ACCESS_WANT_ANY_DELEG
   OPEN4_SHARE_ACCESS_WANT_NO_DELEG
   OPEN4_SHARE_ACCESS_WANT_CANCEL
   OPEN4_SHARE_ACCESS_WANT_SIGNAL_DELEG_WHEN_RESRC_AVAIL
   OPEN4_SHARE_ACCESS_WANT_PUSH_DELEG_WHEN_UNCONTENDED

Only one of OPEN4_SHARE_ACCESS_WANT_READ_DELEG, OPEN4_SHARE_ACCESS_WANT_WRITE_DELEG, OPEN4_SHARE_ACCESS_WANT_ANY_DELEG, OPEN4_SHARE_ACCESS_WANT_NO_DELEG, and OPEN4_SHARE_ACCESS_WANT_CANCEL can be specified; the two ..._WHEN_ flags are discussed below.  If none of these flags is specified, then the client is indicating no desire for a delegation, and the server MAY return no delegation in the OPEN response.

If the server supports the new flags and the client issues one or more of them, then in the event the server does not return a delegation, it MUST return a delegation type of OPEN_DELEGATE_NONE_EXT.  The od_whynone field indicates why no delegation was returned and will be one of:

   WND_NOT_WANTED
      The client specified OPEN4_SHARE_ACCESS_WANT_NO_DELEG.

   WND_CONTENTION
      There is a conflicting delegation or open on the file.

   WND_RESOURCE
      Resource limitations prevent the server from granting a
      delegation.

   WND_NOT_SUPP_FTYPE
      The server does not support delegations on this file type.

   WND_WRITE_DELEG_NOT_SUPP_FTYPE
      The server does not support write delegations on this file type.

   WND_CANCELED
      The client specified OPEN4_SHARE_ACCESS_WANT_CANCEL and any
      "want" for this file object is now cancelled.

   WND_IS_DIR
      The specified file object is a directory, and the operation is
      OPEN or WANT_DELEGATION, neither of which supports delegations
      on directories.

OPEN4_SHARE_ACCESS_WANT_READ_DELEG, OPEN4_SHARE_ACCESS_WANT_WRITE_DELEG, and OPEN4_SHARE_ACCESS_WANT_ANY_DELEG mean, respectively, that the client wants a read, write, or any delegation, regardless of which of OPEN4_SHARE_ACCESS_READ, OPEN4_SHARE_ACCESS_WRITE, or OPEN4_SHARE_ACCESS_BOTH is set.  If the client has a read delegation on a file and requests a write delegation, then the client is requesting an atomic upgrade of its read delegation to a write delegation.  If the client has a write delegation on a file and requests a read delegation, then the client is requesting an atomic downgrade to a read delegation.  A server MAY support atomic upgrade or downgrade.

OPEN4_SHARE_ACCESS_WANT_NO_DELEG means the client wants no delegation.  OPEN4_SHARE_ACCESS_WANT_CANCEL means the client wants no delegation and wants to cancel any previously registered "want" for a delegation.

The client may set one or both of OPEN4_SHARE_ACCESS_WANT_SIGNAL_DELEG_WHEN_RESRC_AVAIL and OPEN4_SHARE_ACCESS_WANT_PUSH_DELEG_WHEN_UNCONTENDED.  If the client specifies OPEN4_SHARE_ACCESS_WANT_SIGNAL_DELEG_WHEN_RESRC_AVAIL, then it wishes to register a "want" for a delegation in the event the OPEN results do not include one.  If so and the server denies the delegation due to insufficient resources, the server MAY later inform the client, via the CB_RECALLABLE_OBJ_AVAIL operation, that the resource limitation condition has eased.
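To make the OPEN_DELEGATE_NONE_EXT result concrete, here is a small sketch of the decision a client might take on od_whynone.  The enum is a C rendering of the values listed above; keep_want_registered is a hypothetical helper, not part of the protocol.

   /* Illustrative client-side handling of OPEN_DELEGATE_NONE_EXT;
    * the enum is a C rendering of the values listed above. */
   enum why_no_delegation {
           WND_NOT_WANTED, WND_CONTENTION, WND_RESOURCE,
           WND_NOT_SUPP_FTYPE, WND_WRITE_DELEG_NOT_SUPP_FTYPE,
           WND_CANCELED, WND_IS_DIR
   };

   /* Decide whether a delegation "want" remains registered after the
    * server returned no delegation. */
   static int keep_want_registered(enum why_no_delegation why,
                                   int server_will_signal_or_push)
   {
           switch (why) {
           case WND_CONTENTION:
           case WND_RESOURCE:
                   /* The server may later push the delegation
                    * (CB_PUSH_DELEG) or signal availability
                    * (CB_RECALLABLE_OBJ_AVAIL). */
                   return server_will_signal_or_push;
           case WND_CANCELED:
           case WND_NOT_WANTED:
           default:
                   /* Nothing registered, or delegations unavailable
                    * for this object: nothing to keep. */
                   return 0;
           }
   }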
The server will tell the client that it intends to send a future CB_RECALLABLE_OBJ_AVAIL operation by setting delegation_type in the results to OPEN_DELEGATE_NONE_EXT, od_whynone to WND_RESOURCE, and ond_server_will_signal_avail to TRUE.  If ond_server_will_signal_avail is set TRUE, the server MUST later send a CB_RECALLABLE_OBJ_AVAIL operation.

If the client specifies OPEN4_SHARE_ACCESS_WANT_PUSH_DELEG_WHEN_UNCONTENDED, then it wishes to register a "want" for a delegation in the event the OPEN results do not include one.  If so and the server denies the delegation due to contention, the server MAY later inform the client, via the CB_PUSH_DELEG operation, that the contention condition has eased.  The server will tell the client that it intends to send a future CB_PUSH_DELEG operation by setting delegation_type in the results to OPEN_DELEGATE_NONE_EXT, od_whynone to WND_CONTENTION, and ond_server_will_push_deleg to TRUE.  If ond_server_will_push_deleg is set TRUE, the server MUST later send a CB_PUSH_DELEG operation.

If the client has previously registered a want for a delegation on a file, and then sends a request to register a want for a delegation on the same file, the server MUST return a new error: NFS4ERR_DELEG_ALREADY_WANTED.  If the client wishes to register a different type of delegation want for the same file, it MUST first cancel the existing delegation want.

22.16.5.  IMPLEMENTATION

The OPEN operation contains support for EXCLUSIVE create.  The mechanism is similar to the support in NFS version 3 [RFC1813].  As in NFS version 3, this mechanism provides reliable exclusive creation.  Exclusive create is invoked when the how parameter is EXCLUSIVE4.  In this case, the client provides a verifier that can reasonably be expected to be unique.  A combination of a client identifier, perhaps the client network address, and a unique number generated by the client, perhaps the RPC transaction identifier, may be appropriate.

If the object does not exist, the server creates the object and stores the verifier in stable storage.  For filesystems that do not provide a mechanism for the storage of arbitrary file attributes, the server may use one or more elements of the object meta-data to store the verifier.  The verifier must be stored in stable storage to prevent erroneous failure on retransmission of the request.  It is assumed that an exclusive create is being performed because exclusive semantics are critical to the application.  Because of the expected usage, exclusive CREATE does not rely solely on the normally volatile duplicate request cache for storage of the verifier.  The duplicate request cache in volatile storage does not survive a crash and may actually flush on a long network partition, opening failure windows.  In the UNIX local filesystem environment, the expected storage location for the verifier on creation is the meta-data (time stamps) of the object.  For this reason, an exclusive object create may not include initial attributes because the server would have nowhere to store the verifier.

If the server cannot support these exclusive create semantics, possibly because of the requirement to commit the verifier to stable storage, it should fail the OPEN request with the error NFS4ERR_NOTSUPP.
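Following the suggestion above, a client might construct the verifier from its network address and the RPC transaction identifier.  The sketch below assumes an IPv4 address and packs the two 32-bit values into the 8-byte verifier; NFS4_VERIFIER_SIZE matches the protocol's verifier size, while the packing scheme itself is just one reasonable choice.

   /* A minimal sketch of building the exclusive-create verifier from
    * a client network address plus a unique number (here the RPC
    * transaction id), as the text suggests. */
   #include <stdint.h>
   #include <string.h>

   #define NFS4_VERIFIER_SIZE 8

   void make_create_verifier(uint8_t verf[NFS4_VERIFIER_SIZE],
                             uint32_t client_ipv4, uint32_t rpc_xid)
   {
           /* Address and transaction id together are expected to be
            * unique across retransmissions and client restarts. */
           memcpy(verf, &client_ipv4, sizeof client_ipv4);
           memcpy(verf + 4, &rpc_xid, sizeof rpc_xid);
   }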
During an exclusive CREATE request, if the object already exists, the server reconstructs the object's verifier and compares it with the verifier in the request.  If they match, the server treats the request as a success.  The request is presumed to be a duplicate of an earlier, successful request for which the reply was lost and that the server duplicate request cache mechanism did not detect.  If the verifiers do not match, the request is rejected with the status NFS4ERR_EXIST.

Once the client has performed a successful exclusive create, it must issue a SETATTR to set the correct object attributes.  Until it does so, it should not rely upon any of the object attributes, since the server implementation may need to overload object meta-data to store the verifier.  The subsequent SETATTR must not occur in the same COMPOUND request as the OPEN.  This separation will guarantee that the exclusive create mechanism will continue to function properly in the face of retransmission of the request.

Use of the GUARDED attribute does not provide exactly-once semantics.  In particular, if a reply is lost and the server does not detect the retransmission of the request, the operation can fail with NFS4ERR_EXIST, even though the create was performed successfully.  The client would use this behavior in the case that the application has not requested an exclusive create but has asked to have the file truncated when the file is opened.  In the case of the client timing out and retransmitting the create request, the client can use GUARDED to prevent a sequence such as create, write, create (retransmitted) from occurring.

For SHARE reservations, the client must specify a value for share_access that is one of READ, WRITE, or BOTH.  For share_deny, the client must specify one of NONE, READ, WRITE, or BOTH.  If the client fails to do this, the server must return NFS4ERR_INVAL.

Based on the share_access value (READ, WRITE, or BOTH), the server should check that the requester has the proper access rights to perform the specified operation.  This would generally be the result of applying the ACL access rules to the file for the current requester.  However, just as with the ACCESS operation, the client should not attempt to second-guess the server's decisions, as access rights may change and may be subject to server administrative controls outside the ACL framework.  If the requester is not authorized to READ or WRITE (depending on the share_access value), the server must return NFS4ERR_ACCESS.  Note that since the NFS version 4 protocol does not impose any requirement that READs and WRITEs issued for an open file have the same credentials as the OPEN itself, the server still must do appropriate access checking on the READs and WRITEs themselves.

If the component provided to OPEN is a symbolic link, the error NFS4ERR_SYMLINK will be returned to the client.  If the current filehandle is not a directory, the error NFS4ERR_NOTDIR will be returned.

WARNING TO CLIENT IMPLEMENTORS: OPEN resembles LOOKUP in that it generates a filehandle for the client to use.  Unlike LOOKUP though, OPEN creates server state on the filehandle.  In normal circumstances, the client can only release this state with a CLOSE operation.  CLOSE uses the current filehandle to determine which file to close.  Therefore the client MUST follow every OPEN operation with a GETFH operation in the same COMPOUND procedure.  This will supply the client with the filehandle such that CLOSE can be used appropriately.  Simply waiting for the lease on the file to expire is insufficient because the server may maintain the state indefinitely as long as another client does not attempt to make a conflicting access to the same file.
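The required ordering can be pictured as the op sequence of the COMPOUND itself.  The sketch below simply prints that sequence; the operation numbers follow the numbering used by this document's operation headings (GETFH is operation 10, OPEN is operation 18, PUTFH is operation 22), and the example is illustrative rather than a wire-format encoder.

   /* The op sequence a client should use for an ordinary open: set
    * the directory as current filehandle, OPEN by name, then GETFH
    * so the resulting filehandle is known for the later CLOSE. */
   #include <stdio.h>

   enum nfs_opnum { OP_GETFH = 10, OP_OPEN = 18, OP_PUTFH = 22 };

   int main(void)
   {
           enum nfs_opnum compound[] = { OP_PUTFH, OP_OPEN, OP_GETFH };
           unsigned i;

           for (i = 0; i < sizeof compound / sizeof compound[0]; i++)
                   printf("op %d\n", (int)compound[i]);
           return 0;
   }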
22.16.6.  ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_ADMIN_REVOKED
   NFS4ERR_ATTRNOTSUPP
   NFS4ERR_BADCHAR
   NFS4ERR_BADHANDLE
   NFS4ERR_BADNAME
   NFS4ERR_BADOWNER
   NFS4ERR_BAD_SEQID
   NFS4ERR_BADXDR
   NFS4ERR_DELAY
   NFS4ERR_DQUOT
   NFS4ERR_EXIST
   NFS4ERR_EXPIRED
   NFS4ERR_FHEXPIRED
   NFS4ERR_GRACE
   NFS4ERR_IO
   NFS4ERR_INVAL
   NFS4ERR_ISDIR
   NFS4ERR_LEASE_MOVED
   NFS4ERR_MOVED
   NFS4ERR_NAMETOOLONG
   NFS4ERR_NOENT
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NOSPC
   NFS4ERR_NOTDIR
   NFS4ERR_NO_GRACE
   NFS4ERR_PERM
   NFS4ERR_RECLAIM_BAD
   NFS4ERR_RECLAIM_CONFLICT
   NFS4ERR_RESOURCE
   NFS4ERR_ROFS
   NFS4ERR_SERVERFAULT
   NFS4ERR_SHARE_DENIED
   NFS4ERR_STALE
   NFS4ERR_STALE_CLIENTID
   NFS4ERR_SYMLINK
   NFS4ERR_WRONGSEC

22.17.  Operation 19: OPENATTR - Open Named Attribute Directory

22.17.1.  SYNOPSIS

   (cfh) createdir -> (cfh)

22.17.2.  ARGUMENTS

   struct OPENATTR4args {
           /* CURRENT_FH: object */
           bool    createdir;
   };

22.17.3.  RESULTS

   struct OPENATTR4res {
           /* CURRENT_FH: named attr directory */
           nfsstat4        status;
   };

22.17.4.  DESCRIPTION

The OPENATTR operation is used to obtain the filehandle of the named attribute directory associated with the current filehandle.  The result of the OPENATTR will be a filehandle to an object of type NF4ATTRDIR.  From this filehandle, READDIR and LOOKUP operations can be used to obtain filehandles for the various named attributes associated with the original filesystem object.  Filehandles returned within the named attribute directory will have a type of NF4NAMEDATTR.

The createdir argument allows the client to signify if a named attribute directory should be created as a result of the OPENATTR operation.  Some clients may use the OPENATTR operation with a value of FALSE for createdir to determine if any named attributes exist for the object.  If none exist, then NFS4ERR_NOENT will be returned.  If createdir has a value of TRUE and no named attribute directory exists, one is created.  Its creation assumes that the server has implemented named attribute support in this fashion; a server is not required by this definition to do so.

22.17.5.  IMPLEMENTATION

If the server does not support named attributes for the current filehandle, an error of NFS4ERR_NOTSUPP will be returned to the client.

22.17.6.  ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_BADHANDLE
   NFS4ERR_BADXDR
   NFS4ERR_DELAY
   NFS4ERR_DQUOT
   NFS4ERR_FHEXPIRED
   NFS4ERR_IO
   NFS4ERR_MOVED
   NFS4ERR_NOENT
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NOSPC
   NFS4ERR_NOTSUPP
   NFS4ERR_RESOURCE
   NFS4ERR_ROFS
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE

22.18.  Operation 20: OPEN_CONFIRM - Confirm Open

22.18.1.  SYNOPSIS

   (cfh), seqid, stateid -> stateid

22.18.2.  ARGUMENTS

   struct OPEN_CONFIRM4args {
           /* CURRENT_FH: opened file */
           stateid4        open_stateid;
           seqid4          seqid;
   };

22.18.3.  RESULTS

   struct OPEN_CONFIRM4resok {
           stateid4        open_stateid;
   };

   union OPEN_CONFIRM4res switch (nfsstat4 status) {
    case NFS4_OK:
           OPEN_CONFIRM4resok      resok4;
    default:
           void;
   };

22.18.4.  DESCRIPTION

This operation is used to confirm the sequence id usage for the first time that an open_owner is used by a client.
The stateid returned from the OPEN operation is used as the argument for this operation along with the next sequence id for the open_owner.  The sequence id passed to the OPEN_CONFIRM must be 1 (one) greater than the seqid passed to the OPEN operation from which the open_confirm value was obtained.  If the server receives an unexpected sequence id with respect to the original open, then the server assumes that the client will not confirm the original OPEN and all state associated with the original OPEN is released by the server.

On success, the current filehandle retains its value.

22.18.5.  IMPLEMENTATION

A given client might generate many open_owner4 data structures for a given clientid.  The client will periodically either dispose of its open_owner4s or stop using them for indefinite periods of time.  The latter situation is why the NFS version 4 protocol does not have an explicit operation to exit an open_owner4: such an operation is of no use in that situation.  Instead, to avoid unbounded memory use, the server needs to implement a strategy for disposing of open_owner4s that have no current lock, open, or delegation state for any files and have not been used recently.  The time period used to determine when to dispose of open_owner4s is an implementation choice.  The time period should certainly be no less than the lease time plus any grace period the server wishes to implement beyond a lease time.  The OPEN_CONFIRM operation allows the server to safely dispose of unused open_owner4 data structures.

In the case that a client issues an OPEN operation and the server no longer has a record of the open_owner4, the server needs to ensure that this is a new OPEN and not a replay or retransmission.  Servers must not require confirmation on OPENs that grant delegations or are doing reclaim operations.  See the section "Use of Open Confirmation" for details.  The server can easily avoid this by noting whether it has disposed of one open_owner4 for the given clientid.  If the server does not support delegation, it might simply maintain a single bit that notes whether any open_owner4 (for any client) has been disposed of.

The server must hold unconfirmed OPEN state until one of three events occurs.  First, the client sends an OPEN_CONFIRM request with the appropriate sequence id and stateid within the lease period.  In this case, the OPEN state on the server goes to confirmed, and the open_owner4 on the server is fully established.

Second, the client sends another OPEN request with a sequence id that is incorrect for the open_owner4 (out of sequence).  In this case, the server assumes the second OPEN request is valid and the first one is a replay.  The server cancels the OPEN state of the first OPEN request, establishes an unconfirmed OPEN state for the second OPEN request, and responds to the second OPEN request with an indication that an OPEN_CONFIRM is needed.  The process then repeats itself.  While there is a potential for a denial of service attack on the client, it is mitigated if the client and server require the use of a security flavor based on Kerberos V5, LIPKEY, or some other flavor that uses cryptography.

What if the server is in the unconfirmed OPEN state for a given open_owner4, and it receives an operation on the open_owner4 that has a stateid but the operation is not OPEN, or it is OPEN_CONFIRM but with the wrong stateid?  Then, even if the seqid is correct, the server returns NFS4ERR_BAD_STATEID, because the server assumes the operation is a replay: if the server has no established OPEN state, then there is no way, for example, a LOCK operation could be valid.

Third, neither of the two aforementioned events occurs for the open_owner4 within the lease period.  In this case, the OPEN state is canceled and disposal of the open_owner4 can occur.
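The seqid relationship above is simple enough to state in code.  This is a minimal sketch, with seqid4 rendered as a C type; the unsigned wraparound matches the modulo arithmetic of a 32-bit sequence id.

   /* seqid handling for OPEN_CONFIRM: the confirm seqid is exactly
    * one greater than the seqid of the OPEN being confirmed. */
   #include <stdint.h>

   typedef uint32_t seqid4;

   /* Value the client places in OPEN_CONFIRM4args.seqid. */
   seqid4 open_confirm_seqid(seqid4 open_seqid)
   {
           return open_seqid + 1;   /* wraps modulo 2^32 */
   }

   /* Server-side check: an unexpected seqid means the client will
    * not confirm the OPEN, and the associated state is released. */
   int seqid_expected(seqid4 received, seqid4 open_seqid)
   {
           return received == (seqid4)(open_seqid + 1);
   }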
22.18.6.  ERRORS

   NFS4ERR_ADMIN_REVOKED
   NFS4ERR_BADHANDLE
   NFS4ERR_BAD_SEQID
   NFS4ERR_BAD_STATEID
   NFS4ERR_BADXDR
   NFS4ERR_EXPIRED
   NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL
   NFS4ERR_ISDIR
   NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_OLD_STATEID
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE
   NFS4ERR_STALE_STATEID

22.19.  Operation 21: OPEN_DOWNGRADE - Reduce Open File Access

22.19.1.  SYNOPSIS

   (cfh), stateid, seqid, access, deny -> stateid

22.19.2.  ARGUMENTS

   struct OPEN_DOWNGRADE4args {
           /* CURRENT_FH: opened file */
           stateid4        stateid;
           seqid4          seqid;
           uint32_t        share_access;
           uint32_t        share_deny;
   };

22.19.3.  RESULTS

   struct OPEN_DOWNGRADE4resok {
           stateid4        stateid;
   };

   union OPEN_DOWNGRADE4res switch (nfsstat4 status) {
    case NFS4_OK:
           OPEN_DOWNGRADE4resok    resok4;
    default:
           void;
   };

22.19.4.  DESCRIPTION

This operation is used to adjust the share_access and share_deny bits for a given open.  This is necessary when a given lockowner opens the same file multiple times with different share_access and share_deny flags.  In this situation, a close of one of the opens may change the appropriate share_access and share_deny flags to remove bits associated with opens no longer in effect.

The share_access and share_deny bits specified in this operation replace the current ones for the specified open file.  The share_access and share_deny bits specified must be exactly equal to the union of the share_access and share_deny bits specified for some subset of the OPENs in effect for the current openowner on the current file.  If that constraint is not respected, the error NFS4ERR_INVAL should be returned.  Since share_access and share_deny bits are subsets of those already granted, it is not possible for this request to be denied because of conflicting share reservations.

On success, the current filehandle retains its value.

22.19.5.  ERRORS

   NFS4ERR_ADMIN_REVOKED
   NFS4ERR_BADHANDLE
   NFS4ERR_BAD_SEQID
   NFS4ERR_BAD_STATEID
   NFS4ERR_BADXDR
   NFS4ERR_EXPIRED
   NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL
   NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_OLD_STATEID
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE
   NFS4ERR_STALE_STATEID

22.20.  Operation 22: PUTFH - Set Current Filehandle

22.20.1.  SYNOPSIS

   filehandle -> (cfh)

22.20.2.  ARGUMENTS

   struct PUTFH4args {
           nfs_fh4         object;
   };

22.20.3.  RESULTS

   struct PUTFH4res {
           /* CURRENT_FH: */
           nfsstat4        status;
   };

22.20.4.  DESCRIPTION

Replaces the current filehandle with the filehandle provided as an argument.  If the security mechanism used by the requester does not meet the requirements of the filehandle provided to this operation, the server MUST return NFS4ERR_WRONGSEC.

22.20.5.  IMPLEMENTATION

Commonly used as the first operator in an NFS request to set the context for following operations.

22.20.6.  ERRORS

   NFS4ERR_BADHANDLE
   NFS4ERR_BADXDR
   NFS4ERR_FHEXPIRED
   NFS4ERR_MOVED
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE
   NFS4ERR_WRONGSEC

22.21.  Operation 23: PUTPUBFH - Set Public Filehandle

22.21.1.  SYNOPSIS

   - -> (cfh)
22.21.2.  ARGUMENT

   void;

22.21.3.  RESULT

   struct PUTPUBFH4res {
           /* CURRENT_FH: public fh */
           nfsstat4        status;
   };

22.21.4.  DESCRIPTION

Replaces the current filehandle with the filehandle that represents the public filehandle of the server's name space.  This filehandle may be different from the "root" filehandle which may be associated with some other directory on the server.

The public filehandle represents the concepts embodied in [RFC2054], [RFC2055], and [RFC2224].  The intent for NFS version 4 is that the public filehandle (represented by the PUTPUBFH operation) be used as a method of providing WebNFS server compatibility with NFS versions 2 and 3.

The public filehandle and the root filehandle (represented by the PUTROOTFH operation) should be equivalent.  If the public and root filehandles are not equivalent, then the public filehandle MUST be a descendant of the root filehandle.

22.21.5.  IMPLEMENTATION

Used as the first operator in an NFS request to set the context for following operations.

With the NFS version 2 and 3 public filehandle, the client is able to specify whether the path name provided in the LOOKUP should be evaluated as either an absolute path relative to the server's root or relative to the public filehandle.  [RFC2224] contains further discussion of the functionality.  With NFS version 4, that type of specification is not directly available in the LOOKUP operation.  The reason is that the component separators needed to specify absolute vs. relative are not allowed in NFS version 4.  Therefore, the client is responsible for constructing its request such that either PUTROOTFH or PUTPUBFH is used to signify absolute or relative evaluation of an NFS URL, respectively.

Note that there are warnings mentioned in [RFC2224] with respect to the use of absolute evaluation and the restrictions the server may place on that evaluation with respect to how much of its namespace has been made available.  These same warnings apply to NFS version 4.  It is likely, therefore, that because of server implementation details, an NFS version 3 absolute public filehandle lookup may behave differently from an NFS version 4 absolute resolution.

There is a form of security negotiation as described in [RFC2755] that uses the public filehandle as a method of employing SNEGO.  This method is not available with NFS version 4 as filehandles are not overloaded with special meaning and therefore do not provide the same framework as NFS versions 2 and 3.  Clients should therefore use the security negotiation mechanisms described in this document.

22.21.6.  ERRORS

   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_WRONGSEC

22.22.  Operation 24: PUTROOTFH - Set Root Filehandle

22.22.1.  SYNOPSIS

   - -> (cfh)

22.22.2.  ARGUMENTS

   void;

22.22.3.  RESULTS

   struct PUTROOTFH4res {
           /* CURRENT_FH: root fh */
           nfsstat4        status;
   };

22.22.4.  DESCRIPTION

Replaces the current filehandle with the filehandle that represents the root of the server's name space.  From this filehandle a LOOKUP operation can locate any other filehandle on the server.  This filehandle may be different from the "public" filehandle which may be associated with some other directory on the server.

22.22.5.  IMPLEMENTATION

Commonly used as the first operator in an NFS request to set the context for following operations.
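The PUTROOTFH/PUTPUBFH choice described in the PUTPUBFH discussion above can be sketched as planning the ops of a COMPOUND: an absolute NFS URL path anchors at the root filehandle, a relative one at the public filehandle, and each path component becomes a LOOKUP.  The operation numbers for PUTPUBFH (23) and PUTROOTFH (24) match the headings here; LOOKUP as operation 15 follows the standard NFSv4 numbering, and the example is purely illustrative.

   /* Plan the op sequence for evaluating one NFS URL path. */
   #include <stdio.h>

   enum { OP_LOOKUP = 15, OP_PUTPUBFH = 23, OP_PUTROOTFH = 24 };

   static void plan_lookup(const char *path)
   {
           const char *p;

           /* Absolute paths anchor at the root fh, relative at the
            * public fh. */
           printf("%d", path[0] == '/' ? OP_PUTROOTFH : OP_PUTPUBFH);

           /* One LOOKUP per path component. */
           for (p = path; *p; p++)
                   if (*p != '/' && (p == path || p[-1] == '/'))
                           printf(" %d", OP_LOOKUP);
           printf("\n");
   }

   int main(void)
   {
           plan_lookup("/export/home");  /* PUTROOTFH LOOKUP LOOKUP */
           plan_lookup("doc/readme");    /* PUTPUBFH  LOOKUP LOOKUP */
           return 0;
   }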
22.22.6.  ERRORS

   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_WRONGSEC

22.23.  Operation 25: READ - Read from File

22.23.1.  SYNOPSIS

   (cfh), stateid, offset, count -> eof, data

22.23.2.  ARGUMENTS

   struct READ4args {
           /* CURRENT_FH: file */
           stateid4        stateid;
           offset4         offset;
           count4          count;
   };

22.23.3.  RESULTS

   struct READ4resok {
           bool            eof;
           opaque          data<>;
   };

   union READ4res switch (nfsstat4 status) {
    case NFS4_OK:
           READ4resok      resok4;
    default:
           void;
   };

22.23.4.  DESCRIPTION

The READ operation reads data from the regular file identified by the current filehandle.  The client provides an offset of where the READ is to start and a count of how many bytes are to be read.  An offset of 0 (zero) means to read data starting at the beginning of the file.  If offset is greater than or equal to the size of the file, the status NFS4_OK is returned with a data length set to 0 (zero) and eof set to TRUE.  The READ is subject to access permissions checking.

If the client specifies a count value of 0 (zero), the READ succeeds and returns 0 (zero) bytes of data, again subject to access permissions checking.  The server may choose to return fewer bytes than specified by the client.  The client needs to check for this condition and handle the condition appropriately.

The stateid value for a READ request represents a value returned from a previous record lock or share reservation request.  The stateid is used by the server to verify that the associated share reservation and any record locks are still valid and to update lease timeouts for the client.

If the read ended at the end-of-file (formally, in a correctly formed READ request, if offset + count is equal to the size of the file), or the read request extends beyond the size of the file (if offset + count is greater than the size of the file), eof is returned as TRUE; otherwise it is FALSE.  A successful READ of an empty file will always return eof as TRUE.

If the current filehandle is not a regular file, an error will be returned to the client.  In the case the current filehandle represents a directory, NFS4ERR_ISDIR is returned; otherwise, NFS4ERR_INVAL is returned.

For a READ with a stateid value of all bits 0, the server MAY allow the READ to be serviced subject to mandatory file locks or the current share deny modes for the file.  For a READ with a stateid value of all bits 1, the server MAY allow READ operations to bypass locking checks at the server.

On success, the current filehandle retains its value.

22.23.5.  IMPLEMENTATION

It is possible for the server to return fewer than count bytes of data.  If the server returns less than the count requested and eof is set to FALSE, the client should issue another READ to get the remaining data.  A server may return less data than requested under several circumstances.  The file may have been truncated by another client or perhaps on the server itself, changing the file size from what the requesting client believes to be the case.  This would reduce the actual amount of data available to the client.  It is possible that the server may back off the transfer size and reduce the read request return.  Server resource exhaustion may also occur, necessitating a smaller read return.
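The short-read retry described above is the same pattern POSIX read(2) requires, so a minimal sketch in those terms conveys the client's obligation: advance the offset by the bytes actually returned and stop only when the requested count is satisfied or end-of-file is reached.  read_full is an illustrative helper, not part of any NFS client API.

   /* Retry short reads until `count` bytes are read or eof; returns
    * bytes actually read, or -1 on error. */
   #include <sys/types.h>
   #include <unistd.h>

   ssize_t read_full(int fd, void *buf, size_t count)
   {
           size_t done = 0;

           while (done < count) {
                   ssize_t n = read(fd, (char *)buf + done, count - done);
                   if (n < 0)
                           return -1;    /* error */
                   if (n == 0)
                           break;        /* analogous to eof == TRUE */
                   done += (size_t)n;    /* short read: keep going */
           }
           return (ssize_t)done;
   }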
If mandatory file locking is on for the file, and if the region corresponding to the data to be read from the file is write locked by an owner not associated with the stateid, the server will return the NFS4ERR_LOCKED error.  The client should try to get the appropriate read record lock via the LOCK operation before re-attempting the READ.  When the READ completes, the client should release the record lock via LOCKU.

22.23.6.  ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_ADMIN_REVOKED
   NFS4ERR_BADHANDLE
   NFS4ERR_BAD_STATEID
   NFS4ERR_BADXDR
   NFS4ERR_DELAY
   NFS4ERR_EXPIRED
   NFS4ERR_FHEXPIRED
   NFS4ERR_GRACE
   NFS4ERR_IO
   NFS4ERR_INVAL
   NFS4ERR_ISDIR
   NFS4ERR_LEASE_MOVED
   NFS4ERR_LOCKED
   NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NXIO
   NFS4ERR_OLD_STATEID
   NFS4ERR_OPENMODE
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE
   NFS4ERR_STALE_STATEID

22.24.  Operation 26: READDIR - Read Directory

22.24.1.  SYNOPSIS

   (cfh), cookie, cookieverf, dircount, maxcount, attr_request ->
   cookieverf { cookie, name, attrs }

22.24.2.  ARGUMENTS

   struct READDIR4args {
           /* CURRENT_FH: directory */
           nfs_cookie4     cookie;
           verifier4       cookieverf;
           count4          dircount;
           count4          maxcount;
           bitmap4         attr_request;
   };

22.24.3.  RESULTS

   struct entry4 {
           nfs_cookie4     cookie;
           component4      name;
           fattr4          attrs;
           entry4          *nextentry;
   };

   struct dirlist4 {
           entry4          *entries;
           bool            eof;
   };

   struct READDIR4resok {
           verifier4       cookieverf;
           dirlist4        reply;
   };

   union READDIR4res switch (nfsstat4 status) {
    case NFS4_OK:
           READDIR4resok   resok4;
    default:
           void;
   };

22.24.4.  DESCRIPTION

The READDIR operation retrieves a variable number of entries from a filesystem directory and returns client-requested attributes for each entry along with information to allow the client to request additional directory entries in a subsequent READDIR.

The arguments contain a cookie value that represents where the READDIR should start within the directory.  A value of 0 (zero) for the cookie is used to start reading at the beginning of the directory.  For subsequent READDIR requests, the client specifies a cookie value that is provided by the server on a previous READDIR request.

The cookieverf value should be set to 0 (zero) when the cookie value is 0 (zero) (first directory read).  On subsequent requests, it should be a cookieverf as returned by the server.  The cookieverf must match that returned by the READDIR in which the cookie was acquired.  If the server determines that the cookieverf is no longer valid for the directory, the error NFS4ERR_NOT_SAME must be returned.

The dircount portion of the argument is a hint of the maximum number of bytes of directory information that should be returned.  This value represents the length of the names of the directory entries and the cookie value for these entries.  This length represents the XDR encoding of the data (names and cookies) and not the length in the native format of the server.

The maxcount value of the argument is the maximum number of bytes for the result.  This maximum size represents all of the data being returned within the READDIR4resok structure and includes the XDR overhead.  The server may return less data.  If the server is unable to return a single directory entry within the maxcount limit, the error NFS4ERR_TOOSMALL will be returned to the client.
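The cookie-based continuation above can be modeled in a few lines.  In this self-contained sketch the "server" side is simulated by a local array and the readdir_page function (both illustrative); the loop structure -- start at cookie 0, resume at the returned cookie, stop on eof -- is the part that mirrors READDIR.

   /* Runnable model of cookie-based directory paging. */
   #include <stdio.h>

   static const char *entries[] = { "a", "b", "c", "d", "e" };
   enum { NENTRIES = 5 };

   /* Fill out[] with up to `max` names starting at *cookie; advance
    * *cookie and set *eof when the directory is exhausted. */
   static int readdir_page(unsigned long *cookie, const char **out,
                           int max, int *eof)
   {
           int n = 0;

           while (n < max && *cookie < NENTRIES)
                   out[n++] = entries[(*cookie)++];
           *eof = (*cookie >= NENTRIES);
           return n;
   }

   int main(void)
   {
           unsigned long cookie = 0;   /* 0 == start of directory */
           int eof = 0;

           while (!eof) {
                   const char *page[2];
                   int i, n = readdir_page(&cookie, page, 2, &eof);

                   for (i = 0; i < n; i++)
                           printf("%s\n", page[i]);
           }
           return 0;
   }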
Finally, attr_request represents the list of attributes to be returned for each directory entry supplied by the server.

On successful return, the server's response will provide a list of directory entries.  Each of these entries contains the name of the directory entry, a cookie value for that entry, and the associated attributes as requested.  The "eof" flag has a value of TRUE if there are no more entries in the directory.

The cookie value is only meaningful to the server and is used as a "bookmark" for the directory entry.  As mentioned, this cookie is used by the client for subsequent READDIR operations so that it may continue reading a directory.  The cookie is similar in concept to a READ offset but should not be interpreted as such by the client.  Ideally, the cookie value should not change if the directory is modified, since the client may be caching these values.

In some cases, the server may encounter an error while obtaining the attributes for a directory entry.  Instead of returning an error for the entire READDIR operation, the server can instead return the attribute 'fattr4_rdattr_error'.  With this, the server is able to communicate the failure to the client and not fail the entire operation in the instance of what might be a transient failure.  Obviously, the client must request the fattr4_rdattr_error attribute for this method to work properly.  If the client does not request the attribute, the server has no choice but to return failure for the entire READDIR operation.

For some filesystem environments, the directory entries "." and ".." have special meaning, and in other environments, they may not.  If the server supports these special entries within a directory, they should not be returned to the client as part of the READDIR response.  To enable some client environments, the cookie values of 0, 1, and 2 are to be considered reserved.  Note that the UNIX client will use these values when combining the server's response and local representations to enable a fully formed UNIX directory presentation to the application.  For READDIR arguments, cookie values of 1 and 2 should not be used, and for READDIR results, cookie values of 0, 1, and 2 should not be returned.

On success, the current filehandle retains its value.

22.24.5.  IMPLEMENTATION

The server's filesystem directory representations can differ greatly.  A client's programming interfaces may also be bound to the local operating environment in a way that does not translate well into the NFS protocol.  Therefore, the dircount and maxcount fields are provided to allow the client to provide guidelines to the server.  If the client is aggressive about attribute collection during a READDIR, the server has an idea of how to limit the encoded response.  The dircount field provides a hint on the number of entries based solely on the names of the directory entries.  Since it is a hint, the dircount value may be zero.  In this case, the server is free to ignore the dircount value and return directory information based on the specified maxcount value.

The cookieverf may be used by the server to help manage cookie values that may become stale.  It should be a rare occurrence that a server is unable to continue properly reading a directory with the provided cookie/cookieverf pair.
The server should make every effort to avoid this condition since the application at the client may not be able to properly handle this type of failure.

The use of the cookieverf will also protect the client from using READDIR cookie values that may be stale.  For example, if the file system has been migrated, the server may or may not be able to use the same cookie values to service READDIR as the previous server used.  With the client providing the cookieverf, the server is able to provide the appropriate response to the client.  This prevents the case where the server may accept a cookie value but the underlying directory has changed and the response is invalid in the context of the client's previous READDIR.

Since some servers will not be returning "." and ".." entries as has been done with previous versions of the NFS protocol, the client that requires these entries be present in READDIR responses must fabricate them.

22.24.6.  ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_BADHANDLE
   NFS4ERR_BAD_COOKIE
   NFS4ERR_BADXDR
   NFS4ERR_DELAY
   NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL
   NFS4ERR_IO
   NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NOTDIR
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE
   NFS4ERR_TOOSMALL

22.25.  Operation 27: READLINK - Read Symbolic Link

22.25.1.  SYNOPSIS

   (cfh) -> linktext

22.25.2.  ARGUMENTS

   /* CURRENT_FH: symlink */
   void;

22.25.3.  RESULTS

   struct READLINK4resok {
           linktext4       link;
   };

   union READLINK4res switch (nfsstat4 status) {
    case NFS4_OK:
           READLINK4resok  resok4;
    default:
           void;
   };

22.25.4.  DESCRIPTION

READLINK reads the data associated with a symbolic link.  The data is a UTF-8 string that is opaque to the server.  That is, whether created by an NFS client or created locally on the server, the data in a symbolic link is not interpreted when created, but is simply stored.

On success, the current filehandle retains its value.

22.25.5.  IMPLEMENTATION

A symbolic link is nominally a pointer to another file.  The data is not necessarily interpreted by the server, just stored in the file.  It is possible for a client implementation to store a path name that is not meaningful to the server operating system in a symbolic link.  A READLINK operation returns the data to the client for interpretation.  If different implementations want to share access to symbolic links, then they must agree on the interpretation of the data in the symbolic link.

The READLINK operation is only allowed on objects of type NF4LNK.  The server should return the error NFS4ERR_INVAL if the object is not of type NF4LNK.

22.25.6.  ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_BADHANDLE
   NFS4ERR_DELAY
   NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL
   NFS4ERR_IO
   NFS4ERR_ISDIR
   NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NOTSUPP
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE

22.26.  Operation 28: REMOVE - Remove Filesystem Object

22.26.1.  SYNOPSIS

   (cfh), filename -> change_info

22.26.2.  ARGUMENTS

   struct REMOVE4args {
           /* CURRENT_FH: directory */
           component4      target;
   };

22.26.3.  RESULTS

   struct REMOVE4resok {
           change_info4    cinfo;
   };

   union REMOVE4res switch (nfsstat4 status) {
    case NFS4_OK:
           REMOVE4resok    resok4;
    default:
           void;
   };

22.26.4.  DESCRIPTION

The REMOVE operation removes (deletes) a directory entry named by filename from the directory corresponding to the current filehandle.  If the entry in the directory was the last reference to the corresponding filesystem object, the object may be destroyed.
For the directory where the filename was removed, the server returns change_info4 information in cinfo.  With the atomic field of the change_info4 struct, the server will indicate if the before and after change attributes were obtained atomically with respect to the removal.

If the target has a length of 0 (zero), or if target does not obey the UTF-8 definition, the error NFS4ERR_INVAL will be returned.

On success, the current filehandle retains its value.

22.26.5.  IMPLEMENTATION

NFS versions 2 and 3 required a different operator, RMDIR, for directory removal and REMOVE for non-directory removal.  This allowed clients to skip checking the file type when being passed a non-directory delete system call (e.g., unlink() in POSIX) to remove a directory, as well as the converse (e.g., a rmdir() on a non-directory), because they knew the server would check the file type.  NFS version 4 REMOVE can be used to delete any directory entry independent of its file type.  The implementor of an NFS version 4 client's entry points from the unlink() and rmdir() system calls should first check the file type against the types the system call is allowed to remove before issuing a REMOVE.  Alternatively, the implementor can produce a COMPOUND call that includes a LOOKUP/VERIFY sequence to verify the file type before a REMOVE operation in the same COMPOUND call.

The concept of last reference is server specific.  However, if the numlinks field in the previous attributes of the object had the value 1, the client should not rely on referring to the object via a filehandle.  Likewise, the client should not rely on the resources (disk space, directory entry, and so on) formerly associated with the object becoming immediately available.  Thus, if a client needs to be able to continue to access a file after using REMOVE to remove it, the client should take steps to make sure that the file will still be accessible.  The usual mechanism used is to RENAME the file from its old name to a new hidden name (sketched after the list below).

If the server finds that the file is still open when the REMOVE arrives:

o  The server SHOULD NOT delete the file's directory entry if the file was opened with OPEN4_SHARE_DENY_WRITE or OPEN4_SHARE_DENY_BOTH.

o  If the file was not opened with OPEN4_SHARE_DENY_WRITE or OPEN4_SHARE_DENY_BOTH, the server SHOULD delete the file's directory entry.  However, until the last CLOSE of the file, the server MAY continue to allow access to the file via its filehandle.
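The "rename to a hidden name" step mentioned above is commonly implemented by generating a name unlikely to collide or be visible to users.  The sketch below only shows the name construction; the ".nfs"-style prefix is a widely used client convention rather than anything this protocol mandates, and the client would follow the construction with a RENAME to that name, issuing the REMOVE at the last CLOSE.

   /* Construct a hidden name for a still-open file about to be
    * removed; the ".nfs" prefix is conventional, not mandated. */
   #include <stdio.h>

   static void silly_name(char *out, size_t outsz, unsigned int id)
   {
           snprintf(out, outsz, ".nfs%08x", id);
   }

   int main(void)
   {
           char name[16];

           silly_name(name, sizeof name, 0x1234u);
           puts(name);     /* prints .nfs00001234 */
           return 0;
   }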
22.26.6.  ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_BADCHAR
   NFS4ERR_BADHANDLE
   NFS4ERR_BADNAME
   NFS4ERR_BADXDR
   NFS4ERR_DELAY
   NFS4ERR_FHEXPIRED
   NFS4ERR_FILE_OPEN
   NFS4ERR_INVAL
   NFS4ERR_IO
   NFS4ERR_MOVED
   NFS4ERR_NAMETOOLONG
   NFS4ERR_NOENT
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NOTDIR
   NFS4ERR_NOTEMPTY
   NFS4ERR_RESOURCE
   NFS4ERR_ROFS
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE

22.27.  Operation 29: RENAME - Rename Directory Entry

22.27.1.  SYNOPSIS

   (sfh), oldname, (cfh), newname -> source_change_info,
   target_change_info

22.27.2.  ARGUMENTS

   struct RENAME4args {
           /* SAVED_FH: source directory */
           component4      oldname;
           /* CURRENT_FH: target directory */
           component4      newname;
   };

22.27.3.  RESULTS

   struct RENAME4resok {
           change_info4    source_cinfo;
           change_info4    target_cinfo;
   };

   union RENAME4res switch (nfsstat4 status) {
    case NFS4_OK:
           RENAME4resok    resok4;
    default:
           void;
   };

22.27.4.  DESCRIPTION

The RENAME operation renames the object identified by oldname in the source directory corresponding to the saved filehandle, as set by the SAVEFH operation, to newname in the target directory corresponding to the current filehandle.  The operation is required to be atomic to the client.  Source and target directories must reside on the same filesystem on the server.  On success, the current filehandle will continue to be the target directory.

If the target directory already contains an entry with the name newname, the source object must be compatible with the target: either both are non-directories or both are directories, and the target must be empty.  If compatible, the existing target is removed before the rename occurs (see the IMPLEMENTATION subsection of the section "Operation 28: REMOVE - Remove Filesystem Object" for client and server actions whenever a target is removed).  If they are not compatible or if the target is a directory but not empty, the server will return the error NFS4ERR_EXIST.

If oldname and newname both refer to the same file (they might be hard links of each other), then RENAME should perform no action and return success.

For both directories involved in the RENAME, the server returns change_info4 information.  With the atomic field of the change_info4 struct, the server will indicate if the before and after change attributes were obtained atomically with respect to the rename.

If oldname refers to a named attribute and the saved and current filehandles refer to different filesystem objects, the server will return NFS4ERR_XDEV just as if the saved and current filehandles represented directories on different filesystems.

If oldname or newname has a length of 0 (zero), or if oldname or newname does not obey the UTF-8 definition, the error NFS4ERR_INVAL will be returned.

22.27.5.  IMPLEMENTATION

The RENAME operation must be atomic to the client.  The statement "source and target directories must reside on the same filesystem on the server" means that the fsid fields in the attributes for the directories are the same.  If they reside on different filesystems, the error NFS4ERR_XDEV is returned.

Based on the value of the fh_expire_type attribute for the object, the filehandle may or may not expire on a RENAME.  However, server implementors are strongly encouraged to attempt to keep filehandles from expiring in this fashion.

On some servers, the file names "." and ".." are illegal as either oldname or newname, and will result in the error NFS4ERR_BADNAME.  In addition, on many servers the case of oldname or newname being an alias for the source directory will be checked for.  Such servers will return the error NFS4ERR_INVAL in these cases.

If either of the source or target filehandles is not a directory, the server will return NFS4ERR_NOTDIR.

22.27.6.  ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_BADCHAR
   NFS4ERR_BADHANDLE
   NFS4ERR_BADNAME
   NFS4ERR_BADXDR
   NFS4ERR_DELAY
   NFS4ERR_DQUOT
   NFS4ERR_EXIST
   NFS4ERR_FHEXPIRED
   NFS4ERR_FILE_OPEN
   NFS4ERR_INVAL
   NFS4ERR_IO
   NFS4ERR_MOVED
   NFS4ERR_NAMETOOLONG
   NFS4ERR_NOENT
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NOSPC
   NFS4ERR_NOTDIR
   NFS4ERR_NOTEMPTY
   NFS4ERR_RESOURCE
   NFS4ERR_ROFS
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE
   NFS4ERR_WRONGSEC
   NFS4ERR_XDEV

22.28.  Operation 30: RENEW - Renew a Lease

22.28.1.  SYNOPSIS

   clientid -> ()
22.28.2.  ARGUMENTS

   struct RENEW4args {
           clientid4       clientid;
   };

22.28.3.  RESULTS

   struct RENEW4res {
           nfsstat4        status;
   };

22.28.4.  DESCRIPTION

The RENEW operation is used by the client to renew leases which it currently holds at a server.  In processing the RENEW request, the server renews all leases associated with the client.  The associated leases are determined by the clientid provided via the SETCLIENTID operation.

22.28.5.  IMPLEMENTATION

When the client holds delegations, it needs to use RENEW to detect when the server has determined that the callback path is down.  When the server has made such a determination, only the RENEW operation will renew the lease on delegations.  If the server determines the callback path is down, it returns NFS4ERR_CB_PATH_DOWN.  Even though it returns NFS4ERR_CB_PATH_DOWN, the server MUST renew the lease on the record locks and share reservations that the client has established on the server.  If for some reason the lock and share reservation lease cannot be renewed, then the server MUST return an error other than NFS4ERR_CB_PATH_DOWN, even if the callback path is also down.

The client that issues RENEW MUST choose the principal, RPC security flavor, and, if applicable, GSS-API mechanism and service via one of the following algorithms:

o  The client uses the same principal, RPC security flavor -- and if the flavor was RPCSEC_GSS -- the same mechanism and service that was used when the client id was established via SETCLIENTID_CONFIRM.

o  The client uses any principal, RPC security flavor, mechanism, and service combination that currently has an OPEN file on the server.  That is, the same principal had a successful OPEN operation, the file is still open by that principal, and the flavor, mechanism, and service of RENEW match that of the previous OPEN.

The server MUST reject a RENEW that does not use one of the aforementioned algorithms, with the error NFS4ERR_ACCESS.

22.28.6.  ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_ADMIN_REVOKED
   NFS4ERR_BADXDR
   NFS4ERR_CB_PATH_DOWN
   NFS4ERR_EXPIRED
   NFS4ERR_LEASE_MOVED
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE_CLIENTID

22.29.  Operation 31: RESTOREFH - Restore Saved Filehandle

22.29.1.  SYNOPSIS

   (sfh) -> (cfh)

22.29.2.  ARGUMENTS

   /* SAVED_FH: */
   void;

22.29.3.  RESULTS

   struct RESTOREFH4res {
           /* CURRENT_FH: value of saved fh */
           nfsstat4        status;
   };

22.29.4.  DESCRIPTION

Set the current filehandle to the value in the saved filehandle.  If there is no saved filehandle, then return the error NFS4ERR_RESTOREFH.

22.29.5.  IMPLEMENTATION

Operations like OPEN and LOOKUP use the current filehandle to represent a directory and replace it with a new filehandle.  Assuming the previous filehandle was saved with a SAVEFH operator, the previous filehandle can be restored as the current filehandle.  This is commonly used to obtain post-operation attributes for the directory, e.g.,

   PUTFH (directory filehandle)
   SAVEFH
   GETATTR attrbits      (pre-op dir attrs)
   CREATE optbits "foo" attrs
   GETATTR attrbits      (file attributes)
   RESTOREFH
   GETATTR attrbits      (post-op dir attrs)

22.29.6.  ERRORS

   NFS4ERR_BADHANDLE
   NFS4ERR_FHEXPIRED
   NFS4ERR_MOVED
   NFS4ERR_RESOURCE
   NFS4ERR_RESTOREFH
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE
   NFS4ERR_WRONGSEC

22.30.  Operation 32: SAVEFH - Save Current Filehandle

22.30.1.  SYNOPSIS

   (cfh) -> (sfh)

22.30.2.  ARGUMENTS

   /* CURRENT_FH: */
   void;
22.30.3.  RESULTS

   struct SAVEFH4res {
           /* SAVED_FH: value of current fh */
           nfsstat4        status;
   };

22.30.4.  DESCRIPTION

Save the current filehandle.  If a previous filehandle was saved, then it is no longer accessible.  The saved filehandle can be restored as the current filehandle with the RESTOREFH operator.

On success, the current filehandle retains its value.

22.30.5.  IMPLEMENTATION

22.30.6.  ERRORS

   NFS4ERR_BADHANDLE
   NFS4ERR_FHEXPIRED
   NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE

22.31.  Operation 33: SECINFO - Obtain Available Security

22.31.1.  SYNOPSIS

   (cfh), name -> { secinfo }

22.31.2.  ARGUMENTS

   struct SECINFO4args {
           /* CURRENT_FH: directory */
           component4      name;
   };

22.31.3.  RESULTS

   enum rpc_gss_svc_t {    /* From RFC 2203 */
           RPC_GSS_SVC_NONE        = 1,
           RPC_GSS_SVC_INTEGRITY   = 2,
           RPC_GSS_SVC_PRIVACY     = 3
   };

   struct rpcsec_gss_info {
           sec_oid4        oid;
           qop4            qop;
           rpc_gss_svc_t   service;
   };

   union secinfo4 switch (uint32_t flavor) {
    case RPCSEC_GSS:
           rpcsec_gss_info flavor_info;
    default:
           void;
   };

   typedef secinfo4 SECINFO4resok<>;

   union SECINFO4res switch (nfsstat4 status) {
    case NFS4_OK:
           SECINFO4resok   resok4;
    default:
           void;
   };

22.31.4.  DESCRIPTION

The SECINFO operation is used by the client to obtain a list of valid RPC authentication flavors for a specific directory filehandle, file name pair.  SECINFO should apply the same access methodology used for LOOKUP when evaluating the name.  Therefore, if the requester does not have the appropriate access to LOOKUP the name, then SECINFO must behave the same way and return NFS4ERR_ACCESS.

The result will contain an array which represents the security mechanisms available, with an order corresponding to the server's preferences, the most preferred being first in the array.  The client is free to pick whatever security mechanism it both desires and supports, or to pick in the server's preference order the first one it supports.  The array entries are represented by the secinfo4 structure.  The field 'flavor' will contain a value of AUTH_NONE, AUTH_SYS (as defined in [RFC1831]), or RPCSEC_GSS (as defined in [RFC2203]).  The flavor field can also be any other security flavor registered with IANA.

For the flavors AUTH_NONE and AUTH_SYS, no additional security information is returned.  The same is true of many (if not most) other security flavors, including AUTH_DH.  For a return value of RPCSEC_GSS, a security triple is returned that contains the mechanism object id (as defined in [RFC2743]), the quality of protection (as defined in [RFC2743]), and the service type (as defined in [RFC2203]).  It is possible for SECINFO to return multiple entries with flavor equal to RPCSEC_GSS with different security triple values.

On success, the current filehandle retains its value.

If the name has a length of 0 (zero), or if name does not obey the UTF-8 definition, the error NFS4ERR_INVAL will be returned.

22.31.5.  IMPLEMENTATION

The SECINFO operation is expected to be used by the NFS client when the error value of NFS4ERR_WRONGSEC is returned from another NFS operation.  This signifies to the client that the server's security policy is different from what the client is currently using.
At this point, the client is expected to obtain a list of possible security flavors and choose what best suits its policies.

As mentioned, the server's security policies will determine when a client request receives NFS4ERR_WRONGSEC.  The operations which may receive this error are: LINK, LOOKUP, LOOKUPP, OPEN, PUTFH, PUTPUBFH, PUTROOTFH, RESTOREFH, RENAME, and indirectly READDIR.  LINK and RENAME will only receive this error if the security used for the operation is inappropriate for the saved filehandle.  With the exception of READDIR, these operations represent the point at which the client can instantiate a filehandle into the "current filehandle" at the server.  The filehandle is either provided by the client (PUTFH, PUTPUBFH, PUTROOTFH) or generated as a result of a name-to-filehandle translation (LOOKUP and OPEN).  RESTOREFH is different because the filehandle is a result of a previous SAVEFH.  Even though the filehandle, for RESTOREFH, might have previously passed the server's inspection for a security match, the server will check it again on RESTOREFH to ensure that the security policy has not changed.

If the client wants to resolve an error return of NFS4ERR_WRONGSEC, the following will occur:

o  For LOOKUP and OPEN, the client will use SECINFO with the same current filehandle and name as provided in the original LOOKUP or OPEN to enumerate the available security triples.

o  For LINK, PUTFH, PUTROOTFH, PUTPUBFH, RENAME, and RESTOREFH, the client will use SECINFO_NO_NAME { style = current_fh }.  The client will prefix the SECINFO_NO_NAME operation with the appropriate PUTFH, PUTPUBFH, or PUTROOTFH operation that provides the filehandle originally provided by the PUTFH, PUTPUBFH, PUTROOTFH, or RESTOREFH operation, or, for the failed LINK or RENAME, by the SAVEFH.

o  NOTE: In NFSv4.0, the client was required to use SECINFO, and had to reconstruct the parent of the original filehandle and the component name of the original filehandle.

o  For LOOKUPP, the client will use SECINFO_NO_NAME { style = parent } and provide the filehandle which equals the filehandle originally provided to LOOKUPP.

The READDIR operation will not directly return the NFS4ERR_WRONGSEC error.  However, if the READDIR request included a request for attributes, it is possible that the READDIR request's security triple did not match that of a directory entry.  If this is the case and the client has requested the rdattr_error attribute, the server will return the NFS4ERR_WRONGSEC error in rdattr_error for the entry.

See the section "Security Considerations" for a discussion of the recommendations for the security flavor used by SECINFO and SECINFO_NO_NAME.

22.31.6.  ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_BADCHAR
   NFS4ERR_BADHANDLE
   NFS4ERR_BADNAME
   NFS4ERR_BADXDR
   NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL
   NFS4ERR_MOVED
   NFS4ERR_NAMETOOLONG
   NFS4ERR_NOENT
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NOTDIR
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE

22.32.  Operation 34: SETATTR - Set Attributes

22.32.1.  SYNOPSIS

   (cfh), stateid, attrmask, attr_vals -> attrsset

22.32.2.  ARGUMENTS

   struct SETATTR4args {
           /* CURRENT_FH: target object */
           stateid4        stateid;
           fattr4          obj_attributes;
   };

22.32.3.  RESULTS

   struct SETATTR4res {
           nfsstat4        status;
           bitmap4         attrsset;
   };

22.32.4.  DESCRIPTION

The SETATTR operation changes one or more of the attributes of a filesystem object.
The new attributes are specified with a bitmap and the attributes that follow the bitmap in bit order.

The stateid argument for SETATTR is used to provide file locking context that is necessary for SETATTR requests that set the size attribute.  Since setting the size attribute modifies the file's data, it has the same locking requirements as a corresponding WRITE.  Any SETATTR that sets the size attribute is incompatible with a share reservation that specifies DENY_WRITE.  The area between the old end-of-file and the new end-of-file is considered to be modified just as would have been the case had the area in question been specified as the target of WRITE, for the purpose of checking conflicts with record locks, for those cases in which a server is implementing mandatory record locking behavior.  A valid stateid should always be specified.  When the file size attribute is not set, the special stateid consisting of all bits zero should be passed.

On either success or failure of the operation, the server will return the attrsset bitmask to represent what (if any) attributes were successfully set.  The attrsset in the response is a subset of the bitmap4 that is part of the obj_attributes in the argument.

On success, the current filehandle retains its value.

22.32.5.  IMPLEMENTATION

If the request specifies the owner attribute to be set, the server should allow the operation to succeed if the current owner of the object matches the value specified in the request.  Some servers may be implemented in such a way as to prohibit the setting of the owner attribute unless the requester has privilege to do so.  If the server is lenient in this one case of matching owner values, the client implementation may be simplified in cases of creation of an object followed by a SETATTR.

The file size attribute is used to request changes to the size of a file.  A value of 0 (zero) causes the file to be truncated, a value less than the current size of the file causes data from the new size to the end of the file to be discarded, and a size greater than the current size of the file causes logically zeroed data bytes to be added to the end of the file.  Servers are free to implement this using holes or actual zero data bytes.  Clients should not make any assumptions regarding a server's implementation of this feature, beyond that the bytes returned will be zeroed.  Servers must support extending the file size via SETATTR.

SETATTR is not guaranteed atomic.  A failed SETATTR may partially change a file's attributes.

Changing the size of a file with SETATTR indirectly changes the time_modify.  A client must account for this, as size changes can result in data deletion.

The attributes time_access_set and time_modify_set are write-only attributes constructed as a switched union so the client can direct the server in setting the time values.  If the switched union specifies SET_TO_CLIENT_TIME4, the client has provided an nfstime4 to be used for the operation.  If the switched union does not specify SET_TO_CLIENT_TIME4, the server is to use its current time for the SETATTR operation.

If server and client times differ, programs that compare client time to file times can break.  A time maintenance protocol should be used to limit client/server time skew.
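The switched union described above can be rendered in C as follows.  This is an illustrative model of the XDR rather than the XDR itself; the two arms correspond to the server-time and client-time cases the text distinguishes.

   /* C model of the time_access_set/time_modify_set switched union. */
   #include <stdint.h>

   enum time_how4 {
           SET_TO_SERVER_TIME4 = 0,
           SET_TO_CLIENT_TIME4 = 1
   };

   struct nfstime4 {
           int64_t         seconds;
           uint32_t        nseconds;
   };

   struct settime4 {
           enum time_how4  set_it;
           struct nfstime4 time;   /* used only with SET_TO_CLIENT_TIME4 */
   };

   /* Direct the server to stamp the object with its own clock,
    * sidestepping client/server time skew for this update. */
   struct settime4 use_server_time(void)
   {
           struct settime4 s = { SET_TO_SERVER_TIME4, { 0, 0 } };
           return s;
   }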
Use of a COMPOUND containing a VERIFY operation specifying only the change attribute, immediately followed by a SETATTR, provides a means whereby a client may specify a request that emulates the functionality of the SETATTR guard mechanism of NFS version 3. Since the function of the guard mechanism is to avoid changes to the file attributes based on stale information, delays between checking of the guard condition and the setting of the attributes have the potential to compromise this function, as would the corresponding delay in the NFS version 4 emulation. Therefore, NFS version 4 servers should take care to avoid such delays, to the degree possible, when executing such a request.

If the server does not support an attribute as requested by the client, the server should return NFS4ERR_ATTRNOTSUPP.

A mask of the attributes actually set is returned by SETATTR in all cases. That mask must not include attribute bits not requested to be set by the client, and must be equal to the mask of attributes requested to be set only if the SETATTR completes without error.

22.32.6. ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_ADMIN_REVOKED
   NFS4ERR_ATTRNOTSUPP
   NFS4ERR_BADCHAR
   NFS4ERR_BADHANDLE
   NFS4ERR_BADOWNER
   NFS4ERR_BAD_STATEID
   NFS4ERR_BADXDR
   NFS4ERR_DELAY
   NFS4ERR_DQUOT
   NFS4ERR_EXPIRED
   NFS4ERR_FBIG
   NFS4ERR_FHEXPIRED
   NFS4ERR_GRACE
   NFS4ERR_INVAL
   NFS4ERR_IO
   NFS4ERR_ISDIR
   NFS4ERR_LOCKED
   NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NOSPC
   NFS4ERR_OLD_STATEID
   NFS4ERR_OPENMODE
   NFS4ERR_PERM
   NFS4ERR_RESOURCE
   NFS4ERR_ROFS
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE
   NFS4ERR_STALE_STATEID

22.33. Operation 35: SETCLIENTID - Negotiate Clientid

22.33.1. SYNOPSIS

   client, callback, callback_ident -> clientid, setclientid_confirm

22.33.2. ARGUMENTS

   struct SETCLIENTID4args {
           nfs_client_id4  client;
           cb_client4      callback;
           uint32_t        callback_ident;
   };

22.33.3. RESULTS

   struct SETCLIENTID4resok {
           clientid4       clientid;
           verifier4       setclientid_confirm;
   };

   union SETCLIENTID4res switch (nfsstat4 status) {
   case NFS4_OK:
           SETCLIENTID4resok resok4;
   case NFS4ERR_CLID_INUSE:
           clientaddr4     client_using;
   default:
           void;
   };

22.33.4. DESCRIPTION

The client uses the SETCLIENTID operation to notify the server of its intention to use a particular client identifier, callback, and callback_ident for subsequent requests that entail creating lock, share reservation, and delegation state on the server. Upon successful completion the server will return a shorthand clientid which, if confirmed via a separate step, will be used in subsequent file locking and file open requests. Confirmation of the clientid must be done via the SETCLIENTID_CONFIRM operation to return the clientid and setclientid_confirm values, as verifiers, to the server. The reason why two verifiers are necessary is that it is possible to use SETCLIENTID and SETCLIENTID_CONFIRM to modify the callback and callback_ident information but not the shorthand clientid. In that event, the setclientid_confirm value is effectively the only verifier.

The callback information provided in this operation will be used if the client is provided an open delegation at a future point. Therefore, the client must correctly reflect the program and port numbers for the callback program at the time SETCLIENTID is used.

The callback_ident value is used by the server on the callback. The client can leverage the callback_ident to eliminate the need for more than one callback RPC program number while still being able to determine which server is initiating the callback.
22.33.5. IMPLEMENTATION

To understand how to implement SETCLIENTID, make the following notations. Let:

   x be the value of the client.id subfield of the SETCLIENTID4args structure.

   v be the value of the client.verifier subfield of the SETCLIENTID4args structure.

   c be the value of the clientid field returned in the SETCLIENTID4resok structure.

   k represent the combination of the callback and callback_ident fields of the SETCLIENTID4args structure.

   s be the setclientid_confirm value returned in the SETCLIENTID4resok structure.

   { v, x, c, k, s } be a quintuple for a client record.

A client record is confirmed if there has been a SETCLIENTID_CONFIRM operation to confirm it. Otherwise it is unconfirmed. An unconfirmed record is established by a SETCLIENTID call.

Since SETCLIENTID is a non-idempotent operation, let us assume that the server is implementing the duplicate request cache (DRC).

When the server gets a SETCLIENTID { v, x, k } request, it processes it in the following manner.

o  It first looks up the request in the DRC. If there is a hit, it returns the result cached in the DRC. The server does NOT remove client state (locks, shares, delegations) nor does it modify any recorded callback and callback_ident information for client { x }.

For any DRC miss, the server takes the client id string x, and searches for client records for x that the server may have recorded from previous SETCLIENTID calls. For any confirmed record with the same id string x, if the recorded principal does not match that of the SETCLIENTID call, then the server returns a NFS4ERR_CLID_INUSE error.

For brevity of discussion, the remaining description of the processing assumes that there was a DRC miss, and that where the server has previously recorded a confirmed record for client x, the aforementioned principal check has successfully passed.

o  The server checks if it has recorded a confirmed record for { v, x, c, l, s }, where l may or may not equal k. If so, and since the id verifier v of the request matches that which is confirmed and recorded, the server treats this as a probable callback information update and records an unconfirmed { v, x, c, k, t } and leaves the confirmed { v, x, c, l, s } in place, such that t != s. It does not matter if k equals l or not. Any pre-existing unconfirmed { v, x, c, *, * } is removed. The server returns { c, t }. It is indeed returning the old clientid4 value c, because the client apparently only wants to update the callback information from value l to value k. It is possible this request is one from a Byzantine router that has stale callback information, but this is not a problem. The callback information update is only confirmed if followed up by a SETCLIENTID_CONFIRM { c, t }. The server awaits confirmation of k via SETCLIENTID_CONFIRM { c, t }. The server does NOT remove client (lock/share/delegation) state for x.

o  The server has previously recorded a confirmed { u, x, c, l, s } record such that v != u, l may or may not equal k, and has not recorded any unconfirmed { *, x, *, *, * } record for x. The server records an unconfirmed { v, x, d, k, t } (d != c, t != s). The server returns { d, t }.
   The server awaits confirmation of { d, k } via SETCLIENTID_CONFIRM { d, t }. The server does NOT remove client (lock/share/delegation) state for x.

o  The server has previously recorded a confirmed { u, x, c, l, s } record such that v != u, l may or may not equal k, and recorded an unconfirmed { w, x, d, m, t } record such that c != d, t != s, m may or may not equal k, m may or may not equal l, and k may or may not equal l. Whether w == v or w != v makes no difference. The server simply removes the unconfirmed { w, x, d, m, t } record and replaces it with an unconfirmed { v, x, e, k, r } record, such that e != d, e != c, r != t, r != s. The server returns { e, r }.

   The server awaits confirmation of { e, k } via SETCLIENTID_CONFIRM { e, r }. The server does NOT remove client (lock/share/delegation) state for x.

o  The server has no confirmed { *, x, *, *, * } for x. It may or may not have recorded an unconfirmed { u, x, c, l, s }, where l may or may not equal k, and u may or may not equal v. Any unconfirmed record { u, x, c, l, * }, regardless of whether u == v or l == k, is replaced with an unconfirmed record { v, x, d, k, t } where d != c, t != s. The server returns { d, t }.

   The server awaits confirmation of { d, k } via SETCLIENTID_CONFIRM { d, t }. The server does NOT remove client (lock/share/delegation) state for x.

The server generates the clientid and setclientid_confirm values and must take care to ensure that these values are extremely unlikely to ever be regenerated.
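One way a server might satisfy this requirement is to derive both values from the server's boot time combined with a monotonically increasing counter, so that neither value repeats across calls or across reboots. The following C fragment is a minimal sketch, not part of the protocol; all names are hypothetical:

   #include <stdint.h>
   #include <string.h>

   struct clientid_pair {
       uint64_t      clientid;    /* shorthand clientid c */
       unsigned char confirm[8];  /* setclientid_confirm verifier s */
   };

   static uint64_t boot_seconds;    /* captured once at server startup */
   static uint64_t instance_count;  /* monotonically increasing counter */

   struct clientid_pair new_clientid_pair(void)
   {
       struct clientid_pair p;
       uint64_t v;

       /* Upper half derives from boot time, lower half from the
        * counter; a reboot changes the upper half and each call
        * changes the lower half, so a pair is never reissued. */
       p.clientid = (boot_seconds << 32) | (uint32_t)++instance_count;
       v = boot_seconds ^ (instance_count * 0x9e3779b97f4a7c15ULL);
       memcpy(p.confirm, &v, sizeof p.confirm);
       return p;
   }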
22.33.6. ERRORS

   NFS4ERR_BADXDR
   NFS4ERR_CLID_INUSE
   NFS4ERR_INVAL
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT

22.34. Operation 36: SETCLIENTID_CONFIRM - Confirm Clientid

22.34.1. SYNOPSIS

   clientid, verifier -> -

22.34.2. ARGUMENTS

   struct SETCLIENTID_CONFIRM4args {
           clientid4       clientid;
           verifier4       setclientid_confirm;
   };

22.34.3. RESULTS

   struct SETCLIENTID_CONFIRM4res {
           nfsstat4        status;
   };

22.34.4. DESCRIPTION

This operation is used by the client to confirm the results from a previous call to SETCLIENTID. The client provides the server-supplied clientid (from a SETCLIENTID response). The server responds with a simple status of success or failure.

22.34.5. IMPLEMENTATION

The client must use the SETCLIENTID_CONFIRM operation to confirm the following two distinct cases:

o  The client's use of a new shorthand client identifier (as returned from the server in the response to SETCLIENTID), a new callback value (as specified in the arguments to SETCLIENTID) and a new callback_ident value (as specified in the arguments to SETCLIENTID). The client's use of SETCLIENTID_CONFIRM in this case also confirms the removal of any of the client's previous relevant leased state. Relevant leased client state includes record locks, share reservations, and, where the server does not support the CLAIM_DELEGATE_PREV claim type, delegations. If the server supports CLAIM_DELEGATE_PREV, then SETCLIENTID_CONFIRM MUST NOT remove delegations for this client; relevant leased client state would then just include record locks and share reservations.

o  The client's re-use of an old, previously confirmed, shorthand client identifier, a new callback value, and a new callback_ident value. The client's use of SETCLIENTID_CONFIRM in this case MUST NOT result in the removal of any previous leased state (locks, share reservations, and delegations).

We use the same notation and definitions for v, x, c, k, s, and unconfirmed and confirmed client records as introduced in the description of the SETCLIENTID operation. The arguments to SETCLIENTID_CONFIRM are indicated by the notation { c, s }, where c is a value of type clientid4, and s is a value of type verifier4 corresponding to the setclientid_confirm field.

As with SETCLIENTID, SETCLIENTID_CONFIRM is a non-idempotent operation, and we assume that the server is implementing the duplicate request cache (DRC).

When the server gets a SETCLIENTID_CONFIRM { c, s } request, it processes it in the following manner.

o  It first looks up the request in the DRC. If there is a hit, it returns the result cached in the DRC. The server does not remove any relevant leased client state nor does it modify any recorded callback and callback_ident information for client { x } as represented by the shorthand value c.

For a DRC miss, the server checks for client records that match the shorthand value c. The processing cases are as follows:

o  The server has recorded an unconfirmed { v, x, c, k, s } record and a confirmed { v, x, c, l, t } record, such that s != t. If the principals of the records do not match that of the SETCLIENTID_CONFIRM, the server returns NFS4ERR_CLID_INUSE, and no relevant leased client state is removed and no recorded callback and callback_ident information for client { x } is changed. Otherwise, the confirmed { v, x, c, l, t } record is removed and the unconfirmed { v, x, c, k, s } is marked as confirmed, thereby modifying recorded and confirmed callback and callback_ident information for client { x }. The server does not remove any relevant leased client state. The server returns NFS4_OK.

o  The server has not recorded an unconfirmed { v, x, c, *, * } and has recorded a confirmed { v, x, c, *, s }. If the principals of the record and of SETCLIENTID_CONFIRM do not match, the server returns NFS4ERR_CLID_INUSE without removing any relevant leased client state and without changing recorded callback and callback_ident values for client { x }. If the principals match, then what has likely happened is that the client never got the response from the SETCLIENTID_CONFIRM, and the DRC entry has been purged. Whatever the scenario, since the principals match, as well as { c, s } matching a confirmed record, the server leaves client x's relevant leased client state intact, leaves its callback and callback_ident values unmodified, and returns NFS4_OK.

o  The server has not recorded a confirmed { *, *, c, *, * }, and has recorded an unconfirmed { *, x, c, k, s }. Even if this is a retry from the client, nonetheless the client's first SETCLIENTID_CONFIRM attempt was not received by the server. Retry or not, the server cannot tell, so it processes the request as if it were a first try. If the principal of the unconfirmed { *, x, c, k, s } record mismatches that of the SETCLIENTID_CONFIRM request, the server returns NFS4ERR_CLID_INUSE without removing any relevant leased client state. Otherwise, the server records a confirmed { *, x, c, k, s }.
   If there is also a confirmed { *, x, d, *, t }, the server MUST remove the client x's relevant leased client state, and overwrite the callback state with k. The confirmed record { *, x, d, *, t } is removed. The server returns NFS4_OK.

o  The server has no record of a confirmed or unconfirmed { *, *, c, *, s }. The server returns NFS4ERR_STALE_CLIENTID. The server does not remove any relevant leased client state, nor does it modify any recorded callback and callback_ident information for any client.

The server needs to cache unconfirmed { v, x, c, k, s } client records and await their confirmation for some time. As should be clear from the record processing discussions for SETCLIENTID and SETCLIENTID_CONFIRM, there are cases where the server does not deterministically remove unconfirmed client records. To avoid running out of resources, the server is not required to hold unconfirmed records indefinitely. One strategy the server might use is to set a limit on how many unconfirmed client records it will maintain, and then, when the limit would be exceeded, remove the oldest record. Another strategy might be to remove an unconfirmed record when some amount of time has elapsed. The choice of the amount of time is fairly arbitrary, but it is surely no higher than the server's lease time period. Consider that leases need to be renewed before the lease time expires via an operation from the client. If the client cannot issue a SETCLIENTID_CONFIRM after a SETCLIENTID before a period of time equal to that of a lease expires, then the client is unlikely to be able to maintain state on the server during steady state operation.

If the client does send a SETCLIENTID_CONFIRM for an unconfirmed record that the server has already deleted, the client will get NFS4ERR_STALE_CLIENTID back. In that case, the client should start over, and send SETCLIENTID to reestablish an unconfirmed client record and get back an unconfirmed clientid and setclientid_confirm verifier. The client should then send the SETCLIENTID_CONFIRM to confirm the clientid.

SETCLIENTID_CONFIRM does not establish or renew a lease. However, if SETCLIENTID_CONFIRM removes relevant leased client state, and that state does not include existing delegations, the server MUST allow the client a period of time no less than the value of the lease_time attribute to reclaim (via the CLAIM_DELEGATE_PREV claim type of the OPEN operation) its delegations before removing unreclaimed delegations.

22.34.6. ERRORS

   NFS4ERR_BADXDR
   NFS4ERR_CLID_INUSE
   NFS4ERR_DELAY
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE_CLIENTID

22.35. Operation 37: VERIFY - Verify Same Attributes

22.35.1. SYNOPSIS

   (cfh), fattr -> -

22.35.2. ARGUMENTS

   struct VERIFY4args {
           /* CURRENT_FH: object */
           fattr4          obj_attributes;
   };

22.35.3. RESULTS

   struct VERIFY4res {
           nfsstat4        status;
   };

22.35.4. DESCRIPTION

The VERIFY operation is used to verify that attributes have a value assumed by the client before proceeding with the following operations in the compound request. If any of the attributes do not match, then the error NFS4ERR_NOT_SAME must be returned. The current filehandle retains its value after successful completion of the operation.

22.35.5. IMPLEMENTATION

One possible use of the VERIFY operation is the following compound sequence:
   PUTFH (directory filehandle)
   LOOKUP (file name)
   VERIFY (filehandle == fh)
   PUTFH (directory filehandle)
   REMOVE (file name)

With this sequence, the client is attempting to verify that the file being removed will match what the client expects to be removed. This sequence can help prevent the unintended deletion of a file. It does not prevent a second client from removing and creating a new file in the middle of this sequence, but it does help avoid the unintended result.

In the case that a recommended attribute is specified in the VERIFY operation and the server does not support that attribute for the filesystem object, the error NFS4ERR_ATTRNOTSUPP is returned to the client. When the attribute rdattr_error or any write-only attribute (e.g. time_modify_set) is specified, the error NFS4ERR_INVAL is returned to the client.

22.35.6. ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_ATTRNOTSUPP
   NFS4ERR_BADCHAR
   NFS4ERR_BADHANDLE
   NFS4ERR_BADXDR
   NFS4ERR_DELAY
   NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL
   NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NOT_SAME
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE

22.36. Operation 38: WRITE - Write to File

22.36.1. SYNOPSIS

   (cfh), stateid, offset, stable, data -> count, committed, writeverf

22.36.2. ARGUMENTS

   enum stable_how4 {
           UNSTABLE4       = 0,
           DATA_SYNC4      = 1,
           FILE_SYNC4      = 2
   };

   struct WRITE4args {
           /* CURRENT_FH: file */
           stateid4        stateid;
           offset4         offset;
           stable_how4     stable;
           opaque          data<>;
   };

22.36.3. RESULTS

   struct WRITE4resok {
           count4          count;
           stable_how4     committed;
           verifier4       writeverf;
   };

   union WRITE4res switch (nfsstat4 status) {
   case NFS4_OK:
           WRITE4resok     resok4;
   default:
           void;
   };

22.36.4. DESCRIPTION

The WRITE operation is used to write data to a regular file. The target file is specified by the current filehandle. The offset specifies the offset where the data should be written. An offset of 0 (zero) specifies that the write should start at the beginning of the file. The count, as encoded as part of the opaque data parameter, represents the number of bytes of data that are to be written. If the count is 0 (zero), the WRITE will succeed and return a count of 0 (zero) subject to permissions checking. The server may choose to write fewer bytes than requested by the client.

Part of the write request is a specification of how the write is to be performed. The client specifies with the stable parameter the method of how the data is to be processed by the server. If stable is FILE_SYNC4, the server must commit the data written plus all filesystem metadata to stable storage before returning results. This corresponds to the NFS version 2 protocol semantics. Any other behavior constitutes a protocol violation. If stable is DATA_SYNC4, then the server must commit all of the data to stable storage and enough of the metadata to retrieve the data before returning. The server implementor is free to implement DATA_SYNC4 in the same fashion as FILE_SYNC4, but with a possible performance drop. If stable is UNSTABLE4, the server is free to commit any part of the data and the metadata to stable storage, including all or none, before returning a reply to the client. There is no guarantee whether or when any uncommitted data will subsequently be committed to stable storage. The only guarantees made by the server are that it will not destroy any data without changing the value of the write verifier and that it will not commit the data and metadata at a level less than that requested by the client.
The stateid value for a WRITE request represents a value returned from a previous record lock or share reservation request. The stateid is used by the server to verify that the associated share reservation and any record locks are still valid and to update lease timeouts for the client.

Upon successful completion, the following results are returned. The count result is the number of bytes of data written to the file. The server may write fewer bytes than requested. If so, the actual number of bytes written starting at location, offset, is returned.

The server also returns an indication of the level of commitment of the data and metadata via committed. If the server committed all data and metadata to stable storage, committed should be set to FILE_SYNC4. If the level of commitment was at least as strong as DATA_SYNC4, then committed should be set to DATA_SYNC4. Otherwise, committed must be returned as UNSTABLE4. If stable was FILE_SYNC4, then committed must also be FILE_SYNC4: anything else constitutes a protocol violation. If stable was DATA_SYNC4, then committed may be FILE_SYNC4 or DATA_SYNC4: anything else constitutes a protocol violation. If stable was UNSTABLE4, then committed may be either FILE_SYNC4, DATA_SYNC4, or UNSTABLE4.

The final portion of the result is the write verifier. The write verifier is a cookie that the client can use to determine whether the server has changed instance (boot) state between a call to WRITE and a subsequent call to either WRITE or COMMIT. This cookie must be consistent during a single instance of the NFS version 4 protocol service and must be unique between instances of the NFS version 4 protocol server, where uncommitted data may be lost.

If a client writes data to the server with the stable argument set to UNSTABLE4 and the reply yields a committed response of DATA_SYNC4 or UNSTABLE4, the client will follow up some time in the future with a COMMIT operation to synchronize outstanding asynchronous data and metadata with the server's stable storage, barring client error. It is possible that, due to client crash or other error, a subsequent COMMIT will not be received by the server.

For a WRITE with a stateid value of all bits 0, the server MAY allow the WRITE to be serviced subject to mandatory file locks or the current share deny modes for the file. For a WRITE with a stateid value of all bits 1, the server MUST NOT allow the WRITE operation to bypass locking checks at the server; such a WRITE is treated exactly the same as if a stateid of all bits 0 were used.

On success, the current filehandle retains its value.

22.36.5. IMPLEMENTATION

It is possible for the server to write fewer bytes of data than requested by the client. In this case, the server should not return an error unless no data was written at all. If the server writes less than the number of bytes specified, the client should issue another WRITE to write the remaining data.

It is assumed that the act of writing data to a file will cause the time_modify of the file to be updated. However, the time_modify of the file should not be changed unless the contents of the file are changed. Thus, a WRITE request with count set to 0 should not cause the time_modify of the file to be updated.
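To make the client's obligations concrete, the following C sketch shows a client loop that copes with short writes and remembers whether a COMMIT will be needed. The nfs_write wrapper and its result structure are hypothetical, standing in for whatever the client's RPC layer provides:

   #include <stdbool.h>
   #include <stddef.h>
   #include <stdint.h>

   enum stable_how4 { UNSTABLE4 = 0, DATA_SYNC4 = 1, FILE_SYNC4 = 2 };

   struct write_res {
       int              status;     /* 0 == NFS4_OK, assumed */
       size_t           count;      /* bytes the server actually wrote */
       enum stable_how4 committed;
       uint64_t         writeverf;
   };

   /* Assumed wrapper around the WRITE operation. */
   struct write_res nfs_write(uint64_t off, const void *buf, size_t len,
                              enum stable_how4 stable);

   int write_all(uint64_t off, const unsigned char *buf, size_t len,
                 bool *need_commit, uint64_t *verf)
   {
       while (len > 0) {
           struct write_res r = nfs_write(off, buf, len, UNSTABLE4);
           if (r.status != 0)
               return r.status;
           if (r.count == 0)
               return -1;               /* server wrote nothing: give up */
           if (r.committed != FILE_SYNC4)
               *need_commit = true;     /* a COMMIT is required eventually */
           *verf = r.writeverf;         /* compare against a later COMMIT */
           off += r.count;
           buf += r.count;
           len -= r.count;              /* short write: resend the rest */
       }
       return 0;
   }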
The definition of stable storage has been historically a point of contention. The following expected properties of stable storage may help in resolving design issues in the implementation. Stable storage is persistent storage that survives:

   1. Repeated power failures.

   2. Hardware failures (of any board, power supply, etc.).

   3. Repeated software crashes, including reboot cycle.

This definition does not address failure of the stable storage module itself.

The verifier is defined to allow a client to detect different instances of an NFS version 4 protocol server over which cached, uncommitted data may be lost. In the most likely case, the verifier allows the client to detect server reboots. This information is required so that the client can safely determine whether the server could have lost cached data. If the server fails unexpectedly and the client has uncommitted data from previous WRITE requests (done with the stable argument set to UNSTABLE4 and in which the result committed was returned as UNSTABLE4 as well), it may not have flushed cached data to stable storage. The burden of recovery is on the client, and the client will need to retransmit the data to the server.

A suggested verifier would be to use the time that the server was booted or the time the server was last started (if restarting the server without a reboot results in lost buffers).

The committed field in the results allows the client to do more effective caching. If the server is committing all WRITE requests to stable storage, then it should return with committed set to FILE_SYNC4, regardless of the value of the stable field in the arguments. A server that uses an NVRAM accelerator may choose to implement this policy. The client can use this to increase the effectiveness of the cache by discarding cached data that has already been committed on the server.

Some implementations may return NFS4ERR_NOSPC instead of NFS4ERR_DQUOT when a user's quota is exceeded.

In the case that the current filehandle is a directory, the server will return NFS4ERR_ISDIR. If the current filehandle is not a regular file or a directory, the server will return NFS4ERR_INVAL.

If mandatory file locking is on for the file, and the corresponding record of the data to be written is read or write locked by an owner that is not associated with the stateid, the server will return NFS4ERR_LOCKED. If so, the client must check if the owner corresponding to the stateid used with the WRITE operation has a conflicting read lock that overlaps with the region that was to be written. If the stateid's owner has no conflicting read lock, then the client should try to get the appropriate write record lock via the LOCK operation before re-attempting the WRITE. When the WRITE completes, the client should release the record lock via LOCKU. If the stateid's owner had a conflicting read lock, then the client has no choice but to return an error to the application that attempted the WRITE. The reason is that since the stateid's owner had a read lock, the server either attempted to temporarily upgrade this read lock to a write lock, or the server has no upgrade capability. If the server attempted to upgrade the read lock and failed, it is pointless for the client to re-attempt the upgrade via the LOCK operation, because there might be another client also trying to upgrade. If two clients are blocked trying to upgrade the same lock, the clients deadlock. If the server has no upgrade capability, then it is pointless to try a LOCK operation to upgrade.
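The decision procedure in the preceding paragraph can be compressed into a few lines of C. This is a sketch only; the lock-query and recovery helpers are hypothetical:

   #include <stdbool.h>

   bool owner_holds_conflicting_read_lock(void);  /* assumed query */
   int  lock_rewrite_unlock(void);                /* LOCK; WRITE; LOCKU */
   int  report_error_to_application(void);

   int handle_nfs4err_locked(void)
   {
       if (owner_holds_conflicting_read_lock()) {
           /* The server either failed a transparent read-to-write
            * upgrade or cannot upgrade at all; retrying via LOCK
            * risks deadlocking with another upgrading client. */
           return report_error_to_application();
       }
       /* No conflicting read lock held by this stateid's owner:
        * acquire a write record lock, retry the WRITE, then LOCKU. */
       return lock_rewrite_unlock();
   }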
22.36.6. ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_ADMIN_REVOKED
   NFS4ERR_BADHANDLE
   NFS4ERR_BAD_STATEID
   NFS4ERR_BADXDR
   NFS4ERR_DELAY
   NFS4ERR_DQUOT
   NFS4ERR_EXPIRED
   NFS4ERR_FBIG
   NFS4ERR_FHEXPIRED
   NFS4ERR_GRACE
   NFS4ERR_INVAL
   NFS4ERR_IO
   NFS4ERR_ISDIR
   NFS4ERR_LEASE_MOVED
   NFS4ERR_LOCKED
   NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NOSPC
   NFS4ERR_NXIO
   NFS4ERR_OLD_STATEID
   NFS4ERR_OPENMODE
   NFS4ERR_RESOURCE
   NFS4ERR_ROFS
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE
   NFS4ERR_STALE_STATEID

22.37. Operation 39: RELEASE_LOCKOWNER - Release Lockowner State

22.37.1. SYNOPSIS

   lockowner -> ()

22.37.2. ARGUMENTS

   struct RELEASE_LOCKOWNER4args {
           lock_owner4     lock_owner;
   };

22.37.3. RESULTS

   struct RELEASE_LOCKOWNER4res {
           nfsstat4        status;
   };

22.37.4. DESCRIPTION

This operation is used to notify the server that the lock_owner is no longer in use by the client. This allows the server to release cached state related to the specified lock_owner. If file locks associated with the lock_owner are held at the server, the error NFS4ERR_LOCKS_HELD will be returned and no further action will be taken.

22.37.5. IMPLEMENTATION

The client may choose to use this operation to reduce the amount of server state that is held. Depending on the behavior of applications at the client, it may be important for the client to use this operation since the server has certain obligations with respect to holding a reference to a lock_owner as long as the associated file is open. Therefore, if the client knows for certain that the lock_owner will no longer be used under the context of the associated open_owner4, it should use RELEASE_LOCKOWNER.

22.37.6. ERRORS

   NFS4ERR_ADMIN_REVOKED
   NFS4ERR_BADXDR
   NFS4ERR_EXPIRED
   NFS4ERR_LEASE_MOVED
   NFS4ERR_LOCKS_HELD
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE_CLIENTID

22.38. Operation 10044: ILLEGAL - Illegal operation

22.38.1. SYNOPSIS

   -> ()

22.38.2. ARGUMENTS

   void;

22.38.3. RESULTS

   struct ILLEGAL4res {
           nfsstat4        status;
   };

22.38.4. DESCRIPTION

This operation is a placeholder for encoding a result to handle the case of the client sending an operation code within COMPOUND that is not supported. See the COMPOUND procedure description for more details. The status field of ILLEGAL4res MUST be set to NFS4ERR_OP_ILLEGAL.

22.38.5. IMPLEMENTATION

A client will probably not send an operation with code OP_ILLEGAL, but if it does, the response will be ILLEGAL4res just as it would be with any other invalid operation code. Note that if the server gets an illegal operation code that is not OP_ILLEGAL, and if the server checks for legal operation codes during the XDR decode phase, then ILLEGAL4res would not be returned.

22.38.6. ERRORS

   NFS4ERR_OP_ILLEGAL

22.39. SECINFO_NO_NAME - Get Security on Unnamed Object

Obtain available security mechanisms using the parent of an object or the current filehandle.

22.39.1. SYNOPSIS

   (cfh), secinfo_style -> { secinfo }

22.39.2. ARGUMENT

   enum secinfo_style_4 {
           current_fh      = 0,
           parent          = 1
   };

   typedef secinfo_style_4 SECINFO_NO_NAME4args;

22.39.3. RESULT

   typedef SECINFO4res SECINFO_NO_NAME4res;
22.39.4. DESCRIPTION

Like the SECINFO operation, SECINFO_NO_NAME is used by the client to obtain a list of valid RPC authentication flavors for a specific file object. Unlike SECINFO, SECINFO_NO_NAME only works with objects that are accessed by filehandle.

There are two styles of SECINFO_NO_NAME, as determined by the value of the secinfo_style_4 enumeration. If "current_fh" is passed, then SECINFO_NO_NAME is querying for the required security for the current filehandle. If "parent" is passed, then SECINFO_NO_NAME is querying for the required security of the current filehandle's parent. If the style selected is "parent", then SECINFO_NO_NAME should apply the same access methodology used for LOOKUPP when evaluating the traversal to the parent directory. Therefore, if the requester does not have the appropriate access to LOOKUPP the parent, then SECINFO_NO_NAME must behave the same way and return NFS4ERR_ACCESS.

Note that if PUTFH, PUTPUBFH, or PUTROOTFH return NFS4ERR_WRONGSEC, this is tantamount to the server asserting that the client will have to guess what the required security is, because there is no way to query. Therefore, the client must iterate through the security triples available at the client and reattempt the PUTFH, PUTROOTFH or PUTPUBFH operation. In the unfortunate event none of the MANDATORY security triples are supported by the client and server, the client SHOULD try using others that support integrity. Failing that, the client can try using other forms (e.g. AUTH_SYS and AUTH_NONE), but because such forms lack integrity checks, this puts the client at risk. The server implementor should pay particular attention to the section "Clarification of Security Negotiation in NFSv4.1" for implementation suggestions for avoiding NFS4ERR_WRONGSEC error returns from PUTFH, PUTROOTFH or PUTPUBFH.

Everything else about SECINFO_NO_NAME is the same as SECINFO. See the previous discussion on SECINFO.

22.39.5. IMPLEMENTATION

See the previous discussion on SECINFO.

22.39.6. ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_BADCHAR
   NFS4ERR_BADHANDLE
   NFS4ERR_BADNAME
   NFS4ERR_BADXDR
   NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL
   NFS4ERR_MOVED
   NFS4ERR_NAMETOOLONG
   NFS4ERR_NOENT
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NOTDIR
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE

22.40. CREATECLIENTID - Instantiate Clientid

Create a clientid.

22.40.1. SYNOPSIS

   client -> clientid

22.40.2. ARGUMENT

   struct CREATECLIENTID4args {
           nfs_client_id4  clientdesc;
   };

22.40.3. RESULT

   struct CREATECLIENTID4resok {
           clientid4       clientid;
           verifier4       clientid_confirm;
   };

   union CREATECLIENTID4res switch (nfsstat4 status) {
   case NFS4_OK:
           CREATECLIENTID4resok resok4;
   case NFS4ERR_CLID_INUSE:
           void;
   default:
           void;
   };

22.40.4. DESCRIPTION

The client uses the CREATECLIENTID operation to register a particular client identifier with the server. The clientid returned from this operation will be necessary for requests that create state on the server and will serve as a parent object to sessions created by the client. In order to verify the clientid, it must first be used as an argument to CREATESESSION.

22.40.5. IMPLEMENTATION

A server's client record is a 5-tuple:

   1. clientdesc.id: The long form client identifier, sent via the clientdesc.id subfield of the CREATECLIENTID4args structure.
   2. clientdesc.verifier: A client-specific value used to indicate reboots, sent via the clientdesc.verifier subfield of the CREATECLIENTID4args structure.

   3. principal: The RPCSEC_GSS principal sent via the RPC headers.

   4. clientid: The shorthand client identifier, generated by the server and returned via the clientid field in the CREATECLIENTID4resok structure.

   5. confirmed: A private field on the server indicating whether or not a client record has been confirmed.

A client record is confirmed if there has been a successful CREATESESSION operation to confirm it. Otherwise it is unconfirmed. An unconfirmed record is established by a CREATECLIENTID call. Any unconfirmed record that is not confirmed within a lease period may be removed.

The following identifiers represent special values for the fields in the records.

   id_arg: The value of the clientdesc.id subfield of the CREATECLIENTID4args structure of the current request.

   verifier_arg: The value of the clientdesc.verifier subfield of the CREATECLIENTID4args structure of the current request.

   old_verifier_arg: A value of the clientdesc.verifier field of a client record received in a previous request; this is distinct from verifier_arg.

   principal_arg: The value of the RPCSEC_GSS principal for the current request.

   old_principal_arg: A value of the RPCSEC_GSS principal received for a previous request. This is distinct from principal_arg.

   clientid_ret: The value of the clientid field the server will return in the CREATECLIENTID4resok structure for the current request.

   old_clientid_ret: The value of the clientid field the server returned in the CREATECLIENTID4resok structure for a previous request. This is distinct from clientid_ret.

Since CREATECLIENTID is a non-idempotent operation, we must consider the possibility that replays may occur as a result of a client reboot, network partition, malfunctioning router, etc. Replays are identified by the value of the clientdesc field of CREATECLIENTID4args, and the method for dealing with them is outlined in the scenarios below.

The scenarios are described in terms of which client records with clientdesc.id equal to id_arg exist in the server's set of client records. Any case in which there is more than one record with identical values for id_arg represents a server implementation error. Operation in the potentially valid cases is summarized as follows; a sketch of the resulting dispatch appears after the list.

   1. Common case

      If no client records with clientdesc.id matching id_arg exist, a new shorthand client identifier clientid_ret is generated, and the following unconfirmed record is added to the server's state.

      { id_arg, verifier_arg, principal_arg, clientid_ret, FALSE }

      Subsequently, the server returns clientid_ret.

   2. Router Replay

      If the server has the following confirmed record, then this request is likely the result of a replayed request due to a faulty router or lost connection.

      { id_arg, verifier_arg, principal_arg, clientid_ret, TRUE }

      Since the record has been confirmed, the client must have received the server's reply from the initial CREATECLIENTID request. Since this is simply a spurious request, there is no modification to the server's state, and the server makes no reply to the client.
   3. Client Collision

      If the server has the following confirmed record, then this request is likely the result of a chance collision between the values of the clientdesc.id subfield of CREATECLIENTID4args for two different clients.

      { id_arg, *, old_principal_arg, clientid_ret, TRUE }

      Since the value of the clientdesc.id subfield of each client record must be unique, there is no modification of the server's state, and NFS4ERR_CLID_INUSE is returned to indicate the client should retry with a different value for the clientdesc.id subfield of CREATECLIENTID4args. This scenario may also represent a malicious attempt to destroy a client's state on the server. For security reasons, the server MUST NOT remove the client's state when there is a principal mismatch.

   4. Replay

      If the server has the following unconfirmed record, then this request is likely the result of a client replay due to a network partition or some other connection failure.

      { id_arg, verifier_arg, principal_arg, clientid_ret, FALSE }

      Since the response to the CREATECLIENTID request that created this record may have been lost, it is not acceptable to drop this duplicate request. However, rather than processing it normally, the existing record is left unchanged and clientid_ret, which was generated for the previous request, is returned.

   5. Change of Principal

      If the server has the following unconfirmed record, then this request is likely the result of a client which has for whatever reason changed principals (possibly to change security flavor) after calling CREATECLIENTID, but before calling CREATESESSION.

      { id_arg, verifier_arg, old_principal_arg, clientid_ret, FALSE }

      Since the client has not changed, the principal field of the unconfirmed record is updated to principal_arg and clientid_ret is again returned. There is a small possibility that this is merely a collision on the clientdesc.id field of CREATECLIENTID4args between unrelated clients, but since that is unlikely, and an unconfirmed record does not generally have any filesystem pertinent state, we can assume it is the same client without risking loss of any important state. After processing, the following record will exist on the server.

      { id_arg, verifier_arg, principal_arg, clientid_ret, FALSE }

   6. Client Reboot

      If the server has the following confirmed client record, then this request is likely from a previously confirmed client which has rebooted.

      { id_arg, old_verifier_arg, principal_arg, clientid_ret, TRUE }

      Since the previous incarnation of the same client will no longer be making requests, lock and share reservations should be released immediately rather than forcing the new incarnation to wait for the lease time on the previous incarnation to expire. Furthermore, session state should be removed since, if the client had maintained that information across reboot, this request would not have been issued. If the server does not support the CLAIM_DELEGATE_PREV claim type, associated delegations should be purged as well; otherwise, delegations are retained and recovery proceeds according to RFC 3530. The client record is updated with the new verifier and its status is changed to unconfirmed. After processing, clientid_ret is returned to the client and the following record will exist on the server.

      { id_arg, verifier_arg, principal_arg, clientid_ret, FALSE }
   7. Reboot before confirmation

      If the server has the following unconfirmed record, then this request is likely from a client which rebooted before sending a CREATESESSION request.

      { id_arg, old_verifier_arg, *, clientid_ret, FALSE }

      Since this is believed to be a request from a new incarnation of the original client, the server updates the value of clientdesc.verifier and returns the original clientid_ret. After processing, the following state exists on the server.

      { id_arg, verifier_arg, *, clientid_ret, FALSE }
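The seven scenarios above amount to a dispatch on the stored record's fields. The following C sketch compresses them; the record type, its accessors, and the reply conventions are all assumed for illustration, and the negative return codes merely abbreviate the error and no-reply outcomes:

   #include <stdbool.h>
   #include <stdint.h>

   struct crec;  /* opaque server client record: { id, verifier,
                    principal, clientid, confirmed } */

   /* Assumed helpers over the server's record store. */
   struct crec *find_by_id(const char *id_arg);
   bool  confirmed(const struct crec *r);
   bool  verifier_matches(const struct crec *r, uint64_t verifier_arg);
   bool  principal_matches(const struct crec *r, const void *principal_arg);
   void  set_verifier(struct crec *r, uint64_t v);
   void  set_principal(struct crec *r, const void *p);
   void  unconfirm_and_purge_locks(struct crec *r);
   struct crec *add_unconfirmed(const char *id, uint64_t v, const void *p);
   int64_t clientid_of(const struct crec *r);

   int64_t createclientid(const char *id_arg, uint64_t verifier_arg,
                          const void *principal_arg)
   {
       struct crec *r = find_by_id(id_arg);

       if (r == NULL)                              /* 1. Common case */
           return clientid_of(add_unconfirmed(id_arg, verifier_arg,
                                              principal_arg));
       if (confirmed(r)) {
           if (!principal_matches(r, principal_arg))
               return -1;                          /* 3. NFS4ERR_CLID_INUSE */
           if (verifier_matches(r, verifier_arg))
               return -2;                          /* 2. replay: no reply */
           unconfirm_and_purge_locks(r);           /* 6. client reboot */
           set_verifier(r, verifier_arg);
           return clientid_of(r);
       }
       if (!verifier_matches(r, verifier_arg)) {   /* 7. reboot before */
           set_verifier(r, verifier_arg);          /*    confirmation  */
           return clientid_of(r);
       }
       set_principal(r, principal_arg);            /* 4. replay, or       */
       return clientid_of(r);                      /* 5. principal change */
   }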
22.40.6. ERROR

   NFS4ERR_BADXDR
   NFS4ERR_CLID_INUSE
   NFS4ERR_INVAL
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT

22.41. CREATESESSION - Create New Session and Confirm Clientid

Start up a session and confirm the clientid.

22.41.1. SYNOPSIS

   clientid, session_args -> sessionid, session_args

22.41.2. ARGUMENT

   struct CREATESESSION4args {
           clientid4       clientid;
           bool            persist;
           count4          maxrequestsize;
           count4          maxresponsesize;
           count4          maxrequests;
           count4          headerpadsize;

           switch (bool clientid_confirm) {
           case TRUE:
                   verifier4 setclientid_confirm;
           case FALSE:
                   void;
           }

           switch (channelmode4 mode) {
           case DEFAULT:
                   void;
           case STREAM:
                   streamchannelattrs4 streamchanattrs;
           case RDMA:
                   rdmachannelattrs4 rdmachanattrs;
           };
   };

22.41.3. RESULT

   typedef opaque sessionid4[16];

   struct CREATESESSION4resok {
           sessionid4      sessionid;
           bool            persist;
           count4          maxrequestsize;
           count4          maxresponsesize;
           count4          maxrequests;
           count4          headerpadsize;

           switch (channelmode4 mode) {
           case DEFAULT:
                   void;
           case STREAM:
                   streamchannelattrs4 streamchanattrs;
           case RDMA:
                   rdmachannelattrs4 rdmachanattrs;
           };
   };

   union CREATESESSION4res switch (nfsstat4 status) {
   case NFS4_OK:
           CREATESESSION4resok resok4;
   default:
           void;
   };

22.41.4. DESCRIPTION

This operation is used by the client to create new session objects on the server. Additionally, the first session created with a new shorthand client identifier serves to confirm the creation of that client's state on the server. The server returns the parameter values for the new session.

22.41.5. IMPLEMENTATION

To describe the implementation, the same notation for client records introduced in the description of CREATECLIENTID is used with the following addition.

   clientid_arg: The value of the clientid field of the CREATESESSION4args structure of the current request.

Since CREATESESSION is a non-idempotent operation, we must consider the possibility that replays may occur as a result of a client reboot, network partition, malfunctioning router, etc. Replays are identified by the value of the clientid and sessionid fields of CREATESESSION4args, and the method for dealing with them is outlined in the scenarios below.

The processing of this operation is divided into two phases: clientid confirmation and session creation. In case the state for the provided clientid has not been verified, it is confirmed before the session is created. Otherwise the clientid confirmation phase is skipped and only the session creation phase occurs. Note that since only confirmed clients may create sessions, the clientid confirmation stage does not depend upon sessionid_arg.

CLIENTID CONFIRMATION

The operational cases are described in terms of which client records with clientid equal to clientid_arg exist in the server's set of client records. Any case in which there is more than one record with identical values for clientid represents a server implementation error. Operation in the potentially valid cases is summarized as follows.

   1. Common Case

      If the server has the following unconfirmed record, then this is the expected confirmation of an unconfirmed record.

      { *, *, principal_arg, clientid_arg, FALSE }

      The confirmed field of the record is set to TRUE and processing of the operation continues normally.

   2. Stale Clientid

      If the server contains no records with clientid equal to clientid_arg, then most likely the client's state has been purged during a period of inactivity, possibly due to a loss of connectivity. NFS4ERR_STALE_CLIENTID is returned, and no changes are made to any client records on the server.

   3. Principal Change or Collision

      If the server has the following record, then the client has changed principals after the previous CREATECLIENTID request, or there has been a chance collision between shorthand client identifiers.

      { *, *, old_principal_arg, clientid_arg, * }

      Neither of these cases is permissible. Processing stops and NFS4ERR_CLID_INUSE is returned to the client. No changes are made to any client records on the server.

SESSION CREATION

To determine whether this request is a replay, the server examines the sessionid argument provided by the client. If the sessionid matches the identifier of a previously created session, then this request must be interpreted as a replay. No new state is created and a reply with the parameters of the existing session is returned to the client. If a session corresponding to the sessionid does not already exist, then the request is not a replay and is processed as follows.

NOTE: It is the responsibility of the client to generate appropriate values for sessionid. Since the ordering of messages sent on different transport connections is not guaranteed, immediately reusing the sessionid of a previously destroyed session may yield unpredictable results. Client implementations should avoid recently used sessionids to ensure correct behavior.

The server examines the persist, maxrequestsize, maxresponsesize, maxrequests and headerpadsize arguments. For each argument, if the value is acceptable to the server, it is recommended that the server use the provided value to create the new session. If it is not acceptable, the server may use a different value, but must return the value used to the client. These parameters have the following interpretation.

   persist: True if the client desires server support for "reliable" semantics. For sessions in which only idempotent operations will be used (e.g. a read-only session), clients should set this value to false. If the server does not or cannot provide "reliable" semantics, this value must be set to false on return.

   maxrequestsize: The maximum size of a COMPOUND request that will be sent by the client, including RPC headers.

   maxresponsesize: The maximum size of a COMPOUND reply that the client will accept from the server, including RPC headers. The server must not increase the value of this parameter. If a client sends a COMPOUND request for which the size of the reply would exceed this value, the server will return NFS4ERR_RESOURCE.

   maxrequests: The maximum number of concurrent COMPOUND requests that the client will issue on the session.
   Subsequent COMPOUND requests will each be assigned a slot identifier by the client in the range 0 to maxrequests - 1 inclusive. A slot id cannot be reused until the previous request on that slot has completed.

   headerpadsize: The maximum amount of padding the client is willing to apply to ensure that write payloads are aligned on some boundary at the server. The server should reply with its preferred value, or zero if padding is not in use. The server may decrease this value but must not increase it.

The server creates the session by recording the parameter values used and, if the persist parameter is true and has been accepted by the server, allocating space for the duplicate request cache (DRC). If the session state is created successfully, the server associates it with the session identifier provided by the client. This identifier must be unique among the client's active sessions, but there is no need for it to be globally unique. Finally, the server returns the negotiated values used to create the session to the client.

22.41.6. ERRORS

   NFS4ERR_BADXDR
   NFS4ERR_CLID_INUSE
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE_CLIENTID

22.42. BIND_BACKCHANNEL - Create a callback channel binding

Establish a callback channel on the connection.

22.42.1. SYNOPSIS

22.42.2. ARGUMENT

   struct BIND_BACKCHANNEL4args {
           clientid4       clientid;
           uint32_t        callback_program;
           uint32_t        callback_ident;
           count4          maxrequestsize;
           count4          maxresponsesize;
           count4          maxrequests;

           switch (channelmode4 mode) {
           case DEFAULT:
                   void;
           case STREAM:
                   streamchannelattrs4 streamchanattrs;
           case RDMA:
                   rdmachannelattrs4 rdmachanattrs;
           };
   };

22.42.3. RESULT

   struct BIND_BACKCHANNEL4resok {
           count4          maxrequestsize;
           count4          maxresponsesize;
           count4          maxrequests;

           switch (channelmode4 mode) {
           case DEFAULT:
                   void;
           case STREAM:
                   streamchannelattrs4 streamchanattrs;
           case RDMA:
                   rdmachannelattrs4 rdmachanattrs;
           };
   };

   union BIND_BACKCHANNEL4res switch (nfsstat4 status) {
   case NFS4_OK:
           BIND_BACKCHANNEL4resok resok4;
   default:
           void;
   };

22.42.4. DESCRIPTION

The BIND_BACKCHANNEL operation serves to establish the current connection as a designated callback channel for the specified session. Normally, only one callback channel is bound; however, if more than one is established, they are used at the server's prerogative, and no affinity or preference is specified by the client.

The arguments and results of the BIND_BACKCHANNEL call are a subset of the session parameters, and are used identically to those values, applied to the callback channel only. However, not all session operation channel parameters are relevant to the callback channel, for example header padding (since writes of bulk data are not performed in callbacks).

22.42.5. IMPLEMENTATION

No discussion at this time.

22.42.6. ERRORS

   TBD

22.43. DESTROYSESSION - Destroy existing session

Destroy an existing session.

22.43.1. SYNOPSIS

   void -> status

22.43.2. ARGUMENT

   struct DESTROYSESSION4args {
           sessionid4      sessionid;
   };

22.43.3. RESULT

   struct DESTROYSESSION4res {
           nfsstat4        status;
   };

22.43.4. DESCRIPTION

The DESTROYSESSION operation closes the session and discards any active state such as locks, leases, and server duplicate request cache entries. Any remaining connections bound to the session are immediately unbound and may additionally be closed by the server. This operation must be the final, or only, operation in any request.
Because the operation results in destruction of the session, any duplicate request caching for this request, as well as for previously completed requests, will be lost. For this reason, it is advisable not to place this operation in a request with other state-modifying operations. In addition, a SEQUENCE operation is not required in the request. Note that because the operation will never be replayed by the server, a client that retransmits the request may receive an error in response, even though the session may have been successfully destroyed.

22.43.5. IMPLEMENTATION

No discussion at this time.

22.43.6. ERRORS

   TBD

22.44. SEQUENCE - Supply per-procedure sequencing and control

Supply per-procedure sequencing and control.

22.44.1. SYNOPSIS

   control -> control

22.44.2. ARGUMENT

   typedef uint32_t sequenceid4;
   typedef uint32_t slotid4;

   struct SEQUENCE4args {
           sessionid4      sessionid;
           sequenceid4     sequenceid;
           slotid4         slotid;
           slotid4         maxslot;
   };

22.44.3. RESULT

   struct SEQUENCE4resok {
           sessionid4      sessionid;
           sequenceid4     sequenceid;
           slotid4         slotid;
           slotid4         maxslot;
           slotid4         target_maxslot;
   };

   union SEQUENCE4res switch (nfsstat4 status) {
   case NFS4_OK:
           SEQUENCE4resok  resok4;
   default:
           void;
   };

22.44.4. DESCRIPTION

The SEQUENCE operation is used to manage operational accounting for the session on which the operation is sent. The arguments include the session to which this request belongs, the slotid and sequenceid used by the server to implement session request control and the duplicate reply cache semantics, and exchanged slot counts which are used to adjust these values. This operation must appear once as the first operation in each COMPOUND sent after the channel is successfully bound, or a protocol error must result.
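The slot machinery that SEQUENCE carries can be pictured with a small client-side table: a slot is busy from the time a COMPOUND is sent until its reply arrives, and the slot's sequenceid increments once per reuse. The following C sketch is illustrative only and not normative; all names are hypothetical:

   #include <stdbool.h>
   #include <stdint.h>

   #define MAXREQUESTS 16           /* negotiated at CREATESESSION */

   struct slot {
       uint32_t sequenceid;         /* last sequenceid sent on this slot */
       bool     busy;               /* outstanding request on this slot? */
   };

   static struct slot table[MAXREQUESTS];

   /* Returns a slotid in 0..MAXREQUESTS-1, or -1 if all are busy. */
   int acquire_slot(uint32_t *seq_out)
   {
       for (int i = 0; i < MAXREQUESTS; i++) {
           if (!table[i].busy) {
               table[i].busy = true;
               *seq_out = ++table[i].sequenceid; /* next sequence value */
               return i;
           }
       }
       return -1;  /* caller must wait for a reply to free a slot */
   }

   void release_slot(int slotid)
   {
       table[slotid].busy = false;  /* reply arrived; slot reusable */
   }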
22.44.5. IMPLEMENTATION

No discussion at this time.

22.44.6. ERRORS

   NFS4ERR_BADSESSION
   NFS4ERR_BADSLOT

22.45. GET_DIR_DELEGATION - Get a directory delegation

Obtain a directory delegation.

22.45.1. SYNOPSIS

   (cfh), requested notification -> (cfh), cookieverf, stateid, supported notification

22.45.2. ARGUMENT

   /*
    * Notification types.
    */
   const DIR_NOTIFICATION_NONE                    = 0x00000000;
   const DIR_NOTIFICATION_CHANGE_CHILD_ATTRIBUTES = 0x00000001;
   const DIR_NOTIFICATION_CHANGE_DIR_ATTRIBUTES   = 0x00000002;
   const DIR_NOTIFICATION_REMOVE_ENTRY            = 0x00000004;
   const DIR_NOTIFICATION_ADD_ENTRY               = 0x00000008;
   const DIR_NOTIFICATION_RENAME_ENTRY            = 0x00000010;
   const DIR_NOTIFICATION_CHANGE_COOKIE_VERIFIER  = 0x00000020;

   typedef uint32_t dir_notification_type4;
   typedef nfstime4 attr_notice4;

   struct GET_DIR_DELEGATION4args {
           bool                    gdda_signal_deleg_avail;
           dir_notification_type4  notification_type;
           attr_notice4            child_attr_delay;
           attr_notice4            dir_attr_delay;
   };

22.45.3. RESULT

   struct GET_DIR_DELEGATION4resok {
           verifier4       cookieverf;
           /* Stateid for GET_DIR_DELEGATION */
           stateid4        stateid;
           /* Which notifications can the server support */
           dir_notification_type4 notification;
           bitmap4         child_attributes;
           bitmap4         dir_attributes;
   };

   union GET_DIR_DELEGATION4res switch (nfsstat4 status) {
   case NFS4_OK:
           /* CURRENT_FH: delegated dir */
           GET_DIR_DELEGATION4resok resok4;
   case NFS4ERR_DIRDELEG_UNAVAIL:
           bool gddr_will_signal_deleg_avail;
   default:
           void;
   };

22.45.4. DESCRIPTION

The GET_DIR_DELEGATION operation is used by a client to request a directory delegation. The directory is represented by the current filehandle. The client also specifies whether it wants the server to notify it when the directory changes in certain ways by setting one or more bits in a bitmap. The server may choose not to grant the delegation. In that case the server will return NFS4ERR_DIRDELEG_UNAVAIL. If the server decides to hand out the delegation, it will return a cookie verifier for that directory. If the cookie verifier changes while the client is holding the delegation, the delegation will be recalled unless the client has asked for notification of this event, in which case a notification will be sent to the client.

The server will also return a directory delegation stateid in addition to the cookie verifier as a result of the GET_DIR_DELEGATION operation. This stateid will appear in callback messages related to the delegation, such as notifications and delegation recalls. The client will use this stateid to return the delegation voluntarily or upon recall. A delegation is returned by calling the DELEGRETURN operation.

The server may not be able to support notifications of certain events. If the client asks for such notifications, the server must inform the client of its inability to do so as part of the GET_DIR_DELEGATION reply by not setting the appropriate bits in the supported notifications bitmask contained in the reply.

The GET_DIR_DELEGATION operation can be used for both normal and named attribute directories. It covers all the entries in the directory except the ".." entry. That means if a directory and its parent both hold directory delegations, any changes to the parent will not cause a notification to be sent for the child, even though the child's ".." entry points to the parent.

If the client sets gdda_signal_deleg_avail to TRUE, then it is registering with the server a "want" for a directory delegation. If the server supports and will honor the "want", the results will have gddr_will_signal_deleg_avail set to TRUE. If so, the client should expect a CB_RECALLABLE_OBJ_AVAIL operation to indicate that a directory delegation is available.

22.45.5. IMPLEMENTATION

Directory delegation provides the benefit of improving the cache consistency of namespace information. This is done through synchronous callbacks. A server must support synchronous callbacks in order to support directory delegations. In addition to that, asynchronous notifications provide a way to reduce network traffic as well as improve client performance in certain conditions. Notifications would not be requested when the goal is just cache consistency.

Notifications are specified in terms of potential changes to the directory. A client can ask to be notified whenever an entry is added to a directory by setting notification_type to DIR_NOTIFICATION_ADD_ENTRY. It can also ask for notifications on entry removal, renames, directory attribute changes and cookie verifier changes by setting the notification_type flags appropriately. In addition to that, the client can also ask for notifications upon attribute changes to children in the directory to keep its attribute cache up to date. However, any changes made to child attributes do not cause the delegation to be recalled.
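For example, a client that wants only its name cache kept valid, with immediate notification of namespace changes, might request something like the following (the field values are illustrative only):

   GET_DIR_DELEGATION4args {
           gdda_signal_deleg_avail = FALSE;
           notification_type       = DIR_NOTIFICATION_ADD_ENTRY |
                                     DIR_NOTIFICATION_REMOVE_ENTRY |
                                     DIR_NOTIFICATION_RENAME_ENTRY;
           child_attr_delay        = 0;   /* immediate, if requested */
           dir_attr_delay          = 0;
   }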
   If a client is interested in directory entry caching or negative
   name caching, it can set notification_type appropriately, and the
   server will notify it of all changes that would otherwise
   invalidate its name cache.  The kind of notification a client asks
   for may depend on the directory size, its rate of change, and the
   applications being used to access that directory.  The conditions
   under which a client might ask for a notification are, however,
   outside the scope of this specification.

   The client will set one or more bits in a bitmap
   (notification_type) to let the server know what kind of
   notification(s) it is interested in.  For attribute notifications
   it will set bits in another bitmap to indicate which attributes it
   wants to be notified of.  If the server does not support
   notifications for changes to a certain attribute, it should not set
   that attribute in the supported attribute bitmap (notification)
   specified in the reply.

   In addition, the client will let the server know whether it wants
   to get the notification as soon as the attribute change occurs or
   only after a certain delay, by setting a delay factor:
   child_attr_delay covers attribute changes to children, and
   dir_attr_delay covers attribute changes to the directory itself.
   If this delay factor is set to zero, the client wants to be
   notified of any attribute changes as soon as they occur.  If the
   delay factor is set to N, the server will make a best-effort
   guarantee that attribute updates are not out of sync by more than
   N.  One value covers all attribute changes for the directory and
   another value covers all attribute changes for all children in the
   directory.

   If the client asks for a delay factor that the server does not
   support, or one that may cause significant resource consumption on
   the server by forcing the server to send a large number of
   notifications, the server should not commit to sending out
   notifications for that attribute and therefore must not set the
   appropriate bit in the child_attributes and dir_attributes bitmaps
   in the response.

   The server will let the client know which notifications it can
   support by setting the appropriate bits in a bitmap.  If it agrees
   to send attribute notifications, it will also set two attribute
   masks indicating which attributes it will send change notifications
   for.  One of the masks covers changes in directory attributes and
   the other covers attribute changes to any files in the directory.

   The client should use a security flavor that the filesystem is
   exported with.  If it uses a different flavor, the server should
   return NFS4ERR_WRONGSEC.

22.45.6.  ERRORS

   NFS4ERR_ACCESS
   NFS4ERR_BADHANDLE
   NFS4ERR_BADXDR
   NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL
   NFS4ERR_MOVED
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NOTDIR
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT
   NFS4ERR_STALE
   NFS4ERR_DIRDELEG_UNAVAIL
   NFS4ERR_WRONGSEC
   NFS4ERR_EIO
   NFS4ERR_NOTSUPP

22.46.  LAYOUTGET - Get Layout Information

22.46.1.  SYNOPSIS

   (cfh), signal_avail, layout_type, iomode, offset, length,
   minlength, maxcount -> layout

22.46.2.  ARGUMENT

   struct LAYOUTGET4args {
           /* CURRENT_FH: file */
           bool                    loga_signal_layout_avail;
           pnfs_layouttype4        layout_type;
           pnfs_layoutiomode4      iomode;
           offset4                 offset;
           length4                 length;
           length4                 minlength;
           count4                  maxcount;
   };
22.46.3.  RESULT

   struct LAYOUTGET4resok {
           pnfs_layout4    layout;
   };

   union LAYOUTGET4res switch (nfsstat4 status) {
   case NFS4_OK:
           LAYOUTGET4resok resok4;
   case NFS4ERR_LAYOUTTRYLATER:
           bool    logr_will_signal_layout_avail;
   default:
           void;
   };

22.46.4.  DESCRIPTION

   Requests a layout for reading or for writing (and reading) the file
   given by the filehandle, at the byte range specified by offset and
   length.  Layouts are identified by the clientid, filehandle, and
   layout type.  The use of the iomode depends upon the layout type,
   but it should reflect the client's data access intent.

   The LAYOUTGET operation returns layout information for the
   specified byte range: a layout segment.  To get a layout segment
   from a specific offset through the end-of-file, regardless of the
   file's length, a length field with all bits set to 1 (one) should
   be used.  If the length is zero, or if a length that is not all
   bits set to one is specified and that length added to the offset
   exceeds the maximum 64-bit unsigned integer value, the error
   NFS4ERR_INVAL will result.

   The "minlength" field specifies the minimum size of the overlap
   with the requested offset and length that is to be returned.  If
   this requirement cannot be met, no layout is returned; the error
   NFS4ERR_LAYOUTTRYLATER can be returned instead.

   The "maxcount" field specifies the maximum layout size (in bytes)
   that the client can handle.  If the size of the layout structure
   exceeds the size specified by maxcount, the metadata server will
   return the error NFS4ERR_TOOSMALL.

   The metadata server may adjust the range of the returned layout
   segment based on striping patterns and usage implied by the iomode.
   The client must be prepared to get a layout that does not line up
   exactly with its request; there MUST be at least an overlap of
   "minlength" between the layout returned by the server and the
   client's request, or the server SHOULD reject the request.  See
   Section 14.3 for more details.

   The metadata server may also return a layout segment with an iomode
   other than that requested by the client.  If it does so, it must
   ensure that the iomode is more permissive than the iomode
   requested.  For example, this allows an implementation to upgrade
   read-only requests to read/write requests at its discretion, within
   the limits of the layout type specific protocol.  An iomode of
   either LAYOUTIOMODE_READ or LAYOUTIOMODE_RW must be returned.

   The format of the returned layout is specific to the underlying
   file system.  Layout types other than the NFSv4 file layout type
   are specified outside of this document.

   If layouts are not supported for the requested file or its
   containing file system, the server SHOULD return
   NFS4ERR_LAYOUTUNAVAILABLE.  If the layout type is not supported,
   the metadata server should return NFS4ERR_UNKNOWN_LAYOUTTYPE.  If
   layouts are supported but no layout matches the client-provided
   layout identification, the server should return NFS4ERR_BADLAYOUT.
   If an invalid iomode is specified, or an iomode of LAYOUTIOMODE_ANY
   is specified, the server should return NFS4ERR_BADIOMODE.

   If the layout for the file is unavailable due to transient
   conditions, e.g., file sharing prohibits layouts, the server must
   return NFS4ERR_LAYOUTTRYLATER.  If the layout request is rejected
   due to an overlapping layout recall, the server must return
   NFS4ERR_RECALLCONFLICT.  See Section 14.5.3 for details.
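   As a non-normative aside, the offset/length validation described
   above reduces to a short check.  The constants stand in for values
   defined elsewhere; NFS4ERR_INVAL's numeric value follows the base
   protocol:

   #include <stdint.h>

   #define NFS4_UINT64_MAX 0xffffffffffffffffULL
   #define NFS4_OK         0
   #define NFS4ERR_INVAL   22   /* value from the base protocol */

   /* Validate a LAYOUTGET offset/length pair: a length of all ones
    * means "offset through end-of-file"; otherwise the length must
    * be non-zero and offset + length must not exceed the maximum
    * 64-bit unsigned integer value. */
   int layoutget_check_range(uint64_t offset, uint64_t length)
   {
           if (length == NFS4_UINT64_MAX)
                   return NFS4_OK;          /* to EOF, any offset */
           if (length == 0)
                   return NFS4ERR_INVAL;
           if (length > NFS4_UINT64_MAX - offset)
                   return NFS4ERR_INVAL;    /* offset+length wraps */
           return NFS4_OK;
   }

   A corresponding "minlength" test would compare the returned
   segment against the requested range using the same arithmetic.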
   If the layout conflicts with a mandatory byte range lock held on
   the file, and if the storage devices have no method of enforcing
   mandatory locks other than through the restriction of layouts, the
   metadata server should return NFS4ERR_LOCKED.

   If the client sets loga_signal_layout_avail to TRUE, it is
   registering with the server a "want" for a layout.  If the server
   supports and will honor the "want", the results will have
   logr_will_signal_layout_avail set to TRUE.  If so, the client
   should expect a CB_RECALLABLE_OBJ_AVAIL operation indicating that a
   layout has become available.

   On success, the current filehandle retains its value.

22.46.5.  IMPLEMENTATION

   Typically, LAYOUTGET will be called as part of a compound RPC after
   an OPEN operation, and it results in the client having location
   information for the file; a client may also hold a layout across
   multiple OPENs.  The client specifies a layout type that limits
   what kind of layout the server will return.  This prevents servers
   from issuing layouts that are unusable by the client.

22.46.6.  ERRORS

   NFS4ERR_BADLAYOUT
   NFS4ERR_BADIOMODE
   NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL
   NFS4ERR_LAYOUTUNAVAILABLE
   NFS4ERR_LAYOUTTRYLATER
   NFS4ERR_LOCKED
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NOTSUPP
   NFS4ERR_RECALLCONFLICT
   NFS4ERR_STALE
   NFS4ERR_STALE_CLIENTID
   NFS4ERR_TOOSMALL
   NFS4ERR_UNKNOWN_LAYOUTTYPE

22.47.  LAYOUTCOMMIT - Commit writes made using a layout

22.47.1.  SYNOPSIS

   (cfh), clientid, offset, length, reclaim, last_write_offset,
   time_modify, time_access, layoutupdate -> newsize

22.47.2.  ARGUMENT

   union newtime4 switch (bool timechanged) {
   case TRUE:
           nfstime4        time;
   case FALSE:
           void;
   };

   union newoffset4 switch (bool newoffset) {
   case TRUE:
           offset4         offset;
   case FALSE:
           void;
   };

   struct LAYOUTCOMMIT4args {
           /* CURRENT_FH: file */
           clientid4           clientid;
           offset4             offset;
           length4             length;
           bool                reclaim;
           newoffset4          last_write_offset;
           newtime4            time_modify;
           newtime4            time_access;
           pnfs_layoutupdate4  layoutupdate;
   };

22.47.3.  RESULT

   struct LAYOUTCOMMIT4resok {
           newsize4        newsize;
   };

   union LAYOUTCOMMIT4res switch (nfsstat4 status) {
   case NFS4_OK:
           LAYOUTCOMMIT4resok      resok4;
   default:
           void;
   };

22.47.4.  DESCRIPTION

   Commits changes in the layout segment represented by the current
   filehandle, clientid, and byte range.  Since layouts are
   subdividable, a smaller portion of a layout, retrieved via
   LAYOUTGET, may be committed.  The region being committed is
   specified through the byte range (length and offset).  Note: the
   "layoutupdate" structure does not include the length and offset, as
   they are already specified in the arguments.

   The LAYOUTCOMMIT operation indicates that the client has completed
   writes using a layout obtained by a previous LAYOUTGET.  The client
   may have written only a subset of the data range it previously
   requested.  LAYOUTCOMMIT allows it to commit or discard
   provisionally allocated space and to update the server with a new
   end of file.  The layout referenced by LAYOUTCOMMIT is still valid
   after the operation completes and can continue to be referenced by
   the clientid, filehandle, byte range, and layout type.
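   To make the subrange-commit idea concrete, a client might populate
   the arguments as in this non-normative sketch.  The C structures
   mirror only part of LAYOUTCOMMIT4args, and the dirty-extent
   bookkeeping is invented:

   #include <stdint.h>
   #include <stdbool.h>

   /* Partial C mirror of LAYOUTCOMMIT4args (illustrative only;
    * times and layoutupdate omitted). */
   struct newoffset { bool newoffset; uint64_t offset; };

   struct layoutcommit_args {
           uint64_t         clientid;
           uint64_t         offset;
           uint64_t         length;
           bool             reclaim;
           struct newoffset last_write_offset;
   };

   /* Commit one dirty extent [dirty_off, dirty_off + dirty_len);
    * assumes dirty_len > 0. */
   void build_layoutcommit(struct layoutcommit_args *a,
                           uint64_t clientid,
                           uint64_t dirty_off, uint64_t dirty_len)
   {
           a->clientid = clientid;
           a->offset   = dirty_off;
           a->length   = dirty_len;
           a->reclaim  = false;  /* normal case, not reboot recovery */
           /* Last byte written: one less than the implied size. */
           a->last_write_offset.newoffset = true;
           a->last_write_offset.offset    = dirty_off + dirty_len - 1;
   }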
The "reclaim" field set to "true" in a LAYOUTCOMMIT request specifies that the client is attempting to commit changes to a layout after the reboot of the metadata server during the metadata server's recovery grace period. This type of request may be necessary when the client has uncommitted writes to provisionally allocated regions of a file which were sent to the storage devices before the reboot of the metadata server. In this case the layout provided by the client MUST be a subset of a writable layout that the client held immediately before the reboot of the metadata server. The metadata server is free to accept or reject this request based on its own internal metadata consistency checks. If the metadata server finds that the layout provided by the client does not pass its consistency checks, it MUST reject the request with the status NFS4ERR_RECLAIM_BAD. The successful completion of the LAYOUTCOMMIT request with "reclaim" set to true does NOT provide the client with a layout for the file. It simply commits the changes to the file layout specified in the "layoutupdate" field. To obtain a layout for the file the client must issue a LAYOUTGET request to the server after the server's grace period has expired. If the metadata server receives a LAYOUTCOMMIT request with "reclaim" set to true when the metadata server is not in its recovery grace period, it MUST reject the request with the status NFS4ERR_NO_GRACE. Setting the "reclaim" field to "true" is required if and only if the committed layout was acquired before the metadata server reboot. Committing layouts that were acquired during the metadata server's grace period MUST set the "reclaim" field to "false". The "last_write_offset" field specifies the offset of the last byte written by the client previous to the LAYOUTCOMMIT. Note: this value is never equal to the file's size (at most it is one byte less than the file's size). The metadata server may use this information to determine whether the file's size needs to be updated. If the metadata server updates the file's size as the result of the LAYOUTCOMMIT operation, it must return the new size as part of the results. Shepler Expires December 22, 2006 [Page 373] Internet-Draft NFSv4 Minor Version 1 June 2006 The "time_modify" and "time_access" fields allow the client to suggest times it would like the metadata server to set. The metadata server may use these time values or it may use the time of the LAYOUTCOMMIT operation to set these time values. If the metadata server uses the client provided times, it should sanity check the values (e.g., to ensure time does not flow backwards). If the client wants to force the metadata server to set an exact time, the client should use a SETATTR operation in a compound right after LAYOUTCOMMIT. See Section 14.4 for more details. If the new client desires the resultant mtime or atime, it should issue a GETATTR following the LAYOUTCOMMIT; e.g., later in the same compound. The "layoutupdate" argument to LAYOUTCOMMIT provides a mechanism for a client to provide layout specific updates to the metadata server. For example, the layout update can describe what regions of the original layout have been used and what regions can be deallocated. There is no NFSv4 file layout specific layoutupdate structure. The layout information is more verbose for block devices than for objects and files because the latter hide the details of block allocation behind their storage protocols. 
   The "time_modify" and "time_access" fields allow the client to
   suggest times it would like the metadata server to set.  The
   metadata server may use these time values, or it may use the time
   of the LAYOUTCOMMIT operation to set these time values.  If the
   metadata server uses the client-provided times, it should sanity
   check the values (e.g., to ensure that time does not flow
   backwards).  If the client wants to force the metadata server to
   set an exact time, the client should use a SETATTR operation in a
   compound right after LAYOUTCOMMIT.  See Section 14.4 for more
   details.  If the client desires the resultant mtime or atime, it
   should issue a GETATTR following the LAYOUTCOMMIT, e.g., later in
   the same compound.

   The "layoutupdate" argument to LAYOUTCOMMIT provides a mechanism
   for a client to provide layout-specific updates to the metadata
   server.  For example, the layout update can describe what regions
   of the original layout have been used and what regions can be
   deallocated.  There is no NFSv4 file layout specific layoutupdate
   structure.  The layout information is more verbose for block
   devices than for objects and files, because the latter hide the
   details of block allocation behind their storage protocols.  At a
   minimum, the client needs to communicate changes to the end-of-file
   location back to the server and, if desired, its view of the file
   modify and access times.  For block/volume layouts, it needs to
   specify precisely which blocks have been used.

   If the layout identified in the arguments does not exist, the error
   NFS4ERR_BADLAYOUT is returned.  The layout being committed may also
   be rejected if it does not correspond to an existing layout with an
   iomode of RW.  If the LAYOUTCOMMIT request sets the "reclaim" field
   to "true" after the metadata server's grace period,
   NFS4ERR_NO_GRACE is returned.

   On success, the current filehandle retains its value.

22.47.5.  IMPLEMENTATION

   Optionally, the client can also use LAYOUTCOMMIT with the "reclaim"
   field set to "true" to convey hints about modified file attributes
   or to report layout-type specific information such as I/O errors
   for object-based storage layouts, as is done during normal
   operation.  Doing so may help the metadata server to recover files
   more efficiently after a reboot.  For example, some file system
   implementations may require expensive recovery of filesystem
   objects if the metadata server does not get a positive indication
   from all clients holding a write layout that they have successfully
   completed all their writes.  Sending a LAYOUTCOMMIT (if required)
   and then following with LAYOUTRETURN can provide such an indication
   and allow for graceful and efficient recovery.

22.47.6.  ERRORS

   NFS4ERR_BADLAYOUT
   NFS4ERR_BADIOMODE
   NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NO_GRACE
   NFS4ERR_RECLAIM_BAD
   NFS4ERR_STALE
   NFS4ERR_STALE_CLIENTID
   NFS4ERR_UNKNOWN_LAYOUTTYPE

22.48.  LAYOUTRETURN - Release Layout Information

22.48.1.  SYNOPSIS

   (cfh), clientid, offset, length, reclaim, iomode, layout_type -> -

22.48.2.  ARGUMENT

   struct LAYOUTRETURN4args {
           /* CURRENT_FH: file */
           clientid4           clientid;
           offset4             offset;
           length4             length;
           bool                reclaim;
           pnfs_layoutiomode4  iomode;
           pnfs_layouttype4    layout_type;
   };

22.48.3.  RESULT

   struct LAYOUTRETURN4res {
           nfsstat4        status;
   };

22.48.4.  DESCRIPTION

   Returns the layout segment represented by the current filehandle,
   clientid, byte range, iomode, and layout type.  After this call,
   the client MUST NOT use the layout and the associated storage
   protocol to access the file data.  The layout being returned may be
   a subdivision of a layout previously fetched through LAYOUTGET.  It
   may also be a subset or superset of a layout specified by
   CB_LAYOUTRECALL; however, if it is a subset, the recall is not
   complete until the full recalled byte range has been returned.  It
   is also permissible, and no error should result, for a client to
   return a byte range covering a layout it does not hold.  If the
   length is all 1s, the layout covers the range from offset to EOF.
   An iomode of ANY specifies that all layouts that match the other
   arguments to LAYOUTRETURN (i.e., clientid, byte range, and type)
   are being returned.

   The "reclaim" field set to "true" in a LAYOUTRETURN request
   specifies that the client is attempting to return a layout that was
   acquired before the reboot of the metadata server, during the
   metadata server's grace period.  When returning layouts that were
   acquired during the metadata server's grace period, the client MUST
   set the "reclaim" field to "false".  See LAYOUTCOMMIT
   (Section 22.47) for more details.
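   The matching rules above can be sketched as follows; the segment
   record is invented, and the byte range is assumed to have been
   validated already (see LAYOUTGET), so the additions cannot wrap:

   #include <stdint.h>
   #include <stdbool.h>

   #define NFS4_UINT64_MAX 0xffffffffffffffffULL

   enum iomode { IOMODE_READ = 1, IOMODE_RW = 2, IOMODE_ANY = 3 };

   /* One held layout segment (invented record). */
   struct layout_seg {
           uint64_t    offset;
           uint64_t    length;   /* all ones means "to EOF" */
           enum iomode iomode;
   };

   /* Does a LAYOUTRETURN for [off, off + len) with the given iomode
    * cover this segment?  IOMODE_ANY matches every iomode, and a
    * length of all ones runs to end-of-file. */
   bool layoutreturn_covers(const struct layout_seg *seg,
                            uint64_t off, uint64_t len,
                            enum iomode mode)
   {
           uint64_t end, seg_end;

           if (mode != IOMODE_ANY && mode != seg->iomode)
                   return false;
           end = (len == NFS4_UINT64_MAX) ? NFS4_UINT64_MAX
                                          : off + len;
           seg_end = (seg->length == NFS4_UINT64_MAX)
                         ? NFS4_UINT64_MAX
                         : seg->offset + seg->length;
           return seg->offset >= off && seg_end <= end;
   }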
   Layouts may be returned when recalled, or voluntarily (i.e., before
   the server has recalled them).  In either case, the client must
   properly propagate any state changes made under the context of the
   layout to the storage devices or to the server before returning the
   layout.

   If a client fails to return a layout in a timely manner, the
   metadata server should use its control protocol with the storage
   devices to fence the client from accessing the data referenced by
   the layout.  See Section 14.5 for more details.

   If the layout identified in the arguments does not exist, the error
   NFS4ERR_BADLAYOUT is returned.  If a layout exists but the iomode
   does not match, NFS4ERR_BADIOMODE is returned.  If the LAYOUTRETURN
   request sets the "reclaim" field to "true" after the metadata
   server's grace period, NFS4ERR_NO_GRACE is returned.

   On success, the current filehandle retains its value.

   [[Comment.6: Should LAYOUTRETURN be modified to handle FSID
   callbacks?]]

22.48.5.  IMPLEMENTATION

22.48.6.  ERRORS

   NFS4ERR_BADLAYOUT
   NFS4ERR_BADIOMODE
   NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL
   NFS4ERR_NOFILEHANDLE
   NFS4ERR_NO_GRACE
   NFS4ERR_STALE
   NFS4ERR_STALE_CLIENTID
   NFS4ERR_UNKNOWN_LAYOUTTYPE

22.49.  GETDEVICEINFO - Get Device Information

22.49.1.  SYNOPSIS

   (cfh), device_id, layout_type, maxcount -> device_addr

22.49.2.  ARGUMENT

   struct GETDEVICEINFO4args {
           /* CURRENT_FH: file */
           pnfs_deviceid4      device_id;
           pnfs_layouttype4    layout_type;
           count4              maxcount;
   };

22.49.3.  RESULT

   struct GETDEVICEINFO4resok {
           opaque          device_addr;
   };

   union GETDEVICEINFO4res switch (nfsstat4 status) {
   case NFS4_OK:
           GETDEVICEINFO4resok     resok4;
   default:
           void;
   };

22.49.4.  DESCRIPTION

   Returns device address information for a specified device.  The
   device address MUST correspond to the layout type specified in
   GETDEVICEINFO4args.  The current filehandle (cfh) is used to
   identify the file system; device IDs are unique per file system
   (FSID) and are qualified by the layout type.  See Section 14.1.4
   for more details on device ID assignment.

   If the size of the device address exceeds maxcount bytes, the
   metadata server will return the error NFS4ERR_TOOSMALL.  If an
   invalid device ID is given, the metadata server will respond with
   NFS4ERR_INVAL.

22.49.5.  IMPLEMENTATION

22.49.6.  ERRORS

   NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL
   NFS4ERR_TOOSMALL
   NFS4ERR_UNKNOWN_LAYOUTTYPE

22.50.  GETDEVICELIST

22.50.1.  SYNOPSIS

   (cfh), layout_type, maxcount, cookie, cookieverf -> cookie,
   cookieverf, device_addrs<>

22.50.2.  ARGUMENT

   struct GETDEVICELIST4args {
           /* CURRENT_FH: file */
           pnfs_layouttype4    layout_type;
           count4              maxcount;
           nfs_cookie4         cookie;
           verifier4           cookieverf;
   };

22.50.3.  RESULT

   struct GETDEVICELIST4resok {
           nfs_cookie4             cookie;
           verifier4               cookieverf;
           pnfs_devlist_item4      device_addrs<>;
           bool                    eof;
   };

   union GETDEVICELIST4res switch (nfsstat4 status) {
   case NFS4_OK:
           GETDEVICELIST4resok     resok4;
   default:
           void;
   };

22.50.4.  DESCRIPTION

   In some applications, especially SAN environments, it is convenient
   to find out about all the devices associated with a file system.
   This lets a client determine whether it has access to these
   devices, e.g., at mount time.  This operation returns an array of
   items (pnfs_devlist_item4) that establish the association between
   the short pnfs_deviceid4 and the addressing information for that
   device, for a particular layout type.
   This operation may not be able to fetch all device information at
   once; it therefore uses a cookie-based approach, similar to
   READDIR, to fetch additional device information (see [6], section
   14.2.24).  The "eof" flag has a value of TRUE if there are no more
   entries to fetch.

   As in GETDEVICEINFO, the current filehandle (cfh) is used to
   identify the file system.

   As in GETDEVICEINFO, maxcount specifies the maximum number of bytes
   to return.  If the metadata server cannot fit even a single device
   address within maxcount bytes, it will return the error
   NFS4ERR_TOOSMALL.  If an invalid device ID is given, the metadata
   server will respond with NFS4ERR_INVAL.
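   A non-normative sketch of the READDIR-style fetch loop described
   above follows; the RPC wrapper here is a stub standing in for real
   XDR encoding and transport:

   #include <stdint.h>
   #include <stdbool.h>
   #include <string.h>

   struct devlist_res {          /* distilled GETDEVICELIST4resok */
           uint64_t cookie;
           char     cookieverf[8];
           bool     eof;
           /* device_addrs<> omitted for brevity */
   };

   /* Stub standing in for the real RPC: a genuine client would
    * XDR-encode GETDEVICELIST4args and decode the reply here. */
   static int getdevicelist_rpc(uint32_t layout_type,
                                uint32_t maxcount, uint64_t cookie,
                                const char *verf,
                                struct devlist_res *res)
   {
           (void)layout_type; (void)maxcount;
           (void)cookie; (void)verf;
           memset(res->cookieverf, 0, sizeof(res->cookieverf));
           res->eof = true;      /* stub: one empty page */
           return 0;
   }

   /* Fetch the full device list, page by page, until eof. */
   int fetch_all_devices(uint32_t layout_type, uint32_t maxcount)
   {
           struct devlist_res res = { .cookie = 0, .eof = false };
           char verf[8] = { 0 }; /* zero verifier on the first call */

           do {
                   int st = getdevicelist_rpc(layout_type, maxcount,
                                              res.cookie, verf, &res);
                   if (st != 0)
                           return st;   /* e.g. NFS4ERR_TOOSMALL */
                   memcpy(verf, res.cookieverf, sizeof(verf));
                   /* device_addrs<> from this page consumed here */
           } while (!res.eof);
           return 0;
   }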
22.50.5.  IMPLEMENTATION

22.50.6.  ERRORS

   NFS4ERR_BAD_COOKIE
   NFS4ERR_FHEXPIRED
   NFS4ERR_INVAL
   NFS4ERR_TOOSMALL
   NFS4ERR_UNKNOWN_LAYOUTTYPE

22.51.  WANT_DELEGATION

22.51.1.  SYNOPSIS

   (cfh), (clientid) -> stateid, delegation

22.51.2.  ARGUMENT

   /* CURRENT_FH: */
   /* CURRENT_CLIENTID: */

   union deleg_claim4 switch (open_claim_type4 claim) {
   /*
    * No special rights to object.  Ordinary delegation
    * request of the specified object.  Object identified
    * by filehandle.
    */
   case CLAIM_FH: /* new to v4.1 */
           void;

   /*
    * Right to file based on a delegation granted to a previous boot
    * instance of the client.  File is specified by filehandle.
    */
   case CLAIM_DELEG_PREV_FH: /* new to v4.1 */
           /* CURRENT_FH: file being opened */
           void;

   /*
    * Right to the file established by an open previous to server
    * reboot.  File identified by filehandle.
    * Used during server reclaim grace period.
    */
   case CLAIM_PREVIOUS:
           /* CURRENT_FH: file being reclaimed */
           open_delegation_type4   delegate_type;
   };

   struct WANT_DELEGATION4args {
           uint32_t        wda_want;
           deleg_claim4    wda_claim;
   };

22.51.3.  RESULT

   struct WANT_DELEGATION4resok {
           stateid4            wdr_stateid;
           open_delegation4    wdr_deleg;
   };

   union WANT_DELEGATION4res switch (nfsstat4 status) {
   case NFS4_OK:
           WANT_DELEGATION4resok   wdr_resok4;
   default:
           void;
   };

22.51.4.  DESCRIPTION

   Where this description mandates the return of a specific error code
   for a specific condition, and where multiple conditions apply, the
   server MAY return any of the mandated error codes.

   This operation allows a client to get a delegation on all types of
   files except directories.  The server MAY support this operation;
   if it does not, it MUST return NFS4ERR_NOTSUPP.

   This operation also allows the client to register a "want" for a
   delegation for the specified file object and to be notified via a
   callback when the delegation is available.  The server MAY support
   notifications of availability via callbacks.  If the server does
   not support the registration of wants, it MUST NOT return an error
   to indicate that.

   The client SHOULD NOT set OPEN4_SHARE_ACCESS_READ and SHOULD NOT
   set OPEN4_SHARE_ACCESS_WRITE in wda_want.  If it does, the server
   MUST ignore them.

   The meanings of the following flags in wda_want are the same as
   they are in OPEN:

      OPEN4_SHARE_ACCESS_WANT_READ_DELEG
      OPEN4_SHARE_ACCESS_WANT_WRITE_DELEG
      OPEN4_SHARE_ACCESS_WANT_ANY_DELEG
      OPEN4_SHARE_ACCESS_WANT_NO_DELEG
      OPEN4_SHARE_ACCESS_WANT_SIGNAL_DELEG_WHEN_RESRC_AVAIL
      OPEN4_SHARE_ACCESS_WANT_PUSH_DELEG_WHEN_UNCONTENDED

   The handling of the above flags in WANT_DELEGATION is the same as
   in OPEN.  A request for a conflicting delegation MUST NOT trigger
   the recall of the existing delegation.
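   As a non-normative example of composing wda_want, consider a client
   that wants a write delegation now, or a signal when one becomes
   available.  The numeric flag values below are placeholders, since
   this section does not assign them:

   #include <stdint.h>

   /* Placeholder values: this section does not assign the flag
    * bits, so these numbers are illustrative only. */
   #define OPEN4_SHARE_ACCESS_WANT_WRITE_DELEG                   0x1
   #define OPEN4_SHARE_ACCESS_WANT_SIGNAL_DELEG_WHEN_RESRC_AVAIL 0x2

   /* Ask for a write delegation now, or a CB_RECALLABLE_OBJ_AVAIL
    * signal when one frees up.  Note that no OPEN4_SHARE_ACCESS_READ
    * or ..._WRITE bits appear; the server MUST ignore them here. */
   uint32_t make_wda_want(void)
   {
           return OPEN4_SHARE_ACCESS_WANT_WRITE_DELEG |
                  OPEN4_SHARE_ACCESS_WANT_SIGNAL_DELEG_WHEN_RESRC_AVAIL;
   }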
22.51.5.  IMPLEMENTATION

   TBD

22.51.6.  ERRORS

   TBD

23.  NFS version 4.1 Callback Procedures

   The procedures used for callbacks are defined in the following
   sections.  In the interest of clarity, the terms "client" and
   "server" refer to NFS clients and servers, despite the fact that
   for an individual callback RPC, the sense of these terms would be
   precisely the opposite.

23.1.  Procedure 0: CB_NULL - No Operation

23.1.1.  SYNOPSIS

23.1.2.  ARGUMENTS

   void;

23.1.3.  RESULTS

   void;

23.1.4.  DESCRIPTION

   Standard NULL procedure.  Void argument, void response.  Even
   though there is no direct functionality associated with this
   procedure, the server will use CB_NULL to confirm the existence of
   a path for RPCs from server to client.

23.1.5.  ERRORS

   None.

23.2.  Procedure 1: CB_COMPOUND - Compound Operations

23.2.1.  SYNOPSIS

   compoundargs -> compoundres

23.2.2.  ARGUMENTS

   enum nfs_cb_opnum4 {
           OP_CB_GETATTR   = 3,
           OP_CB_RECALL    = 4,
           OP_CB_ILLEGAL   = 10044
   };

   union nfs_cb_argop4 switch (unsigned argop) {
   case OP_CB_GETATTR:
           CB_GETATTR4args opcbgetattr;
   case OP_CB_RECALL:
           CB_RECALL4args  opcbrecall;
   case OP_CB_ILLEGAL:
           void            opcbillegal;
   };

   struct CB_COMPOUND4args {
           utf8str_cs      tag;
           uint32_t        minorversion;
           uint32_t        callback_ident;
           nfs_cb_argop4   argarray<>;
   };

23.2.3.  RESULTS

   union nfs_cb_resop4 switch (unsigned resop) {
   case OP_CB_GETATTR:
           CB_GETATTR4res  opcbgetattr;
   case OP_CB_RECALL:
           CB_RECALL4res   opcbrecall;
   };

   struct CB_COMPOUND4res {
           nfsstat4        status;
           utf8str_cs      tag;
           nfs_cb_resop4   resarray<>;
   };

23.2.4.  DESCRIPTION

   The CB_COMPOUND procedure is used to combine one or more of the
   callback procedures into a single RPC request.  The main callback
   RPC program has two procedures: CB_NULL and CB_COMPOUND.  All other
   operations use the CB_COMPOUND procedure as a wrapper.

   In the processing of the CB_COMPOUND procedure, the client may find
   that it does not have the available resources to execute any or all
   of the operations within the CB_COMPOUND sequence.  In this case,
   the error NFS4ERR_RESOURCE will be returned for the particular
   operation within the CB_COMPOUND procedure where the resource
   exhaustion occurred.  This assumes that all previous operations
   within the CB_COMPOUND sequence have been evaluated successfully.

   Contained within the CB_COMPOUND results is a 'status' field.  This
   status must be equivalent to the status of the last operation that
   was executed within the CB_COMPOUND procedure.  Therefore, if an
   operation incurred an error, the 'status' value will be the same
   error value as is being returned for the operation that failed.

   For the definition of the "tag" field, see the section "Procedure
   1: COMPOUND - Compound Operations".

   The value of callback_ident is supplied by the client during
   SETCLIENTID.  The server must use the client-supplied
   callback_ident during the CB_COMPOUND to allow the client to
   properly identify the server.

   Illegal operation codes are handled in the same way as they are
   handled for the COMPOUND procedure.

23.2.5.  IMPLEMENTATION

   The CB_COMPOUND procedure is used to combine individual operations
   into a single RPC request.  The client interprets each of the
   operations in turn.  If an operation is executed by the client and
   the status of that operation is NFS4_OK, then the next operation in
   the CB_COMPOUND procedure is executed.  The client continues this
   process until there are no more operations to be executed or until
   one of the operations has a status value other than NFS4_OK.

23.2.6.  ERRORS

   NFS4ERR_BADHANDLE
   NFS4ERR_BAD_STATEID
   NFS4ERR_BADXDR
   NFS4ERR_OP_ILLEGAL
   NFS4ERR_RESOURCE
   NFS4ERR_SERVERFAULT

24.  CB_RECALLCREDIT - Change flow control limits

   Change flow control limits

24.1.  SYNOPSIS

   targetcount -> status

24.2.  ARGUMENT

   struct CB_RECALLCREDIT4args {
           sessionid4      sessionid;
           uint32_t        target;
   };

24.3.  RESULT

   struct CB_RECALLCREDIT4res {
           nfsstat4        status;
   };

24.4.  DESCRIPTION

   The CB_RECALLCREDIT operation requests that the client return
   session and transport credits to the server, by issuing zero-length
   RDMA Sends or NULL NFSv4 operations.

24.5.  IMPLEMENTATION

   No discussion at this time.

24.6.  ERRORS

   NONE

25.  CB_SEQUENCE - Supply callback channel sequencing and control

   Sequence and control

25.1.  SYNOPSIS

   control -> control

25.2.  ARGUMENT

   typedef uint32_t sequenceid4;
   typedef uint32_t slotid4;

   struct CB_SEQUENCE4args {
           sessionid4      sessionid;
           sequenceid4     sequenceid;
           slotid4         slotid;
           slotid4         maxslot;
           sequenceid4     referring_sequenceid;
           slotid4         referring_slotid;
   };

25.3.  RESULT

   struct CB_SEQUENCE4resok {
           sessionid4      sessionid;
           sequenceid4     sequenceid;
           slotid4         slotid;
           slotid4         maxslot;
           slotid4         target_maxslot;
   };

   union CB_SEQUENCE4res switch (nfsstat4 status) {
   case NFS4_OK:
           CB_SEQUENCE4resok       resok4;
   default:
           void;
   };

25.4.  DESCRIPTION

   The CB_SEQUENCE operation is used to manage operational accounting
   for the callback channel of the session on which the operation is
   sent.  The contents include the client and session to which this
   request belongs, the slotid and sequenceid used by the server to
   implement session request control and the duplicate reply cache
   semantics, and exchanged slot counts which are used to adjust these
   values.  This operation must appear once as the first operation in
   each CB_COMPOUND sent after the callback channel is successfully
   bound, or a protocol error must result.

25.5.  IMPLEMENTATION

   No discussion at this time.

25.6.  ERRORS

   NFS4ERR_BADSESSION
   NFS4ERR_BADSLOT

26.  CB_NOTIFY - Notify directory changes

   Tell the client of directory changes.

26.1.  SYNOPSIS

   stateid, notification -> {}

26.2.  ARGUMENT
   /*
    * Notification information sent to the client.
    */
   union dir_notification4 switch (dir_notification_type4
                                   notification_type) {
   case DIR_NOTIFICATION_CHANGE_CHILD_ATTRIBUTES:
           dir_notification_attribute4     change_child_attributes;
   case DIR_NOTIFICATION_CHANGE_DIR_ATTRIBUTES:
           fattr4                          change_dir_attributes;
   case DIR_NOTIFICATION_REMOVE_ENTRY:
           dir_notification_remove4        remove_notification;
   case DIR_NOTIFICATION_ADD_ENTRY:
           dir_notification_add4           add_notification;
   case DIR_NOTIFICATION_RENAME_ENTRY:
           dir_notification_rename4        rename_notification;
   case DIR_NOTIFICATION_CHANGE_COOKIE_VERIFIER:
           dir_notification_verifier4      verf_notification;
   };

   /*
    * Changed entry information.
    */
   struct dir_entry {
           component4      file;
           fattr4          attrs;
   };

   struct dir_notification_attribute4 {
           dir_entry       changed_entry;
   };

   struct dir_notification_remove4 {
           dir_entry       old_entry;
           nfs_cookie4     old_entry_cookie;
   };

   struct dir_notification_rename4 {
           dir_entry               old_entry;
           dir_notification_add4   new_entry;
   };

   struct dir_notification_verifier4 {
           verifier4       old_cookieverf;
           verifier4       new_cookieverf;
   };

   struct dir_notification_add4 {
           dir_entry       new_entry;
           /* what READDIR would have returned for this entry */
           nfs_cookie4     new_entry_cookie;
           bool            last_entry;
           prev_entry_info4 prev_info;
   };

   union prev_entry_info4 switch (bool isprev) {
   case TRUE:
           /* A previous entry exists */
           prev_entry4     prev_entry_info;
   case FALSE:
           /* we are adding to an empty directory */
           void;
   };

   /*
    * Previous entry information
    */
   struct prev_entry4 {
           dir_entry       prev_entry;
           /* what READDIR returned for this entry */
           nfs_cookie4     prev_entry_cookie;
   };

   struct CB_NOTIFY4args {
           stateid4            stateid;
           dir_notification4   changes<>;
   };

26.3.  RESULT

   struct CB_NOTIFY4res {
           nfsstat4        status;
   };

26.4.  DESCRIPTION

   The CB_NOTIFY operation is used by the server to send notifications
   to clients about changes in a delegated directory.  These
   notifications are sent over the callback path.  A notification is
   sent once the original request has been processed on the server.
   The server will send an array of notifications for all the changes
   that might have occurred in the directory.  The
   dir_notification_type4 can have only one bit set for each
   notification in the array.

   If the client holding the delegation makes any changes in the
   directory that cause files or subdirectories to be added or
   removed, the server will notify that client of the resulting
   change(s).  If the client holding the delegation is making
   attribute or cookie verifier changes only, the server does not need
   to send notifications to that client.  The server will send the
   following information for each operation:

   ADDING A FILE

      The server will send information about the new entry being
      created, along with the cookie for that entry.  The entry
      information contains the NFS name of the entry and its
      attributes.  If this entry is added to the end of the directory,
      the server will set the last_entry flag to true.  If the file is
      added such that there is at least one entry before it, the
      server will also return the previous entry information, along
      with its cookie.  This is to help clients find the right
      location in their DNLC or directory caches where this entry
      should be cached.

   REMOVING A FILE

      The server will send information about the directory entry being
      deleted.  The server will also send the cookie value for the
      deleted entry so that clients can get to the cached information
      for this entry.
   RENAMING A FILE

      The server will send information about both the old entry and
      the new entry.  This includes the name and attributes for each
      entry.  This notification is sent only if both entries are in
      the same directory.  If the rename is across directories, the
      server will send a remove notification to one directory and an
      add notification to the other directory, assuming both have a
      directory delegation.

   FILE/DIR ATTRIBUTE CHANGE

      The client will use the attribute mask to inform the server of
      attributes for which it wants to receive notifications.  This
      change notification can be requested for changes to the
      attributes of the directory as well as changes to any file's
      attributes in the directory, by using two separate attribute
      masks.  The client cannot ask for change attribute notification
      per file; one attribute mask covers all the files in the
      directory.  Upon any attribute change, the server will send back
      the values of the changed attributes.  Notifications might not
      make sense for some filesystem-wide attributes, and it is up to
      the server to decide which subset it wants to support.  The
      client can negotiate the frequency of attribute notifications by
      letting the server know how often it wants to be notified of an
      attribute change.  The server will return the supported
      notification frequencies, or an indication that no notification
      is permitted for directory or child attributes, by setting the
      dir_notif_delay and dir_entry_notif_delay attributes
      respectively.

   COOKIE VERIFIER CHANGE

      If the cookie verifier changes while a client is holding a
      delegation, the server will notify the client so that it can
      invalidate its cookies and reissue a READDIR to get the new set
      of cookies.

26.5.  IMPLEMENTATION

26.6.  ERRORS

   NFS4ERR_BAD_STATEID
   NFS4ERR_INVAL
   NFS4ERR_BADXDR
   NFS4ERR_SERVERFAULT
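   Before leaving CB_NOTIFY, the add-entry case can be illustrated
   with a non-normative server-side sketch; the C structures are a
   simplified mirror of dir_notification_add4, with attributes and XDR
   encoding elided:

   #include <stdbool.h>
   #include <stdint.h>
   #include <stddef.h>

   /* Simplified C mirror of dir_notification_add4 (illustrative). */
   struct entry_info { const char *name; uint64_t cookie; };

   struct notify_add {
           struct entry_info new_entry;   /* entry just created      */
           bool              last_entry;  /* added at directory end? */
           bool              has_prev;    /* prev_entry_info4 arm    */
           struct entry_info prev;        /* entry just before it    */
   };

   /* Describe "name" added after "prev_name" (NULL when the
    * directory was empty), so clients can splice the entry into
    * their cached READDIR results at the right position. */
   void build_add_notification(struct notify_add *n,
                               const char *name, uint64_t cookie,
                               const char *prev_name,
                               uint64_t prev_cookie, bool at_end)
   {
           n->new_entry.name   = name;
           n->new_entry.cookie = cookie;  /* what READDIR would give */
           n->last_entry       = at_end;
           n->has_prev         = (prev_name != NULL);
           if (n->has_prev) {
                   n->prev.name   = prev_name;
                   n->prev.cookie = prev_cookie;
           }
   }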
27.  CB_RECALL_ANY - Keep any N delegations

   Notify the client to return delegations, keeping N of them.

27.1.  SYNOPSIS

   N, type_mask -> {}

27.2.  ARGUMENT

   const TYPE_MASK_RDATA_DLG        = 0;
   const TYPE_MASK_WDATA_DLG        = 1;
   const TYPE_MASK_DIR_DLG          = 2;
   const TYPE_MASK_FILE_LAYOUT      = 3;
   const TYPE_MASK_BLK_LAYOUT_MIN   = 4;
   const TYPE_MASK_BLK_LAYOUT_MAX   = 7;
   const TYPE_MASK_OBJ_LAYOUT_MIN   = 8;
   const TYPE_MASK_OBJ_LAYOUT_MAX   = 11;
   const TYPE_MASK_OTHER_LAYOUT_MIN = 12;
   const TYPE_MASK_OTHER_LAYOUT_MAX = 15;

   struct CB_RECALLANY4args {
           uint32_t        objects_to_keep;
           bitmap4         type_mask;
   };

27.3.  RESULT

   struct CB_RECALLANY4res {
           nfsstat4        status;
   };

27.4.  DESCRIPTION

   The server may decide that it cannot hold all of the state for
   recallable objects, such as delegations and layouts, without
   running out of resources.  In such a case, it is free to recall
   individual objects to reduce the load, but this would be far from
   optimal.  Because the general purpose of such recallable objects as
   delegations is to eliminate client interaction with the server, the
   server cannot interpret lack of recent use as indicating that the
   object is no longer useful.  The absence of visible use may simply
   reflect the large number of potential operations that the object
   has eliminated.  In the case of layouts, the layout will be used
   explicitly, but the metadata server does not have direct knowledge
   of such use.

   In order to implement an effective reclaim scheme for such objects,
   the server's knowledge of available resources must be used to
   determine when objects must be recalled, with the clients selecting
   the actual objects to be returned.

   Server implementations may differ in their resource allocation
   requirements.  For example, one server may share resources among
   all classes of recallable objects, whereas another may use separate
   resource pools for layouts and for delegations, or further separate
   resources by types of delegations.  When a given resource pool is
   over-utilized, the server can issue a CB_RECALL_ANY to clients
   holding recallable objects of the types involved, allowing each
   client to keep a certain number of such objects and to return any
   excess.  A mask specifies which types of objects are to be limited.
   The client chooses, based on its own knowledge of current
   usefulness, which of the objects in that class should be returned.

   For NFSv4.1, sixteen bits are defined.  For some of these, ranges
   are defined, and it is up to the definition of the storage protocol
   to specify how these are to be used.  There are ranges for
   block-based storage protocols, for object-based storage protocols,
   and a reserved range for other experimental storage protocols.  The
   RFC defining such a storage protocol needs to specify how particular
   bits within its range are to be used.  For example, it may specify
   a mapping between attributes of the layout (read vs. write, size of
   area) and the bit to be used, or it may define a field in the
   layout where the associated bit position is made available by the
   server to the client.

   When an undefined bit is set in the type mask, NFS4ERR_INVAL should
   be returned.  However, even if a client does not support an object
   of the specified type, NFS4ERR_INVAL should not be returned if the
   bit is defined.  Future minor versions of NFSv4 may expand the set
   of valid type mask bits.

   CB_RECALL_ANY specifies a count of objects that the client may
   keep, as opposed to a count that the client must return.  This is
   to avoid a potential race between a CB_RECALL_ANY specifying a
   count of objects to free and a set of client-originated operations
   to return layouts or delegations.  As a result of such a race, the
   client and server would have differing ideas as to how many objects
   to return, and the client could mistakenly free too many.

   If resource demands prompt it, the server may send another
   CB_RECALL_ANY with a lower count, even if it has not yet received
   an acknowledgement from the client for a previous CB_RECALL_ANY
   with the same type mask.  Although the possibility exists that
   these will be received by the client in an order different from the
   order in which they were sent, any such permutation of the callback
   stream is harmless.  It is the job of the client to bring down the
   size of the recallable object set in line with each CB_RECALL_ANY
   received, and until that obligation is met, it cannot be canceled
   or modified by any subsequent CB_RECALL_ANY for the same type mask.
   Thus, if the server sends two CB_RECALL_ANY requests, the effect
   will be the same as if only the lower count had been sent, whatever
   the order of recall receipt.  Note that this means that a server
   may not cancel the effect of a CB_RECALL_ANY by sending another
   recall with a higher count.
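   The "keep N" semantics above amount to the client tracking a floor
   per type mask that later callbacks may only lower.  A non-normative
   client-side sketch, with the bookkeeping structure invented:

   #include <stdint.h>

   /* Client bookkeeping for one CB_RECALL_ANY type mask (invented).
    * Initialize limit to UINT32_MAX before any callback arrives. */
   struct recall_any_state {
           uint32_t limit;   /* objects the client may keep */
           uint32_t held;    /* recallable objects held now */
   };

   /* Apply one CB_RECALL_ANY.  Later callbacks may only lower the
    * limit; a higher count never cancels an earlier, lower one.
    * Returns how many objects the client must now select and
    * return (0 if the callback has no effect). */
   uint32_t recall_any_apply(struct recall_any_state *st,
                             uint32_t objects_to_keep)
   {
           if (objects_to_keep < st->limit)
                   st->limit = objects_to_keep;   /* tighten floor */
           return (st->held > st->limit) ? st->held - st->limit : 0;
   }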
   When a CB_RECALL_ANY is received and the count is already within
   the limit set, or is above a limit that the client is working to
   get down to, that callback has no effect.

   The client can choose to return any type of object specified by the
   mask.  If a server wishes to limit the use of objects of a specific
   type, it should only specify that type in the mask it sends.  The
   client might not return the requested objects, and it is up to the
   server to handle this situation, typically by issuing specific
   recalls to properly limit resource usage.  The server should give
   the client enough time to return objects before proceeding to
   specific recalls.  This time should not be less than the lease
   period.

   Servers are generally free not to give out recallable objects when
   insufficient resources are available.  Note that the effect of such
   a policy is implicitly to give precedence to existing objects
   relative to requested ones, with the result that resources might
   not be optimally used.  To prevent this, servers are well advised
   to make the point at which they start issuing CB_RECALL_ANY
   callbacks somewhat below the point at which they cease to give out
   new delegations and layouts.  This allows the client to purge its
   less-used objects whenever appropriate, and so continue to have its
   subsequent requests given new resources freed up by object returns.

27.5.  IMPLEMENTATION

27.6.  ERRORS

   NFS4ERR_RESOURCE
   NFS4ERR_INVAL

28.  CB_SIZECHANGED

28.1.  SYNOPSIS

   fh, size -> -

28.2.  ARGUMENT

   struct CB_SIZECHANGEDargs {
           nfs_fh4         fh;
           length4         size;
   };

28.3.  RESULT

   struct CB_SIZECHANGEDres {
           nfsstat4        status;
   };

28.4.  DESCRIPTION

   The CB_SIZECHANGED operation is used to notify the client that the
   size of the file associated with the filehandle "fh" has changed.
   The new size is specified.  Upon receiving this notification
   callback, the client should update its internal size for the file.
   If the layout being held for the file is of the NFSv4 file layout
   type, then the size field within that layout should be updated (see
   Section 16.5).  For other layout types, see Section 14.4.2 for more
   details.

   If the handle specified is not one for which the client holds a
   layout, an NFS4ERR_BADHANDLE error is returned.

28.5.  IMPLEMENTATION

28.6.  ERRORS

   NFS4ERR_BADHANDLE
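   A non-normative sketch of client-side CB_SIZECHANGED handling
   follows; the cached-file structure is invented, and the error value
   follows the base protocol:

   #include <stdint.h>
   #include <stdbool.h>
   #include <stddef.h>

   #define NFS4_OK            0
   #define NFS4ERR_BADHANDLE  10001  /* value from the base protocol */

   /* Invented client-side cache state for one file. */
   struct cached_file {
           uint64_t size;
           bool     has_file_layout;  /* NFSv4 file layout held?   */
           uint64_t layout_size;      /* size field in that layout */
   };

   /* Handle CB_SIZECHANGED: update the cached size and, per
    * Section 16.5, the size carried in a held NFSv4 file layout. */
   int cb_sizechanged(struct cached_file *f, uint64_t new_size)
   {
           if (f == NULL)
                   return NFS4ERR_BADHANDLE;  /* no layout for fh */
           f->size = new_size;
           if (f->has_file_layout)
                   f->layout_size = new_size;
           return NFS4_OK;
   }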
If the "layoutchanged" field is TRUE, then the client SHOULD not flush its dirty data to the devices specified by the layout being recalled. Instead, it is preferable for the client to flush the dirty data through the metadata server. Alternatively, the client Shepler Expires December 22, 2006 [Page 395] Internet-Draft NFSv4 Minor Version 1 June 2006 may attempt to obtain a new layout. Note: in order to obtain a new layout the client must first return the old layout. Since obtaining a new layout is not guaranteed to succeed, the client must be ready to flush its dirty data through the metadata server. If RECALL_FSID is specified, the fsid specifies the file system for which any outstanding layouts must be returned. Layouts are returned through the LAYOUTRETURN operation. If the client does not hold any layout segment either matching or overlapping with the requested layout, it returns NFS4ERR_NOMATCHING_LAYOUT. If a length of all 1s is specified then the layout corresponding to the byte range from "offset" to the end- of-file MUST be returned. 29.5. IMPLEMENTATION The client should reply to the callback immediately. Replying does not complete the recall except when an error is returned. The recall is not complete until the layout(s) are returned using a LAYOUTRETURN. The client should complete any in-flight I/O operations using the recalled layout(s) before returning it/them via LAYOUTRETURN. If the client has buffered dirty data there are a number of options for flushing that data. If "layoutchanged" is false, the client may choose to write dirty data directly to storage before calling LAYOUTRETURN. However, if "layoutchanged" is true, the client may either choose to write it later using normal NFSv4 WRITE operations to the metadata server or it may attempt to obtain a new layout, after first returning the recalled layout, using the new layout to flush the dirty data. Regardless of whether the client is holding a layout, it may always write data through the metadata server. If dirty data is flushed while the layout is held, the client must still issue LAYOUTCOMMIT operations at the appropriate time, especially before issuing the LAYOUTRETURN. If a large amount of dirty data is outstanding, the client may issue LAYOUTRETURNs for portions of the layout being recalled; this allows the server to monitor the client's progress and adherence to the callback. However, the last LAYOUTRETURN in a sequence of returns, SHOULD specify the full range being recalled (see Section 14.5.2 for details). 29.6. ERRORS NFS4ERR_NOMATCHING_LAYOUT Shepler Expires December 22, 2006 [Page 396] Internet-Draft NFSv4 Minor Version 1 June 2006 30. CB_PUSH_DELEG 30.1. SYNOPSIS fh, stateid -> { } 30.2. ARGUMENT struct CB_PUSH_DELEG4args { nfs_fh4 pda_fh; stateid4 pda_stateid; open_delegation4 pda_delegation; }; 30.3. RESULT nfsstat4 status 30.4. DESCRIPTION CB_PUSH_DELEG is used by the server to both signal to the client that the delegation it wants is available and to simultaneously offer the delegation to the client. The client has the choice of accepting the delegation by returning NFS4_OK to the server, delaying the decision to accept the offered delegation by returning NFS4ERR_DELAY, delaying the decision till the next CB_COMPOUND by returing NFS4ERR_RESOURCE, or permanently rejecting the offer of the delegation via any other error status. The server MUST send in pda_delegation a delegation corresponding to the type of what the client requested in the OPEN, WANT_DELEGATION, or GET_DIR_DELEGATION request. 
   If the client does return NFS4ERR_DELAY or NFS4ERR_RESOURCE, and
   there is a conflicting delegation request, the server MAY process
   it at the expense of the client that returned NFS4ERR_DELAY.  The
   client's want will not be cancelled, but it MAY be processed behind
   other delegation requests or registered wants.

30.5.  IMPLEMENTATION

   TBD

30.6.  ERRORS

   TBD

31.  CB_RECALLABLE_OBJ_AVAIL

31.1.  SYNOPSIS

   TBD

31.2.  ARGUMENT

   CB_RECALLANY4args

31.3.  RESULT

   nfsstat4 status;

31.4.  DESCRIPTION

   CB_RECALLABLE_OBJ_AVAIL is used by the server to signal the client
   that the server has resources to grant recallable objects that
   might previously have been denied by OPEN, WANT_DELEGATION,
   GET_DIR_DELEGATION, or LAYOUTGET.  The argument objects_to_keep
   gives the total number of recallable objects of the types indicated
   in the argument type_mask that the server believes it can allow the
   client to have, including the number of such objects the client
   already has.  A client that tries to acquire more recallable
   objects than the server informs it can have runs the risk of having
   objects recalled.

31.5.  IMPLEMENTATION

   TBD

31.6.  ERRORS

   TBD

32.  References

32.1.  Normative References

   [1]  Bradner, S., "Key words for use in RFCs to Indicate
        Requirement Levels", BCP 14, RFC 2119, March 1997.

   [2]  Srinivasan, R., "XDR: External Data Representation Standard",
        RFC 1832, August 1995.

   [3]  Srinivasan, R., "RPC: Remote Procedure Call Protocol
        Specification Version 2", RFC 1831, August 1995.

   [4]  Linn, J., "Generic Security Service Application Program
        Interface Version 2, Update 1", RFC 2743, January 2000.

   [5]  Hinden, R. and S. Deering, "IP Version 6 Addressing
        Architecture", RFC 1884, December 1995.

   [6]  Shepler, S., Callaghan, B., Robinson, D., Thurlow, R., Beame,
        C., Eisler, M., and D. Noveck, "Network File System (NFS)
        version 4 Protocol", RFC 3530, April 2003.

   [7]  International Organization for Standardization, "Information
        Technology - Universal Multiple-octet coded Character Set
        (UCS) - Part 1: Architecture and Basic Multilingual Plane",
        ISO Standard 10646-1, May 1993.

   [8]  Alvestrand, H., "IETF Policy on Character Sets and Languages",
        BCP 18, RFC 2277, January 1998.

32.2.  Informative References

   [9]   Srinivasan, R., "Binding Protocols for ONC RPC Version 2",
         RFC 1833, August 1995.

   [10]  Zelenka, J., Welch, B., and B. Halevy, "Object-based pNFS
         Operations", July 2005.

   [11]  Black, D., "pNFS Block/Volume Layout", July 2005.

   [12]  Satran, J., Meth, K., Sapuntzakis, C., Chadalapaka, M., and
         E. Zeidner, "Internet Small Computer Systems Interface
         (iSCSI)", RFC 3720, April 2004.

   [13]  Snively, R., "Fibre Channel Protocol for SCSI, 2nd Version
         (FCP-2)", ANSI/INCITS 350-2003, Oct 2003.

   [14]  Weber, R., "Object-Based Storage Device Commands (OSD)",
         ANSI/INCITS 400-2004, July 2004.

Appendix A.  Acknowledgments

   The initial drafts for the SECINFO extensions were edited by Mike
   Eisler with contributions from Tom Talpey, Saadia Khan, and Jon
   Bauman.

   The initial drafts for the SESSIONS extensions were edited by Tom
   Talpey, Spencer Shepler, and Jon Bauman with contributions from
   Charles Antonelli, Brent Callaghan, Mike Eisler, John Howard, Chet
   Juszczak, Trond Myklebust, Dave Noveck, John Scott, Mike
   Stolarchuk, and Mark Wittle.
   The initial drafts for the Directory Delegations support were
   contributed by Saadia Khan with input from Dave Noveck, Mike
   Eisler, Carl Burnett, Ted Anderson, and Tom Talpey.

   The initial drafts for the parallel NFS support were edited by
   Brent Welch and Garth Goodson.  Additional authors for those
   documents were Benny Halevy, David Black, and Andy Adamson.
   Additional input came from the informal group which contributed to
   the construction of the initial pNFS drafts; specific
   acknowledgement goes to Gary Grider, Peter Corbett, Dave Noveck,
   and Peter Honeyman.  The pNFS work was inspired by the NASD and OSD
   work done by Garth Gibson.  Gary Grider of Los Alamos National
   Laboratory (LANL) has also been a champion of high-performance
   parallel I/O.

Author's Address

   Spencer Shepler
   Sun Microsystems, Inc.
   7808 Moonflower Drive
   Austin, TX  78750
   USA

   Phone: +1-512-349-9376
   Email: spencer.shepler@sun.com

Full Copyright Statement

   Copyright (C) The Internet Society (2006).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT
   THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR
   ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
   PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgment

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).