.. SPDX-License-Identifier: GPL-2.0

================================================
ZoneFS - Zone filesystem for Zoned block devices
================================================

Introduction
============

zonefs is a very simple file system exposing each zone of a zoned block device
as a file. Unlike a regular POSIX-compliant file system with native zoned block
device support (e.g. f2fs), zonefs does not hide the sequential write
constraint of zoned block devices from the user. Files representing sequential
write zones of the device must be written sequentially, starting from the end
of the file (append only writes).

As such, zonefs is in essence closer to a raw block device access interface
than to a full-featured POSIX file system. The goal of zonefs is to simplify
the implementation of zoned block device support in applications by replacing
raw block device file accesses with a richer file API, avoiding reliance on
direct block device file ioctls which may be more obscure to developers. One
example of this approach is the implementation of LSM (log-structured merge)
tree structures (such as used in RocksDB and LevelDB) on zoned block devices
by allowing SSTables to be stored in a zone file similarly to a regular file
system rather than as a range of sectors of the entire disk. The introduction
of the higher level construct "one file is one zone" can help reduce the
amount of changes needed in the application as well as introduce support for
different application programming languages.

Zoned block devices
-------------------

Zoned storage devices belong to a class of storage devices with an address
space that is divided into zones. A zone is a group of consecutive LBAs and all
zones are contiguous (there are no LBA gaps). Zones may have different types.

* Conventional zones: there are no access constraints to LBAs belonging to
  conventional zones. Any read or write access can be executed, similarly to a
  regular block device.
* Sequential zones: these zones accept random reads but must be written
  sequentially. Each sequential zone has a write pointer maintained by the
  device that keeps track of the mandatory start LBA position of the next write
  to the device. As a result of this write constraint, LBAs in a sequential zone
  cannot be overwritten. Sequential zones must first be erased using a special
  command (zone reset) before rewriting.

Zoned storage devices can be implemented using various recording and media
technologies. The most common form of zoned storage today uses the SCSI Zoned
Block Commands (ZBC) and Zoned ATA Commands (ZAC) interfaces on Shingled
Magnetic Recording (SMR) HDDs.

Solid State Disks (SSD) storage devices can also implement a zoned interface
to, for instance, reduce internal write amplification due to garbage collection.
The NVMe Zoned Namespace (ZNS) is a technical proposal of the NVMe standard
committee aiming at adding a zoned storage interface to the NVMe protocol.

Zonefs Overview
===============

Zonefs exposes the zones of a zoned block device as files. The files
representing zones are grouped by zone type, and the zone types are themselves
represented by sub-directories. This file structure is built entirely using
zone information provided by the device and so does not require any complex
on-disk metadata structure.

On-disk metadata
----------------

zonefs on-disk metadata is reduced to an immutable super block which
persistently stores a magic number and optional feature flags and values. On
mount, zonefs uses blkdev_report_zones() to obtain the device zone configuration
and populates the mount point with a static file tree solely based on this
information. File sizes come from the device zone type and write pointer
position managed by the device itself.

The super block is always written on disk at sector 0. The first zone of the
device storing the super block is never exposed as a zone file by zonefs. If
the zone containing the super block is a sequential zone, the mkzonefs format
tool always "finishes" the zone, that is, it transitions the zone to a full
state to make it read-only, preventing any data write.

Zone type sub-directories
-------------------------

Files representing zones of the same type are grouped together under the same
sub-directory automatically created on mount.

For conventional zones, the sub-directory "cnv" is used. This directory is
however created if and only if the device has usable conventional zones. If
the device only has a single conventional zone at sector 0, the zone will not
be exposed as a file as it will be used to store the zonefs super block. For
such devices, the "cnv" sub-directory will not be created.

For sequential write zones, the sub-directory "seq" is used.

These two directories are the only directories that exist in zonefs. Users
cannot create other directories and cannot rename nor delete the "cnv" and
"seq" sub-directories.

The size of a directory, as indicated by the st_size field of struct stat
obtained with the stat() or fstat() system calls, indicates the number of files
existing under the directory.

Zone files
----------

Zone files are named using the number of the zone they represent within the set
of zones of a particular type. That is, both the "cnv" and "seq" directories
contain files named "0", "1", "2", ... The file numbers also represent
increasing zone start sectors on the device.

Read and write operations to zone files are not allowed beyond the file
maximum size, that is, beyond the zone capacity. Any access exceeding the zone
capacity fails with the -EFBIG error.

Creating, deleting, renaming or modifying any attribute of files and
sub-directories is not allowed.

The number of blocks of a file as reported by stat() and fstat() indicates the
capacity of the zone file, or in other words, the maximum file size.

Conventional zone files
-----------------------

The size of conventional zone files is fixed to the size of the zone they
represent. Conventional zone files cannot be truncated.

These files can be randomly read and written using any type of I/O operation:
buffered I/Os, direct I/Os, memory mapped I/Os (mmap), etc. There are no I/O
constraints for these files beyond the file size limit mentioned above.
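
For illustration, the minimal sketch below performs a random buffered write and
read on a conventional zone file. The mount point and file path are only
examples (a zonefs file system mounted at /mnt with a conventional zone file
"cnv/0" is assumed)::

    /* Sketch: random access I/O to a zonefs conventional zone file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        int fd;

        /* Conventional zone files accept any I/O type, including
         * buffered random writes, at any offset within the zone.
         */
        fd = open("/mnt/cnv/0", O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        memset(buf, 0x5a, sizeof(buf));

        /* Overwrite 4 KB at an arbitrary offset (1 MB into the zone). */
        if (pwrite(fd, buf, sizeof(buf), 1024 * 1024) < 0)
            perror("pwrite");

        /* Read the data back from the same offset. */
        if (pread(fd, buf, sizeof(buf), 1024 * 1024) < 0)
            perror("pread");

        close(fd);
        return 0;
    }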

Sequential zone files
---------------------

The size of sequential zone files grouped in the "seq" sub-directory represents
the file's zone write pointer position relative to the zone start sector.

Sequential zone files can only be written sequentially, starting from the file
end, that is, write operations can only be append writes. Zonefs makes no
attempt at accepting random writes and will fail any write request that has a
start offset not corresponding to the end of the file, or to the end of the last
write issued and still in-flight (for asynchronous I/O operations).

Since dirty page writeback by the page cache does not guarantee a sequential
write pattern, zonefs prevents buffered writes and writeable shared mappings
on sequential files. Only direct I/O writes are accepted for these files.
zonefs relies on the sequential delivery of write I/O requests to the device
implemented by the block layer elevator. An elevator implementing the sequential
write feature for zoned block devices (ELEVATOR_F_ZBD_SEQ_WRITE elevator
feature) must be used. This type of elevator (e.g. mq-deadline) is set by
default for zoned block devices on device initialization.

There are no restrictions on the type of I/O used for read operations in
sequential zone files. Buffered I/Os, direct I/Os and shared read mappings are
all accepted.

Truncating sequential zone files is allowed only down to 0, in which case the
zone is reset to rewind the file zone write pointer position to the start of
the zone, or up to the zone capacity, in which case the file's zone is
transitioned to the FULL state (finish zone operation).
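
The minimal sketch below illustrates these constraints by appending one block
of data to a sequential zone file using direct I/O. The file path, buffer
alignment and I/O size are only examples::

    /* Sketch: append one 4 KB block to a zonefs sequential zone file. */
    #define _GNU_SOURCE     /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat st;
        void *buf;
        int fd;

        /* Sequential zone files only accept direct I/O writes. */
        fd = open("/mnt/seq/0", O_WRONLY | O_DIRECT);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* The file size is the zone write pointer position: the write
         * must start exactly at this offset (append only).
         */
        if (fstat(fd, &st)) {
            perror("fstat");
            return 1;
        }

        /* Direct I/O needs an aligned buffer and I/O size. */
        if (posix_memalign(&buf, 4096, 4096)) {
            perror("posix_memalign");
            return 1;
        }
        memset(buf, 0, 4096);

        if (pwrite(fd, buf, 4096, st.st_size) < 0)
            perror("pwrite");

        free(buf);
        close(fd);
        return 0;
    }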

Format options
--------------

Several optional features of zonefs can be enabled at format time.

* Conventional zone aggregation: ranges of contiguous conventional zones can be
  aggregated into a single larger file instead of the default one file per zone.
* File ownership: The owner UID and GID of zone files are by default 0 (root)
  but can be changed to any valid UID/GID.
* File access permissions: the default 640 access permissions can be changed.

IO error handling
-----------------

Zoned block devices may fail I/O requests for reasons similar to regular block
devices, e.g. due to bad sectors. However, in addition to such known I/O
failure patterns, the standards governing zoned block device behavior define
additional conditions that result in I/O errors.

* A zone may transition to the read-only condition (BLK_ZONE_COND_READONLY):
  While the data already written in the zone is still readable, the zone can
  no longer be written. No user action on the zone (zone management command or
  read/write access) can change the zone condition back to a normal read/write
  state. While the reasons for the device to transition a zone to read-only
  state are not defined by the standards, a typical cause for such transition
  would be a defective write head on an HDD (all zones under this head are
  changed to read-only).

* A zone may transition to the offline condition (BLK_ZONE_COND_OFFLINE):
  An offline zone cannot be read nor written. No user action can transition an
  offline zone back to an operational good state. Similarly to zone read-only
  transitions, the reasons for a drive to transition a zone to the offline
  condition are undefined. A typical cause would be a defective read-write head
  on an HDD causing all zones on the platter under the broken head to be
  inaccessible.

* Unaligned write errors: These errors result from the host issuing write
  requests with a start sector that does not correspond to a zone write pointer
  position when the write request is executed by the device. Even though zonefs
  enforces sequential file writes for sequential zones, unaligned write errors
  may still happen in the case of a partial failure of a very large direct I/O
  operation split into multiple BIOs/requests or asynchronous I/O operations.
  If one of the write requests within the set of sequential write requests
  issued to the device fails, all write requests queued after it will
  become unaligned and fail.

* Delayed write errors: similarly to regular block devices, if the device side
  write cache is enabled, write errors may occur in ranges of previously
  completed writes when the device write cache is flushed, e.g. on fsync().
  Similarly to the previous immediate unaligned write error case, delayed write
  errors can propagate through a stream of cached sequential data for a zone,
  causing all data after the sector that caused the error to be dropped.

All I/O errors detected by zonefs are notified to the user with an error code
returned by the system call that triggered or detected the error. The recovery
actions taken by zonefs in response to I/O errors depend on the I/O type (read
vs write) and on the reason for the error (bad sector, unaligned writes or zone
condition change).

* For read I/O errors, zonefs does not execute any particular recovery action,
  but only if the file zone is still in a good condition and there is no
  inconsistency between the file inode size and its zone write pointer position.
  If a problem is detected, I/O error recovery is executed (see the table below).

* For write I/O errors, zonefs I/O error recovery is always executed.

* A zone condition change to read-only or offline also always triggers zonefs
  I/O error recovery.

Zonefs minimal I/O error recovery may change a file size and file access
permissions.

* File size changes:
  Immediate or delayed write errors in a sequential zone file may cause the file
  inode size to be inconsistent with the amount of data successfully written in
  the file zone. For instance, the partial failure of a multi-BIO large write
  operation will cause the zone write pointer to advance partially, even though
  the entire write operation will be reported as failed to the user. In such a
  case, the file inode size must be advanced to reflect the zone write pointer
  change and eventually allow the user to restart writing at the end of the
  file (see the sketch after this list).
  A file size may also be reduced to reflect a delayed write error detected on
  fsync(): in this case, the amount of data effectively written in the zone may
  be less than originally indicated by the file inode size. After such an I/O
  error, zonefs always fixes the file inode size to reflect the amount of data
  persistently stored in the file zone.

* Access permission changes:
  A zone condition change to read-only is indicated with a change in the file
  access permissions to render the file read-only. This disables changes to the
  file attributes and data modification. For offline zones, all permissions
  (read and write) to the file are disabled.
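
The minimal sketch below illustrates how an application can restart an append
write after a failed direct I/O, using the file size fixed by zonefs as the
resume offset. The helper function, buffer handling and overall strategy are
only an illustration::

    /* Sketch: after a failed append, re-read the file size fixed by zonefs
     * to learn the offset at which writing can be restarted.
     */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    ssize_t append_chunk(int fd, const void *buf, size_t len)
    {
        struct stat st;
        ssize_t ret;

        /* The current file size is the zone write pointer position. */
        if (fstat(fd, &st))
            return -1;

        ret = pwrite(fd, buf, len, st.st_size);
        if (ret < 0) {
            /* The write failed, but the zone write pointer may still have
             * advanced. zonefs fixes the inode size accordingly, so
             * re-reading it gives the offset of the next append.
             */
            if (fstat(fd, &st) == 0)
                fprintf(stderr, "write failed, resume at offset %lld\n",
                        (long long)st.st_size);
        }
        return ret;
    }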

Further action taken by zonefs I/O error recovery can be controlled by the user
with the "errors=xxx" mount option. The table below summarizes the result of
zonefs I/O error processing depending on the mount option and on the zone
conditions::

    +--------------+-----------+-----------------------------------------+
    |              |           |            Post error state             |
    | "errors=xxx" |  device   |                access permissions       |
    |    mount     |   zone    | file     file             device zone   |
    |    option    | condition | size   read   write     read    write   |
    +--------------+-----------+-----------------------------------------+
    |              | good      | fixed  yes    no       yes     yes      |
    | remount-ro   | read-only | as is  yes    no       yes     no       |
    | (default)    | offline   |   0    no     no       no      no       |
    +--------------+-----------+-----------------------------------------+
    |              | good      | fixed  yes    no       yes     yes      |
    | zone-ro      | read-only | as is  yes    no       yes     no       |
    |              | offline   |   0    no     no       no      no       |
    +--------------+-----------+-----------------------------------------+
    |              | good      |   0    no     no       yes     yes      |
    | zone-offline | read-only |   0    no     no       yes     no       |
    |              | offline   |   0    no     no       no      no       |
    +--------------+-----------+-----------------------------------------+
    |              | good      | fixed  yes    yes      yes     yes      |
    | repair       | read-only | as is  yes    no       yes     no       |
    |              | offline   |   0    no     no       no      no       |
    +--------------+-----------+-----------------------------------------+

Further notes:

* The "errors=remount-ro" mount option is the default behavior of zonefs I/O
  error processing if no errors mount option is specified.
* With the "errors=remount-ro" mount option, the change of the file access
  permissions to read-only applies to all files. The file system is remounted
  read-only.
* Access permission and file size changes due to the device transitioning zones
  to the offline condition are permanent. Remounting or reformatting the device
  with mkfs.zonefs (mkzonefs) will not change back offline zone files to a good
  state.
* File access permission changes to read-only due to the device transitioning
  zones to the read-only condition are permanent. Remounting or reformatting
  the device will not re-enable file write access.
* File access permission changes implied by the remount-ro, zone-ro and
  zone-offline mount options are temporary for zones in a good condition.
  Unmounting and remounting the file system will restore the previous default
  (format time values) access rights to the files affected.
* The repair mount option triggers only the minimal set of I/O error recovery
  actions, that is, file size fixes for zones in a good condition. Zones
  indicated as being read-only or offline by the device still imply changes to
  the zone file access permissions as noted in the table above.

Mount options
-------------

zonefs defines several mount options:

* errors=<behavior>
* explicit-open

"errors=<behavior>" option
~~~~~~~~~~~~~~~~~~~~~~~~~~

The "errors=<behavior>" mount option allows the user to specify zonefs
behavior in response to I/O errors, inode size inconsistencies or zone
condition changes. The defined behaviors are as follows:

* remount-ro (default)
* zone-ro
* zone-offline
* repair

The run-time I/O error actions defined for each behavior are detailed in the
previous section. Mount time I/O errors will cause the mount operation to fail.
The handling of read-only zones also differs between mount-time and run-time.
If a read-only zone is found at mount time, the zone is always treated in the
same manner as offline zones, that is, all accesses are disabled and the zone
file size is set to 0. This is necessary as the write pointer of read-only
zones is defined as invalid by the ZBC and ZAC standards, making it impossible
to discover the amount of data that has been written to the zone. In the case
of a read-only zone discovered at run-time, as indicated in the previous
section, the size of the zone file is left unchanged from its last updated
value.

"explicit-open" option
~~~~~~~~~~~~~~~~~~~~~~

A zoned block device (e.g. an NVMe Zoned Namespace device) may have limits on
the number of zones that can be active, that is, zones that are in the
implicit open, explicit open or closed conditions. This potential limitation
translates into a risk for applications to see write IO errors due to this
limit being exceeded if the zone of a file is not already active when a write
request is issued by the user.

To avoid these potential errors, the "explicit-open" mount option forces zones
to be made active using an open zone command when a file is opened for writing
for the first time. If the zone open command succeeds, the application is then
guaranteed that write requests can be processed. Conversely, the
"explicit-open" mount option will result in a zone close command being issued
to the device on the last close() of a zone file if the zone is not full nor
empty.
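
As an illustration, the minimal sketch below mounts a zonefs file system with
both of the above options using the mount(2) system call. The device name and
mount point are only examples; the same result can be obtained with the mount
command::

    /* Sketch: mount zonefs with "errors" and "explicit-open" options,
     * equivalent to:
     *   mount -t zonefs -o errors=zone-ro,explicit-open /dev/nvme0n2 /mnt
     */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        if (mount("/dev/nvme0n2", "/mnt", "zonefs", 0,
                  "errors=zone-ro,explicit-open")) {
            perror("mount");
            return 1;
        }
        return 0;
    }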

Runtime sysfs attributes
------------------------

zonefs defines several sysfs attributes for mounted devices. All attributes
are user readable and can be found in the directory /sys/fs/zonefs/<dev>/,
where <dev> is the name of the mounted zoned block device.

The attributes defined are as follows.

* **max_wro_seq_files**: This attribute reports the maximum number of
  sequential zone files that can be open for writing. This number corresponds
  to the maximum number of explicitly or implicitly open zones that the device
  supports. A value of 0 means that the device has no limit and that any zone
  (any file) can be open for writing and written at any time, regardless of the
  state of other zones. When the *explicit-open* mount option is used, zonefs
  will fail any open() system call requesting to open a sequential zone file for
  writing when the number of sequential zone files already open for writing has
  reached the *max_wro_seq_files* limit (see the example after this list).
* **nr_wro_seq_files**: This attribute reports the current number of sequential
  zone files open for writing. When the "explicit-open" mount option is used,
  this number can never exceed *max_wro_seq_files*. If the *explicit-open*
  mount option is not used, the reported number can be greater than
  *max_wro_seq_files*. In such a case, it is the responsibility of the
  application to not simultaneously write more than *max_wro_seq_files*
  sequential zone files. Failure to do so can result in write errors.
* **max_active_seq_files**: This attribute reports the maximum number of
  sequential zone files that can be in an active state, that is, sequential
  zone files that are partially written (not empty nor full) or that have a
  zone that is explicitly open (which happens only if the *explicit-open*
  mount option is used). This number is always equal to the maximum number of
  active zones that the device supports. A value of 0 means that the mounted
  device has no limit on the number of sequential zone files that can be
  active.
* **nr_active_seq_files**: This attribute reports the current number of
  sequential zone files that are active. If *max_active_seq_files* is not 0,
  then the value of *nr_active_seq_files* can never exceed the value of
  *max_active_seq_files*, regardless of the use of the *explicit-open* mount
  option.
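
For example, an application can check the write-open limit of a mounted device
by reading the corresponding attribute file. A minimal sketch, assuming the
zoned block device is named "nvme0n2"::

    /* Sketch: read the zonefs max_wro_seq_files attribute for a device. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long max_wro = 0;
        FILE *f;

        f = fopen("/sys/fs/zonefs/nvme0n2/max_wro_seq_files", "r");
        if (!f) {
            perror("fopen");
            return 1;
        }
        if (fscanf(f, "%lu", &max_wro) != 1) {
            fclose(f);
            return 1;
        }
        fclose(f);

        /* 0 means no limit on the number of files open for writing. */
        printf("max_wro_seq_files: %lu\n", max_wro);
        return 0;
    }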

Zonefs User Space Tools
=======================

The mkzonefs tool is used to format zoned block devices for use with zonefs.
This tool is available on GitHub at:

https://github.com/damien-lemoal/zonefs-tools

zonefs-tools also includes a test suite which can be run against any zoned
block device, including null_blk block devices created in zoned mode.

Examples
--------

The following formats a 15TB host-managed SMR HDD with 256 MB zones
with the conventional zones aggregation feature enabled::

    # mkzonefs -o aggr_cnv /dev/sdX
    # mount -t zonefs /dev/sdX /mnt
    # ls -l /mnt/
    total 0
    dr-xr-xr-x 2 root root     1 Nov 25 13:23 cnv
    dr-xr-xr-x 2 root root 55356 Nov 25 13:23 seq

The size of the zone files sub-directories indicates the number of files
existing for each type of zones. In this example, there is only one
conventional zone file (all conventional zones are aggregated under a single
file)::

    # ls -l /mnt/cnv
    total 137101312
    -rw-r----- 1 root root 140391743488 Nov 25 13:23 0

This aggregated conventional zone file can be used as a regular file::

    # mkfs.ext4 /mnt/cnv/0
    # mount -o loop /mnt/cnv/0 /data

The "seq" sub-directory grouping files for sequential write zones has in this
example 55356 zones::

    # ls -lv /mnt/seq
    total 14511243264
    -rw-r----- 1 root root 0 Nov 25 13:23 0
    -rw-r----- 1 root root 0 Nov 25 13:23 1
    -rw-r----- 1 root root 0 Nov 25 13:23 2
    ...
    -rw-r----- 1 root root 0 Nov 25 13:23 55354
    -rw-r----- 1 root root 0 Nov 25 13:23 55355

For sequential write zone files, the file size changes as data is appended at
the end of the file, similarly to any regular file system::

    # dd if=/dev/zero of=/mnt/seq/0 bs=4096 count=1 conv=notrunc oflag=direct
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044121 s, 9.3 MB/s

    # ls -l /mnt/seq/0
    -rw-r----- 1 root root 4096 Nov 25 13:23 /mnt/seq/0

The written file can be truncated to the zone size, preventing any further
write operation::

    # truncate -s 268435456 /mnt/seq/0
    # ls -l /mnt/seq/0
    -rw-r----- 1 root root 268435456 Nov 25 13:49 /mnt/seq/0

Truncation to 0 size allows freeing the file zone storage space and restarting
append-writes to the file::

    # truncate -s 0 /mnt/seq/0
    # ls -l /mnt/seq/0
    -rw-r----- 1 root root 0 Nov 25 13:49 /mnt/seq/0

Since files are statically mapped to zones on the disk, the number of blocks
of a file as reported by stat() and fstat() indicates the capacity of the file
zone::

    # stat /mnt/seq/0
      File: /mnt/seq/0
      Size: 0         Blocks: 524288     IO Block: 4096   regular empty file
    Device: 870h/2160d  Inode: 50431  Links: 1
    Access: (0640/-rw-r-----)  Uid: (    0/    root)   Gid: (    0/    root)
    Access: 2019-11-25 13:23:57.048971997 +0900
    Modify: 2019-11-25 13:52:25.553805765 +0900
    Change: 2019-11-25 13:52:25.553805765 +0900
     Birth: -

The number of blocks of the file ("Blocks"), in units of 512 B blocks, gives the
maximum file size of 524288 * 512 B = 256 MB, corresponding to the device zone
capacity in this example. Of note is that the "IO block" field always
indicates the minimum I/O size for writes and corresponds to the device
physical sector size.
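
Applications can obtain the same information programmatically. A minimal
sketch, assuming the sequential zone file path used in the example above::

    /* Sketch: derive the maximum size (zone capacity) of a zone file
     * from its block count.
     */
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat st;

        if (stat("/mnt/seq/0", &st)) {
            perror("stat");
            return 1;
        }

        /* st_blocks is in units of 512 B blocks. */
        printf("current size: %lld bytes, maximum size: %lld bytes\n",
               (long long)st.st_size, (long long)st.st_blocks * 512);
        return 0;
    }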