.. SPDX-License-Identifier: GPL-2.0

========
ORANGEFS
========

OrangeFS is an LGPL userspace scale-out parallel storage system. It is
ideal for the large storage problems faced by HPC, Big Data, streaming
video, genomics, and bioinformatics.

Orangefs, originally called PVFS, was first developed in 1993 by
Walt Ligon and Eric Blumer as a parallel file system for the Parallel
Virtual Machine (PVM) as part of a NASA grant to study the I/O patterns
of parallel programs.

Orangefs features include:

  * Distributes file data among multiple file servers
  * Supports simultaneous access by multiple clients
  * Stores file data and metadata on servers using local file system
    and access methods
  * Userspace implementation is easy to install and maintain
  * Direct MPI support
  * Stateless


Mailing List Archives
=====================

http://lists.orangefs.org/pipermail/devel_lists.orangefs.org/


Mailing List Submissions
========================

devel@lists.orangefs.org


Documentation
=============

http://www.orangefs.org/documentation/

Running ORANGEFS On a Single Server
===================================

OrangeFS is usually run in large installations with multiple servers and
clients, but a complete filesystem can be run on a single machine for
development and testing.

On Fedora, install orangefs and orangefs-server::

    dnf -y install orangefs orangefs-server

There is an example server configuration file in
/etc/orangefs/orangefs.conf.  Change localhost to your hostname if
necessary.

To generate a filesystem to run xfstests against, see below.

There is an example client configuration file in /etc/pvfs2tab.  It is a
single line.  Uncomment it and change the hostname if necessary.  This
controls clients which use libpvfs2.  This does not control the
pvfs2-client-core.

Create the filesystem::

    pvfs2-server -f /etc/orangefs/orangefs.conf

Start the server::

    systemctl start orangefs-server

Test the server::

    pvfs2-ping -m /pvfsmnt

Start the client.  The module must be compiled in or loaded before this
point::

    systemctl start orangefs-client

Mount the filesystem::

    mount -t pvfs2 tcp://localhost:3334/orangefs /pvfsmnt

Userspace Filesystem Source
===========================

http://www.orangefs.org/download

Orangefs versions prior to 2.9.3 are not compatible with the upstream
version of the kernel client.


Building ORANGEFS on a Single Server
====================================

Where OrangeFS cannot be installed from distribution packages, it may be
built from source.

You can omit --prefix if you don't care that things are sprinkled around
in /usr/local.  As of version 2.9.6, OrangeFS uses Berkeley DB by
default; the default will probably change to LMDB soon.

::

    ./configure --prefix=/opt/ofs --with-db-backend=lmdb --disable-usrint

    make

    make install

Create an orangefs config file by running pvfs2-genconfig and
specifying a target config file. Pvfs2-genconfig will prompt you
through the configuration. Generally it works fine to take the
defaults, but you should use your server's hostname, rather than
"localhost", when it comes to that question::

    /opt/ofs/bin/pvfs2-genconfig /etc/pvfs2.conf

Create an /etc/pvfs2tab file (localhost is fine)::

    echo tcp://localhost:3334/orangefs /pvfsmnt pvfs2 defaults,noauto 0 0 > \
        /etc/pvfs2tab

Create the mount point you specified in the tab file if needed::

    mkdir /pvfsmnt

Bootstrap the server::

    /opt/ofs/sbin/pvfs2-server -f /etc/pvfs2.conf

Start the server::

    /opt/ofs/sbin/pvfs2-server /etc/pvfs2.conf

Now the server should be running. Pvfs2-ls is a simple
test to verify that the server is running::

    /opt/ofs/bin/pvfs2-ls /pvfsmnt

If everything seems to be working, load the kernel module and
turn on the client core::

    /opt/ofs/sbin/pvfs2-client -p /opt/ofs/sbin/pvfs2-client-core

Mount your filesystem::

    mount -t pvfs2 tcp://`hostname`:3334/orangefs /pvfsmnt


Running xfstests
================

It is useful to use a scratch filesystem with xfstests.  This can be
done with only one server.

Make a second copy of the FileSystem section in the server configuration
file, which is /etc/orangefs/orangefs.conf.  Change the Name to scratch.
Change the ID to something other than the ID of the first FileSystem
section (2 is usually a good choice).

Then there are two FileSystem sections: orangefs and scratch.

This change should be made before creating the filesystem.

::

    pvfs2-server -f /etc/orangefs/orangefs.conf

To run xfstests, create /etc/xfsqa.config::

    TEST_DIR=/orangefs
    TEST_DEV=tcp://localhost:3334/orangefs
    SCRATCH_MNT=/scratch
    SCRATCH_DEV=tcp://localhost:3334/scratch

Then xfstests can be run::

    ./check -pvfs2


Options
=======

The following mount options are accepted:

  acl
    Allow the use of Access Control Lists on files and directories.

  intr
    Some operations between the kernel client and the user space
    filesystem can be interruptible, such as changes in debug levels
    and the setting of tunable parameters.

  local_lock
    Enable POSIX locking from the perspective of "this" kernel. The
    default file_operations lock action is to return ENOSYS. POSIX
    locking kicks in if the filesystem is mounted with -o local_lock.
    Distributed locking is being worked on for the future.

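For example, to mount the single-server filesystem from the sections above
with POSIX locking enabled::

    mount -t pvfs2 -o local_lock tcp://localhost:3334/orangefs /pvfsmnt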

Debugging
=========

If you want the debug (GOSSIP) statements in a particular
source file (inode.c for example) to go to syslog::

  echo inode > /sys/kernel/debug/orangefs/kernel-debug

No debugging (the default)::

  echo none > /sys/kernel/debug/orangefs/kernel-debug

Debugging from several source files::

  echo inode,dir > /sys/kernel/debug/orangefs/kernel-debug

All debugging::

  echo all > /sys/kernel/debug/orangefs/kernel-debug

Get a list of all debugging keywords::

  cat /sys/kernel/debug/orangefs/debug-help


Protocol between Kernel Module and Userspace
============================================

Orangefs is a user space filesystem and an associated kernel module.
We'll just refer to the user space part of Orangefs as "userspace"
from here on out. Orangefs descends from PVFS, and userspace code
still uses PVFS for function and variable names. Userspace typedefs
many of the important structures. Function and variable names in
the kernel module have been transitioned to "orangefs", and The Linux
Coding Style avoids typedefs, so kernel module structures that
correspond to userspace structures are not typedefed.

The kernel module implements a pseudo device that userspace
can read from and write to. Userspace can also manipulate the
kernel module through the pseudo device with ioctl.

The Bufmap
----------

At startup userspace allocates two page-size-aligned (posix_memalign)
mlocked memory buffers: one is used for IO and one is used for readdir
operations. The IO buffer is 41943040 bytes and the readdir buffer is
4194304 bytes. Each buffer contains logical chunks, or partitions, and
a pointer to each buffer is added to its own PVFS_dev_map_desc structure
which also describes its total size, as well as the size and number of
the partitions.

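The userspace side of that allocation might look roughly like the
following sketch (the sizes are the ones given above; alloc_io_buffer is
an illustrative name, not an actual OrangeFS function)::

    #include <stdlib.h>
    #include <sys/mman.h>

    #define IO_BUFFER_SIZE 41943040    /* 10 partitions of 4194304 bytes */

    /* Allocate the page-size-aligned, mlocked IO buffer. */
    static void *alloc_io_buffer(void)
    {
            void *buf;

            if (posix_memalign(&buf, 4096, IO_BUFFER_SIZE))
                    return NULL;
            if (mlock(buf, IO_BUFFER_SIZE)) {
                    free(buf);
                    return NULL;
            }
            return buf;    /* later described by a PVFS_dev_map_desc */
    }
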
A pointer to the IO buffer's PVFS_dev_map_desc structure is sent to a
mapping routine in the kernel module with an ioctl. The structure is
copied from user space to kernel space with copy_from_user and is used
to initialize the kernel module's "bufmap" (struct orangefs_bufmap), which
then contains:

  * refcnt
    - a reference counter
  * desc_size - PVFS2_BUFMAP_DEFAULT_DESC_SIZE (4194304) - the IO buffer's
    partition size, which represents the filesystem's block size and
    is used for s_blocksize in super blocks.
  * desc_count - PVFS2_BUFMAP_DEFAULT_DESC_COUNT (10) - the number of
    partitions in the IO buffer.
  * desc_shift - log2(desc_size), used for s_blocksize_bits in super blocks.
  * total_size - the total size of the IO buffer.
  * page_count - the number of 4096 byte pages in the IO buffer.
  * page_array - a pointer to ``page_count * (sizeof(struct page*))`` bytes
    of kcalloced memory. This memory is used as an array of pointers
    to each of the pages in the IO buffer through a call to get_user_pages.
  * desc_array - a pointer to ``desc_count * (sizeof(struct orangefs_bufmap_desc))``
    bytes of kcalloced memory. This memory is further initialized:

      user_desc is the kernel's copy of the IO buffer's ORANGEFS_dev_map_desc
      structure. user_desc->ptr points to the IO buffer.

      ::

        pages_per_desc = bufmap->desc_size / PAGE_SIZE
        offset = 0

        bufmap->desc_array[0].page_array = &bufmap->page_array[offset]
        bufmap->desc_array[0].array_count = pages_per_desc = 1024
        bufmap->desc_array[0].uaddr = (user_desc->ptr) + (0 * 1024 * 4096)
        offset += 1024
                           .
                           .
                           .
        bufmap->desc_array[9].page_array = &bufmap->page_array[offset]
        bufmap->desc_array[9].array_count = pages_per_desc = 1024
        bufmap->desc_array[9].uaddr = (user_desc->ptr) +
                                               (9 * 1024 * 4096)
        offset += 1024

  * buffer_index_array - a desc_count sized array of ints, used to
    indicate which of the IO buffer's partitions are available to use.
  * buffer_index_lock - a spinlock to protect buffer_index_array during update.
  * readdir_index_array - a five (ORANGEFS_READDIR_DEFAULT_DESC_COUNT) element
    int array used to indicate which of the readdir buffer's partitions are
    available to use.
  * readdir_index_lock - a spinlock to protect readdir_index_array during
    update.

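Put together, the kernel's bufmap looks roughly like the sketch below
(reconstructed from the list above; see the orangefs sources for the
authoritative definition of struct orangefs_bufmap)::

    struct orangefs_bufmap {
            int refcnt;                      /* reference counter */
            int desc_size;                   /* partition size (4194304) */
            int desc_shift;                  /* log2(desc_size) */
            int desc_count;                  /* number of IO partitions (10) */
            int total_size;                  /* size of the whole IO buffer */
            int page_count;                  /* 4096-byte pages in the IO buffer */

            struct page **page_array;                 /* one entry per pinned page */
            struct orangefs_bufmap_desc *desc_array;  /* one entry per partition */

            int *buffer_index_array;         /* free/busy state of IO partitions */
            spinlock_t buffer_index_lock;
            int readdir_index_array[ORANGEFS_READDIR_DEFAULT_DESC_COUNT];
            spinlock_t readdir_index_lock;
    };
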
Operations
----------

The kernel module builds an "op" (struct orangefs_kernel_op_s) when it
needs to communicate with userspace. Part of the op contains the "upcall"
which expresses the request to userspace. Part of the op eventually
contains the "downcall" which expresses the results of the request.

The slab allocator is used to keep a cache of op structures handy.

At init time the kernel module defines and initializes a request list
and an in_progress hash table to keep track of all the ops that are
in flight at any given time.

Ops are stateful:

 * unknown
     - op was just initialized
 * waiting
     - op is on request_list (upward bound)
 * inprogr
     - op is in progress (waiting for downcall)
 * serviced
     - op has matching downcall; ok
 * purged
     - op has to start a timer since client-core
       exited uncleanly before servicing op
 * given up
     - submitter has given up waiting for it

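A state enum matching this list might look like the following sketch (the
exact names and values live in the kernel module's headers)::

    enum orangefs_vfs_op_states {
            OP_VFS_STATE_UNKNOWN  = 0,     /* op was just initialized */
            OP_VFS_STATE_WAITING  = 1,     /* op is on the request list */
            OP_VFS_STATE_INPROGR  = 2,     /* op is waiting for a downcall */
            OP_VFS_STATE_SERVICED = 4,     /* op has a matching downcall */
            OP_VFS_STATE_PURGED   = 8,     /* client-core exited uncleanly */
            OP_VFS_STATE_GIVEN_UP = 16,    /* submitter stopped waiting */
    };
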
When some arbitrary userspace program needs to perform a
filesystem operation on Orangefs (readdir, I/O, create, whatever)
an op structure is initialized and tagged with a distinguishing ID
number. The upcall part of the op is filled out, and the op is
passed to the "service_operation" function.

Service_operation changes the op's state to "waiting", puts
it on the request list, and signals the Orangefs file_operations.poll
function through a wait queue. Userspace is polling the pseudo-device
and thus becomes aware of the upcall request that needs to be read.

When the Orangefs file_operations.read function is triggered, the
request list is searched for an op that seems ready-to-process.
The op is removed from the request list. The tag from the op and
the filled-out upcall struct are copy_to_user'ed back to userspace.

If any of these (and some additional protocol) copy_to_users fail,
the op's state is set to "waiting" and the op is added back to
the request list. Otherwise, the op's state is changed to "in progress",
and the op is hashed on its tag and put onto the end of a list in the
in_progress hash table at the index the tag hashed to.

When userspace has assembled the response to the upcall, it
writes the response, which includes the distinguishing tag, back to
the pseudo device in a series of io_vecs. This triggers the Orangefs
file_operations.write_iter function to find the op with the associated
tag and remove it from the in_progress hash table. As long as the op's
state is not "canceled" or "given up", its state is set to "serviced".
The file_operations.write_iter function returns to the waiting vfs,
and back to service_operation through wait_for_matching_downcall.

Service operation returns to its caller with the op's downcall
part (the response to the upcall) filled out.

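From the kernel module's point of view, the whole round trip for one op
therefore looks roughly like the sketch below, modeled on the lookup path
(helper and constant names are illustrative; see namei.c and waitqueue.c
for the real code)::

    static int orangefs_op_round_trip(struct dentry *dentry)
    {
            struct orangefs_kernel_op_s *new_op;
            int ret;

            new_op = op_alloc(ORANGEFS_VFS_OP_LOOKUP);  /* from the op slab cache */
            if (!new_op)
                    return -ENOMEM;

            /* fill out the request-specific part of new_op->upcall here */

            /* waiting -> request list -> wake poll -> wait for downcall */
            ret = service_operation(new_op, __func__, 0);
            if (ret == 0)
                    ret = new_op->downcall.status;  /* response is now valid */

            op_release(new_op);
            return ret;
    }
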
The "client-core" is the bridge between the kernel module and
userspace. The client-core is a daemon. The client-core has an
associated watchdog daemon. If the client-core is ever signaled
to die, the watchdog daemon restarts the client-core. Even though
the client-core is restarted "right away", there is a period of
time during such an event that the client-core is dead. A dead client-core
can't be triggered by the Orangefs file_operations.poll function.
Ops that pass through service_operation during a "dead spell" can time out
on the wait queue and one attempt is made to recycle them. Obviously,
if the client-core stays dead too long, the arbitrary userspace processes
trying to use Orangefs will be negatively affected. Waiting ops
that can't be serviced will be removed from the request list and
have their states set to "given up". In-progress ops that can't
be serviced will be removed from the in_progress hash table and
have their states set to "given up".

Readdir and I/O ops are atypical with respect to their payloads.

  - readdir ops use the smaller of the two pre-allocated pre-partitioned
    memory buffers. The readdir buffer is only available to userspace.
    The kernel module obtains an index to a free partition before launching
    a readdir op. Userspace deposits the results into the indexed partition
    and then writes them back to the pvfs device.

  - io (read and write) ops use the larger of the two pre-allocated
    pre-partitioned memory buffers. The IO buffer is accessible from
    both userspace and the kernel module. The kernel module obtains an
    index to a free partition before launching an io op. The kernel module
    deposits write data into the indexed partition, to be consumed
    directly by userspace. Userspace deposits the results of read
    requests into the indexed partition, to be consumed directly
    by the kernel module.

Responses to kernel requests are all packaged in pvfs2_downcall_t
structs. Besides a few other members, pvfs2_downcall_t contains a
union of structs, each of which is associated with a particular
response type.

The several members outside of the union are:

 ``int32_t type``
    - type of operation.
 ``int32_t status``
    - return code for the operation.
 ``int64_t trailer_size``
    - 0 unless readdir operation.
 ``char *trailer_buf``
    - initialized to NULL, used during readdir operations.

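Taken together, the downcall has roughly the following shape (a sketch
based on the members listed above and the response types listed below;
the authoritative definition is in the OrangeFS userspace headers)::

    typedef struct {
            int32_t type;            /* type of operation */
            int32_t status;          /* return code for the operation */
            int64_t trailer_size;    /* 0 unless this is a readdir response */
            char *trailer_buf;       /* NULL unless this is a readdir response */

            union {
                    pvfs2_io_response_t io;
                    PVFS_object_kref lookup;   /* also create, symlink, mkdir */
                    pvfs2_statfs_response_t statfs;
                    pvfs2_fs_mount_response_t fs_mount;
                    pvfs2_getxattr_response_t getxattr;
                    pvfs2_listxattr_response_t listxattr;
                    pvfs2_param_response_t param;
                    pvfs2_perf_count_response_t perf_count;
                    pvfs2_fs_key_response_t fs_key;
                    /* ... one member per response type listed below ... */
            } resp;
    } pvfs2_downcall_t;
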
The appropriate member inside the union is filled out for any
particular response.

  PVFS2_VFS_OP_FILE_IO
    fill a pvfs2_io_response_t

  PVFS2_VFS_OP_LOOKUP
    fill a PVFS_object_kref

  PVFS2_VFS_OP_CREATE
    fill a PVFS_object_kref

  PVFS2_VFS_OP_SYMLINK
    fill a PVFS_object_kref

  PVFS2_VFS_OP_GETATTR
    fill in a PVFS_sys_attr_s (tons of stuff the kernel doesn't need)
    fill in a string with the link target when the object is a symlink.

  PVFS2_VFS_OP_MKDIR
    fill a PVFS_object_kref

  PVFS2_VFS_OP_STATFS
    fill a pvfs2_statfs_response_t with useless info <g>. It is hard for
    us to know, in a timely fashion, these statistics about our
    distributed network filesystem.

  PVFS2_VFS_OP_FS_MOUNT
    fill a pvfs2_fs_mount_response_t which is just like a PVFS_object_kref
    except its members are in a different order and "__pad1" is replaced
    with "id".

  PVFS2_VFS_OP_GETXATTR
    fill a pvfs2_getxattr_response_t

  PVFS2_VFS_OP_LISTXATTR
    fill a pvfs2_listxattr_response_t

  PVFS2_VFS_OP_PARAM
    fill a pvfs2_param_response_t

  PVFS2_VFS_OP_PERF_COUNT
    fill a pvfs2_perf_count_response_t

  PVFS2_VFS_OP_FSKEY
    fill a pvfs2_fs_key_response_t

  PVFS2_VFS_OP_READDIR
    jam everything needed to represent a pvfs2_readdir_response_t into
    the readdir buffer descriptor specified in the upcall.

Userspace uses writev() on /dev/pvfs2-req to pass responses to the requests
made by the kernel side.

A buffer_list containing:

  - a pointer to the prepared response to the request from the
    kernel (struct pvfs2_downcall_t).
  - and also, in the case of a readdir request, a pointer to a
    buffer containing descriptors for the objects in the target
    directory.

... is sent to the function (PINT_dev_write_list) which performs
the writev.

PINT_dev_write_list has a local iovec array: struct iovec io_array[10];

The first four elements of io_array are initialized like this for all
responses::

  io_array[0].iov_base = address of local variable "proto_ver" (int32_t)
  io_array[0].iov_len = sizeof(int32_t)

  io_array[1].iov_base = address of global variable "pdev_magic" (int32_t)
  io_array[1].iov_len = sizeof(int32_t)

  io_array[2].iov_base = address of parameter "tag" (PVFS_id_gen_t)
  io_array[2].iov_len = sizeof(int64_t)

  io_array[3].iov_base = address of out_downcall member (pvfs2_downcall_t)
                         of global variable vfs_request (vfs_request_t)
  io_array[3].iov_len = sizeof(pvfs2_downcall_t)

Readdir responses initialize the fifth element of io_array like this::

  io_array[4].iov_base = contents of member trailer_buf (char *)
                         from out_downcall member of global variable
                         vfs_request
  io_array[4].iov_len = contents of member trailer_size (PVFS_size)
                        from out_downcall member of global variable
                        vfs_request

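In plain C the assembly and the writev() call amount to roughly the
following sketch (the real code is PINT_dev_write_list in the OrangeFS
userspace sources; the parameter names below are assumptions that match
the description above)::

    #include <sys/uio.h>

    static void write_downcall(int devfd, int32_t proto_ver, int32_t pdev_magic,
                               PVFS_id_gen_t tag, vfs_request_t *vfs_request)
    {
            struct iovec io_array[10];
            int count = 4;

            io_array[0].iov_base = &proto_ver;
            io_array[0].iov_len  = sizeof(int32_t);
            io_array[1].iov_base = &pdev_magic;
            io_array[1].iov_len  = sizeof(int32_t);
            io_array[2].iov_base = &tag;
            io_array[2].iov_len  = sizeof(int64_t);
            io_array[3].iov_base = &vfs_request->out_downcall;
            io_array[3].iov_len  = sizeof(pvfs2_downcall_t);

            if (vfs_request->out_downcall.trailer_size > 0) {   /* readdir only */
                    io_array[4].iov_base = vfs_request->out_downcall.trailer_buf;
                    io_array[4].iov_len  = vfs_request->out_downcall.trailer_size;
                    count = 5;
            }

            writev(devfd, io_array, count);   /* devfd is open on /dev/pvfs2-req */
    }
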
Orangefs exploits the dcache in order to avoid sending redundant
requests to userspace. We keep object inode attributes up-to-date with
orangefs_inode_getattr. Orangefs_inode_getattr uses two arguments to
help it decide whether or not to update an inode: "new" and "bypass".
Orangefs keeps private data in an object's inode that includes a short
timeout value, getattr_time, which allows any iteration of
orangefs_inode_getattr to know how long it has been since the inode was
updated. When the object is not new (new == 0) and the bypass flag is not
set (bypass == 0) orangefs_inode_getattr returns without updating the inode
if getattr_time has not timed out. Getattr_time is updated each time the
inode is updated.

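That decision boils down to a jiffies comparison along the lines of the
simplified sketch below (taken to be inside orangefs_inode_getattr;
getattr_timeout stands in for however the timeout is actually configured)::

    struct orangefs_inode_s *orangefs_inode = ORANGEFS_I(inode);

    /* trust the cached attributes unless forced or expired */
    if (!new && !bypass &&
        time_before(jiffies, orangefs_inode->getattr_time))
            return 0;

    /* ...otherwise send a getattr upcall and refresh the inode... */

    orangefs_inode->getattr_time = jiffies + getattr_timeout;
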
Creation of a new object (file, dir, sym-link) includes the evaluation of
its pathname, resulting in a negative directory entry for the object.
A new inode is allocated and associated with the dentry, turning it from
a negative dentry into a "productive full member of society". Orangefs
obtains the new inode from Linux with new_inode() and associates
the inode with the dentry by sending the pair back to Linux with
d_instantiate().

The evaluation of a pathname for an object resolves to its corresponding
dentry. If there is no corresponding dentry, one is created for it in
the dcache. Whenever a dentry is modified or verified Orangefs stores a
short timeout value in the dentry's d_time, and the dentry will be trusted
for that amount of time. Orangefs is a network filesystem, and objects
can potentially change out-of-band with any particular Orangefs kernel module
instance, so trusting a dentry is risky. The alternative to trusting
dentries is to always obtain the needed information from userspace - at
least a trip to the client-core, maybe to the servers. Obtaining information
from a dentry is cheap; obtaining it from userspace is relatively expensive;
hence the motivation to use the dentry when possible.

The timeout values d_time and getattr_time are jiffy based, and the
code is designed to avoid the jiffy-wrap problem::

    "In general, if the clock may have wrapped around more than once, there
    is no way to tell how much time has elapsed. However, if the times t1
    and t2 are known to be fairly close, we can reliably compute the
    difference in a way that takes into account the possibility that the
    clock may have wrapped between times."

from course notes by instructor Andy Wang

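In the kernel this is what the time_before()/time_after() macros are for;
a minimal sketch of a wrap-safe d_time check under that assumption::

    #include <linux/dcache.h>
    #include <linux/jiffies.h>

    /* Returns true while the dentry is still inside its trust window.
     * time_before() compares jiffy values safely across wrap-around. */
    static bool orangefs_dentry_still_trusted(struct dentry *dentry)
    {
            return time_before(jiffies, dentry->d_time);
    }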