cachepc-linux

Fork of AMDESE/linux with modifications for CachePC side-channel attack
git clone https://git.sinitax.com/sinitax/cachepc-linux

deadline-iosched.rst (2950B)


==============================
Deadline IO scheduler tunables
==============================

This little file attempts to document how the deadline io scheduler works.
In particular, it will clarify the meaning of the exposed tunables that may be
of interest to power users.

Selecting IO schedulers
-----------------------
Refer to Documentation/block/switching-sched.rst for information on
selecting an io scheduler on a per-device basis.

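As a quick reminder of the mechanism, the active scheduler is exposed per
device through sysfs. A minimal sketch, assuming the usual sysfs layout, a
device named sda and a kernel that offers mq-deadline (all of which may
differ on your system)::

  import pathlib

  scheduler = pathlib.Path("/sys/block/sda/queue/scheduler")

  # Lists the available schedulers, with the active one in square
  # brackets, e.g. "none [mq-deadline] bfq".
  print(scheduler.read_text().strip())

  # Writing a scheduler name selects it for this device (needs root).
  scheduler.write_text("mq-deadline")
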
------------------------------------------------------------------------------

read_expire	(in ms)
-----------------------

The goal of the deadline io scheduler is to attempt to guarantee a start
service time for a request. Since the focus is mainly on read latencies, this
start service time is tunable for reads. When a read request first enters the
io scheduler, it is assigned a deadline that is the current time + the
read_expire value in units of milliseconds.

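The deadline tunables are normally exposed under the per-device iosched
directory in sysfs. A rough sketch of the relationship, assuming a device
named sda (the path and the value read back are only illustrative)::

  import pathlib
  import time

  iosched = pathlib.Path("/sys/block/sda/queue/iosched")
  read_expire_ms = int((iosched / "read_expire").read_text())

  # Conceptually, a read entering the scheduler "now" should be started
  # no later than now + read_expire.
  now_ms = time.monotonic() * 1000
  deadline_ms = now_ms + read_expire_ms
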

write_expire	(in ms)
-----------------------

Similar to read_expire mentioned above, but for writes.


fifo_batch	(number of requests)
------------------------------------

Requests are grouped into ``batches`` of a particular data direction (read or
write) which are serviced in increasing sector order.  To limit extra seeking,
deadline expiries are only checked between batches.  fifo_batch controls the
maximum number of requests per batch.

This parameter tunes the balance between per-request latency and aggregate
throughput.  When low latency is the primary concern, smaller is better (where
a value of 1 yields first-come first-served behaviour).  Increasing fifo_batch
generally improves throughput, at the cost of latency variation.

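As an illustration of the trade-off only (a toy model, not the kernel
implementation), one dispatch round can be pictured as::

  from dataclasses import dataclass

  @dataclass
  class Request:
      sector: int      # starting sector of the request
      deadline: float  # arrival time + {read,write}_expire, in ms

  def next_batch(fifo, fifo_batch):
      # Between batches, start from the request with the oldest deadline ...
      start = min(fifo, key=lambda r: r.deadline)
      # ... then walk requests in increasing sector order from there,
      # without re-checking expiries until the batch is exhausted.
      in_order = sorted(fifo, key=lambda r: r.sector)
      batch = in_order[in_order.index(start):][:fifo_batch]
      for r in batch:
          fifo.remove(r)
      return batch

With fifo_batch set to 1 each round in this model services only the oldest
request, which corresponds to the first-come first-served behaviour mentioned
above.
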

writes_starved	(number of dispatches)
--------------------------------------

When we have to move requests from the io scheduler queue to the block
device dispatch queue, we always give preference to reads. However, we
don't want to starve writes indefinitely either, so writes_starved controls
how many times in a row reads are preferred over waiting writes. Once reads
have been preferred that many times, we dispatch some writes based on the
same criteria as reads.

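A rough sketch of that preference logic (illustrative only, not the kernel
code), where starved counts how often pending writes have been passed over::

  def choose_direction(reads_pending, writes_pending, starved, writes_starved):
      # Prefer reads while writes have not waited too long.
      if reads_pending and (not writes_pending or starved < writes_starved):
          if writes_pending:
              starved += 1  # writes were passed over once more
          return "read", starved
      # Either no reads are pending or writes have been starved enough.
      return "write", 0
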

front_merges	(bool)
----------------------

Sometimes it happens that a request enters the io scheduler that is contiguous
with a request that is already on the queue. Either it fits at the back of that
request, or it fits at the front. That is called either a back merge candidate
or a front merge candidate. Due to the way files are typically laid out,
back merges are much more common than front merges. For some workloads, you
may even know that it is a waste of time to attempt to front merge requests.
Setting front_merges to 0 disables this functionality. Front merges may still
occur due to the cached last_merge hint, but since that comes at essentially
zero cost we leave it on. We simply disable the rbtree front sector lookup
when the io scheduler merge function is called.

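In terms of sector arithmetic, the two candidate types amount to the
following (an illustrative sketch, not the scheduler's own code)::

  def merge_candidate(new_sector, new_nr_sectors, q_sector, q_nr_sectors):
      # Back merge: the new request starts right where the queued one ends.
      if new_sector == q_sector + q_nr_sectors:
          return "back"
      # Front merge: the new request ends right where the queued one starts.
      # This is the lookup that setting front_merges to 0 skips.
      if new_sector + new_nr_sectors == q_sector:
          return "front"
      return None
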

Nov 11 2002, Jens Axboe <jens.axboe@oracle.com>