cachepc-linux

Fork of AMDESE/linux with modifications for CachePC side-channel attack
git clone https://git.sinitax.com/sinitax/cachepc-linux

Requirements.rst (136447B)


      1=================================
      2A Tour Through RCU's Requirements
      3=================================
      4
      5Copyright IBM Corporation, 2015
      6
      7Author: Paul E. McKenney
      8
       9The initial version of this document appeared in
      10`LWN <https://lwn.net/>`_ as these articles:
     11`part 1 <https://lwn.net/Articles/652156/>`_,
     12`part 2 <https://lwn.net/Articles/652677/>`_, and
     13`part 3 <https://lwn.net/Articles/653326/>`_.
     14
     15Introduction
     16------------
     17
     18Read-copy update (RCU) is a synchronization mechanism that is often used
     19as a replacement for reader-writer locking. RCU is unusual in that
     20updaters do not block readers, which means that RCU's read-side
     21primitives can be exceedingly fast and scalable. In addition, updaters
     22can make useful forward progress concurrently with readers. However, all
     23this concurrency between RCU readers and updaters does raise the
     24question of exactly what RCU readers are doing, which in turn raises the
     25question of exactly what RCU's requirements are.
     26
     27This document therefore summarizes RCU's requirements, and can be
     28thought of as an informal, high-level specification for RCU. It is
     29important to understand that RCU's specification is primarily empirical
     30in nature; in fact, I learned about many of these requirements the hard
      31way. This situation might cause some consternation. However, not only
     32has this learning process been a lot of fun, but it has also been a
     33great privilege to work with so many people willing to apply
     34technologies in interesting new ways.
     35
     36All that aside, here are the categories of currently known RCU
     37requirements:
     38
     39#. `Fundamental Requirements`_
     40#. `Fundamental Non-Requirements`_
     41#. `Parallelism Facts of Life`_
     42#. `Quality-of-Implementation Requirements`_
     43#. `Linux Kernel Complications`_
     44#. `Software-Engineering Requirements`_
     45#. `Other RCU Flavors`_
     46#. `Possible Future Changes`_
     47
      48This is followed by a summary_. However, the answer to
      49each quick quiz immediately follows the quiz. Select the big white space
     50with your mouse to see the answer.
     51
     52Fundamental Requirements
     53------------------------
     54
     55RCU's fundamental requirements are the closest thing RCU has to hard
     56mathematical requirements. These are:
     57
     58#. `Grace-Period Guarantee`_
     59#. `Publish/Subscribe Guarantee`_
     60#. `Memory-Barrier Guarantees`_
     61#. `RCU Primitives Guaranteed to Execute Unconditionally`_
     62#. `Guaranteed Read-to-Write Upgrade`_
     63
     64Grace-Period Guarantee
     65~~~~~~~~~~~~~~~~~~~~~~
     66
     67RCU's grace-period guarantee is unusual in being premeditated: Jack
     68Slingwine and I had this guarantee firmly in mind when we started work
     69on RCU (then called “rclock”) in the early 1990s. That said, the past
     70two decades of experience with RCU have produced a much more detailed
     71understanding of this guarantee.
     72
     73RCU's grace-period guarantee allows updaters to wait for the completion
     74of all pre-existing RCU read-side critical sections. An RCU read-side
     75critical section begins with the marker rcu_read_lock() and ends
     76with the marker rcu_read_unlock(). These markers may be nested, and
     77RCU treats a nested set as one big RCU read-side critical section.
     78Production-quality implementations of rcu_read_lock() and
     79rcu_read_unlock() are extremely lightweight, and in fact have
     80exactly zero overhead in Linux kernels built for production use with
     81``CONFIG_PREEMPTION=n``.
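
Because nested markers collapse into a single critical section, RCU
treats a fragment like the following (a minimal sketch, not taken from
the kernel sources) as one read-side critical section that ends only at
the outermost rcu_read_unlock():

   ::

       rcu_read_lock();    /* Outermost marker. */
       r1 = READ_ONCE(x);
       rcu_read_lock();    /* Nested marker: no new critical section. */
       r2 = READ_ONCE(y);
       rcu_read_unlock();  /* Does not end the critical section... */
       r3 = READ_ONCE(z);
       rcu_read_unlock();  /* ...but this outermost marker does. */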
     82
     83This guarantee allows ordering to be enforced with extremely low
     84overhead to readers, for example:
     85
     86   ::
     87
     88       1 int x, y;
     89       2
     90       3 void thread0(void)
     91       4 {
     92       5   rcu_read_lock();
     93       6   r1 = READ_ONCE(x);
     94       7   r2 = READ_ONCE(y);
     95       8   rcu_read_unlock();
     96       9 }
     97      10
     98      11 void thread1(void)
     99      12 {
    100      13   WRITE_ONCE(x, 1);
    101      14   synchronize_rcu();
    102      15   WRITE_ONCE(y, 1);
    103      16 }
    104
    105Because the synchronize_rcu() on line 14 waits for all pre-existing
    106readers, any instance of thread0() that loads a value of zero from
    107``x`` must complete before thread1() stores to ``y``, so that
    108instance must also load a value of zero from ``y``. Similarly, any
    109instance of thread0() that loads a value of one from ``y`` must have
    110started after the synchronize_rcu() started, and must therefore also
    111load a value of one from ``x``. Therefore, the outcome:
    112
    113   ::
    114
    115      (r1 == 0 && r2 == 1)
    116
    117cannot happen.
    118
    119+-----------------------------------------------------------------------+
    120| **Quick Quiz**:                                                       |
    121+-----------------------------------------------------------------------+
    122| Wait a minute! You said that updaters can make useful forward         |
    123| progress concurrently with readers, but pre-existing readers will     |
    124| block synchronize_rcu()!!!                                            |
    125| Just who are you trying to fool???                                    |
    126+-----------------------------------------------------------------------+
    127| **Answer**:                                                           |
    128+-----------------------------------------------------------------------+
    129| First, if updaters do not wish to be blocked by readers, they can use |
    130| call_rcu() or kfree_rcu(), which will be discussed later.             |
    131| Second, even when using synchronize_rcu(), the other update-side      |
    132| code does run concurrently with readers, whether pre-existing or not. |
    133+-----------------------------------------------------------------------+
    134
    135This scenario resembles one of the first uses of RCU in
    136`DYNIX/ptx <https://en.wikipedia.org/wiki/DYNIX>`__, which managed a
    137distributed lock manager's transition into a state suitable for handling
    138recovery from node failure, more or less as follows:
    139
    140   ::
    141
    142       1 #define STATE_NORMAL        0
    143       2 #define STATE_WANT_RECOVERY 1
    144       3 #define STATE_RECOVERING    2
    145       4 #define STATE_WANT_NORMAL   3
    146       5
    147       6 int state = STATE_NORMAL;
    148       7
    149       8 void do_something_dlm(void)
    150       9 {
    151      10   int state_snap;
    152      11
    153      12   rcu_read_lock();
    154      13   state_snap = READ_ONCE(state);
    155      14   if (state_snap == STATE_NORMAL)
    156      15     do_something();
    157      16   else
    158      17     do_something_carefully();
    159      18   rcu_read_unlock();
    160      19 }
    161      20
    162      21 void start_recovery(void)
    163      22 {
    164      23   WRITE_ONCE(state, STATE_WANT_RECOVERY);
    165      24   synchronize_rcu();
    166      25   WRITE_ONCE(state, STATE_RECOVERING);
    167      26   recovery();
    168      27   WRITE_ONCE(state, STATE_WANT_NORMAL);
    169      28   synchronize_rcu();
    170      29   WRITE_ONCE(state, STATE_NORMAL);
    171      30 }
    172
    173The RCU read-side critical section in do_something_dlm() works with
    174the synchronize_rcu() in start_recovery() to guarantee that
    175do_something() never runs concurrently with recovery(), but with
    176little or no synchronization overhead in do_something_dlm().
    177
    178+-----------------------------------------------------------------------+
    179| **Quick Quiz**:                                                       |
    180+-----------------------------------------------------------------------+
    181| Why is the synchronize_rcu() on line 28 needed?                       |
    182+-----------------------------------------------------------------------+
    183| **Answer**:                                                           |
    184+-----------------------------------------------------------------------+
    185| Without that extra grace period, memory reordering could result in    |
    186| do_something_dlm() executing do_something() concurrently with         |
    187| the last bits of recovery().                                          |
    188+-----------------------------------------------------------------------+
    189
    190In order to avoid fatal problems such as deadlocks, an RCU read-side
    191critical section must not contain calls to synchronize_rcu().
    192Similarly, an RCU read-side critical section must not contain anything
    193that waits, directly or indirectly, on completion of an invocation of
    194synchronize_rcu().
    195
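As one illustration, the following hypothetical reader (not taken from
the kernel sources) violates this rule. Depending on the kernel
configuration, it either self-deadlocks, because the grace period
cannot end while the enclosing reader is still running, or it silently
destroys the reader's protection:

   ::

       void buggy_reader(void)
       {
         rcu_read_lock();
         do_something_with(rcu_dereference(gp));
         synchronize_rcu();  /* BUG: waits on a grace period that cannot */
                             /* complete within this read-side section.  */
         rcu_read_unlock();
       }
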
    196Although RCU's grace-period guarantee is useful in and of itself, with
    197`quite a few use cases <https://lwn.net/Articles/573497/>`__, it would
    198be good to be able to use RCU to coordinate read-side access to linked
    199data structures. For this, the grace-period guarantee is not sufficient,
    200as can be seen in function add_gp_buggy() below. We will look at the
    201reader's code later, but in the meantime, just think of the reader as
    202locklessly picking up the ``gp`` pointer, and, if the value loaded is
    203non-\ ``NULL``, locklessly accessing the ``->a`` and ``->b`` fields.
    204
    205   ::
    206
    207       1 bool add_gp_buggy(int a, int b)
    208       2 {
    209       3   p = kmalloc(sizeof(*p), GFP_KERNEL);
    210       4   if (!p)
    211       5     return -ENOMEM;
    212       6   spin_lock(&gp_lock);
    213       7   if (rcu_access_pointer(gp)) {
    214       8     spin_unlock(&gp_lock);
    215       9     return false;
    216      10   }
    217      11   p->a = a;
     218      12   p->b = b;
    219      13   gp = p; /* ORDERING BUG */
    220      14   spin_unlock(&gp_lock);
    221      15   return true;
    222      16 }
    223
    224The problem is that both the compiler and weakly ordered CPUs are within
    225their rights to reorder this code as follows:
    226
    227   ::
    228
    229       1 bool add_gp_buggy_optimized(int a, int b)
    230       2 {
    231       3   p = kmalloc(sizeof(*p), GFP_KERNEL);
    232       4   if (!p)
    233       5     return -ENOMEM;
    234       6   spin_lock(&gp_lock);
    235       7   if (rcu_access_pointer(gp)) {
    236       8     spin_unlock(&gp_lock);
    237       9     return false;
    238      10   }
    239      11   gp = p; /* ORDERING BUG */
    240      12   p->a = a;
     241      13   p->b = b;
    242      14   spin_unlock(&gp_lock);
    243      15   return true;
    244      16 }
    245
    246If an RCU reader fetches ``gp`` just after ``add_gp_buggy_optimized``
    247executes line 11, it will see garbage in the ``->a`` and ``->b`` fields.
    248And this is but one of many ways in which compiler and hardware
    249optimizations could cause trouble. Therefore, we clearly need some way
    250to prevent the compiler and the CPU from reordering in this manner,
    251which brings us to the publish-subscribe guarantee discussed in the next
    252section.
    253
    254Publish/Subscribe Guarantee
    255~~~~~~~~~~~~~~~~~~~~~~~~~~~
    256
    257RCU's publish-subscribe guarantee allows data to be inserted into a
    258linked data structure without disrupting RCU readers. The updater uses
    259rcu_assign_pointer() to insert the new data, and readers use
    260rcu_dereference() to access data, whether new or old. The following
    261shows an example of insertion:
    262
    263   ::
    264
    265       1 bool add_gp(int a, int b)
    266       2 {
    267       3   p = kmalloc(sizeof(*p), GFP_KERNEL);
    268       4   if (!p)
    269       5     return -ENOMEM;
    270       6   spin_lock(&gp_lock);
    271       7   if (rcu_access_pointer(gp)) {
    272       8     spin_unlock(&gp_lock);
    273       9     return false;
    274      10   }
    275      11   p->a = a;
     276      12   p->b = b;
    277      13   rcu_assign_pointer(gp, p);
    278      14   spin_unlock(&gp_lock);
    279      15   return true;
    280      16 }
    281
    282The rcu_assign_pointer() on line 13 is conceptually equivalent to a
    283simple assignment statement, but also guarantees that its assignment
    284will happen after the two assignments in lines 11 and 12, similar to the
    285C11 ``memory_order_release`` store operation. It also prevents any
    286number of “interesting” compiler optimizations, for example, the use of
    287``gp`` as a scratch location immediately preceding the assignment.
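
In recent Linux kernels, rcu_assign_pointer() is implemented in terms
of smp_store_release(), so the publication on line 13 can be thought of
roughly as follows. This is a conceptual sketch only; the real macro
also handles sparse annotations and the special case of assigning a
compile-time ``NULL``:

   ::

       p->a = a;
       p->b = b;
       smp_store_release(&gp, p);  /* Release store, paired with the        */
                                   /* rcu_dereference() on the reader side. */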
    288
    289+-----------------------------------------------------------------------+
    290| **Quick Quiz**:                                                       |
    291+-----------------------------------------------------------------------+
    292| But rcu_assign_pointer() does nothing to prevent the two              |
    293| assignments to ``p->a`` and ``p->b`` from being reordered. Can't that |
    294| also cause problems?                                                  |
    295+-----------------------------------------------------------------------+
    296| **Answer**:                                                           |
    297+-----------------------------------------------------------------------+
    298| No, it cannot. The readers cannot see either of these two fields      |
    299| until the assignment to ``gp``, by which time both fields are fully   |
    300| initialized. So reordering the assignments to ``p->a`` and ``p->b``   |
    301| cannot possibly cause any problems.                                   |
    302+-----------------------------------------------------------------------+
    303
    304It is tempting to assume that the reader need not do anything special to
    305control its accesses to the RCU-protected data, as shown in
    306do_something_gp_buggy() below:
    307
    308   ::
    309
    310       1 bool do_something_gp_buggy(void)
    311       2 {
    312       3   rcu_read_lock();
    313       4   p = gp;  /* OPTIMIZATIONS GALORE!!! */
    314       5   if (p) {
    315       6     do_something(p->a, p->b);
    316       7     rcu_read_unlock();
    317       8     return true;
    318       9   }
    319      10   rcu_read_unlock();
    320      11   return false;
    321      12 }
    322
    323However, this temptation must be resisted because there are a
     324surprisingly large number of ways that the compiler (or weakly ordered
    325CPUs like the DEC Alpha) can trip this code up. For but one example, if
    326the compiler were short of registers, it might choose to refetch from
    327``gp`` rather than keeping a separate copy in ``p`` as follows:
    328
    329   ::
    330
    331       1 bool do_something_gp_buggy_optimized(void)
    332       2 {
    333       3   rcu_read_lock();
    334       4   if (gp) { /* OPTIMIZATIONS GALORE!!! */
    335       5     do_something(gp->a, gp->b);
    336       6     rcu_read_unlock();
    337       7     return true;
    338       8   }
    339       9   rcu_read_unlock();
    340      10   return false;
    341      11 }
    342
    343If this function ran concurrently with a series of updates that replaced
    344the current structure with a new one, the fetches of ``gp->a`` and
    345``gp->b`` might well come from two different structures, which could
    346cause serious confusion. To prevent this (and much else besides),
    347do_something_gp() uses rcu_dereference() to fetch from ``gp``:
    348
    349   ::
    350
    351       1 bool do_something_gp(void)
    352       2 {
    353       3   rcu_read_lock();
    354       4   p = rcu_dereference(gp);
    355       5   if (p) {
    356       6     do_something(p->a, p->b);
    357       7     rcu_read_unlock();
    358       8     return true;
    359       9   }
    360      10   rcu_read_unlock();
    361      11   return false;
    362      12 }
    363
    364The rcu_dereference() uses volatile casts and (for DEC Alpha) memory
    365barriers in the Linux kernel. Should a |high-quality implementation of
    366C11 memory_order_consume [PDF]|_
    367ever appear, then rcu_dereference() could be implemented as a
    368``memory_order_consume`` load. Regardless of the exact implementation, a
    369pointer fetched by rcu_dereference() may not be used outside of the
    370outermost RCU read-side critical section containing that
    371rcu_dereference(), unless protection of the corresponding data
    372element has been passed from RCU to some other synchronization
    373mechanism, most commonly locking or reference counting
    374(see ../../rcuref.rst).
    375
    376.. |high-quality implementation of C11 memory_order_consume [PDF]| replace:: high-quality implementation of C11 ``memory_order_consume`` [PDF]
    377.. _high-quality implementation of C11 memory_order_consume [PDF]: http://www.rdrop.com/users/paulmck/RCU/consume.2015.07.13a.pdf
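
As one example of such a handoff, a reader can acquire a reference
count while still within the RCU read-side critical section and then
continue using the pointer after leaving it. The following sketch is
illustrative only: the ``->ref`` field and the get_gp_ref() function
are made up for this example.

   ::

       struct foo *get_gp_ref(void)
       {
         struct foo *p;

         rcu_read_lock();
         p = rcu_dereference(gp);
         if (p && !kref_get_unless_zero(&p->ref))
           p = NULL;  /* Object is on its way out; act as if not found. */
         rcu_read_unlock();
         return p;    /* Caller must eventually drop the reference. */
       }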
    378
    379In short, updaters use rcu_assign_pointer() and readers use
    380rcu_dereference(), and these two RCU API elements work together to
    381ensure that readers have a consistent view of newly added data elements.
    382
    383Of course, it is also necessary to remove elements from RCU-protected
    384data structures, for example, using the following process:
    385
    386#. Remove the data element from the enclosing structure.
    387#. Wait for all pre-existing RCU read-side critical sections to complete
    388   (because only pre-existing readers can possibly have a reference to
    389   the newly removed data element).
    390#. At this point, only the updater has a reference to the newly removed
    391   data element, so it can safely reclaim the data element, for example,
    392   by passing it to kfree().
    393
    394This process is implemented by remove_gp_synchronous():
    395
    396   ::
    397
    398       1 bool remove_gp_synchronous(void)
    399       2 {
    400       3   struct foo *p;
    401       4
    402       5   spin_lock(&gp_lock);
    403       6   p = rcu_access_pointer(gp);
    404       7   if (!p) {
    405       8     spin_unlock(&gp_lock);
    406       9     return false;
    407      10   }
    408      11   rcu_assign_pointer(gp, NULL);
    409      12   spin_unlock(&gp_lock);
    410      13   synchronize_rcu();
    411      14   kfree(p);
    412      15   return true;
    413      16 }
    414
    415This function is straightforward, with line 13 waiting for a grace
    416period before line 14 frees the old data element. This waiting ensures
    417that readers will reach line 7 of do_something_gp() before the data
    418element referenced by ``p`` is freed. The rcu_access_pointer() on
    419line 6 is similar to rcu_dereference(), except that:
    420
    421#. The value returned by rcu_access_pointer() cannot be
    422   dereferenced. If you want to access the value pointed to as well as
    423   the pointer itself, use rcu_dereference() instead of
    424   rcu_access_pointer().
    425#. The call to rcu_access_pointer() need not be protected. In
    426   contrast, rcu_dereference() must either be within an RCU
    427   read-side critical section or in a code segment where the pointer
    428   cannot change, for example, in code protected by the corresponding
    429   update-side lock.
    430
    431+-----------------------------------------------------------------------+
    432| **Quick Quiz**:                                                       |
    433+-----------------------------------------------------------------------+
    434| Without the rcu_dereference() or the rcu_access_pointer(),            |
    435| what destructive optimizations might the compiler make use of?        |
    436+-----------------------------------------------------------------------+
    437| **Answer**:                                                           |
    438+-----------------------------------------------------------------------+
    439| Let's start with what happens to do_something_gp() if it fails to     |
    440| use rcu_dereference(). It could reuse a value formerly fetched        |
    441| from this same pointer. It could also fetch the pointer from ``gp``   |
    442| in a byte-at-a-time manner, resulting in *load tearing*, in turn      |
     443| resulting in a bytewise mash-up of two distinct pointers. It might    |
    444| even use value-speculation optimizations, where it makes a wrong      |
    445| guess, but by the time it gets around to checking the value, an       |
    446| update has changed the pointer to match the wrong guess. Too bad      |
    447| about any dereferences that returned pre-initialization garbage in    |
    448| the meantime!                                                         |
    449| For remove_gp_synchronous(), as long as all modifications to          |
    450| ``gp`` are carried out while holding ``gp_lock``, the above           |
    451| optimizations are harmless. However, ``sparse`` will complain if you  |
    452| define ``gp`` with ``__rcu`` and then access it without using either  |
    453| rcu_access_pointer() or rcu_dereference().                            |
    454+-----------------------------------------------------------------------+
    455
    456In short, RCU's publish-subscribe guarantee is provided by the
    457combination of rcu_assign_pointer() and rcu_dereference(). This
    458guarantee allows data elements to be safely added to RCU-protected
    459linked data structures without disrupting RCU readers. This guarantee
    460can be used in combination with the grace-period guarantee to also allow
    461data elements to be removed from RCU-protected linked data structures,
    462again without disrupting RCU readers.
    463
    464This guarantee was only partially premeditated. DYNIX/ptx used an
    465explicit memory barrier for publication, but had nothing resembling
    466rcu_dereference() for subscription, nor did it have anything
    467resembling the dependency-ordering barrier that was later subsumed
    468into rcu_dereference() and later still into READ_ONCE(). The
    469need for these operations made itself known quite suddenly at a
    470late-1990s meeting with the DEC Alpha architects, back in the days when
    471DEC was still a free-standing company. It took the Alpha architects a
    472good hour to convince me that any sort of barrier would ever be needed,
    473and it then took me a good *two* hours to convince them that their
    474documentation did not make this point clear. More recent work with the C
     475and C++ standards committees has provided much education on tricks and
    476traps from the compiler. In short, compilers were much less tricky in
    477the early 1990s, but in 2015, don't even think about omitting
    478rcu_dereference()!
    479
    480Memory-Barrier Guarantees
    481~~~~~~~~~~~~~~~~~~~~~~~~~
    482
    483The previous section's simple linked-data-structure scenario clearly
    484demonstrates the need for RCU's stringent memory-ordering guarantees on
    485systems with more than one CPU:
    486
    487#. Each CPU that has an RCU read-side critical section that begins
    488   before synchronize_rcu() starts is guaranteed to execute a full
    489   memory barrier between the time that the RCU read-side critical
    490   section ends and the time that synchronize_rcu() returns. Without
    491   this guarantee, a pre-existing RCU read-side critical section might
    492   hold a reference to the newly removed ``struct foo`` after the
    493   kfree() on line 14 of remove_gp_synchronous().
    494#. Each CPU that has an RCU read-side critical section that ends after
    495   synchronize_rcu() returns is guaranteed to execute a full memory
    496   barrier between the time that synchronize_rcu() begins and the
    497   time that the RCU read-side critical section begins. Without this
    498   guarantee, a later RCU read-side critical section running after the
    499   kfree() on line 14 of remove_gp_synchronous() might later run
    500   do_something_gp() and find the newly deleted ``struct foo``.
    501#. If the task invoking synchronize_rcu() remains on a given CPU,
    502   then that CPU is guaranteed to execute a full memory barrier sometime
    503   during the execution of synchronize_rcu(). This guarantee ensures
    504   that the kfree() on line 14 of remove_gp_synchronous() really
    505   does execute after the removal on line 11.
    506#. If the task invoking synchronize_rcu() migrates among a group of
    507   CPUs during that invocation, then each of the CPUs in that group is
    508   guaranteed to execute a full memory barrier sometime during the
    509   execution of synchronize_rcu(). This guarantee also ensures that
    510   the kfree() on line 14 of remove_gp_synchronous() really does
     511   execute after the removal on line 11, even in the case where the
    512   thread executing the synchronize_rcu() migrates in the meantime.
    513
    514+-----------------------------------------------------------------------+
    515| **Quick Quiz**:                                                       |
    516+-----------------------------------------------------------------------+
    517| Given that multiple CPUs can start RCU read-side critical sections at |
    518| any time without any ordering whatsoever, how can RCU possibly tell   |
    519| whether or not a given RCU read-side critical section starts before a |
    520| given instance of synchronize_rcu()?                                  |
    521+-----------------------------------------------------------------------+
    522| **Answer**:                                                           |
    523+-----------------------------------------------------------------------+
    524| If RCU cannot tell whether or not a given RCU read-side critical      |
    525| section starts before a given instance of synchronize_rcu(), then     |
    526| it must assume that the RCU read-side critical section started first. |
    527| In other words, a given instance of synchronize_rcu() can avoid       |
    528| waiting on a given RCU read-side critical section only if it can      |
    529| prove that synchronize_rcu() started first.                           |
    530| A related question is “When rcu_read_lock() doesn't generate any      |
    531| code, why does it matter how it relates to a grace period?” The       |
    532| answer is that it is not the relationship of rcu_read_lock()          |
    533| itself that is important, but rather the relationship of the code     |
    534| within the enclosed RCU read-side critical section to the code        |
    535| preceding and following the grace period. If we take this viewpoint,  |
    536| then a given RCU read-side critical section begins before a given     |
    537| grace period when some access preceding the grace period observes the |
    538| effect of some access within the critical section, in which case none |
    539| of the accesses within the critical section may observe the effects   |
    540| of any access following the grace period.                             |
    541|                                                                       |
    542| As of late 2016, mathematical models of RCU take this viewpoint, for  |
    543| example, see slides 62 and 63 of the `2016 LinuxCon                   |
    544| EU <http://www2.rdrop.com/users/paulmck/scalability/paper/LinuxMM.201 |
    545| 6.10.04c.LCE.pdf>`__                                                  |
    546| presentation.                                                         |
    547+-----------------------------------------------------------------------+
    548
    549+-----------------------------------------------------------------------+
    550| **Quick Quiz**:                                                       |
    551+-----------------------------------------------------------------------+
    552| The first and second guarantees require unbelievably strict ordering! |
    553| Are all these memory barriers *really* required?                      |
    554+-----------------------------------------------------------------------+
    555| **Answer**:                                                           |
    556+-----------------------------------------------------------------------+
    557| Yes, they really are required. To see why the first guarantee is      |
    558| required, consider the following sequence of events:                  |
    559|                                                                       |
    560| #. CPU 1: rcu_read_lock()                                             |
    561| #. CPU 1: ``q = rcu_dereference(gp); /* Very likely to return p. */`` |
    562| #. CPU 0: ``list_del_rcu(p);``                                        |
    563| #. CPU 0: synchronize_rcu() starts.                                   |
    564| #. CPU 1: ``do_something_with(q->a);``                                |
    565|    ``/* No smp_mb(), so might happen after kfree(). */``              |
    566| #. CPU 1: rcu_read_unlock()                                           |
    567| #. CPU 0: synchronize_rcu() returns.                                  |
    568| #. CPU 0: ``kfree(p);``                                               |
    569|                                                                       |
    570| Therefore, there absolutely must be a full memory barrier between the |
    571| end of the RCU read-side critical section and the end of the grace    |
    572| period.                                                               |
    573|                                                                       |
    574| The sequence of events demonstrating the necessity of the second rule |
    575| is roughly similar:                                                   |
    576|                                                                       |
    577| #. CPU 0: ``list_del_rcu(p);``                                        |
    578| #. CPU 0: synchronize_rcu() starts.                                   |
    579| #. CPU 1: rcu_read_lock()                                             |
    580| #. CPU 1: ``q = rcu_dereference(gp);``                                |
    581|    ``/* Might return p if no memory barrier. */``                     |
    582| #. CPU 0: synchronize_rcu() returns.                                  |
    583| #. CPU 0: ``kfree(p);``                                               |
    584| #. CPU 1: ``do_something_with(q->a); /* Boom!!! */``                  |
    585| #. CPU 1: rcu_read_unlock()                                           |
    586|                                                                       |
    587| And similarly, without a memory barrier between the beginning of the  |
    588| grace period and the beginning of the RCU read-side critical section, |
    589| CPU 1 might end up accessing the freelist.                            |
    590|                                                                       |
    591| The “as if” rule of course applies, so that any implementation that   |
    592| acts as if the appropriate memory barriers were in place is a correct |
    593| implementation. That said, it is much easier to fool yourself into    |
    594| believing that you have adhered to the as-if rule than it is to       |
    595| actually adhere to it!                                                |
    596+-----------------------------------------------------------------------+
    597
    598+-----------------------------------------------------------------------+
    599| **Quick Quiz**:                                                       |
    600+-----------------------------------------------------------------------+
    601| You claim that rcu_read_lock() and rcu_read_unlock() generate         |
    602| absolutely no code in some kernel builds. This means that the         |
    603| compiler might arbitrarily rearrange consecutive RCU read-side        |
    604| critical sections. Given such rearrangement, if a given RCU read-side |
    605| critical section is done, how can you be sure that all prior RCU      |
    606| read-side critical sections are done? Won't the compiler              |
    607| rearrangements make that impossible to determine?                     |
    608+-----------------------------------------------------------------------+
    609| **Answer**:                                                           |
    610+-----------------------------------------------------------------------+
    611| In cases where rcu_read_lock() and rcu_read_unlock() generate         |
    612| absolutely no code, RCU infers quiescent states only at special       |
    613| locations, for example, within the scheduler. Because calls to        |
    614| schedule() had better prevent calling-code accesses to shared         |
    615| variables from being rearranged across the call to schedule(), if     |
    616| RCU detects the end of a given RCU read-side critical section, it     |
    617| will necessarily detect the end of all prior RCU read-side critical   |
    618| sections, no matter how aggressively the compiler scrambles the code. |
    619| Again, this all assumes that the compiler cannot scramble code across |
    620| calls to the scheduler, out of interrupt handlers, into the idle      |
    621| loop, into user-mode code, and so on. But if your kernel build allows |
    622| that sort of scrambling, you have broken far more than just RCU!      |
    623+-----------------------------------------------------------------------+
    624
    625Note that these memory-barrier requirements do not replace the
    626fundamental RCU requirement that a grace period wait for all
    627pre-existing readers. On the contrary, the memory barriers called out in
    628this section must operate in such a way as to *enforce* this fundamental
    629requirement. Of course, different implementations enforce this
    630requirement in different ways, but enforce it they must.
    631
    632RCU Primitives Guaranteed to Execute Unconditionally
    633~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    634
    635The common-case RCU primitives are unconditional. They are invoked, they
    636do their job, and they return, with no possibility of error, and no need
    637to retry. This is a key RCU design philosophy.
    638
    639However, this philosophy is pragmatic rather than pigheaded. If someone
    640comes up with a good justification for a particular conditional RCU
    641primitive, it might well be implemented and added. After all, this
    642guarantee was reverse-engineered, not premeditated. The unconditional
    643nature of the RCU primitives was initially an accident of
     644implementation, and later experience with conditional synchronization
     645primitives caused me to elevate this accident to a
    646guarantee. Therefore, the justification for adding a conditional
    647primitive to RCU would need to be based on detailed and compelling use
    648cases.
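
The practical benefit is that update-side code needs no error handling
or retry loop around the grace-period primitives, in contrast to
conditional primitives such as spin_trylock(). The following fragment
is only an illustration of that contrast, not kernel code:

   ::

       /* Unconditional: nothing to check, nothing to retry. */
       synchronize_rcu();
       kfree(old_p);

       /* Conditional primitives force their callers to handle failure. */
       if (!spin_trylock(&my_lock))
         return -EBUSY;  /* ...or retry, back off, and so on. */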
    649
    650Guaranteed Read-to-Write Upgrade
    651~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    652
    653As far as RCU is concerned, it is always possible to carry out an update
    654within an RCU read-side critical section. For example, that RCU
    655read-side critical section might search for a given data element, and
    656then might acquire the update-side spinlock in order to update that
    657element, all while remaining in that RCU read-side critical section. Of
    658course, it is necessary to exit the RCU read-side critical section
     659before invoking synchronize_rcu(). However, this inconvenience can
    660be avoided through use of the call_rcu() and kfree_rcu() API
    661members described later in this document.
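
A minimal sketch of this read-to-write upgrade, assuming a list
``foo_head`` whose updates are protected by ``foo_lock`` (both names,
along with the ``->key``, ``->value``, and ``->list`` fields, are made
up for this example), might look as follows:

   ::

       rcu_read_lock();
       list_for_each_entry_rcu(p, &foo_head, list) {
         if (p->key != key)
           continue;
         spin_lock(&foo_lock);   /* Upgrade: take the update-side lock... */
         p->value = new_value;   /* ...and update, still within the RCU   */
         spin_unlock(&foo_lock); /* read-side critical section.           */
         break;
       }
       rcu_read_unlock();

Production code would typically also recheck, under ``foo_lock``, that the
element is still list-resident before modifying it.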
    662
    663+-----------------------------------------------------------------------+
    664| **Quick Quiz**:                                                       |
    665+-----------------------------------------------------------------------+
    666| But how does the upgrade-to-write operation exclude other readers?    |
    667+-----------------------------------------------------------------------+
    668| **Answer**:                                                           |
    669+-----------------------------------------------------------------------+
    670| It doesn't, just like normal RCU updates, which also do not exclude   |
    671| RCU readers.                                                          |
    672+-----------------------------------------------------------------------+
    673
    674This guarantee allows lookup code to be shared between read-side and
    675update-side code, and was premeditated, appearing in the earliest
    676DYNIX/ptx RCU documentation.
    677
    678Fundamental Non-Requirements
    679----------------------------
    680
    681RCU provides extremely lightweight readers, and its read-side
    682guarantees, though quite useful, are correspondingly lightweight. It is
    683therefore all too easy to assume that RCU is guaranteeing more than it
    684really is. Of course, the list of things that RCU does not guarantee is
     685infinitely long. However, the following sections list a few
    686non-guarantees that have caused confusion. Except where otherwise noted,
    687these non-guarantees were premeditated.
    688
    689#. `Readers Impose Minimal Ordering`_
    690#. `Readers Do Not Exclude Updaters`_
    691#. `Updaters Only Wait For Old Readers`_
    692#. `Grace Periods Don't Partition Read-Side Critical Sections`_
    693#. `Read-Side Critical Sections Don't Partition Grace Periods`_
    694
    695Readers Impose Minimal Ordering
    696~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    697
    698Reader-side markers such as rcu_read_lock() and
    699rcu_read_unlock() provide absolutely no ordering guarantees except
    700through their interaction with the grace-period APIs such as
    701synchronize_rcu(). To see this, consider the following pair of
    702threads:
    703
    704   ::
    705
    706       1 void thread0(void)
    707       2 {
    708       3   rcu_read_lock();
    709       4   WRITE_ONCE(x, 1);
    710       5   rcu_read_unlock();
    711       6   rcu_read_lock();
    712       7   WRITE_ONCE(y, 1);
    713       8   rcu_read_unlock();
    714       9 }
    715      10
    716      11 void thread1(void)
    717      12 {
    718      13   rcu_read_lock();
    719      14   r1 = READ_ONCE(y);
    720      15   rcu_read_unlock();
    721      16   rcu_read_lock();
    722      17   r2 = READ_ONCE(x);
    723      18   rcu_read_unlock();
    724      19 }
    725
    726After thread0() and thread1() execute concurrently, it is quite
    727possible to have
    728
    729   ::
    730
    731      (r1 == 1 && r2 == 0)
    732
    733(that is, ``y`` appears to have been assigned before ``x``), which would
    734not be possible if rcu_read_lock() and rcu_read_unlock() had
    735much in the way of ordering properties. But they do not, so the CPU is
    736within its rights to do significant reordering. This is by design: Any
    737significant ordering constraints would slow down these fast-path APIs.
    738
    739+-----------------------------------------------------------------------+
    740| **Quick Quiz**:                                                       |
    741+-----------------------------------------------------------------------+
    742| Can't the compiler also reorder this code?                            |
    743+-----------------------------------------------------------------------+
    744| **Answer**:                                                           |
    745+-----------------------------------------------------------------------+
    746| No, the volatile casts in READ_ONCE() and WRITE_ONCE()                |
    747| prevent the compiler from reordering in this particular case.         |
    748+-----------------------------------------------------------------------+
    749
    750Readers Do Not Exclude Updaters
    751~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    752
     753Neither rcu_read_lock() nor rcu_read_unlock() excludes updates.
    754All they do is to prevent grace periods from ending. The following
    755example illustrates this:
    756
    757   ::
    758
    759       1 void thread0(void)
    760       2 {
    761       3   rcu_read_lock();
    762       4   r1 = READ_ONCE(y);
    763       5   if (r1) {
    764       6     do_something_with_nonzero_x();
    765       7     r2 = READ_ONCE(x);
    766       8     WARN_ON(!r2); /* BUG!!! */
    767       9   }
    768      10   rcu_read_unlock();
    769      11 }
    770      12
    771      13 void thread1(void)
    772      14 {
    773      15   spin_lock(&my_lock);
    774      16   WRITE_ONCE(x, 1);
    775      17   WRITE_ONCE(y, 1);
    776      18   spin_unlock(&my_lock);
    777      19 }
    778
    779If the thread0() function's rcu_read_lock() excluded the
    780thread1() function's update, the WARN_ON() could never fire. But
    781the fact is that rcu_read_lock() does not exclude much of anything
    782aside from subsequent grace periods, of which thread1() has none, so
    783the WARN_ON() can and does fire.
    784
    785Updaters Only Wait For Old Readers
    786~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    787
    788It might be tempting to assume that after synchronize_rcu()
    789completes, there are no readers executing. This temptation must be
    790avoided because new readers can start immediately after
    791synchronize_rcu() starts, and synchronize_rcu() is under no
    792obligation to wait for these new readers.
    793
    794+-----------------------------------------------------------------------+
    795| **Quick Quiz**:                                                       |
    796+-----------------------------------------------------------------------+
    797| Suppose that synchronize_rcu() did wait until *all* readers had       |
    798| completed instead of waiting only on pre-existing readers. For how    |
    799| long would the updater be able to rely on there being no readers?     |
    800+-----------------------------------------------------------------------+
    801| **Answer**:                                                           |
    802+-----------------------------------------------------------------------+
    803| For no time at all. Even if synchronize_rcu() were to wait until      |
    804| all readers had completed, a new reader might start immediately after |
    805| synchronize_rcu() completed. Therefore, the code following            |
    806| synchronize_rcu() can *never* rely on there being no readers.         |
    807+-----------------------------------------------------------------------+
    808
    809Grace Periods Don't Partition Read-Side Critical Sections
    810~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    811
    812It is tempting to assume that if any part of one RCU read-side critical
    813section precedes a given grace period, and if any part of another RCU
    814read-side critical section follows that same grace period, then all of
    815the first RCU read-side critical section must precede all of the second.
    816However, this just isn't the case: A single grace period does not
    817partition the set of RCU read-side critical sections. An example of this
     818situation can be illustrated as follows, where ``a``, ``b``, and ``c``
    819are initially all zero:
    820
    821   ::
    822
    823       1 void thread0(void)
    824       2 {
    825       3   rcu_read_lock();
    826       4   WRITE_ONCE(a, 1);
    827       5   WRITE_ONCE(b, 1);
    828       6   rcu_read_unlock();
    829       7 }
    830       8
    831       9 void thread1(void)
    832      10 {
    833      11   r1 = READ_ONCE(a);
    834      12   synchronize_rcu();
    835      13   WRITE_ONCE(c, 1);
    836      14 }
    837      15
    838      16 void thread2(void)
    839      17 {
    840      18   rcu_read_lock();
    841      19   r2 = READ_ONCE(b);
    842      20   r3 = READ_ONCE(c);
    843      21   rcu_read_unlock();
    844      22 }
    845
    846It turns out that the outcome:
    847
    848   ::
    849
    850      (r1 == 1 && r2 == 0 && r3 == 1)
    851
     852is entirely possible. The following figure shows how this can happen,
    853with each circled ``QS`` indicating the point at which RCU recorded a
    854*quiescent state* for each thread, that is, a state in which RCU knows
    855that the thread cannot be in the midst of an RCU read-side critical
    856section that started before the current grace period:
    857
    858.. kernel-figure:: GPpartitionReaders1.svg
    859
    860If it is necessary to partition RCU read-side critical sections in this
    861manner, it is necessary to use two grace periods, where the first grace
    862period is known to end before the second grace period starts:
    863
    864   ::
    865
    866       1 void thread0(void)
    867       2 {
    868       3   rcu_read_lock();
    869       4   WRITE_ONCE(a, 1);
    870       5   WRITE_ONCE(b, 1);
    871       6   rcu_read_unlock();
    872       7 }
    873       8
    874       9 void thread1(void)
    875      10 {
    876      11   r1 = READ_ONCE(a);
    877      12   synchronize_rcu();
    878      13   WRITE_ONCE(c, 1);
    879      14 }
    880      15
    881      16 void thread2(void)
    882      17 {
    883      18   r2 = READ_ONCE(c);
    884      19   synchronize_rcu();
    885      20   WRITE_ONCE(d, 1);
    886      21 }
    887      22
    888      23 void thread3(void)
    889      24 {
    890      25   rcu_read_lock();
    891      26   r3 = READ_ONCE(b);
    892      27   r4 = READ_ONCE(d);
    893      28   rcu_read_unlock();
    894      29 }
    895
    896Here, if ``(r1 == 1)``, then thread0()'s write to ``b`` must happen
    897before the end of thread1()'s grace period. If in addition
    898``(r4 == 1)``, then thread3()'s read from ``b`` must happen after
    899the beginning of thread2()'s grace period. If it is also the case
    900that ``(r2 == 1)``, then the end of thread1()'s grace period must
     901precede the beginning of thread2()'s grace period. This means that
    902the two RCU read-side critical sections cannot overlap, guaranteeing
    903that ``(r3 == 1)``. As a result, the outcome:
    904
    905   ::
    906
    907      (r1 == 1 && r2 == 1 && r3 == 0 && r4 == 1)
    908
    909cannot happen.
    910
    911This non-requirement was also non-premeditated, but became apparent when
    912studying RCU's interaction with memory ordering.
    913
    914Read-Side Critical Sections Don't Partition Grace Periods
    915~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    916
    917It is also tempting to assume that if an RCU read-side critical section
    918happens between a pair of grace periods, then those grace periods cannot
    919overlap. However, this temptation leads nowhere good, as can be
    920illustrated by the following, with all variables initially zero:
    921
    922   ::
    923
    924       1 void thread0(void)
    925       2 {
    926       3   rcu_read_lock();
    927       4   WRITE_ONCE(a, 1);
    928       5   WRITE_ONCE(b, 1);
    929       6   rcu_read_unlock();
    930       7 }
    931       8
    932       9 void thread1(void)
    933      10 {
    934      11   r1 = READ_ONCE(a);
    935      12   synchronize_rcu();
    936      13   WRITE_ONCE(c, 1);
    937      14 }
    938      15
    939      16 void thread2(void)
    940      17 {
    941      18   rcu_read_lock();
    942      19   WRITE_ONCE(d, 1);
    943      20   r2 = READ_ONCE(c);
    944      21   rcu_read_unlock();
    945      22 }
    946      23
    947      24 void thread3(void)
    948      25 {
    949      26   r3 = READ_ONCE(d);
    950      27   synchronize_rcu();
    951      28   WRITE_ONCE(e, 1);
    952      29 }
    953      30
    954      31 void thread4(void)
    955      32 {
    956      33   rcu_read_lock();
    957      34   r4 = READ_ONCE(b);
    958      35   r5 = READ_ONCE(e);
    959      36   rcu_read_unlock();
    960      37 }
    961
    962In this case, the outcome:
    963
    964   ::
    965
    966      (r1 == 1 && r2 == 1 && r3 == 1 && r4 == 0 && r5 == 1)
    967
    968is entirely possible, as illustrated below:
    969
    970.. kernel-figure:: ReadersPartitionGP1.svg
    971
    972Again, an RCU read-side critical section can overlap almost all of a
    973given grace period, just so long as it does not overlap the entire grace
    974period. As a result, an RCU read-side critical section cannot partition
    975a pair of RCU grace periods.
    976
    977+-----------------------------------------------------------------------+
    978| **Quick Quiz**:                                                       |
    979+-----------------------------------------------------------------------+
    980| How long a sequence of grace periods, each separated by an RCU        |
    981| read-side critical section, would be required to partition the RCU    |
    982| read-side critical sections at the beginning and end of the chain?    |
    983+-----------------------------------------------------------------------+
    984| **Answer**:                                                           |
    985+-----------------------------------------------------------------------+
    986| In theory, an infinite number. In practice, an unknown number that is |
    987| sensitive to both implementation details and timing considerations.   |
    988| Therefore, even in practice, RCU users must abide by the theoretical  |
    989| rather than the practical answer.                                     |
    990+-----------------------------------------------------------------------+
    991
    992Parallelism Facts of Life
    993-------------------------
    994
    995These parallelism facts of life are by no means specific to RCU, but the
    996RCU implementation must abide by them. They therefore bear repeating:
    997
    998#. Any CPU or task may be delayed at any time, and any attempts to avoid
    999   these delays by disabling preemption, interrupts, or whatever are
   1000   completely futile. This is most obvious in preemptible user-level
   1001   environments and in virtualized environments (where a given guest
   1002   OS's VCPUs can be preempted at any time by the underlying
   1003   hypervisor), but can also happen in bare-metal environments due to
   1004   ECC errors, NMIs, and other hardware events. Although a delay of more
   1005   than about 20 seconds can result in splats, the RCU implementation is
   1006   obligated to use algorithms that can tolerate extremely long delays,
   1007   but where “extremely long” is not long enough to allow wrap-around
   1008   when incrementing a 64-bit counter.
   1009#. Both the compiler and the CPU can reorder memory accesses. Where it
   1010   matters, RCU must use compiler directives and memory-barrier
   1011   instructions to preserve ordering.
   1012#. Conflicting writes to memory locations in any given cache line will
   1013   result in expensive cache misses. Greater numbers of concurrent
   1014   writes and more-frequent concurrent writes will result in more
   1015   dramatic slowdowns. RCU is therefore obligated to use algorithms that
   1016   have sufficient locality to avoid significant performance and
   1017   scalability problems.
   1018#. As a rough rule of thumb, only one CPU's worth of processing may be
   1019   carried out under the protection of any given exclusive lock. RCU
   1020   must therefore use scalable locking designs.
   1021#. Counters are finite, especially on 32-bit systems. RCU's use of
   1022   counters must therefore tolerate counter wrap, or be designed such
   1023   that counter wrap would take way more time than a single system is
   1024   likely to run. An uptime of ten years is quite possible, a runtime of
   1025   a century much less so. As an example of the latter, RCU's
   1026   dyntick-idle nesting counter allows 54 bits for interrupt nesting
   1027   level (this counter is 64 bits even on a 32-bit system). Overflowing
   1028   this counter requires 2\ :sup:`54` half-interrupts on a given CPU
   1029   without that CPU ever going idle. If a half-interrupt happened every
   1030   microsecond, it would take 570 years of runtime to overflow this
    1031   counter, which is currently believed to be an acceptably long time (see the arithmetic check after this list).
   1032#. Linux systems can have thousands of CPUs running a single Linux
   1033   kernel in a single shared-memory environment. RCU must therefore pay
   1034   close attention to high-end scalability.
   1035
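As a quick check of the arithmetic in the dyntick-idle example above
(back-of-the-envelope verification only, not additional requirements
material):

   ::

       2^54 half-interrupts  ~  1.8 * 10^16 half-interrupts
       at one half-interrupt per microsecond:
           1.8 * 10^16 microseconds  =  1.8 * 10^10 seconds
           1.8 * 10^10 seconds / (3.16 * 10^7 seconds/year)  ~  570 years
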
   1036This last parallelism fact of life means that RCU must pay special
   1037attention to the preceding facts of life. The idea that Linux might
   1038scale to systems with thousands of CPUs would have been met with some
    1039skepticism in the 1990s, but these requirements would otherwise
   1040have been unsurprising, even in the early 1990s.
   1041
   1042Quality-of-Implementation Requirements
   1043--------------------------------------
   1044
   1045These sections list quality-of-implementation requirements. Although an
   1046RCU implementation that ignores these requirements could still be used,
   1047it would likely be subject to limitations that would make it
   1048inappropriate for industrial-strength production use. Classes of
   1049quality-of-implementation requirements are as follows:
   1050
   1051#. `Specialization`_
   1052#. `Performance and Scalability`_
   1053#. `Forward Progress`_
   1054#. `Composability`_
   1055#. `Corner Cases`_
   1056
    1057These classes are covered in the following sections.
   1058
   1059Specialization
   1060~~~~~~~~~~~~~~
   1061
   1062RCU is and always has been intended primarily for read-mostly
   1063situations, which means that RCU's read-side primitives are optimized,
   1064often at the expense of its update-side primitives. Experience thus far
   1065is captured by the following list of situations:
   1066
   1067#. Read-mostly data, where stale and inconsistent data is not a problem:
   1068   RCU works great!
   1069#. Read-mostly data, where data must be consistent: RCU works well.
   1070#. Read-write data, where data must be consistent: RCU *might* work OK.
   1071   Or not.
   1072#. Write-mostly data, where data must be consistent: RCU is very
   1073   unlikely to be the right tool for the job, with the following
   1074   exceptions, where RCU can provide:
   1075
   1076   a. Existence guarantees for update-friendly mechanisms.
   1077   b. Wait-free read-side primitives for real-time use.
   1078
   1079This focus on read-mostly situations means that RCU must interoperate
   1080with other synchronization primitives. For example, the add_gp() and
   1081remove_gp_synchronous() examples discussed earlier use RCU to
   1082protect readers and locking to coordinate updaters. However, the need
   1083extends much farther, requiring that a variety of synchronization
   1084primitives be legal within RCU read-side critical sections, including
   1085spinlocks, sequence locks, atomic operations, reference counters, and
   1086memory barriers.
   1087
   1088+-----------------------------------------------------------------------+
   1089| **Quick Quiz**:                                                       |
   1090+-----------------------------------------------------------------------+
   1091| What about sleeping locks?                                            |
   1092+-----------------------------------------------------------------------+
   1093| **Answer**:                                                           |
   1094+-----------------------------------------------------------------------+
   1095| These are forbidden within Linux-kernel RCU read-side critical        |
   1096| sections because it is not legal to place a quiescent state (in this  |
   1097| case, voluntary context switch) within an RCU read-side critical      |
   1098| section. However, sleeping locks may be used within userspace RCU     |
   1099| read-side critical sections, and also within Linux-kernel sleepable   |
   1100| RCU `(SRCU) <Sleepable RCU_>`__ read-side critical sections. In       |
   1101| addition, the -rt patchset turns spinlocks into sleeping locks so     |
   1102| that the corresponding critical sections can be preempted, which also |
   1103| means that these sleeplockified spinlocks (but not other sleeping     |
   1104| locks!) may be acquired within -rt-Linux-kernel RCU read-side critical|
   1105| sections.                                                             |
   1106| Note that it *is* legal for a normal RCU read-side critical section   |
   1107| to conditionally acquire a sleeping lock (as in                       |
   1108| mutex_trylock()), but only as long as it does not loop                |
   1109| indefinitely attempting to conditionally acquire that sleeping lock.  |
   1110| The key point is that things like mutex_trylock() either return       |
   1111| with the mutex held, or return an error indication if the mutex was   |
   1112| not immediately available. Either way, mutex_trylock() returns        |
   1113| immediately without sleeping.                                         |
   1114+-----------------------------------------------------------------------+
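
The trylock exception called out in this answer can be illustrated by a
minimal sketch, in which the ``->foo_mutex`` field and the
do_something_with() helper are hypothetical names invented for this
example:

   ::

       rcu_read_lock();
       p = rcu_dereference(gp);
       if (p && mutex_trylock(&p->foo_mutex)) {
         /* Acquired without sleeping, so this is legal within a reader. */
         do_something_with(p);
         mutex_unlock(&p->foo_mutex);
       }
       rcu_read_unlock();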
   1115
   1116It often comes as a surprise that many algorithms do not require a
   1117consistent view of data and can operate quite well without one, with
   1118network routing being the poster child. Internet routing algorithms take
   1119significant time to propagate updates, so that by the time an update
   1120arrives at a given system, that system has been sending network traffic
   1121the wrong way for a considerable length of time. Having a few threads
   1122continue to send traffic the wrong way for a few more milliseconds is
   1123clearly not a problem: In the worst case, TCP retransmissions will
   1124eventually get the data where it needs to go. In general, when tracking
   1125the state of the universe outside of the computer, some level of
   1126inconsistency must be tolerated due to speed-of-light delays if nothing
   1127else.
   1128
   1129Furthermore, uncertainty about external state is inherent in many cases.
   1130For example, a pair of veterinarians might use heartbeat to determine
   1131whether or not a given cat was alive. But how long should they wait
   1132after the last heartbeat to decide that the cat is in fact dead? Waiting
   1133less than 400 milliseconds makes no sense because this would mean that a
   1134relaxed cat would be considered to cycle between death and life more
   1135than 100 times per minute. Moreover, just as with human beings, a cat's
   1136heart might stop for some period of time, so the exact wait period is a
   1137judgment call. One of our pair of veterinarians might wait 30 seconds
   1138before pronouncing the cat dead, while the other might insist on waiting
   1139a full minute. The two veterinarians would then disagree on the state of
   1140the cat during the final 30 seconds of the minute following the last
   1141heartbeat.
   1142
   1143Interestingly enough, this same situation applies to hardware. When push
   1144comes to shove, how do we tell whether or not some external server has
   1145failed? We send messages to it periodically, and declare it failed if we
   1146don't receive a response within a given period of time. Policy decisions
   1147can usually tolerate short periods of inconsistency. The policy was
   1148decided some time ago, and is only now being put into effect, so a few
   1149milliseconds of delay is normally inconsequential.
   1150
   1151However, there are algorithms that absolutely must see consistent data.
   1152For example, the translation from a user-level SystemV semaphore ID
   1153to the corresponding in-kernel data structure is protected by RCU, but
   1154it is absolutely forbidden to update a semaphore that has just been
   1155removed. In the Linux kernel, this need for consistency is accommodated
   1156by acquiring spinlocks located in the in-kernel data structure from
   1157within the RCU read-side critical section, and this is indicated by the
   1158green box in the figure above. Many other techniques may be used, and
   1159are in fact used within the Linux kernel.
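
As a rough sketch of this technique (with sem_lookup(), ``->lock``,
``->deleted``, and update_semaphore() being hypothetical names rather
than the actual SystemV semaphore code), the reader acquires a spinlock
that lives inside the RCU-protected structure and checks a deletion flag
that updaters set under that same lock:

   ::

       rcu_read_lock();
       sma = sem_lookup(semid);      /* Hypothetical RCU-protected lookup. */
       if (sma) {
         spin_lock(&sma->lock);      /* Lock embedded in the structure. */
         if (!sma->deleted)          /* Updaters set ->deleted under ->lock. */
           update_semaphore(sma);    /* Cannot act on a removed semaphore. */
         spin_unlock(&sma->lock);
       }
       rcu_read_unlock();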
   1160
   1161In short, RCU is not required to maintain consistency, and other
   1162mechanisms may be used in concert with RCU when consistency is required.
   1163RCU's specialization allows it to do its job extremely well, and its
   1164ability to interoperate with other synchronization mechanisms allows the
   1165right mix of synchronization tools to be used for a given job.
   1166
   1167Performance and Scalability
   1168~~~~~~~~~~~~~~~~~~~~~~~~~~~
   1169
   1170Energy efficiency is a critical component of performance today, and
   1171Linux-kernel RCU implementations must therefore avoid unnecessarily
   1172awakening idle CPUs. I cannot claim that this requirement was
   1173premeditated. In fact, I learned of it during a telephone conversation
   1174in which I was given “frank and open” feedback on the importance of
   1175energy efficiency in battery-powered systems and on specific
   1176energy-efficiency shortcomings of the Linux-kernel RCU implementation.
   1177In my experience, the battery-powered embedded community will consider
   1178any unnecessary wakeups to be extremely unfriendly acts. So much so that
   1179mere Linux-kernel-mailing-list posts are insufficient to vent their ire.
   1180
   1181Memory consumption is not particularly important in most situations,
   1182and has become decreasingly so as memory sizes have expanded and memory
   1183costs have plummeted. However, as I learned from Matt Mackall's
   1184`bloatwatch <http://elinux.org/Linux_Tiny-FAQ>`__ efforts, memory
   1185footprint is critically important on single-CPU systems with
   1186non-preemptible (``CONFIG_PREEMPTION=n``) kernels, and thus `tiny
   1187RCU <https://lore.kernel.org/r/20090113221724.GA15307@linux.vnet.ibm.com>`__
   1188was born. Josh Triplett has since taken over the small-memory banner
   1189with his `Linux kernel tinification <https://tiny.wiki.kernel.org/>`__
   1190project, which resulted in `SRCU <Sleepable RCU_>`__ becoming optional
   1191for those kernels not needing it.
   1192
   1193The remaining performance requirements are, for the most part,
   1194unsurprising. For example, in keeping with RCU's read-side
   1195specialization, rcu_dereference() should have negligible overhead
   1196(for example, suppression of a few minor compiler optimizations).
   1197Similarly, in non-preemptible environments, rcu_read_lock() and
   1198rcu_read_unlock() should have exactly zero overhead.
   1199
   1200In preemptible environments, in the case where the RCU read-side
   1201critical section was not preempted (as will be the case for the
   1202highest-priority real-time process), rcu_read_lock() and
   1203rcu_read_unlock() should have minimal overhead. In particular, they
   1204should not contain atomic read-modify-write operations, memory-barrier
   1205instructions, preemption disabling, interrupt disabling, or backwards
   1206branches. However, in the case where the RCU read-side critical section
   1207was preempted, rcu_read_unlock() may acquire spinlocks and disable
   1208interrupts. This is why it is better to nest an RCU read-side critical
   1209section within a preempt-disable region than vice versa, at least in
   1210cases where that critical section is short enough to avoid unduly
   1211degrading real-time latencies.
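
Under that guidance, a short reader would be structured roughly as
follows, with do_quick_lookup() standing in for some suitably short
read-side operation:

   ::

       preempt_disable();
       rcu_read_lock();    /* Reader nested inside the preempt-disable region. */
       do_quick_lookup();  /* Must be short to avoid degrading real-time latency. */
       rcu_read_unlock();
       preempt_enable();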
   1212
   1213The synchronize_rcu() grace-period-wait primitive is optimized for
   1214throughput. It may therefore incur several milliseconds of latency in
   1215addition to the duration of the longest RCU read-side critical section.
   1216On the other hand, multiple concurrent invocations of
   1217synchronize_rcu() are required to use batching optimizations so that
   1218they can be satisfied by a single underlying grace-period-wait
   1219operation. For example, in the Linux kernel, it is not unusual for a
   1220single grace-period-wait operation to serve more than `1,000 separate
   1221invocations <https://www.usenix.org/conference/2004-usenix-annual-technical-conference/making-rcu-safe-deep-sub-millisecond-response>`__
   1222of synchronize_rcu(), thus amortizing the per-invocation overhead
   1223down to nearly zero. However, the grace-period optimization is also
   1224required to avoid measurable degradation of real-time scheduling and
   1225interrupt latencies.
   1226
   1227In some cases, the multi-millisecond synchronize_rcu() latencies are
   1228unacceptable. In these cases, synchronize_rcu_expedited() may be
   1229used instead, reducing the grace-period latency down to a few tens of
   1230microseconds on small systems, at least in cases where the RCU read-side
   1231critical sections are short. There are currently no special latency
   1232requirements for synchronize_rcu_expedited() on large systems, but,
   1233consistent with the empirical nature of the RCU specification, that is
   1234subject to change. However, there most definitely are scalability
   1235requirements: A storm of synchronize_rcu_expedited() invocations on
   12364096 CPUs should at least make reasonable forward progress. In return
   1237for its shorter latencies, synchronize_rcu_expedited() is permitted
   1238to impose modest degradation of real-time latency on non-idle online
   1239CPUs. Here, “modest” means roughly the same latency degradation as a
   1240scheduling-clock interrupt.
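
For illustration, a low-latency variant of the earlier removal example
might look as follows. This is only a sketch, reusing ``gp``, ``gp_lock``,
and ``struct foo`` from the earlier examples, with remove_gp_expedited()
being a name invented here:

   ::

       bool remove_gp_expedited(void)
       {
         struct foo *p;

         spin_lock(&gp_lock);
         p = rcu_access_pointer(gp);
         if (!p) {
           spin_unlock(&gp_lock);
           return false;
         }
         rcu_assign_pointer(gp, NULL);
         spin_unlock(&gp_lock);
         synchronize_rcu_expedited();  /* Tens of microseconds, not milliseconds. */
         kfree(p);
         return true;
       }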
   1241
   1242There are a number of situations where even
   1243synchronize_rcu_expedited()'s reduced grace-period latency is
   1244unacceptable. In these situations, the asynchronous call_rcu() can
   1245be used in place of synchronize_rcu() as follows:
   1246
   1247   ::
   1248
   1249       1 struct foo {
   1250       2   int a;
   1251       3   int b;
   1252       4   struct rcu_head rh;
   1253       5 };
   1254       6
   1255       7 static void remove_gp_cb(struct rcu_head *rhp)
   1256       8 {
   1257       9   struct foo *p = container_of(rhp, struct foo, rh);
   1258      10
   1259      11   kfree(p);
   1260      12 }
   1261      13
   1262      14 bool remove_gp_asynchronous(void)
   1263      15 {
   1264      16   struct foo *p;
   1265      17
   1266      18   spin_lock(&gp_lock);
   1267      19   p = rcu_access_pointer(gp);
   1268      20   if (!p) {
   1269      21     spin_unlock(&gp_lock);
   1270      22     return false;
   1271      23   }
   1272      24   rcu_assign_pointer(gp, NULL);
   1273      25   call_rcu(&p->rh, remove_gp_cb);
   1274      26   spin_unlock(&gp_lock);
   1275      27   return true;
   1276      28 }
   1277
   1278A definition of ``struct foo`` is finally needed, and appears on
   1279lines 1-5. The function remove_gp_cb() is passed to call_rcu()
   1280on line 25, and will be invoked after the end of a subsequent grace
   1281period. This gets the same effect as remove_gp_synchronous(), but
   1282without forcing the updater to wait for a grace period to elapse. The
   1283call_rcu() function may be used in a number of situations where
   1284neither synchronize_rcu() nor synchronize_rcu_expedited() would
   1285be legal, including within preempt-disable code, local_bh_disable()
   1286code, interrupt-disable code, and interrupt handlers. However, even
   1287call_rcu() is illegal within NMI handlers and from idle and offline
   1288CPUs. The callback function (remove_gp_cb() in this case) will be
   1289executed within a softirq (software interrupt) environment within the
   1290Linux kernel, either within a real softirq handler or under the
   1291protection of local_bh_disable(). In both the Linux kernel and in
   1292userspace, it is bad practice to write an RCU callback function that
   1293takes too long. Long-running operations should be relegated to separate
   1294threads or (in the Linux kernel) workqueues.
   1295
   1296+-----------------------------------------------------------------------+
   1297| **Quick Quiz**:                                                       |
   1298+-----------------------------------------------------------------------+
   1299| Why does line 19 use rcu_access_pointer()? After all,                 |
   1300| call_rcu() on line 25 stores into the structure, which would          |
   1301| interact badly with concurrent insertions. Doesn't this mean that     |
   1302| rcu_dereference() is required?                                        |
   1303+-----------------------------------------------------------------------+
   1304| **Answer**:                                                           |
   1305+-----------------------------------------------------------------------+
   1306| Presumably the ``gp_lock`` acquired on line 18 excludes any           |
   1307| changes, including any insertions that rcu_dereference() would        |
   1308| protect against. Therefore, any insertions will be delayed until      |
   1309| after ``gp_lock`` is released on line 26, which in turn means that    |
   1310| rcu_access_pointer() suffices.                                        |
   1311+-----------------------------------------------------------------------+
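
The advice above about keeping RCU callbacks short can be followed by
having the callback simply hand the heavy lifting off to a workqueue, as
in the following sketch. The ``->work`` field, foo_long_cleanup(), and
remove_gp_cb_deferred() are hypothetical additions to the earlier
``struct foo`` example:

   ::

       static void foo_long_cleanup(struct work_struct *work)
       {
         struct foo *p = container_of(work, struct foo, work);

         /* Long-running teardown runs here in process context. */
         kfree(p);
       }

       static void remove_gp_cb_deferred(struct rcu_head *rhp)
       {
         struct foo *p = container_of(rhp, struct foo, rh);

         /* Keep the softirq-context callback short: just queue the work. */
         INIT_WORK(&p->work, foo_long_cleanup);
         queue_work(system_wq, &p->work);
       }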
   1312
   1313However, all that remove_gp_cb() is doing is invoking kfree() on
   1314the data element. This is a common idiom, and is supported by
   1315kfree_rcu(), which allows “fire and forget” operation as shown
   1316below:
   1317
   1318   ::
   1319
   1320       1 struct foo {
   1321       2   int a;
   1322       3   int b;
   1323       4   struct rcu_head rh;
   1324       5 };
   1325       6
   1326       7 bool remove_gp_faf(void)
   1327       8 {
   1328       9   struct foo *p;
   1329      10
   1330      11   spin_lock(&gp_lock);
   1331      12   p = rcu_dereference(gp);
   1332      13   if (!p) {
   1333      14     spin_unlock(&gp_lock);
   1334      15     return false;
   1335      16   }
   1336      17   rcu_assign_pointer(gp, NULL);
   1337      18   kfree_rcu(p, rh);
   1338      19   spin_unlock(&gp_lock);
   1339      20   return true;
   1340      21 }
   1341
   1342Note that remove_gp_faf() simply invokes kfree_rcu() and
   1343proceeds, without any need to pay any further attention to the
   1344subsequent grace period and kfree(). It is permissible to invoke
   1345kfree_rcu() from the same environments as for call_rcu().
   1346Interestingly enough, DYNIX/ptx had the equivalents of call_rcu()
   1347and kfree_rcu(), but not synchronize_rcu(). This was due to the
   1348fact that RCU was not heavily used within DYNIX/ptx, so the very few
   1349places that needed something like synchronize_rcu() simply
   1350open-coded it.
   1351
   1352+-----------------------------------------------------------------------+
   1353| **Quick Quiz**:                                                       |
   1354+-----------------------------------------------------------------------+
   1355| Earlier it was claimed that call_rcu() and kfree_rcu()                |
   1356| allowed updaters to avoid being blocked by readers. But how can that  |
   1357| be correct, given that the invocation of the callback and the freeing |
   1358| of the memory (respectively) must still wait for a grace period to    |
   1359| elapse?                                                               |
   1360+-----------------------------------------------------------------------+
   1361| **Answer**:                                                           |
   1362+-----------------------------------------------------------------------+
   1363| We could define things this way, but keep in mind that this sort of   |
   1364| definition would say that updates in garbage-collected languages      |
   1365| cannot complete until the next time the garbage collector runs, which |
   1366| does not seem at all reasonable. The key point is that in most cases, |
   1367| an updater using either call_rcu() or kfree_rcu() can proceed         |
   1368| to the next update as soon as it has invoked call_rcu() or            |
   1369| kfree_rcu(), without having to wait for a subsequent grace            |
   1370| period.                                                               |
   1371+-----------------------------------------------------------------------+
   1372
   1373But what if the updater must wait for the completion of code to be
   1374executed after the end of the grace period, but has other tasks that can
   1375be carried out in the meantime? The polling-style
   1376get_state_synchronize_rcu() and cond_synchronize_rcu() functions
   1377may be used for this purpose, as shown below:
   1378
   1379   ::
   1380
   1381       1 bool remove_gp_poll(void)
   1382       2 {
   1383       3   struct foo *p;
   1384       4   unsigned long s;
   1385       5
   1386       6   spin_lock(&gp_lock);
   1387       7   p = rcu_access_pointer(gp);
   1388       8   if (!p) {
   1389       9     spin_unlock(&gp_lock);
   1390      10     return false;
   1391      11   }
   1392      12   rcu_assign_pointer(gp, NULL);
   1393      13   spin_unlock(&gp_lock);
   1394      14   s = get_state_synchronize_rcu();
   1395      15   do_something_while_waiting();
   1396      16   cond_synchronize_rcu(s);
   1397      17   kfree(p);
   1398      18   return true;
   1399      19 }
   1400
   1401On line 14, get_state_synchronize_rcu() obtains a “cookie” from RCU,
   1402then line 15 carries out other tasks, and finally, line 16 returns
   1403immediately if a grace period has elapsed in the meantime, but otherwise
   1404waits as required. The need for get_state_synchronize_rcu() and
   1405cond_synchronize_rcu() has appeared quite recently, so it is too
   1406early to tell whether they will stand the test of time.
   1407
   1408RCU thus provides a range of tools to allow updaters to strike the
   1409required tradeoff between latency, flexibility and CPU overhead.
   1410
   1411Forward Progress
   1412~~~~~~~~~~~~~~~~
   1413
   1414In theory, delaying grace-period completion and callback invocation is
   1415harmless. In practice, not only are memory sizes finite but also
   1416callbacks sometimes do wakeups, and sufficiently deferred wakeups can be
   1417difficult to distinguish from system hangs. Therefore, RCU must provide
   1418a number of mechanisms to promote forward progress.
   1419
   1420These mechanisms are not foolproof, nor can they be. For one simple
   1421example, an infinite loop in an RCU read-side critical section must by
   1422definition prevent later grace periods from ever completing. For a more
   1423involved example, consider a 64-CPU system built with
   1424``CONFIG_RCU_NOCB_CPU=y`` and booted with ``rcu_nocbs=1-63``, where
   1425CPUs 1 through 63 spin in tight loops that invoke call_rcu(). Even
   1426if these tight loops also contain calls to cond_resched() (thus
   1427allowing grace periods to complete), CPU 0 simply will not be able to
   1428invoke callbacks as fast as the other 63 CPUs can register them, at
   1429least not until the system runs out of memory. In both of these
   1430examples, the Spiderman principle applies: With great power comes great
   1431responsibility. However, short of this level of abuse, RCU is required
   1432to ensure timely completion of grace periods and timely invocation of
   1433callbacks.
   1434
   1435RCU takes the following steps to encourage timely completion of grace
   1436periods:
   1437
   1438#. If a grace period fails to complete within 100 milliseconds, RCU
   1439   causes future invocations of cond_resched() on the holdout CPUs
   1440   to provide an RCU quiescent state. RCU also causes those CPUs'
   1441   need_resched() invocations to return ``true``, but only after the
   1442   corresponding CPU's next scheduling-clock interrupt.
   1443#. CPUs mentioned in the ``nohz_full`` kernel boot parameter can run
   1444   indefinitely in the kernel without scheduling-clock interrupts, which
   1445   defeats the above need_resched() stratagem. RCU will therefore
   1446   invoke resched_cpu() on any ``nohz_full`` CPUs still holding out
   1447   after 109 milliseconds.
   1448#. In kernels built with ``CONFIG_RCU_BOOST=y``, if a given task that
   1449   has been preempted within an RCU read-side critical section is
   1450   holding out for more than 500 milliseconds, RCU will resort to
   1451   priority boosting.
   1452#. If a CPU is still holding out 10 seconds into the grace period, RCU
   1453   will invoke resched_cpu() on it regardless of its ``nohz_full``
   1454   state.
   1455
   1456The above values are defaults for systems running with ``HZ=1000``. They
   1457will vary as the value of ``HZ`` varies, and can also be changed using
   1458the relevant Kconfig options and kernel boot parameters. RCU currently
   1459does not do much sanity checking of these parameters, so please use
   1460caution when changing them. Note that these forward-progress measures
   1461are provided only for RCU, not for `SRCU <Sleepable RCU_>`__ or `Tasks
   1462RCU`_.
   1463
   1464RCU takes the following steps in call_rcu() to encourage timely
   1465invocation of callbacks when any given non-\ ``rcu_nocbs`` CPU has
   146610,000 callbacks, or has 10,000 more callbacks than it had the last time
   1467encouragement was provided:
   1468
   1469#. Starts a grace period, if one is not already in progress.
   1470#. Forces immediate checking for quiescent states, rather than waiting
   1471   for three milliseconds to have elapsed since the beginning of the
   1472   grace period.
   1473#. Immediately tags the CPU's callbacks with their grace period
   1474   completion numbers, rather than waiting for the ``RCU_SOFTIRQ``
   1475   handler to get around to it.
   1476#. Lifts callback-execution batch limits, which speeds up callback
   1477   invocation at the expense of degrading realtime response.
   1478
   1479Again, these are default values when running at ``HZ=1000``, and can be
   1480overridden. Again, these forward-progress measures are provided only for
   1481RCU, not for `SRCU <Sleepable RCU_>`__ or `Tasks
   1482RCU`_. Even for RCU, callback-invocation forward
   1483progress for ``rcu_nocbs`` CPUs is much less well-developed, in part
   1484because workloads benefiting from ``rcu_nocbs`` CPUs tend to invoke
   1485call_rcu() relatively infrequently. If workloads emerge that need
   1486both ``rcu_nocbs`` CPUs and high call_rcu() invocation rates, then
   1487additional forward-progress work will be required.
   1488
   1489Composability
   1490~~~~~~~~~~~~~
   1491
   1492Composability has received much attention in recent years, perhaps in
   1493part due to the collision of multicore hardware with object-oriented
   1494techniques designed in single-threaded environments for single-threaded
   1495use. And in theory, RCU read-side critical sections may be composed, and
   1496in fact may be nested arbitrarily deeply. In practice, as with all
   1497real-world implementations of composable constructs, there are
   1498limitations.
   1499
   1500Implementations of RCU for which rcu_read_lock() and
   1501rcu_read_unlock() generate no code, such as Linux-kernel RCU when
   1502``CONFIG_PREEMPTION=n``, can be nested arbitrarily deeply. After all, there
   1503is no overhead. Except that if all these instances of
   1504rcu_read_lock() and rcu_read_unlock() are visible to the
   1505compiler, compilation will eventually fail due to exhausting memory,
   1506mass storage, or user patience, whichever comes first. If the nesting is
   1507not visible to the compiler, as is the case with mutually recursive
   1508functions each in its own translation unit, stack overflow will result.
   1509If the nesting takes the form of loops, perhaps in the guise of tail
   1510recursion, either the control variable will overflow or (in the Linux
   1511kernel) you will get an RCU CPU stall warning. Nevertheless, this class
   1512of RCU implementations is one of the most composable constructs in
   1513existence.
   1514
   1515RCU implementations that explicitly track nesting depth are limited by
   1516the nesting-depth counter. For example, the Linux kernel's preemptible
   1517RCU limits nesting to ``INT_MAX``. This should suffice for almost all
   1518practical purposes. That said, a consecutive pair of RCU read-side
   1519critical sections between which there is an operation that waits for a
   1520grace period cannot be enclosed in another RCU read-side critical
   1521section. This is because it is not legal to wait for a grace period
   1522within an RCU read-side critical section: To do so would result either
   1523in deadlock or in RCU implicitly splitting the enclosing RCU read-side
   1524critical section, neither of which is conducive to a long-lived and
   1525prosperous kernel.
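
For concreteness, the forbidden operation at the heart of this
restriction looks like the following sketch, in which do_something() and
do_something_else() are arbitrary placeholder functions:

   ::

       rcu_read_lock();
       do_something();
       synchronize_rcu();  /* BUG: Waits for a grace period within a reader. */
       do_something_else();
       rcu_read_unlock();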
   1526
   1527It is worth noting that RCU is not alone in limiting composability. For
   1528example, many transactional-memory implementations prohibit composing a
   1529pair of transactions separated by an irrevocable operation (for example,
   1530a network receive operation). For another example, lock-based critical
   1531sections can be composed surprisingly freely, but only if deadlock is
   1532avoided.
   1533
   1534In short, although RCU read-side critical sections are highly
   1535composable, care is required in some situations, just as is the case for
   1536any other composable synchronization mechanism.
   1537
   1538Corner Cases
   1539~~~~~~~~~~~~
   1540
   1541A given RCU workload might have an endless and intense stream of RCU
   1542read-side critical sections, perhaps even so intense that there was
   1543never a point in time during which there was not at least one RCU
   1544read-side critical section in flight. RCU cannot allow this situation to
   1545block grace periods: As long as all the RCU read-side critical sections
   1546are finite, grace periods must also be finite.
   1547
   1548That said, preemptible RCU implementations could potentially result in
   1549RCU read-side critical sections being preempted for long durations,
   1550which has the effect of creating a long-duration RCU read-side critical
   1551section. This situation can arise only in heavily loaded systems, but
   1552systems using real-time priorities are of course more vulnerable.
   1553Therefore, RCU priority boosting is provided to help deal with this
   1554case. That said, the exact requirements on RCU priority boosting will
   1555likely evolve as more experience accumulates.
   1556
   1557Other workloads might have very high update rates. Although one can
   1558argue that such workloads should instead use something other than RCU,
   1559the fact remains that RCU must handle such workloads gracefully. This
   1560requirement is another factor driving batching of grace periods, but it
   1561is also the driving force behind the checks for large numbers of queued
   1562RCU callbacks in the call_rcu() code path. Finally, high update
   1563rates should not delay RCU read-side critical sections, although some
   1564small read-side delays can occur when using
   1565synchronize_rcu_expedited(), courtesy of this function's use of
   1566smp_call_function_single().
   1567
   1568Although all three of these corner cases were understood in the early
   15691990s, a simple user-level test consisting of ``close(open(path))`` in a
   1570tight loop in the early 2000s suddenly provided a much deeper
   1571appreciation of the high-update-rate corner case. This test also
   1572motivated addition of some RCU code to react to high update rates, for
   1573example, if a given CPU finds itself with more than 10,000 RCU callbacks
   1574queued, it will cause RCU to take evasive action by more aggressively
   1575starting grace periods and more aggressively forcing completion of
   1576grace-period processing. This evasive action causes the grace period to
   1577complete more quickly, but at the cost of restricting RCU's batching
   1578optimizations, thus increasing the CPU overhead incurred by that grace
   1579period.
   1580
   1581Software-Engineering Requirements
   1582---------------------------------
   1583
   1584Between Murphy's Law and “To err is human”, it is necessary to guard
   1585against mishaps and misuse:
   1586
   1587#. It is all too easy to forget to use rcu_read_lock() everywhere
   1588   that it is needed, so kernels built with ``CONFIG_PROVE_RCU=y`` will
   1589   splat if rcu_dereference() is used outside of an RCU read-side
   1590   critical section. Update-side code can use
   1591   rcu_dereference_protected(), which takes a `lockdep
   1592   expression <https://lwn.net/Articles/371986/>`__ to indicate what is
   1593   providing the protection. If the indicated protection is not
   1594   provided, a lockdep splat is emitted.
   1595   Code shared between readers and updaters can use
   1596   rcu_dereference_check(), which also takes a lockdep expression,
   1597   and emits a lockdep splat if neither rcu_read_lock() nor the
   1598   indicated protection is in place. In addition,
   1599   rcu_dereference_raw() is used in those (hopefully rare) cases
   1600   where the required protection cannot be easily described. Finally,
   1601   rcu_read_lock_held() is provided to allow a function to verify
   1602   that it has been invoked within an RCU read-side critical section. I
   1603   was made aware of this set of requirements shortly after Thomas
   1604   Gleixner audited a number of RCU uses. (See the sketch after this list.)
   1605#. A given function might wish to check for RCU-related preconditions
   1606   upon entry, before using any other RCU API. The
   1607   rcu_lockdep_assert() macro does this job, asserting the expression in
   1608   kernels having lockdep enabled and doing nothing otherwise.
   1609#. It is also easy to forget to use rcu_assign_pointer() and
   1610   rcu_dereference(), perhaps (incorrectly) substituting a simple
   1611   assignment. To catch this sort of error, a given RCU-protected
   1612   pointer may be tagged with ``__rcu``, after which sparse will
   1613   complain about simple-assignment accesses to that pointer. Arnd
   1614   Bergmann made me aware of this requirement, and also supplied the
   1615   needed `patch series <https://lwn.net/Articles/376011/>`__.
   1616#. Kernels built with ``CONFIG_DEBUG_OBJECTS_RCU_HEAD=y`` will splat if
   1617   a data element is passed to call_rcu() twice in a row, without a
   1618   grace period in between. (This error is similar to a double free.)
   1619   The corresponding ``rcu_head`` structures that are dynamically
   1620   allocated are automatically tracked, but ``rcu_head`` structures
   1621   allocated on the stack must be initialized with
   1622   init_rcu_head_on_stack() and cleaned up with
   1623   destroy_rcu_head_on_stack(). Similarly, statically allocated
   1624   non-stack ``rcu_head`` structures must be initialized with
   1625   init_rcu_head() and cleaned up with destroy_rcu_head().
   1626   Mathieu Desnoyers made me aware of this requirement, and also
   1627   supplied the needed
   1628   `patch <https://lore.kernel.org/r/20100319013024.GA28456@Krystal>`__.
   1629#. An infinite loop in an RCU read-side critical section will eventually
   1630   trigger an RCU CPU stall warning splat, with the duration of
   1631   “eventually” being controlled by the ``RCU_CPU_STALL_TIMEOUT``
   1632   ``Kconfig`` option, or, alternatively, by the
   1633   ``rcupdate.rcu_cpu_stall_timeout`` boot/sysfs parameter. However, RCU
   1634   is not obligated to produce this splat unless there is a grace period
   1635   waiting on that particular RCU read-side critical section.
   1636
   1637   Some extreme workloads might intentionally delay RCU grace periods,
   1638   and systems running those workloads can be booted with
   1639   ``rcupdate.rcu_cpu_stall_suppress`` to suppress the splats. This
   1640   kernel parameter may also be set via ``sysfs``. Furthermore, RCU CPU
   1641   stall warnings are counter-productive during sysrq dumps and during
   1642   panics. RCU therefore supplies the rcu_sysrq_start() and
   1643   rcu_sysrq_end() API members to be called before and after long
   1644   sysrq dumps. RCU also supplies the rcu_panic() notifier that is
   1645   automatically invoked at the beginning of a panic to suppress further
   1646   RCU CPU stall warnings.
   1647
   1648   This requirement made itself known in the early 1990s, pretty much
   1649   the first time that it was necessary to debug a CPU stall. That said,
   1650   the initial implementation in DYNIX/ptx was quite generic in
   1651   comparison with that of Linux.
   1652
   1653#. Although it would be very good to detect pointers leaking out of RCU
   1654   read-side critical sections, there is currently no good way of doing
   1655   this. One complication is the need to distinguish between pointers
   1656   leaking and pointers that have been handed off from RCU to some other
   1657   synchronization mechanism, for example, reference counting.
   1658#. In kernels built with ``CONFIG_RCU_TRACE=y``, RCU-related information
   1659   is provided via event tracing.
   1660#. Open-coded use of rcu_assign_pointer() and rcu_dereference()
   1661   to create typical linked data structures can be surprisingly
   1662   error-prone. Therefore, RCU-protected `linked
   1663   lists <https://lwn.net/Articles/609973/#RCU%20List%20APIs>`__ and,
   1664   more recently, RCU-protected `hash
   1665   tables <https://lwn.net/Articles/612100/>`__ are available. Many
   1666   other special-purpose RCU-protected data structures are available in
   1667   the Linux kernel and the userspace RCU library.
   1668#. Some linked structures are created at compile time, but still require
   1669   ``__rcu`` checking. The RCU_POINTER_INITIALIZER() macro serves
   1670   this purpose.
   1671#. It is not necessary to use rcu_assign_pointer() when creating
   1672   linked structures that are to be published via a single external
   1673   pointer. The RCU_INIT_POINTER() macro is provided for this task.
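
As promised above, here is a brief sketch showing how the ``__rcu`` tag,
rcu_dereference_protected(), rcu_dereference_check(), and
RCU_INIT_POINTER() might be used together. The ``foo_ptr``, ``foo_lock``,
and function names are invented for this example, which reuses
``struct foo`` from the earlier examples:

   ::

       static struct foo __rcu *foo_ptr;  /* sparse checks accesses to foo_ptr. */
       static DEFINE_SPINLOCK(foo_lock);

       /* Update-side access: lockdep verifies that foo_lock is held. */
       static struct foo *foo_get_locked(void)
       {
         return rcu_dereference_protected(foo_ptr, lockdep_is_held(&foo_lock));
       }

       /* Shared access: legal under either rcu_read_lock() or foo_lock. */
       static struct foo *foo_get(void)
       {
         return rcu_dereference_check(foo_ptr, lockdep_is_held(&foo_lock));
       }

       /* One-time setup before any readers can possibly run. */
       static void foo_setup(struct foo *p)
       {
         RCU_INIT_POINTER(foo_ptr, p);
       }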
   1674
   1675This is not a hard-and-fast list: RCU's diagnostic capabilities will
   1676continue to be guided by the number and type of usage bugs found in
   1677real-world RCU usage.
   1678
   1679Linux Kernel Complications
   1680--------------------------
   1681
   1682The Linux kernel provides an interesting environment for all kinds of
   1683software, including RCU. Some of the relevant points of interest are as
   1684follows:
   1685
   1686#. `Configuration`_
   1687#. `Firmware Interface`_
   1688#. `Early Boot`_
   1689#. `Interrupts and NMIs`_
   1690#. `Loadable Modules`_
   1691#. `Hotplug CPU`_
   1692#. `Scheduler and RCU`_
   1693#. `Tracing and RCU`_
   1694#. `Accesses to User Memory and RCU`_
   1695#. `Energy Efficiency`_
   1696#. `Scheduling-Clock Interrupts and RCU`_
   1697#. `Memory Efficiency`_
   1698#. `Performance, Scalability, Response Time, and Reliability`_
   1699
   1700This list is probably incomplete, but it does give a feel for the most
   1701notable Linux-kernel complications. Each of the following sections
   1702covers one of the above topics.
   1703
   1704Configuration
   1705~~~~~~~~~~~~~
   1706
   1707RCU's goal is automatic configuration, so that almost nobody needs to
   1708worry about RCU's ``Kconfig`` options. And for almost all users, RCU
   1709does in fact work well “out of the box.”
   1710
   1711However, there are specialized use cases that are handled by kernel boot
   1712parameters and ``Kconfig`` options. Unfortunately, the ``Kconfig``
   1713system will explicitly ask users about new ``Kconfig`` options, which
   1714requires almost all of them be hidden behind a ``CONFIG_RCU_EXPERT``
   1715``Kconfig`` option.
   1716
   1717This all should be quite obvious, but the fact remains that Linus
   1718Torvalds recently had to
   1719`remind <https://lore.kernel.org/r/CA+55aFy4wcCwaL4okTs8wXhGZ5h-ibecy_Meg9C4MNQrUnwMcg@mail.gmail.com>`__
   1720me of this requirement.
   1721
   1722Firmware Interface
   1723~~~~~~~~~~~~~~~~~~
   1724
   1725In many cases, the kernel obtains information about the system from the
   1726firmware, and sometimes things are lost in translation. Or the
   1727translation is accurate, but the original message is bogus.
   1728
   1729For example, some systems' firmware overreports the number of CPUs,
   1730sometimes by a large factor. If RCU naively believed the firmware, as it
   1731used to do, it would create too many per-CPU kthreads. Although the
   1732resulting system will still run correctly, the extra kthreads needlessly
   1733consume memory and can cause confusion when they show up in ``ps``
   1734listings.
   1735
   1736RCU must therefore wait for a given CPU to actually come online before
   1737it can allow itself to believe that the CPU actually exists. The
   1738resulting “ghost CPUs” (which are never going to come online) cause a
   1739number of `interesting
   1740complications <https://paulmck.livejournal.com/37494.html>`__.
   1741
   1742Early Boot
   1743~~~~~~~~~~
   1744
   1745The Linux kernel's boot sequence is an interesting process, and RCU is
   1746used early, even before rcu_init() is invoked. In fact, a number of
   1747RCU's primitives can be used as soon as the initial task's
   1748``task_struct`` is available and the boot CPU's per-CPU variables are
   1749set up. The read-side primitives (rcu_read_lock(),
   1750rcu_read_unlock(), rcu_dereference(), and
   1751rcu_access_pointer()) will operate normally very early on, as will
   1752rcu_assign_pointer().
   1753
   1754Although call_rcu() may be invoked at any time during boot,
   1755callbacks are not guaranteed to be invoked until after all of RCU's
   1756kthreads have been spawned, which occurs at early_initcall() time.
   1757This delay in callback invocation is due to the fact that RCU does not
   1758invoke callbacks until it is fully initialized, and this full
   1759initialization cannot occur until after the scheduler has initialized
   1760itself to the point where RCU can spawn and run its kthreads. In theory,
   1761it would be possible to invoke callbacks earlier; however, this is not a
   1762panacea because there would be severe restrictions on what operations
   1763those callbacks could invoke.
   1764
   1765Perhaps surprisingly, synchronize_rcu() and
   1766synchronize_rcu_expedited() will operate normally during very early
   1767boot, the reason being that there is only one CPU and preemption is
   1768disabled. This means that a call to synchronize_rcu() (or friends)
   1769itself is a quiescent state and thus a grace period, so the early-boot
   1770implementation can be a no-op.
   1771
   1772However, once the scheduler has spawned its first kthread, this early
   1773boot trick fails for synchronize_rcu() (as well as for
   1774synchronize_rcu_expedited()) in ``CONFIG_PREEMPTION=y`` kernels. The
   1775reason is that an RCU read-side critical section might be preempted,
   1776which means that a subsequent synchronize_rcu() really does have to
   1777wait for something, as opposed to simply returning immediately.
   1778Unfortunately, synchronize_rcu() can't do this until all of its
   1779kthreads are spawned, which doesn't happen until early_initcall()
   1780time. But this is no excuse: RCU is nevertheless
   1781required to correctly handle synchronous grace periods during this time
   1782period. Once all of its kthreads are up and running, RCU starts running
   1783normally.
   1784
   1785+-----------------------------------------------------------------------+
   1786| **Quick Quiz**:                                                       |
   1787+-----------------------------------------------------------------------+
   1788| How can RCU possibly handle grace periods before all of its kthreads  |
   1789| have been spawned???                                                  |
   1790+-----------------------------------------------------------------------+
   1791| **Answer**:                                                           |
   1792+-----------------------------------------------------------------------+
   1793| Very carefully!                                                       |
   1794| During the “dead zone” between the time that the scheduler spawns the |
   1795| first task and the time that all of RCU's kthreads have been spawned, |
   1796| all synchronous grace periods are handled by the expedited            |
   1797| grace-period mechanism. At runtime, this expedited mechanism relies   |
   1798| on workqueues, but during the dead zone the requesting task itself    |
   1799| drives the desired expedited grace period. Because dead-zone          |
   1800| execution takes place within task context, everything works. Once the |
   1801| dead zone ends, expedited grace periods go back to using workqueues,  |
   1802| as is required to avoid problems that would otherwise occur when a    |
   1803| user task received a POSIX signal while driving an expedited grace    |
   1804| period.                                                               |
   1805|                                                                       |
   1806| And yes, this does mean that it is unhelpful to send POSIX signals to |
   1807| random tasks between the time that the scheduler spawns its first     |
   1808| kthread and the time that RCU's kthreads have all been spawned. If    |
   1809| there ever turns out to be a good reason for sending POSIX signals    |
   1810| during that time, appropriate adjustments will be made. (If it turns  |
   1811| out that POSIX signals are sent during this time for no good reason,  |
   1812| other adjustments will be made, appropriate or otherwise.)            |
   1813+-----------------------------------------------------------------------+
   1814
   1815I learned of these boot-time requirements as a result of a series of
   1816system hangs.
   1817
   1818Interrupts and NMIs
   1819~~~~~~~~~~~~~~~~~~~
   1820
   1821The Linux kernel has interrupts, and RCU read-side critical sections are
   1822legal within interrupt handlers and within interrupt-disabled regions of
   1823code, as are invocations of call_rcu().
   1824
   1825Some Linux-kernel architectures can enter an interrupt handler from
   1826non-idle process context, and then just never leave it, instead
   1827stealthily transitioning back to process context. This trick is
   1828sometimes used to invoke system calls from inside the kernel. These
   1829“half-interrupts” mean that RCU has to be very careful about how it
   1830counts interrupt nesting levels. I learned of this requirement the hard
   1831way during a rewrite of RCU's dyntick-idle code.
   1832
   1833The Linux kernel has non-maskable interrupts (NMIs), and RCU read-side
   1834critical sections are legal within NMI handlers. Thankfully, RCU
   1835update-side primitives, including call_rcu(), are prohibited within
   1836NMI handlers.
   1837
   1838The name notwithstanding, some Linux-kernel architectures can have
   1839nested NMIs, which RCU must handle correctly. Andy Lutomirski `surprised
   1840me <https://lore.kernel.org/r/CALCETrXLq1y7e_dKFPgou-FKHB6Pu-r8+t-6Ds+8=va7anBWDA@mail.gmail.com>`__
   1841with this requirement; he also kindly surprised me with `an
   1842algorithm <https://lore.kernel.org/r/CALCETrXSY9JpW3uE6H8WYk81sg56qasA2aqmjMPsq5dOtzso=g@mail.gmail.com>`__
   1843that meets this requirement.
   1844
   1845Furthermore, NMI handlers can be interrupted by what appear to RCU to be
   1846normal interrupts. One way that this can happen is for code that
   1847directly invokes rcu_irq_enter() and rcu_irq_exit() to be called
   1848from an NMI handler. This astonishing fact of life prompted the current
   1849code structure, which has rcu_irq_enter() invoking
   1850rcu_nmi_enter() and rcu_irq_exit() invoking rcu_nmi_exit().
   1851And yes, I also learned of this requirement the hard way.
   1852
   1853Loadable Modules
   1854~~~~~~~~~~~~~~~~
   1855
   1856The Linux kernel has loadable modules, and these modules can also be
   1857unloaded. After a given module has been unloaded, any attempt to call
   1858one of its functions results in a segmentation fault. The module-unload
   1859functions must therefore cancel any delayed calls to loadable-module
   1860functions, for example, any outstanding mod_timer() must be dealt
   1861with via del_timer_sync() or similar.
   1862
   1863Unfortunately, there is no way to cancel an RCU callback; once you
   1864invoke call_rcu(), the callback function is eventually going to be
   1865invoked, unless the system goes down first. Because it is normally
   1866considered socially irresponsible to crash the system in response to a
   1867module unload request, we need some other way to deal with in-flight RCU
   1868callbacks.
   1869
   1870RCU therefore provides rcu_barrier(), which waits until all
   1871in-flight RCU callbacks have been invoked. If a module uses
   1872call_rcu(), its exit function should therefore prevent any future
   1873invocation of call_rcu(), then invoke rcu_barrier(). In theory,
   1874the underlying module-unload code could invoke rcu_barrier()
   1875unconditionally, but in practice this would incur unacceptable
   1876latencies.
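
A module's exit handler might therefore follow the pattern sketched
below, where foo_stop_new_callbacks() stands in for whatever mechanism
the module uses to prevent new call_rcu() invocations:

   ::

       static void __exit foo_exit(void)
       {
         foo_stop_new_callbacks();  /* No new call_rcu() after this point. */
         rcu_barrier();             /* Wait for already-posted callbacks. */
         /* The module's code and data may now safely be unloaded. */
       }
       module_exit(foo_exit);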
   1877
   1878Nikita Danilov noted this requirement for an analogous
   1879filesystem-unmount situation, and Dipankar Sarma incorporated
   1880rcu_barrier() into RCU. The need for rcu_barrier() for module
   1881unloading became apparent later.
   1882
   1883.. important::
   1884
   1885   The rcu_barrier() function is not, repeat,
   1886   *not*, obligated to wait for a grace period. It is instead only required
   1887   to wait for RCU callbacks that have already been posted. Therefore, if
   1888   there are no RCU callbacks posted anywhere in the system,
   1889   rcu_barrier() is within its rights to return immediately. Even if
   1890   there are callbacks posted, rcu_barrier() does not necessarily need
   1891   to wait for a grace period.
   1892
   1893+-----------------------------------------------------------------------+
   1894| **Quick Quiz**:                                                       |
   1895+-----------------------------------------------------------------------+
   1896| Wait a minute! Each RCU callback must wait for a grace period to      |
   1897| complete, and rcu_barrier() must wait for each pre-existing           |
   1898| callback to be invoked. Doesn't rcu_barrier() therefore need to       |
   1899| wait for a full grace period if there is even one callback posted     |
   1900| anywhere in the system?                                               |
   1901+-----------------------------------------------------------------------+
   1902| **Answer**:                                                           |
   1903+-----------------------------------------------------------------------+
   1904| Absolutely not!!!                                                     |
   1905| Yes, each RCU callback must wait for a grace period to complete, but  |
   1906| it might well be partly (or even completely) finished waiting by the  |
   1907| time rcu_barrier() is invoked. In that case, rcu_barrier()            |
   1908| need only wait for the remaining portion of the grace period to       |
   1909| elapse. So even if there are quite a few callbacks posted,            |
   1910| rcu_barrier() might well return quite quickly.                        |
   1911|                                                                       |
   1912| So if you need to wait for a grace period as well as for all          |
   1913| pre-existing callbacks, you will need to invoke both                  |
   1914| synchronize_rcu() and rcu_barrier(). If latency is a concern,         |
   1915| you can always use workqueues to invoke them concurrently.            |
   1916+-----------------------------------------------------------------------+
   1917
   1918Hotplug CPU
   1919~~~~~~~~~~~
   1920
   1921The Linux kernel supports CPU hotplug, which means that CPUs can come
   1922and go. It is of course illegal to use any RCU API member from an
   1923offline CPU, with the exception of `SRCU <Sleepable RCU_>`__ read-side
   1924critical sections. This requirement was present from day one in
   1925DYNIX/ptx, but on the other hand, the Linux kernel's CPU-hotplug
   1926implementation is “interesting.”
   1927
   1928The Linux-kernel CPU-hotplug implementation has notifiers that are used
   1929to allow the various kernel subsystems (including RCU) to respond
   1930appropriately to a given CPU-hotplug operation. Most RCU operations may
   1931be invoked from CPU-hotplug notifiers, including even synchronous
   1932grace-period operations such as synchronize_rcu() and
   1933synchronize_rcu_expedited().  However, these synchronous operations
   1934do block and therefore cannot be invoked from notifiers that execute via
   1935stop_machine(), specifically those between the ``CPUHP_AP_OFFLINE``
   1936and ``CPUHP_AP_ONLINE`` states.
   1937
   1938In addition, all-callback-wait operations such as rcu_barrier() may
   1939not be invoked from any CPU-hotplug notifier.  This restriction is due
   1940to the fact that there are phases of CPU-hotplug operations where the
   1941outgoing CPU's callbacks will not be invoked until after the CPU-hotplug
   1942operation ends, which could also result in deadlock. Furthermore,
   1943rcu_barrier() blocks CPU-hotplug operations during its execution,
   1944which results in another type of deadlock when invoked from a CPU-hotplug
   1945notifier.
   1946
   1947Finally, RCU must avoid deadlocks due to interaction between hotplug,
   1948timers and grace period processing. It does so by maintaining its own set
   1949of books that duplicate the centrally maintained ``cpu_online_mask``,
   1950and also by reporting quiescent states explicitly when a CPU goes
   1951offline.  This explicit reporting of quiescent states avoids any need
   1952for the force-quiescent-state loop (FQS) to report quiescent states for
   1953offline CPUs.  However, as a debugging measure, the FQS loop does splat
   1954if offline CPUs block an RCU grace period for too long.
   1955
   1956An offline CPU's quiescent state will be reported either:
   1957
   19581.  As the CPU goes offline using RCU's hotplug notifier (rcu_report_dead()).
   19592.  When grace period initialization (rcu_gp_init()) detects a
   1960    race either with CPU offlining or with a task unblocking on a leaf
   1961    ``rcu_node`` structure whose CPUs are all offline.
   1962
   1963The CPU-online path (rcu_cpu_starting()) should never need to report
   1964a quiescent state for an offline CPU.  However, as a debugging measure,
   1965it does emit a warning if a quiescent state was not already reported
   1966for that CPU.
   1967
   1968During the checking/modification of RCU's hotplug bookkeeping, the
   1969corresponding CPU's leaf node lock is held. This avoids race conditions
   1970between RCU's hotplug notifier hooks, the grace period initialization
   1971code, and the FQS loop, all of which refer to or modify this bookkeeping.
   1972
   1973Scheduler and RCU
   1974~~~~~~~~~~~~~~~~~
   1975
   1976RCU makes use of kthreads, and it is necessary to avoid excessive CPU-time
   1977accumulation by these kthreads. This requirement was no surprise, but
   1978RCU's violation of it when running context-switch-heavy workloads when
   1979built with ``CONFIG_NO_HZ_FULL=y`` `did come as a surprise
   1980[PDF] <http://www.rdrop.com/users/paulmck/scalability/paper/BareMetal.2015.01.15b.pdf>`__.
   1981RCU has made good progress towards meeting this requirement, even for
   1982context-switch-heavy ``CONFIG_NO_HZ_FULL=y`` workloads, but there is
   1983room for further improvement.
   1984
   1985There is no longer any prohibition against holding any of the
   1986scheduler's runqueue or priority-inheritance spinlocks across an
   1987rcu_read_unlock(), even if interrupts and preemption were enabled
   1988somewhere within the corresponding RCU read-side critical section.
   1989Therefore, it is now perfectly legal to execute rcu_read_lock()
   1990with preemption enabled, acquire one of the scheduler locks, and hold
   1991that lock across the matching rcu_read_unlock().
   1992
   1993Similarly, the RCU flavor consolidation has removed the need for negative
   1994nesting.  The fact that interrupt-disabled regions of code act as RCU
   1995read-side critical sections implicitly avoids earlier issues that used
   1996to result in destructive recursion via an interrupt handler's use of RCU.
   1997
   1998Tracing and RCU
   1999~~~~~~~~~~~~~~~
   2000
   2001It is possible to use tracing on RCU code, but tracing itself uses RCU.
   2002For this reason, rcu_dereference_raw_check() is provided for use
   2003by tracing, which avoids the destructive recursion that could otherwise
   2004ensue. This API is also used by virtualization in some architectures,
   2005where RCU readers execute in environments in which tracing cannot be
   2006used. The tracing folks both located the requirement and provided the
   2007needed fix, so this surprise requirement was relatively painless.
   2008
   2009Accesses to User Memory and RCU
   2010~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   2011
   2012The kernel needs to access user-space memory, for example, to access data
   2013referenced by system-call parameters.  The get_user() macro does this job.
   2014
   2015However, user-space memory might well be paged out, which means that
   2016get_user() might well page-fault and thus block while waiting for the
   2017resulting I/O to complete.  It would be a very bad thing for the compiler to
   2018reorder a get_user() invocation into an RCU read-side critical section.
   2019
   2020For example, suppose that the source code looked like this:
   2021
   2022  ::
   2023
   2024       1 rcu_read_lock();
   2025       2 p = rcu_dereference(gp);
   2026       3 v = p->value;
   2027       4 rcu_read_unlock();
   2028       5 get_user(user_v, user_p);
   2029       6 do_something_with(v, user_v);
   2030
   2031The compiler must not be permitted to transform this source code into
   2032the following:
   2033
   2034  ::
   2035
   2036       1 rcu_read_lock();
   2037       2 p = rcu_dereference(gp);
   2038       3 get_user(user_v, user_p); // BUG: POSSIBLE PAGE FAULT!!!
   2039       4 v = p->value;
   2040       5 rcu_read_unlock();
   2041       6 do_something_with(v, user_v);
   2042
   2043If the compiler did make this transformation in a ``CONFIG_PREEMPTION=n`` kernel
   2044build, and if get_user() did page fault, the result would be a quiescent
   2045state in the middle of an RCU read-side critical section.  This misplaced
   2046quiescent state could result in line 4 being a use-after-free access,
   2047which could be bad for your kernel's actuarial statistics.  Similar examples
   2048can be constructed with the call to get_user() preceding the
   2049rcu_read_lock().
   2050
   2051Unfortunately, get_user() doesn't have any particular ordering properties,
   2052and in some architectures the underlying ``asm`` isn't even marked
   2053``volatile``.  And even if it were marked ``volatile``, the above access to
   2054``p->value`` is not volatile, so the compiler would not have any reason to keep
   2055those two accesses in order.
   2056
   2057Therefore, the Linux-kernel definitions of rcu_read_lock() and
   2058rcu_read_unlock() must act as compiler barriers, at least for outermost
   2059instances of rcu_read_lock() and rcu_read_unlock() within a nested set
   2060of RCU read-side critical sections.
   2061
   2062Energy Efficiency
   2063~~~~~~~~~~~~~~~~~
   2064
   2065Interrupting idle CPUs is considered socially unacceptable, especially
   2066by people with battery-powered embedded systems. RCU therefore conserves
   2067energy by detecting which CPUs are idle, including tracking CPUs that
   2068have been interrupted from idle. This is a large part of the
   2069energy-efficiency requirement, so I learned of this via an irate phone
   2070call.
   2071
   2072Because RCU avoids interrupting idle CPUs, it is illegal to execute an
   2073RCU read-side critical section on an idle CPU. (Kernels built with
   2074``CONFIG_PROVE_RCU=y`` will splat if you try it.) The RCU_NONIDLE()
macro and ``_rcuidle`` event tracing are provided to work around this
   2076restriction. In addition, rcu_is_watching() may be used to test
   2077whether or not it is currently legal to run RCU read-side critical
   2078sections on this CPU. I learned of the need for diagnostics on the one
   2079hand and RCU_NONIDLE() on the other while inspecting idle-loop code.
   2080Steven Rostedt supplied ``_rcuidle`` event tracing, which is used quite
   2081heavily in the idle loop. However, there are some restrictions on the
   2082code placed within RCU_NONIDLE():
   2083
   2084#. Blocking is prohibited. In practice, this is not a serious
   2085   restriction given that idle tasks are prohibited from blocking to
   2086   begin with.
#. Although RCU_NONIDLE() invocations may be nested, they cannot nest
   indefinitely deeply. However, given that they can be nested on the
   order of a million deep, even on 32-bit systems, this should not be a
   serious restriction. This nesting limit would probably be reached
   long after the compiler OOMed or the stack overflowed.
   2092#. Any code path that enters RCU_NONIDLE() must sequence out of that
   2093   same RCU_NONIDLE(). For example, the following is grossly
   2094   illegal:
   2095
   2096      ::
   2097
   2098	  1     RCU_NONIDLE({
   2099	  2       do_something();
   2100	  3       goto bad_idea;  /* BUG!!! */
   2101	  4       do_something_else();});
   2102	  5   bad_idea:
   2103
   2104
   2105   It is just as illegal to transfer control into the middle of
   2106   RCU_NONIDLE()'s argument. Yes, in theory, you could transfer in
   2107   as long as you also transferred out, but in practice you could also
   2108   expect to get sharply worded review comments.
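
Pulling these points together, a minimal sketch of code running near the
idle loop might look as follows, where do_idle_diagnostic() is a
hypothetical function containing an RCU read-side critical section:

  ::

       if (rcu_is_watching()) {
               do_idle_diagnostic();               /* RCU readers legal here */
       } else {
               RCU_NONIDLE(do_idle_diagnostic());  /* make RCU watch momentarily */
       }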
   2109
It is similarly socially unacceptable to interrupt a ``nohz_full`` CPU
running in userspace. RCU must therefore track ``nohz_full`` userspace
execution, which means that RCU must be able to sample state at two points
in time and determine whether or not some other CPU spent any of the
intervening time idle and/or executing in userspace.
   2115
   2116These energy-efficiency requirements have proven quite difficult to
understand and to meet; for example, there have been more than five
   2118clean-sheet rewrites of RCU's energy-efficiency code, the last of which
   2119was finally able to demonstrate `real energy savings running on real
   2120hardware
   2121[PDF] <http://www.rdrop.com/users/paulmck/realtime/paper/AMPenergy.2013.04.19a.pdf>`__.
   2122As noted earlier, I learned of many of these requirements via angry
   2123phone calls: Flaming me on the Linux-kernel mailing list was apparently
   2124not sufficient to fully vent their ire at RCU's energy-efficiency bugs!
   2125
   2126Scheduling-Clock Interrupts and RCU
   2127~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   2128
   2129The kernel transitions between in-kernel non-idle execution, userspace
   2130execution, and the idle loop. Depending on kernel configuration, RCU
   2131handles these states differently:
   2132
   2133+-----------------+------------------+------------------+-----------------+
   2134| ``HZ`` Kconfig  | In-Kernel        | Usermode         | Idle            |
   2135+=================+==================+==================+=================+
   2136| ``HZ_PERIODIC`` | Can rely on      | Can rely on      | Can rely on     |
   2137|                 | scheduling-clock | scheduling-clock | RCU's           |
   2138|                 | interrupt.       | interrupt and    | dyntick-idle    |
   2139|                 |                  | its detection    | detection.      |
   2140|                 |                  | of interrupt     |                 |
   2141|                 |                  | from usermode.   |                 |
   2142+-----------------+------------------+------------------+-----------------+
   2143| ``NO_HZ_IDLE``  | Can rely on      | Can rely on      | Can rely on     |
   2144|                 | scheduling-clock | scheduling-clock | RCU's           |
   2145|                 | interrupt.       | interrupt and    | dyntick-idle    |
   2146|                 |                  | its detection    | detection.      |
   2147|                 |                  | of interrupt     |                 |
   2148|                 |                  | from usermode.   |                 |
   2149+-----------------+------------------+------------------+-----------------+
   2150| ``NO_HZ_FULL``  | Can only         | Can rely on      | Can rely on     |
   2151|                 | sometimes rely   | RCU's            | RCU's           |
   2152|                 | on               | dyntick-idle     | dyntick-idle    |
   2153|                 | scheduling-clock | detection.       | detection.      |
   2154|                 | interrupt. In    |                  |                 |
   2155|                 | other cases, it  |                  |                 |
   2156|                 | is necessary to  |                  |                 |
   2157|                 | bound kernel     |                  |                 |
   2158|                 | execution times  |                  |                 |
   2159|                 | and/or use       |                  |                 |
   2160|                 | IPIs.            |                  |                 |
   2161+-----------------+------------------+------------------+-----------------+
   2162
   2163+-----------------------------------------------------------------------+
   2164| **Quick Quiz**:                                                       |
   2165+-----------------------------------------------------------------------+
   2166| Why can't ``NO_HZ_FULL`` in-kernel execution rely on the              |
   2167| scheduling-clock interrupt, just like ``HZ_PERIODIC`` and             |
   2168| ``NO_HZ_IDLE`` do?                                                    |
   2169+-----------------------------------------------------------------------+
   2170| **Answer**:                                                           |
   2171+-----------------------------------------------------------------------+
   2172| Because, as a performance optimization, ``NO_HZ_FULL`` does not       |
   2173| necessarily re-enable the scheduling-clock interrupt on entry to each |
   2174| and every system call.                                                |
   2175+-----------------------------------------------------------------------+
   2176
   2177However, RCU must be reliably informed as to whether any given CPU is
   2178currently in the idle loop, and, for ``NO_HZ_FULL``, also whether that
   2179CPU is executing in usermode, as discussed
`earlier <Energy Efficiency_>`__. RCU also requires that the
   2181scheduling-clock interrupt be enabled when RCU needs it to be:
   2182
   2183#. If a CPU is either idle or executing in usermode, and RCU believes it
   2184   is non-idle, the scheduling-clock tick had better be running.
   2185   Otherwise, you will get RCU CPU stall warnings. Or at best, very long
   2186   (11-second) grace periods, with a pointless IPI waking the CPU from
   2187   time to time.
   2188#. If a CPU is in a portion of the kernel that executes RCU read-side
   2189   critical sections, and RCU believes this CPU to be idle, you will get
   2190   random memory corruption. **DON'T DO THIS!!!**
   2191   This is one reason to test with lockdep, which will complain about
   2192   this sort of thing.
   2193#. If a CPU is in a portion of the kernel that is absolutely positively
   2194   no-joking guaranteed to never execute any RCU read-side critical
   2195   sections, and RCU believes this CPU to be idle, no problem. This
   2196   sort of thing is used by some architectures for light-weight
   2197   exception handlers, which can then avoid the overhead of
   2198   rcu_irq_enter() and rcu_irq_exit() at exception entry and
   2199   exit, respectively. Some go further and avoid the entireties of
   2200   irq_enter() and irq_exit().
   2201   Just make very sure you are running some of your tests with
   2202   ``CONFIG_PROVE_RCU=y``, just in case one of your code paths was in
   2203   fact joking about not doing RCU read-side critical sections.
   2204#. If a CPU is executing in the kernel with the scheduling-clock
   2205   interrupt disabled and RCU believes this CPU to be non-idle, and if
   2206   the CPU goes idle (from an RCU perspective) every few jiffies, no
   2207   problem. It is usually OK for there to be the occasional gap between
   2208   idle periods of up to a second or so.
   2209   If the gap grows too long, you get RCU CPU stall warnings.
   2210#. If a CPU is either idle or executing in usermode, and RCU believes it
   2211   to be idle, of course no problem.
   2212#. If a CPU is executing in the kernel, the kernel code path is passing
   2213   through quiescent states at a reasonable frequency (preferably about
   2214   once per few jiffies, but the occasional excursion to a second or so
   2215   is usually OK) and the scheduling-clock interrupt is enabled, of
   2216   course no problem.
   2217   If the gap between a successive pair of quiescent states grows too
   2218   long, you get RCU CPU stall warnings.
   2219
   2220+-----------------------------------------------------------------------+
   2221| **Quick Quiz**:                                                       |
   2222+-----------------------------------------------------------------------+
   2223| But what if my driver has a hardware interrupt handler that can run   |
   2224| for many seconds? I cannot invoke schedule() from an hardware         |
   2225| interrupt handler, after all!                                         |
   2226+-----------------------------------------------------------------------+
   2227| **Answer**:                                                           |
   2228+-----------------------------------------------------------------------+
   2229| One approach is to do ``rcu_irq_exit();rcu_irq_enter();`` every so    |
   2230| often. But given that long-running interrupt handlers can cause other |
   2231| problems, not least for response time, shouldn't you work to keep     |
   2232| your interrupt handler's runtime within reasonable bounds?            |
   2233+-----------------------------------------------------------------------+
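
For those who must nevertheless live with a long-running handler, a rough
sketch of the approach named in the above answer might look as follows,
with more_work_to_do() and handle_one_item() being hypothetical:

  ::

       while (more_work_to_do()) {
               handle_one_item();
               rcu_irq_exit();   /* momentarily leave and re-enter RCU's  */
               rcu_irq_enter();  /* irq bookkeeping, as suggested above   */
       }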
   2234
   2235But as long as RCU is properly informed of kernel state transitions
   2236between in-kernel execution, usermode execution, and idle, and as long
   2237as the scheduling-clock interrupt is enabled when RCU needs it to be,
   2238you can rest assured that the bugs you encounter will be in some other
   2239part of RCU or some other part of the kernel!
   2240
   2241Memory Efficiency
   2242~~~~~~~~~~~~~~~~~
   2243
   2244Although small-memory non-realtime systems can simply use Tiny RCU, code
   2245size is only one aspect of memory efficiency. Another aspect is the size
   2246of the ``rcu_head`` structure used by call_rcu() and
   2247kfree_rcu(). Although this structure contains nothing more than a
   2248pair of pointers, it does appear in many RCU-protected data structures,
   2249including some that are size critical. The ``page`` structure is a case
   2250in point, as evidenced by the many occurrences of the ``union`` keyword
   2251within that structure.
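
For reference, a typical (and here hypothetical) use embeds the ``rcu_head``
structure directly in the RCU-protected structure, so that the only
per-element cost is that pair of pointers:

  ::

       struct foo {
               int a;
               struct rcu_head rh;  /* two pointers of overhead */
       };

       static void foo_reclaim(struct rcu_head *rhp)
       {
               kfree(container_of(rhp, struct foo, rh));
       }

       /* After removing fp from all reader-visible structures: */
       call_rcu(&fp->rh, foo_reclaim);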
   2252
   2253This need for memory efficiency is one reason that RCU uses hand-crafted
   2254singly linked lists to track the ``rcu_head`` structures that are
   2255waiting for a grace period to elapse. It is also the reason why
   2256``rcu_head`` structures do not contain debug information, such as fields
   2257tracking the file and line of the call_rcu() or kfree_rcu() that
   2258posted them. Although this information might appear in debug-only kernel
   2259builds at some point, in the meantime, the ``->func`` field will often
   2260provide the needed debug information.
   2261
   2262However, in some cases, the need for memory efficiency leads to even
   2263more extreme measures. Returning to the ``page`` structure, the
   2264``rcu_head`` field shares storage with a great many other structures
   2265that are used at various points in the corresponding page's lifetime. In
   2266order to correctly resolve certain `race
   2267conditions <https://lore.kernel.org/r/1439976106-137226-1-git-send-email-kirill.shutemov@linux.intel.com>`__,
   2268the Linux kernel's memory-management subsystem needs a particular bit to
   2269remain zero during all phases of grace-period processing, and that bit
   2270happens to map to the bottom bit of the ``rcu_head`` structure's
   2271``->next`` field. RCU makes this guarantee as long as call_rcu() is
   2272used to post the callback, as opposed to kfree_rcu() or some future
   2273“lazy” variant of call_rcu() that might one day be created for
   2274energy-efficiency purposes.
   2275
   2276That said, there are limits. RCU requires that the ``rcu_head``
   2277structure be aligned to a two-byte boundary, and passing a misaligned
   2278``rcu_head`` structure to one of the call_rcu() family of functions
   2279will result in a splat. It is therefore necessary to exercise caution
   2280when packing structures containing fields of type ``rcu_head``. Why not
   2281a four-byte or even eight-byte alignment requirement? Because the m68k
   2282architecture provides only two-byte alignment, and thus acts as
   2283alignment's least common denominator.
   2284
   2285The reason for reserving the bottom bit of pointers to ``rcu_head``
   2286structures is to leave the door open to “lazy” callbacks whose
   2287invocations can safely be deferred. Deferring invocation could
   2288potentially have energy-efficiency benefits, but only if the rate of
   2289non-lazy callbacks decreases significantly for some important workload.
   2290In the meantime, reserving the bottom bit keeps this option open in case
   2291it one day becomes useful.
   2292
   2293Performance, Scalability, Response Time, and Reliability
   2294~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   2295
Expanding on the `earlier
discussion <Performance and Scalability_>`__, RCU is used heavily by
hot code paths in performance-critical portions of the Linux kernel's
networking, security, virtualization, and scheduling code. RCU
must therefore use efficient implementations, especially in its
read-side primitives. To that end, it would be good if preemptible RCU's
implementation of rcu_read_lock() could be inlined; however, doing
this requires resolving ``#include`` issues with the ``task_struct``
structure.
   2305
   2306The Linux kernel supports hardware configurations with up to 4096 CPUs,
   2307which means that RCU must be extremely scalable. Algorithms that involve
   2308frequent acquisitions of global locks or frequent atomic operations on
   2309global variables simply cannot be tolerated within the RCU
   2310implementation. RCU therefore makes heavy use of a combining tree based
   2311on the ``rcu_node`` structure. RCU is required to tolerate all CPUs
   2312continuously invoking any combination of RCU's runtime primitives with
   2313minimal per-operation overhead. In fact, in many cases, increasing load
   2314must *decrease* the per-operation overhead, witness the batching
   2315optimizations for synchronize_rcu(), call_rcu(),
   2316synchronize_rcu_expedited(), and rcu_barrier(). As a general
   2317rule, RCU must cheerfully accept whatever the rest of the Linux kernel
   2318decides to throw at it.
   2319
   2320The Linux kernel is used for real-time workloads, especially in
   2321conjunction with the `-rt
   2322patchset <https://wiki.linuxfoundation.org/realtime/>`__. The
   2323real-time-latency response requirements are such that the traditional
   2324approach of disabling preemption across RCU read-side critical sections
   2325is inappropriate. Kernels built with ``CONFIG_PREEMPTION=y`` therefore use
   2326an RCU implementation that allows RCU read-side critical sections to be
   2327preempted. This requirement made its presence known after users made it
   2328clear that an earlier `real-time
   2329patch <https://lwn.net/Articles/107930/>`__ did not meet their needs, in
   2330conjunction with some `RCU
   2331issues <https://lore.kernel.org/r/20050318002026.GA2693@us.ibm.com>`__
   2332encountered by a very early version of the -rt patchset.
   2333
   2334In addition, RCU must make do with a sub-100-microsecond real-time
   2335latency budget. In fact, on smaller systems with the -rt patchset, the
   2336Linux kernel provides sub-20-microsecond real-time latencies for the
   2337whole kernel, including RCU. RCU's scalability and latency must
   2338therefore be sufficient for these sorts of configurations. To my
   2339surprise, the sub-100-microsecond real-time latency budget `applies to
   2340even the largest systems
   2341[PDF] <http://www.rdrop.com/users/paulmck/realtime/paper/bigrt.2013.01.31a.LCA.pdf>`__,
   2342up to and including systems with 4096 CPUs. This real-time requirement
   2343motivated the grace-period kthread, which also simplified handling of a
   2344number of race conditions.
   2345
   2346RCU must avoid degrading real-time response for CPU-bound threads,
   2347whether executing in usermode (which is one use case for
   2348``CONFIG_NO_HZ_FULL=y``) or in the kernel. That said, CPU-bound loops in
   2349the kernel must execute cond_resched() at least once per few tens of
   2350milliseconds in order to avoid receiving an IPI from RCU.
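
For example, a CPU-bound loop in the kernel might be structured as follows,
where process_one_element() is a hypothetical stand-in for the real work:

  ::

       for (i = 0; i < nelements; i++) {
               process_one_element(i);
               cond_resched();  /* supplies quiescent states, avoiding RCU IPIs */
       }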
   2351
   2352Finally, RCU's status as a synchronization primitive means that any RCU
   2353failure can result in arbitrary memory corruption that can be extremely
   2354difficult to debug. This means that RCU must be extremely reliable,
   2355which in practice also means that RCU must have an aggressive
   2356stress-test suite. This stress-test suite is called ``rcutorture``.
   2357
   2358Although the need for ``rcutorture`` was no surprise, the current
   2359immense popularity of the Linux kernel is posing interesting—and perhaps
   2360unprecedented—validation challenges. To see this, keep in mind that
   2361there are well over one billion instances of the Linux kernel running
   2362today, given Android smartphones, Linux-powered televisions, and
   2363servers. This number can be expected to increase sharply with the advent
   2364of the celebrated Internet of Things.
   2365
   2366Suppose that RCU contains a race condition that manifests on average
   2367once per million years of runtime. This bug will be occurring about
   2368three times per *day* across the installed base. RCU could simply hide
   2369behind hardware error rates, given that no one should really expect
   2370their smartphone to last for a million years. However, anyone taking too
   2371much comfort from this thought should consider the fact that in most
   2372jurisdictions, a successful multi-year test of a given mechanism, which
   2373might include a Linux kernel, suffices for a number of types of
   2374safety-critical certifications. In fact, rumor has it that the Linux
   2375kernel is already being used in production for safety-critical
   2376applications. I don't know about you, but I would feel quite bad if a
   2377bug in RCU killed someone. Which might explain my recent focus on
   2378validation and verification.
   2379
   2380Other RCU Flavors
   2381-----------------
   2382
One of the more surprising things about RCU is that there are now no
fewer than five *flavors*, or API families. In addition, the primary
flavor that has been the sole focus up to this point has two different
implementations, non-preemptible and preemptible. The other flavors
are listed below, with requirements for each described in a separate
section.

#. `Bottom-Half Flavor (Historical)`_
#. `Sched Flavor (Historical)`_
#. `Sleepable RCU`_
#. `Tasks RCU`_
#. `Tasks Rude RCU`_
#. `Tasks Trace RCU`_
   2394
   2395Bottom-Half Flavor (Historical)
   2396~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   2397
   2398The RCU-bh flavor of RCU has since been expressed in terms of the other
   2399RCU flavors as part of a consolidation of the three flavors into a
   2400single flavor. The read-side API remains, and continues to disable
   2401softirq and to be accounted for by lockdep. Much of the material in this
   2402section is therefore strictly historical in nature.
   2403
   2404The softirq-disable (AKA “bottom-half”, hence the “_bh” abbreviations)
   2405flavor of RCU, or *RCU-bh*, was developed by Dipankar Sarma to provide a
   2406flavor of RCU that could withstand the network-based denial-of-service
   2407attacks researched by Robert Olsson. These attacks placed so much
   2408networking load on the system that some of the CPUs never exited softirq
   2409execution, which in turn prevented those CPUs from ever executing a
   2410context switch, which, in the RCU implementation of that time, prevented
   2411grace periods from ever ending. The result was an out-of-memory
   2412condition and a system hang.
   2413
   2414The solution was the creation of RCU-bh, which does
   2415local_bh_disable() across its read-side critical sections, and which
   2416uses the transition from one type of softirq processing to another as a
   2417quiescent state in addition to context switch, idle, user mode, and
   2418offline. This means that RCU-bh grace periods can complete even when
   2419some of the CPUs execute in softirq indefinitely, thus allowing
   2420algorithms based on RCU-bh to withstand network-based denial-of-service
   2421attacks.
   2422
   2423Because rcu_read_lock_bh() and rcu_read_unlock_bh() disable and
re-enable softirq handlers, any attempt to start a softirq handler
   2425during the RCU-bh read-side critical section will be deferred. In this
   2426case, rcu_read_unlock_bh() will invoke softirq processing, which can
   2427take considerable time. One can of course argue that this softirq
   2428overhead should be associated with the code following the RCU-bh
   2429read-side critical section rather than rcu_read_unlock_bh(), but the
   2430fact is that most profiling tools cannot be expected to make this sort
   2431of fine distinction. For example, suppose that a three-millisecond-long
   2432RCU-bh read-side critical section executes during a time of heavy
   2433networking load. There will very likely be an attempt to invoke at least
   2434one softirq handler during that three milliseconds, but any such
   2435invocation will be delayed until the time of the
   2436rcu_read_unlock_bh(). This can of course make it appear at first
   2437glance as if rcu_read_unlock_bh() was executing very slowly.
   2438
   2439The `RCU-bh
   2440API <https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
   2441includes rcu_read_lock_bh(), rcu_read_unlock_bh(), rcu_dereference_bh(),
   2442rcu_dereference_bh_check(), and rcu_read_lock_bh_held(). However, the
   2443old RCU-bh update-side APIs are now gone, replaced by synchronize_rcu(),
   2444synchronize_rcu_expedited(), call_rcu(), and rcu_barrier().  In addition,
   2445anything that disables bottom halves also marks an RCU-bh read-side
   2446critical section, including local_bh_disable() and local_bh_enable(),
   2447local_irq_save() and local_irq_restore(), and so on.
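
The read-side usage mirrors that of the primary flavor, for example (with
``gp`` and do_something_with() standing in for real code, as in earlier
examples):

  ::

       rcu_read_lock_bh();
       p = rcu_dereference_bh(gp);
       if (p)
               do_something_with(p->value);
       rcu_read_unlock_bh();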
   2448
   2449Sched Flavor (Historical)
   2450~~~~~~~~~~~~~~~~~~~~~~~~~
   2451
   2452The RCU-sched flavor of RCU has since been expressed in terms of the
   2453other RCU flavors as part of a consolidation of the three flavors into a
   2454single flavor. The read-side API remains, and continues to disable
   2455preemption and to be accounted for by lockdep. Much of the material in
   2456this section is therefore strictly historical in nature.
   2457
   2458Before preemptible RCU, waiting for an RCU grace period had the side
   2459effect of also waiting for all pre-existing interrupt and NMI handlers.
   2460However, there are legitimate preemptible-RCU implementations that do
   2461not have this property, given that any point in the code outside of an
   2462RCU read-side critical section can be a quiescent state. Therefore,
   2463*RCU-sched* was created, which follows “classic” RCU in that an
   2464RCU-sched grace period waits for pre-existing interrupt and NMI
   2465handlers. In kernels built with ``CONFIG_PREEMPTION=n``, the RCU and
   2466RCU-sched APIs have identical implementations, while kernels built with
   2467``CONFIG_PREEMPTION=y`` provide a separate implementation for each.
   2468
   2469Note well that in ``CONFIG_PREEMPTION=y`` kernels,
   2470rcu_read_lock_sched() and rcu_read_unlock_sched() disable and
   2471re-enable preemption, respectively. This means that if there was a
   2472preemption attempt during the RCU-sched read-side critical section,
   2473rcu_read_unlock_sched() will enter the scheduler, with all the
   2474latency and overhead entailed. Just as with rcu_read_unlock_bh(),
   2475this can make it look as if rcu_read_unlock_sched() was executing
   2476very slowly. However, the highest-priority task won't be preempted, so
   2477that task will enjoy low-overhead rcu_read_unlock_sched()
   2478invocations.
   2479
   2480The `RCU-sched
   2481API <https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
   2482includes rcu_read_lock_sched(), rcu_read_unlock_sched(),
   2483rcu_read_lock_sched_notrace(), rcu_read_unlock_sched_notrace(),
   2484rcu_dereference_sched(), rcu_dereference_sched_check(), and
   2485rcu_read_lock_sched_held().  However, the old RCU-sched update-side APIs
   2486are now gone, replaced by synchronize_rcu(), synchronize_rcu_expedited(),
   2487call_rcu(), and rcu_barrier().  In addition, anything that disables
   2488preemption also marks an RCU-sched read-side critical section,
   2489including preempt_disable() and preempt_enable(), local_irq_save()
   2490and local_irq_restore(), and so on.
   2491
   2492Sleepable RCU
   2493~~~~~~~~~~~~~
   2494
   2495For well over a decade, someone saying “I need to block within an RCU
   2496read-side critical section” was a reliable indication that this someone
   2497did not understand RCU. After all, if you are always blocking in an RCU
   2498read-side critical section, you can probably afford to use a
   2499higher-overhead synchronization mechanism. However, that changed with
   2500the advent of the Linux kernel's notifiers, whose RCU read-side critical
   2501sections almost never sleep, but sometimes need to. This resulted in the
   2502introduction of `sleepable RCU <https://lwn.net/Articles/202847/>`__, or
   2503*SRCU*.
   2504
   2505SRCU allows different domains to be defined, with each such domain
   2506defined by an instance of an ``srcu_struct`` structure. A pointer to
   2507this structure must be passed in to each SRCU function, for example,
   2508``synchronize_srcu(&ss)``, where ``ss`` is the ``srcu_struct``
   2509structure. The key benefit of these domains is that a slow SRCU reader
   2510in one domain does not delay an SRCU grace period in some other domain.
   2511That said, one consequence of these domains is that read-side code must
   2512pass a “cookie” from srcu_read_lock() to srcu_read_unlock(), for
   2513example, as follows:
   2514
   2515   ::
   2516
   2517       1 int idx;
   2518       2
   2519       3 idx = srcu_read_lock(&ss);
   2520       4 do_something();
   2521       5 srcu_read_unlock(&ss, idx);
   2522
   2523As noted above, it is legal to block within SRCU read-side critical
sections; however, with great power comes great responsibility. If you
   2525block forever in one of a given domain's SRCU read-side critical
   2526sections, then that domain's grace periods will also be blocked forever.
   2527Of course, one good way to block forever is to deadlock, which can
   2528happen if any operation in a given domain's SRCU read-side critical
   2529section can wait, either directly or indirectly, for that domain's grace
   2530period to elapse. For example, this results in a self-deadlock:
   2531
   2532   ::
   2533
   2534       1 int idx;
   2535       2
   2536       3 idx = srcu_read_lock(&ss);
   2537       4 do_something();
   2538       5 synchronize_srcu(&ss);
   2539       6 srcu_read_unlock(&ss, idx);
   2540
   2541However, if line 5 acquired a mutex that was held across a
   2542synchronize_srcu() for domain ``ss``, deadlock would still be
   2543possible. Furthermore, if line 5 acquired a mutex that was held across a
   2544synchronize_srcu() for some other domain ``ss1``, and if an
   2545``ss1``-domain SRCU read-side critical section acquired another mutex
that was held across an ``ss``-domain synchronize_srcu(), deadlock
   2547would again be possible. Such a deadlock cycle could extend across an
   2548arbitrarily large number of different SRCU domains. Again, with great
   2549power comes great responsibility.
   2550
   2551Unlike the other RCU flavors, SRCU read-side critical sections can run
   2552on idle and even offline CPUs. This ability requires that
   2553srcu_read_lock() and srcu_read_unlock() contain memory barriers,
   2554which means that SRCU readers will run a bit slower than would RCU
   2555readers. It also motivates the smp_mb__after_srcu_read_unlock() API,
   2556which, in combination with srcu_read_unlock(), guarantees a full
   2557memory barrier.
   2558
   2559Also unlike other RCU flavors, synchronize_srcu() may **not** be
   2560invoked from CPU-hotplug notifiers, due to the fact that SRCU grace
   2561periods make use of timers and the possibility of timers being
   2562temporarily “stranded” on the outgoing CPU. This stranding of timers
   2563means that timers posted to the outgoing CPU will not fire until late in
   2564the CPU-hotplug process. The problem is that if a notifier is waiting on
   2565an SRCU grace period, that grace period is waiting on a timer, and that
   2566timer is stranded on the outgoing CPU, then the notifier will never be
   2567awakened, in other words, deadlock has occurred. This same situation of
   2568course also prohibits srcu_barrier() from being invoked from
   2569CPU-hotplug notifiers.
   2570
   2571SRCU also differs from other RCU flavors in that SRCU's expedited and
   2572non-expedited grace periods are implemented by the same mechanism. This
   2573means that in the current SRCU implementation, expediting a future grace
   2574period has the side effect of expediting all prior grace periods that
   2575have not yet completed. (But please note that this is a property of the
   2576current implementation, not necessarily of future implementations.) In
   2577addition, if SRCU has been idle for longer than the interval specified
   2578by the ``srcutree.exp_holdoff`` kernel boot parameter (25 microseconds
   2579by default), and if a synchronize_srcu() invocation ends this idle
   2580period, that invocation will be automatically expedited.
   2581
   2582As of v4.12, SRCU's callbacks are maintained per-CPU, eliminating a
   2583locking bottleneck present in prior kernel versions. Although this will
   2584allow users to put much heavier stress on call_srcu(), it is
   2585important to note that SRCU does not yet take any special steps to deal
   2586with callback flooding. So if you are posting (say) 10,000 SRCU
   2587callbacks per second per CPU, you are probably totally OK, but if you
   2588intend to post (say) 1,000,000 SRCU callbacks per second per CPU, please
run some tests first. SRCU just might need a few adjustments to deal with
   2590that sort of load. Of course, your mileage may vary based on the speed
   2591of your CPUs and the size of your memory.
   2592
   2593The `SRCU
   2594API <https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
   2595includes srcu_read_lock(), srcu_read_unlock(),
   2596srcu_dereference(), srcu_dereference_check(),
   2597synchronize_srcu(), synchronize_srcu_expedited(),
   2598call_srcu(), srcu_barrier(), and srcu_read_lock_held(). It
   2599also includes DEFINE_SRCU(), DEFINE_STATIC_SRCU(), and
   2600init_srcu_struct() APIs for defining and initializing
   2601``srcu_struct`` structures.
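
For example, an SRCU domain might be set up in either of the following
ways, with the names ``my_srcu`` and ``my_dynamic_srcu`` being hypothetical:

  ::

       /* Build-time definition of a domain: */
       DEFINE_SRCU(my_srcu);

       /* Or run-time initialization of a dynamically allocated domain: */
       struct srcu_struct my_dynamic_srcu;

       if (init_srcu_struct(&my_dynamic_srcu))
               /* handle allocation failure */;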
   2602
   2603More recently, the SRCU API has added polling interfaces:
   2604
   2605#. start_poll_synchronize_srcu() returns a cookie identifying
   2606   the completion of a future SRCU grace period and ensures
   2607   that this grace period will be started.
   2608#. poll_state_synchronize_srcu() returns ``true`` iff the
   2609   specified cookie corresponds to an already-completed
   2610   SRCU grace period.
   2611#. get_state_synchronize_srcu() returns a cookie just like
   2612   start_poll_synchronize_srcu() does, but differs in that
   2613   it does nothing to ensure that any future SRCU grace period
   2614   will be started.
   2615
   2616These functions are used to avoid unnecessary SRCU grace periods in
   2617certain types of buffer-cache algorithms having multi-stage age-out
   2618mechanisms.  The idea is that by the time the block has aged completely
   2619from the cache, an SRCU grace period will be very likely to have elapsed.
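
A sketch of this pattern, reusing the ``ss`` domain from the earlier
examples and using a hypothetical free_the_block() helper, might look
like this:

  ::

       unsigned long cookie;

       /* When the block enters the first age-out stage: */
       cookie = start_poll_synchronize_srcu(&ss);

       /* Much later, when the block finally leaves the cache: */
       if (!poll_state_synchronize_srcu(&ss, cookie))
               synchronize_srcu(&ss);  /* unlikely: grace period still pending */
       free_the_block(block);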
   2620
   2621Tasks RCU
   2622~~~~~~~~~
   2623
   2624Some forms of tracing use “trampolines” to handle the binary rewriting
   2625required to install different types of probes. It would be good to be
   2626able to free old trampolines, which sounds like a job for some form of
   2627RCU. However, because it is necessary to be able to install a trace
   2628anywhere in the code, it is not possible to use read-side markers such
   2629as rcu_read_lock() and rcu_read_unlock(). In addition, it does
   2630not work to have these markers in the trampoline itself, because there
   2631would need to be instructions following rcu_read_unlock(). Although
   2632synchronize_rcu() would guarantee that execution reached the
   2633rcu_read_unlock(), it would not be able to guarantee that execution
   2634had completely left the trampoline. Worse yet, in some situations
   2635the trampoline's protection must extend a few instructions *prior* to
   2636execution reaching the trampoline.  For example, these few instructions
   2637might calculate the address of the trampoline, so that entering the
   2638trampoline would be pre-ordained a surprisingly long time before execution
   2639actually reached the trampoline itself.
   2640
   2641The solution, in the form of `Tasks
   2642RCU <https://lwn.net/Articles/607117/>`__, is to have implicit read-side
   2643critical sections that are delimited by voluntary context switches, that
   2644is, calls to schedule(), cond_resched(), and
   2645synchronize_rcu_tasks(). In addition, transitions to and from
   2646userspace execution also delimit tasks-RCU read-side critical sections.
   2647
   2648The tasks-RCU API is quite compact, consisting only of
   2649call_rcu_tasks(), synchronize_rcu_tasks(), and
   2650rcu_barrier_tasks(). In ``CONFIG_PREEMPTION=n`` kernels, trampolines
   2651cannot be preempted, so these APIs map to call_rcu(),
   2652synchronize_rcu(), and rcu_barrier(), respectively. In
   2653``CONFIG_PREEMPTION=y`` kernels, trampolines can be preempted, and these
   2654three APIs are therefore implemented by separate functions that check
   2655for voluntary context switches.
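
A sketch of trampoline teardown using this API, with the helper functions
and ``probe`` structure being hypothetical, might look like this:

  ::

       disconnect_probe(probe);         /* no new calls enter the trampoline */
       synchronize_rcu_tasks();         /* wait for all tasks to leave it */
       free_trampoline(probe->tramp);   /* now safe to free the trampoline */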
   2656
   2657Tasks Rude RCU
   2658~~~~~~~~~~~~~~
   2659
   2660Some forms of tracing need to wait for all preemption-disabled regions
   2661of code running on any online CPU, including those executed when RCU is
   2662not watching.  This means that synchronize_rcu() is insufficient, and
   2663Tasks Rude RCU must be used instead.  This flavor of RCU does its work by
   2664forcing a workqueue to be scheduled on each online CPU, hence the "Rude"
   2665moniker.  And this operation is considered to be quite rude by real-time
   2666workloads that don't want their ``nohz_full`` CPUs receiving IPIs and
   2667by battery-powered systems that don't want their idle CPUs to be awakened.
   2668
   2669The tasks-rude-RCU API is also reader-marking-free and thus quite compact,
   2670consisting of call_rcu_tasks_rude(), synchronize_rcu_tasks_rude(),
   2671and rcu_barrier_tasks_rude().
   2672
   2673Tasks Trace RCU
   2674~~~~~~~~~~~~~~~
   2675
   2676Some forms of tracing need to sleep in readers, but cannot tolerate
   2677SRCU's read-side overhead, which includes a full memory barrier in both
   2678srcu_read_lock() and srcu_read_unlock().  This need is handled by a
   2679Tasks Trace RCU that uses scheduler locking and IPIs to synchronize with
   2680readers.  Real-time systems that cannot tolerate IPIs may build their
   2681kernels with ``CONFIG_TASKS_TRACE_RCU_READ_MB=y``, which avoids the IPIs at
   2682the expense of adding full memory barriers to the read-side primitives.
   2683
   2684The tasks-trace-RCU API is also reasonably compact,
   2685consisting of rcu_read_lock_trace(), rcu_read_unlock_trace(),
   2686rcu_read_lock_trace_held(), call_rcu_tasks_trace(),
   2687synchronize_rcu_tasks_trace(), and rcu_barrier_tasks_trace().
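
A sleepable reader might therefore look like the following sketch, again
with ``gp`` and do_something_that_might_sleep() standing in for real code,
and with rcu_read_lock_trace_held() being one plausible lockdep expression
for the rcu_dereference_check():

  ::

       rcu_read_lock_trace();
       p = rcu_dereference_check(gp, rcu_read_lock_trace_held());
       if (p)
               do_something_that_might_sleep(p->value);
       rcu_read_unlock_trace();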
   2688
   2689Possible Future Changes
   2690-----------------------
   2691
   2692One of the tricks that RCU uses to attain update-side scalability is to
   2693increase grace-period latency with increasing numbers of CPUs. If this
   2694becomes a serious problem, it will be necessary to rework the
   2695grace-period state machine so as to avoid the need for the additional
   2696latency.
   2697
   2698RCU disables CPU hotplug in a few places, perhaps most notably in the
   2699rcu_barrier() operations. If there is a strong reason to use
   2700rcu_barrier() in CPU-hotplug notifiers, it will be necessary to
   2701avoid disabling CPU hotplug. This would introduce some complexity, so
   2702there had better be a *very* good reason.
   2703
   2704The tradeoff between grace-period latency on the one hand and
   2705interruptions of other CPUs on the other hand may need to be
   2706re-examined. The desire is of course for zero grace-period latency as
   2707well as zero interprocessor interrupts undertaken during an expedited
   2708grace period operation. While this ideal is unlikely to be achievable,
   2709it is quite possible that further improvements can be made.
   2710
   2711The multiprocessor implementations of RCU use a combining tree that
   2712groups CPUs so as to reduce lock contention and increase cache locality.
   2713However, this combining tree does not spread its memory across NUMA
   2714nodes nor does it align the CPU groups with hardware features such as
   2715sockets or cores. Such spreading and alignment is currently believed to
   2716be unnecessary because the hotpath read-side primitives do not access
   2717the combining tree, nor does call_rcu() in the common case. If you
   2718believe that your architecture needs such spreading and alignment, then
   2719your architecture should also benefit from the
   2720``rcutree.rcu_fanout_leaf`` boot parameter, which can be set to the
   2721number of CPUs in a socket, NUMA node, or whatever. If the number of
   2722CPUs is too large, use a fraction of the number of CPUs. If the number
   2723of CPUs is a large prime number, well, that certainly is an
   2724“interesting” architectural choice! More flexible arrangements might be
   2725considered, but only if ``rcutree.rcu_fanout_leaf`` has proven
   2726inadequate, and only if the inadequacy has been demonstrated by a
   2727carefully run and realistic system-level workload.
   2728
   2729Please note that arrangements that require RCU to remap CPU numbers will
   2730require extremely good demonstration of need and full exploration of
   2731alternatives.
   2732
   2733RCU's various kthreads are reasonably recent additions. It is quite
   2734likely that adjustments will be required to more gracefully handle
   2735extreme loads. It might also be necessary to be able to relate CPU
   2736utilization by RCU's kthreads and softirq handlers to the code that
   2737instigated this CPU utilization. For example, RCU callback overhead
   2738might be charged back to the originating call_rcu() instance, though
   2739probably not in production kernels.
   2740
   2741Additional work may be required to provide reasonable forward-progress
   2742guarantees under heavy load for grace periods and for callback
   2743invocation.
   2744
   2745Summary
   2746-------
   2747
This document has presented more than two decades' worth of RCU
   2749requirements. Given that the requirements keep changing, this will not
   2750be the last word on this subject, but at least it serves to get an
   2751important subset of the requirements set forth.
   2752
   2753Acknowledgments
   2754---------------
   2755
   2756I am grateful to Steven Rostedt, Lai Jiangshan, Ingo Molnar, Oleg
   2757Nesterov, Borislav Petkov, Peter Zijlstra, Boqun Feng, and Andy
   2758Lutomirski for their help in rendering this article human readable, and
   2759to Michelle Rankin for her support of this effort. Other contributions
   2760are acknowledged in the Linux kernel's git archive.