.. _rcu_barrier:

RCU and Unloadable Modules
==========================

[Originally published in LWN Jan. 14, 2007: http://lwn.net/Articles/217484/]

RCU (read-copy update) is a synchronization mechanism that can be thought
of as a replacement for reader-writer locking (among other things), but with
very low-overhead readers that are immune to deadlock, priority inversion,
and unbounded latency. RCU read-side critical sections are delimited
by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPTION
kernels, generate no code whatsoever.
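
As a minimal sketch of the read side, a reader traversing an
RCU-protected list might look as follows, assuming a struct pstruct
containing a struct list_head named list, a global list header named
gbl_list, and a do_something_with() helper (all illustrative, not taken
from the kernel)::

	struct pstruct *p;

	rcu_read_lock();
	list_for_each_entry_rcu(p, &gbl_list, list)
		do_something_with(p);	/* must not block */
	rcu_read_unlock();

Nothing in this code informs an updater that a reader is present.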

This means that RCU writers are unaware of the presence of concurrent
readers, so that RCU updates to shared data must be undertaken quite
carefully, leaving an old version of the data structure in place until all
pre-existing readers have finished. These old versions are needed because
such readers might hold a reference to them. RCU updates can therefore be
rather expensive, and RCU is thus best suited for read-mostly situations.

How can an RCU writer possibly determine when all readers are finished,
given that readers might well leave absolutely no trace of their
presence? There is a synchronize_rcu() primitive that blocks until all
pre-existing readers have completed. An updater wishing to delete an
element p from a linked list might do the following, while holding an
appropriate lock, of course::

	list_del_rcu(p);
	synchronize_rcu();
	kfree(p);

But the above code cannot be used in IRQ context -- the call_rcu()
primitive must be used instead. This primitive takes a pointer to an
rcu_head struct placed within the RCU-protected data structure and
another pointer to a function that may be invoked later to free that
structure. Code to delete an element p from the linked list from IRQ
context might then be as follows::

	list_del_rcu(p);
	call_rcu(&p->rcu, p_callback);
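
For call_rcu() to work this way, the rcu_head must be embedded in the
RCU-protected structure itself. A hypothetical definition of pstruct,
consistent with the snippets above, might be::

	struct pstruct {
		struct list_head list;	/* links the RCU-protected list */
		int data;		/* illustrative payload */
		struct rcu_head rcu;	/* passed to call_rcu() */
	};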

Since call_rcu() never blocks, this code can safely be used from within
IRQ context. The function p_callback() might be defined as follows::

	static void p_callback(struct rcu_head *rp)
	{
		struct pstruct *p = container_of(rp, struct pstruct, rcu);

		kfree(p);
	}


Unloading Modules That Use call_rcu()
-------------------------------------

But what if p_callback is defined in an unloadable module?

If we unload the module while some RCU callbacks are pending,
the CPUs executing these callbacks are going to be severely
disappointed when they are later invoked, as fancifully depicted at
http://lwn.net/images/ns/kernel/rcu-drop.jpg.

We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient. Although synchronize_rcu() does wait for a
grace period to elapse, it does not wait for the callbacks to complete.
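
In a sketch, the insufficient approach would look like this (the module
name is hypothetical)::

	static void __exit mymod_exit(void)
	{
		/* BUGGY: waits for a grace period, but callbacks already
		 * queued by call_rcu() might still be pending, and would
		 * then run after p_callback()'s code has been unloaded. */
		synchronize_rcu();
	}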

One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred
in order to allow other processing to proceed. Such deferral is required
in realtime kernels in order to avoid excessive scheduling latencies.


rcu_barrier()
-------------

We instead need the rcu_barrier() primitive.  Rather than waiting for
a grace period to elapse, rcu_barrier() waits for all outstanding RCU
callbacks to complete.  Please note that rcu_barrier() does **not** imply
synchronize_rcu(); in particular, if there are no RCU callbacks queued
anywhere, rcu_barrier() is within its rights to return immediately,
without waiting for a grace period to elapse.

Pseudo-code using rcu_barrier() is as follows:

   1. Prevent any new RCU callbacks from being posted.
   2. Execute rcu_barrier().
   3. Allow the module to be unloaded.
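
In module-exit code, this three-step pattern might look like the
following sketch, where stop_new_callbacks() is a hypothetical helper
standing in for whatever stops the module's timers, kthreads, and other
sources of call_rcu() invocations::

	static void __exit mymod_exit(void)
	{
		stop_new_callbacks();	/* step 1: no new call_rcu() */
		rcu_barrier();		/* step 2: wait for pending callbacks */
		/* step 3: returning allows the unload to complete */
	}
	module_exit(mymod_exit);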

There is also an srcu_barrier() function for SRCU, and you of course
must match the flavor of rcu_barrier() with that of call_rcu().  If your
module uses multiple flavors of call_rcu(), then it must also use multiple
flavors of rcu_barrier() when unloading that module.  For example, if
it uses call_rcu(), call_srcu() on srcu_struct_1, and call_srcu() on
srcu_struct_2, then the following three lines of code will be required
when unloading::

	rcu_barrier();
	srcu_barrier(&srcu_struct_1);
	srcu_barrier(&srcu_struct_2);

The rcutorture module makes use of rcu_barrier() in its exit function
as follows::

  1 static void
  2 rcu_torture_cleanup(void)
  3 {
  4   int i;
  5
  6   fullstop = 1;
  7   if (shuffler_task != NULL) {
  8     VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
  9     kthread_stop(shuffler_task);
 10   }
 11   shuffler_task = NULL;
 12
 13   if (writer_task != NULL) {
 14     VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
 15     kthread_stop(writer_task);
 16   }
 17   writer_task = NULL;
 18
 19   if (reader_tasks != NULL) {
 20     for (i = 0; i < nrealreaders; i++) {
 21       if (reader_tasks[i] != NULL) {
 22         VERBOSE_PRINTK_STRING(
 23           "Stopping rcu_torture_reader task");
 24         kthread_stop(reader_tasks[i]);
 25       }
 26       reader_tasks[i] = NULL;
 27     }
 28     kfree(reader_tasks);
 29     reader_tasks = NULL;
 30   }
 31   rcu_torture_current = NULL;
 32
 33   if (fakewriter_tasks != NULL) {
 34     for (i = 0; i < nfakewriters; i++) {
 35       if (fakewriter_tasks[i] != NULL) {
 36         VERBOSE_PRINTK_STRING(
 37           "Stopping rcu_torture_fakewriter task");
 38         kthread_stop(fakewriter_tasks[i]);
 39       }
 40       fakewriter_tasks[i] = NULL;
 41     }
 42     kfree(fakewriter_tasks);
 43     fakewriter_tasks = NULL;
 44   }
 45
 46   if (stats_task != NULL) {
 47     VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
 48     kthread_stop(stats_task);
 49   }
 50   stats_task = NULL;
 51
 52   /* Wait for all RCU callbacks to fire. */
 53   rcu_barrier();
 54
 55   rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
 56
 57   if (cur_ops->cleanup != NULL)
 58     cur_ops->cleanup();
 59   if (atomic_read(&n_rcu_torture_error))
 60     rcu_torture_print_module_parms("End of test: FAILURE");
 61   else
 62     rcu_torture_print_module_parms("End of test: SUCCESS");
 63 }

Line 6 sets a global variable that prevents any RCU callbacks from
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
module is an exception to this rule, and therefore needs to set this
global variable.

Lines 7-50 stop all the kernel tasks associated with the rcutorture
module. Therefore, once execution reaches line 53, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() call on line 53 waits
for any pre-existing callbacks to complete.

Then lines 55-62 print status and do operation-specific cleanup, and
then return, permitting the module-unload operation to be completed.

.. _rcubarrier_quiz_1:

Quick Quiz #1:
	Is there any other situation where rcu_barrier() might
	be required?

:ref:`Answer to Quick Quiz #1 <answer_rcubarrier_quiz_1>`

Your module might have additional complications. For example, if your
module invokes call_rcu() from timers, you will need to first cancel all
the timers, and only then invoke rcu_barrier() to wait for any remaining
RCU callbacks to complete.
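
For instance, given a hypothetical timer my_timer whose handler can
invoke call_rcu(), the corresponding exit-path fragment might be::

	del_timer_sync(&my_timer);	/* handler can post no new callbacks */
	rcu_barrier();			/* wait for already-posted callbacks */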

Of course, if your module uses call_rcu(), you will need to invoke
rcu_barrier() before unloading.  Similarly, if your module uses
call_srcu(), you will need to invoke srcu_barrier() before unloading,
and on the same srcu_struct structure.  If your module uses call_rcu()
**and** call_srcu(), then you will need to invoke rcu_barrier() **and**
srcu_barrier().


Implementing rcu_barrier()
--------------------------

Dipankar Sarma's implementation of rcu_barrier() makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
queues. His implementation queues an RCU callback on each of the per-CPU
callback queues, and then waits until they have all started executing, at
which point, all earlier RCU callbacks are guaranteed to have completed.

The original code for rcu_barrier() was as follows::

  1 void rcu_barrier(void)
  2 {
  3   BUG_ON(in_interrupt());
  4   /* Take cpucontrol mutex to protect against CPU hotplug */
  5   mutex_lock(&rcu_barrier_mutex);
  6   init_completion(&rcu_barrier_completion);
  7   atomic_set(&rcu_barrier_cpu_count, 0);
  8   on_each_cpu(rcu_barrier_func, NULL, 0, 1);
  9   wait_for_completion(&rcu_barrier_completion);
 10   mutex_unlock(&rcu_barrier_mutex);
 11 }

Line 3 verifies that the caller is in process context, and lines 5 and 10
use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the
global completion and counters at a time, which are initialized on lines
6 and 7. Line 8 causes each CPU to invoke rcu_barrier_func(), which is
shown below. Note that the final "1" in on_each_cpu()'s argument list
ensures that all the calls to rcu_barrier_func() will have completed
before on_each_cpu() returns. Line 9 then waits for the completion.

This code was rewritten in 2008 and several times thereafter, but this
still gives the general idea.

The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
to post an RCU callback, as follows::

  1 static void rcu_barrier_func(void *notused)
  2 {
  3   int cpu = smp_processor_id();
  4   struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
  5   struct rcu_head *head;
  6
  7   head = &rdp->barrier;
  8   atomic_inc(&rcu_barrier_cpu_count);
  9   call_rcu(head, rcu_barrier_callback);
 10 }

Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
which contains the struct rcu_head that is needed for the later call to
call_rcu(). Line 7 picks up a pointer to this struct rcu_head, and line
8 increments a global counter. This counter will later be decremented
by the callback. Line 9 then registers the rcu_barrier_callback() on
the current CPU's queue.

The rcu_barrier_callback() function simply atomically decrements the
rcu_barrier_cpu_count variable and finalizes the completion when it
reaches zero, as follows::

	static void rcu_barrier_callback(struct rcu_head *notused)
	{
		if (atomic_dec_and_test(&rcu_barrier_cpu_count))
			complete(&rcu_barrier_completion);
	}

.. _rcubarrier_quiz_2:

Quick Quiz #2:
	What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?

:ref:`Answer to Quick Quiz #2 <answer_rcubarrier_quiz_2>`

The current rcu_barrier() implementation is more complex, due to the need
to avoid disturbing idle CPUs (especially on battery-powered systems)
and the need to minimally disturb non-idle CPUs in real-time systems.
However, the code above illustrates the concepts.


rcu_barrier() Summary
---------------------

The rcu_barrier() primitive has seen relatively little use, since most
code using RCU is in the core kernel rather than in modules. However, if
you are using RCU from an unloadable module, you need to use rcu_barrier()
so that your module may be safely unloaded.


Answers to Quick Quizzes
------------------------

.. _answer_rcubarrier_quiz_1:

Quick Quiz #1:
	Is there any other situation where rcu_barrier() might
	be required?

Answer: Interestingly enough, rcu_barrier() was not originally
	implemented for module unloading. Nikita Danilov was using
	RCU in a filesystem, which resulted in a similar situation at
	filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
	in response, so that Nikita could invoke it during the
	filesystem-unmount process.

	Much later, yours truly hit the RCU module-unload problem when
	implementing rcutorture, and found that rcu_barrier() solves
	this problem as well.

:ref:`Back to Quick Quiz #1 <rcubarrier_quiz_1>`

.. _answer_rcubarrier_quiz_2:

Quick Quiz #2:
	What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?

Answer: This cannot happen. The reason is that on_each_cpu() has its last
	argument, the wait flag, set to "1". This flag is passed through
	to smp_call_function() and further to smp_call_function_on_cpu(),
	causing the latter to spin until the cross-CPU invocation of
	rcu_barrier_func() has completed. This by itself would prevent
	a grace period from completing on non-CONFIG_PREEMPTION kernels,
	since each CPU must undergo a context switch (or other quiescent
	state) before the grace period can complete. However, this is
	of no use in CONFIG_PREEMPTION kernels.

	Therefore, on_each_cpu() disables preemption across its call
	to smp_call_function() and also across the local call to
	rcu_barrier_func(). This prevents the local CPU from context
	switching, again preventing grace periods from completing. This
	means that all CPUs have executed rcu_barrier_func() before
	the first rcu_barrier_callback() can possibly execute, in turn
	preventing rcu_barrier_cpu_count from prematurely reaching zero.

	Currently, -rt implementations of RCU keep but a single global
	queue for RCU callbacks, and thus do not suffer from this
	problem. However, when the -rt RCU eventually does have per-CPU
	callback queues, things will have to change. One simple change
	is to add an rcu_read_lock() before line 8 of rcu_barrier()
	and an rcu_read_unlock() after line 8 of this same function. If
	you can think of a better change, please let me know!
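
	In concrete form, that suggested change would make lines 7-9 of
	rcu_barrier() read as in this sketch::

		atomic_set(&rcu_barrier_cpu_count, 0);
		rcu_read_lock();	/* keep grace periods from completing,
					 * so no barrier callback runs early */
		on_each_cpu(rcu_barrier_func, NULL, 0, 1);
		rcu_read_unlock();
		wait_for_completion(&rcu_barrier_completion);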

:ref:`Back to Quick Quiz #2 <rcubarrier_quiz_2>`