.. SPDX-License-Identifier: GPL-2.0

==========================
RCU Torture Test Operation
==========================


CONFIG_RCU_TORTURE_TEST
=======================

The CONFIG_RCU_TORTURE_TEST config option is available for all RCU
implementations. It creates an rcutorture kernel module that can
be loaded to run a torture test. The test periodically outputs
status messages via printk(), which can be examined via the dmesg
command (perhaps grepping for "torture"). The test is started
when the module is loaded, and stops when the module is unloaded.

Module parameters are prefixed by "rcutorture." in
Documentation/admin-guide/kernel-parameters.txt.

Output
======

The statistics output is as follows::

  rcu-torture:--- Start of test: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4
  rcu-torture: rtc: (null) ver: 155441 tfle: 0 rta: 155441 rtaf: 8884 rtf: 155440 rtmbe: 0 rtbe: 0 rtbke: 0 rtbre: 0 rtbf: 0 rtb: 0 nt: 3055767
  rcu-torture: Reader Pipe: 727860534 34213 0 0 0 0 0 0 0 0 0
  rcu-torture: Reader Batch: 727877838 17003 0 0 0 0 0 0 0 0 0
  rcu-torture: Free-Block Circulation: 155440 155440 155440 155440 155440 155440 155440 155440 155440 155440 0
  rcu-torture:--- End of test: SUCCESS: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4

The command "dmesg | grep torture:" will extract this information on
most systems. On more esoteric configurations, it may be necessary to
use other commands to access the output of the printk()s used by
the RCU torture test. The printk()s use KERN_ALERT, so they should
be evident. ;-)

The first and last lines show the rcutorture module parameters, and the
last line shows either "SUCCESS" or "FAILURE", based on rcutorture's
automatic determination as to whether RCU operated correctly.

The entries are as follows:

* "rtc": The hexadecimal address of the structure currently visible
  to readers.

* "ver": The number of times since boot that the RCU writer task
  has changed the structure visible to readers.

* "tfle": If non-zero, indicates that the "torture freelist"
  containing structures to be placed into the "rtc" area is empty.
  This condition is important, since it can fool you into thinking
  that RCU is working when it is not. :-/

* "rta": Number of structures allocated from the torture freelist.

* "rtaf": Number of allocations from the torture freelist that have
  failed due to the list being empty. It is not unusual for this
  to be non-zero, but it is bad for it to be a large fraction of
  the value indicated by "rta".

* "rtf": Number of frees into the torture freelist.

* "rtmbe": A non-zero value indicates that rcutorture believes that
  rcu_assign_pointer() and rcu_dereference() are not working
  correctly. This value should be zero.

* "rtbe": A non-zero value indicates that one of the rcu_barrier()
  family of functions is not working correctly.

* "rtbke": rcutorture was unable to create the real-time kthreads
  used to force RCU priority inversion. This value should be zero.

* "rtbre": Although rcutorture successfully created the kthreads
  used to force RCU priority inversion, it was unable to set them
  to the real-time priority level of 1. This value should be zero.

* "rtbf": The number of times that RCU priority boosting failed
  to resolve RCU priority inversion.

* "rtb": The number of times that rcutorture attempted to force
  an RCU priority inversion condition. If you are testing RCU
  priority boosting via the "test_boost" module parameter, this
  value should be non-zero.

* "nt": The number of times rcutorture ran RCU read-side code from
  within a timer handler. This value should be non-zero only
  if you specified the "irqreader" module parameter.

* "Reader Pipe": Histogram of "ages" of structures seen by readers.
  If any entries past the first two are non-zero, RCU is broken.
  And rcutorture prints the error flag string "!!!" to make sure
  you notice. The age of a newly allocated structure is zero,
  it becomes one when removed from reader visibility, and is
  incremented once per grace period subsequently -- and is freed
  after passing through (RCU_TORTURE_PIPE_LEN-2) grace periods.

  The output displayed above was taken from a correctly working
  RCU. If you want to see what it looks like when broken, break
  it yourself. ;-)

* "Reader Batch": Another histogram of "ages" of structures seen
  by readers, but in terms of counter flips (or batches) rather
  than in terms of grace periods. The legal number of non-zero
  entries is again two. The reason for this separate view is that
  it is sometimes easier to get the third entry to show up in the
  "Reader Batch" list than in the "Reader Pipe" list.

* "Free-Block Circulation": Shows the number of torture structures
  that have reached a given point in the pipeline. The first element
  should closely correspond to the number of structures allocated,
  the second to the number that have been removed from reader view,
  and all but the last remaining to the corresponding number of
  passes through a grace period. The last entry should be zero,
  as it is only incremented if a torture structure's counter
  somehow gets incremented farther than it should.

Different implementations of RCU can provide implementation-specific
additional information. For example, Tree SRCU provides the following
additional line::

  srcud-torture: Tree SRCU per-CPU(idx=0): 0(35,-21) 1(-4,24) 2(1,1) 3(-26,20) 4(28,-47) 5(-9,4) 6(-10,14) 7(-14,11) T(1,6)

This line shows the per-CPU counter state, in this case for Tree SRCU
using a dynamically allocated srcu_struct (hence "srcud-" rather than
"srcu-"). The numbers in parentheses are the values of the "old" and
"current" counters for the corresponding CPU. The "idx" value maps the
"old" and "current" values to the underlying array, and is useful for
debugging. The final "T" entry contains the totals of the counters.

Usage on Specific Kernel Builds
===============================

It is sometimes desirable to torture RCU on a specific kernel build,
for example, when preparing to put that kernel build into production.
In that case, the kernel should be built with CONFIG_RCU_TORTURE_TEST=m
so that the test can be started using modprobe and terminated using
rmmod.

For example, the following script may be used to torture RCU::

  #!/bin/sh

  modprobe rcutorture
  sleep 3600
  rmmod rcutorture
  dmesg | grep torture:

The output can be manually inspected for the error flag of "!!!".
One could of course create a more elaborate script that automatically
checked for such errors. The "rmmod" command forces a "SUCCESS",
"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed. The first
two are self-explanatory, while the last indicates that while there
were no RCU failures, CPU-hotplug problems were detected.
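
One minimal sketch of such a script, based on the example above and on
the "!!!" error flag and SUCCESS/FAILURE indications described in this
document (an illustration only, not part of the rcutorture suite),
might look like this::

  #!/bin/sh
  # Illustrative sketch: run rcutorture for an hour, then scan dmesg
  # for the "!!!" error flag and for the final status indication.
  modprobe rcutorture
  sleep 3600
  rmmod rcutorture
  if dmesg | grep torture: | grep -q '!!!'
  then
          echo 'rcutorture flagged errors ("!!!")'
          exit 1
  fi
  if dmesg | grep torture: | grep -q 'SUCCESS'
  then
          echo 'rcutorture reported SUCCESS'
  else
          echo 'rcutorture did not report SUCCESS'
          exit 1
  fi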


Usage on Mainline Kernels
=========================

When using rcutorture to test changes to RCU itself, it is often
necessary to build a number of kernels in order to test that change
across a broad range of combinations of the relevant Kconfig options
and of the relevant kernel boot parameters. In this situation, use
of modprobe and rmmod can be quite time-consuming and error-prone.

Therefore, the tools/testing/selftests/rcutorture/bin/kvm.sh
script is available for mainline testing for x86, arm64, and
powerpc. By default, it will run the series of tests specified by
tools/testing/selftests/rcutorture/configs/rcu/CFLIST, with each test
running for 30 minutes within a guest OS using a minimal userspace
supplied by an automatically generated initrd. After the tests are
complete, the resulting build products and console output are analyzed
for errors and the results of the runs are summarized.

On larger systems, rcutorture testing can be accelerated by passing the
--cpus argument to kvm.sh. For example, on a 64-CPU system, "--cpus 43"
would use up to 43 CPUs to run tests concurrently, which as of v5.4 would
complete all the scenarios in two batches, reducing the time to complete
from about eight hours to about one hour (not counting the time to build
the sixteen kernels). The "--dryrun sched" argument will not run tests,
but rather tell you how the tests would be scheduled into batches. This
can be useful when working out how many CPUs to specify in the --cpus
argument.

Not all changes require that all scenarios be run. For example, a change
to Tree SRCU might run only the SRCU-N and SRCU-P scenarios using the
--configs argument to kvm.sh as follows: "--configs 'SRCU-N SRCU-P'".
Large systems can run multiple copies of the full set of scenarios,
for example, a system with 448 hardware threads can run five instances
of the full set concurrently. To make this happen::

  kvm.sh --cpus 448 --configs '5*CFLIST'

Alternatively, such a system can run 56 concurrent instances of a single
eight-CPU scenario::

  kvm.sh --cpus 448 --configs '56*TREE04'

Or 28 concurrent instances of each of two eight-CPU scenarios::

  kvm.sh --cpus 448 --configs '28*TREE03 28*TREE04'

Of course, each concurrent instance will use memory, which can be
limited using the --memory argument, which defaults to 512M. Small
values for memory may require disabling the callback-flooding tests
using the --bootargs parameter discussed below.

Sometimes additional debugging is useful, and in such cases the --kconfig
parameter to kvm.sh may be used, for example, ``--kconfig 'CONFIG_KASAN=y'``.
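
As an illustration combining arguments already described above (the
choice of scenario and CPU count here is arbitrary), a KASAN-enabled
run of the eight-CPU TREE04 scenario might look like this::

  kvm.sh --cpus 8 --configs 'TREE04' --kconfig 'CONFIG_KASAN=y'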

Kernel boot arguments can also be supplied, for example, to control
rcutorture's module parameters. To test a change to RCU's CPU
stall-warning code, for instance, use "--bootargs 'rcutorture.stall_cpu=30'".
This will of course result in the scripting reporting a failure, namely
the resulting RCU CPU stall warning. As noted above, reducing memory may
require disabling rcutorture's callback-flooding tests::

  kvm.sh --cpus 448 --configs '56*TREE04' --memory 128M \
          --bootargs 'rcutorture.fwd_progress=0'

Sometimes all that is needed is a full set of kernel builds. This is
what the --buildonly argument does.

Finally, the --trust-make argument allows each kernel build to reuse what
it can from the previous kernel build.

Additional, more arcane arguments are documented in the source code of
the kvm.sh script.

If a run contains failures, the number of buildtime and runtime failures
is listed at the end of the kvm.sh output, which you really should redirect
to a file. The build products and console output of each run are kept in
tools/testing/selftests/rcutorture/res in timestamped directories. A
given directory can be supplied to kvm-find-errors.sh in order to have
it cycle you through summaries of errors and full error logs. For example::

  tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh \
          tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23

However, it is often more convenient to access the files directly.
Files pertaining to all scenarios in a run reside in the top-level
directory (2020.01.20-15.54.23 in the example above), while per-scenario
files reside in a subdirectory named after the scenario (for example,
"TREE04"). If a given scenario ran more than once (as in "--configs
'56*TREE04'" above), the directories corresponding to the second and
subsequent runs of that scenario include a sequence number, for example,
"TREE04.2", "TREE04.3", and so on.

The most frequently used file in the top-level directory is testid.txt.
If the test ran in a git repository, then this file contains the commit
that was tested and any uncommitted changes in diff format.

The most frequently used files in each per-scenario-run directory are:

.config:
  This file contains the Kconfig options.

Make.out:
  This contains build output for a specific scenario.

console.log:
  This contains the console output for a specific scenario.
  This file may be examined once the kernel has booted, but
  it might not exist if the build failed.

vmlinux:
  This contains the kernel, which can be useful with tools like
  objdump and gdb.

A number of additional files are available, but are less frequently used.
Many are intended for debugging of rcutorture itself or of its scripting.
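
For example, a quick manual scan of each scenario's console output for
rcutorture's "!!!" error flag might look like the following (an informal
sketch only; kvm-find-errors.sh performs a more thorough analysis, and
the directory name is the example timestamped run from above)::

  #!/bin/sh
  # Illustrative sketch: flag any scenario whose console output
  # contains the rcutorture "!!!" error flag.
  resdir=tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23
  for c in "$resdir"/*/console.log
  do
          grep -q '!!!' "$c" && echo "Possible failure: $c"
  done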

As of v5.4, a successful run with the default set of scenarios produces
the following summary at the end of the run on a 12-CPU system::

  SRCU-N ------- 804233 GPs (148.932/s) [srcu: g10008272 f0x0 ]
  SRCU-P ------- 202320 GPs (37.4667/s) [srcud: g1809476 f0x0 ]
  SRCU-t ------- 1122086 GPs (207.794/s) [srcu: g0 f0x0 ]
  SRCU-u ------- 1111285 GPs (205.794/s) [srcud: g1 f0x0 ]
  TASKS01 ------- 19666 GPs (3.64185/s) [tasks: g0 f0x0 ]
  TASKS02 ------- 20541 GPs (3.80389/s) [tasks: g0 f0x0 ]
  TASKS03 ------- 19416 GPs (3.59556/s) [tasks: g0 f0x0 ]
  TINY01 ------- 836134 GPs (154.84/s) [rcu: g0 f0x0 ] n_max_cbs: 34198
  TINY02 ------- 850371 GPs (157.476/s) [rcu: g0 f0x0 ] n_max_cbs: 2631
  TREE01 ------- 162625 GPs (30.1157/s) [rcu: g1124169 f0x0 ]
  TREE02 ------- 333003 GPs (61.6672/s) [rcu: g2647753 f0x0 ] n_max_cbs: 35844
  TREE03 ------- 306623 GPs (56.782/s) [rcu: g2975325 f0x0 ] n_max_cbs: 1496497
    CPU count limited from 16 to 12
  TREE04 ------- 246149 GPs (45.5831/s) [rcu: g1695737 f0x0 ] n_max_cbs: 434961
  TREE05 ------- 314603 GPs (58.2598/s) [rcu: g2257741 f0x2 ] n_max_cbs: 193997
  TREE07 ------- 167347 GPs (30.9902/s) [rcu: g1079021 f0x0 ] n_max_cbs: 478732
    CPU count limited from 16 to 12
  TREE09 ------- 752238 GPs (139.303/s) [rcu: g13075057 f0x0 ] n_max_cbs: 99011
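
The "CPU count limited from 16 to 12" lines indicate scenarios that are
configured for more CPUs (here 16) than this particular system provides
(here 12). If one scenario in such a summary warrants a closer look, it
can be re-run by itself using the --configs argument described above,
for example (an illustrative invocation; the scenario chosen here is
arbitrary)::

  tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 12 --configs 'TREE03'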