.. SPDX-License-Identifier: GPL-2.0

==========================
RCU Torture Test Operation
==========================

CONFIG_RCU_TORTURE_TEST
=======================

The CONFIG_RCU_TORTURE_TEST config option is available for all RCU
implementations. It creates an rcutorture kernel module that can
be loaded to run a torture test. The test periodically outputs
status messages via printk(), which can be examined via the dmesg
command (perhaps grepping for "torture"). The test is started
when the module is loaded, and stops when the module is unloaded.

Module parameters are prefixed by "rcutorture." in
Documentation/admin-guide/kernel-parameters.txt.

Output
======

The statistics output is as follows::

    rcu-torture:--- Start of test: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4
    rcu-torture: rtc: (null) ver: 155441 tfle: 0 rta: 155441 rtaf: 8884 rtf: 155440 rtmbe: 0 rtbe: 0 rtbke: 0 rtbre: 0 rtbf: 0 rtb: 0 nt: 3055767
    rcu-torture: Reader Pipe: 727860534 34213 0 0 0 0 0 0 0 0 0
    rcu-torture: Reader Batch: 727877838 17003 0 0 0 0 0 0 0 0 0
    rcu-torture: Free-Block Circulation: 155440 155440 155440 155440 155440 155440 155440 155440 155440 155440 0
    rcu-torture:--- End of test: SUCCESS: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4

The command "dmesg | grep torture:" will extract this information on
most systems. On more esoteric configurations, it may be necessary to
use other commands to access the output of the printk()s used by
the RCU torture test. The printk()s use KERN_ALERT, so they should
be evident. ;-)

The first and last lines show the rcutorture module parameters, and the
last line shows either "SUCCESS" or "FAILURE", based on rcutorture's
automatic determination as to whether RCU operated correctly.

The entries are as follows:
  33. * "rtc": The hexadecimal address of the structure currently visible
  34. to readers.
  35. * "ver": The number of times since boot that the RCU writer task
  36. has changed the structure visible to readers.
  37. * "tfle": If non-zero, indicates that the "torture freelist"
  38. containing structures to be placed into the "rtc" area is empty.
  39. This condition is important, since it can fool you into thinking
  40. that RCU is working when it is not. :-/
  41. * "rta": Number of structures allocated from the torture freelist.
  42. * "rtaf": Number of allocations from the torture freelist that have
  43. failed due to the list being empty. It is not unusual for this
  44. to be non-zero, but it is bad for it to be a large fraction of
  45. the value indicated by "rta".
  46. * "rtf": Number of frees into the torture freelist.
  47. * "rtmbe": A non-zero value indicates that rcutorture believes that
  48. rcu_assign_pointer() and rcu_dereference() are not working
  49. correctly. This value should be zero.
  50. * "rtbe": A non-zero value indicates that one of the rcu_barrier()
  51. family of functions is not working correctly.
  52. * "rtbke": rcutorture was unable to create the real-time kthreads
  53. used to force RCU priority inversion. This value should be zero.
  54. * "rtbre": Although rcutorture successfully created the kthreads
  55. used to force RCU priority inversion, it was unable to set them
  56. to the real-time priority level of 1. This value should be zero.
  57. * "rtbf": The number of times that RCU priority boosting failed
  58. to resolve RCU priority inversion.
  59. * "rtb": The number of times that rcutorture attempted to force
  60. an RCU priority inversion condition. If you are testing RCU
  61. priority boosting via the "test_boost" module parameter, this
  62. value should be non-zero.
  63. * "nt": The number of times rcutorture ran RCU read-side code from
  64. within a timer handler. This value should be non-zero only
  65. if you specified the "irqreader" module parameter.
  66. * "Reader Pipe": Histogram of "ages" of structures seen by readers.
  67. If any entries past the first two are non-zero, RCU is broken.
  68. And rcutorture prints the error flag string "!!!" to make sure
  69. you notice. The age of a newly allocated structure is zero,
  70. it becomes one when removed from reader visibility, and is
  71. incremented once per grace period subsequently -- and is freed
  72. after passing through (RCU_TORTURE_PIPE_LEN-2) grace periods.
  73. The output displayed above was taken from a correctly working
  74. RCU. If you want to see what it looks like when broken, break
  75. it yourself. ;-)
  76. * "Reader Batch": Another histogram of "ages" of structures seen
  77. by readers, but in terms of counter flips (or batches) rather
  78. than in terms of grace periods. The legal number of non-zero
  79. entries is again two. The reason for this separate view is that
  80. it is sometimes easier to get the third entry to show up in the
  81. "Reader Batch" list than in the "Reader Pipe" list.
  82. * "Free-Block Circulation": Shows the number of torture structures
  83. that have reached a given point in the pipeline. The first element
  84. should closely correspond to the number of structures allocated,
  85. the second to the number that have been removed from reader view,
  86. and all but the last remaining to the corresponding number of
  87. passes through a grace period. The last entry should be zero,
  88. as it is only incremented if a torture structure's counter
  89. somehow gets incremented farther than it should.

Different implementations of RCU can provide implementation-specific
additional information. For example, Tree SRCU provides the following
additional line::

    srcud-torture: Tree SRCU per-CPU(idx=0): 0(35,-21) 1(-4,24) 2(1,1) 3(-26,20) 4(28,-47) 5(-9,4) 6(-10,14) 7(-14,11) T(1,6)

This line shows the per-CPU counter state, in this case for Tree SRCU
using a dynamically allocated srcu_struct (hence "srcud-" rather than
"srcu-"). The numbers in parentheses are the values of the "old" and
"current" counters for the corresponding CPU. The "idx" value maps the
"old" and "current" values to the underlying array, and is useful for
debugging. The final "T" entry contains the totals of the counters.

Usage on Specific Kernel Builds
===============================

It is sometimes desirable to torture RCU on a specific kernel build,
for example, when preparing to put that kernel build into production.
In that case, the kernel should be built with CONFIG_RCU_TORTURE_TEST=m
so that the test can be started using modprobe and terminated using rmmod.

For example, the following script may be used to torture RCU::

    #!/bin/sh

    modprobe rcutorture
    sleep 3600
    rmmod rcutorture
    dmesg | grep torture:

The output can be manually inspected for the error flag of "!!!".
One could of course create a more elaborate script that automatically
checked for such errors. The "rmmod" command forces a "SUCCESS",
"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed. The first
two are self-explanatory, while the last indicates that while there
were no RCU failures, CPU-hotplug problems were detected.
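
One minimal sketch of such a checking script, building on the script
above (the messages and exit codes here are illustrative, not anything
rcutorture itself defines)::

    #!/bin/sh
    # Run rcutorture for an hour, then scan the kernel log for trouble.
    modprobe rcutorture
    sleep 3600
    rmmod rcutorture
    # The "!!!" error flag indicates that rcutorture detected a failure.
    if dmesg | grep 'torture:' | grep -q '!!!'
    then
        echo rcutorture detected errors
        exit 1
    fi
    # The end-of-test line carries the overall SUCCESS/FAILURE verdict.
    if dmesg | grep -q 'End of test: SUCCESS'
    then
        echo rcutorture completed successfully
    else
        echo rcutorture did not report SUCCESS
        exit 1
    fi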

Usage on Mainline Kernels
=========================

When using rcutorture to test changes to RCU itself, it is often
necessary to build a number of kernels in order to test that change
across a broad range of combinations of the relevant Kconfig options
and of the relevant kernel boot parameters. In this situation, use
of modprobe and rmmod can be quite time-consuming and error-prone.

Therefore, the tools/testing/selftests/rcutorture/bin/kvm.sh
script is available for mainline testing for x86, arm64, and
powerpc. By default, it will run the series of tests specified by
tools/testing/selftests/rcutorture/configs/rcu/CFLIST, with each test
running for 30 minutes within a guest OS using a minimal userspace
supplied by an automatically generated initrd. After the tests are
complete, the resulting build products and console output are analyzed
for errors and the results of the runs are summarized.
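
A plausible minimal invocation, assuming that the current directory is
the top of the kernel source tree and redirecting the output for later
inspection (a sketch, not a prescription)::

    tools/testing/selftests/rcutorture/bin/kvm.sh > kvm.out 2>&1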

On larger systems, rcutorture testing can be accelerated by passing the
--cpus argument to kvm.sh. For example, on a 64-CPU system, "--cpus 43"
would use up to 43 CPUs to run tests concurrently, which as of v5.4 would
complete all the scenarios in two batches, reducing the time to complete
from about eight hours to about one hour (not counting the time to build
the sixteen kernels). The "--dryrun sched" argument will not run tests,
but rather tell you how the tests would be scheduled into batches. This
can be useful when working out how many CPUs to specify in the --cpus
argument.
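
For example, to see how the default scenarios would be batched across
43 CPUs without actually running anything, something like the following
should serve::

    tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 43 --dryrun sched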

Not all changes require that all scenarios be run. For example, a change
to Tree SRCU might run only the SRCU-N and SRCU-P scenarios using the
--configs argument to kvm.sh as follows: "--configs 'SRCU-N SRCU-P'".
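
Spelled out as a full command line, that would be (an illustrative
sketch)::

    tools/testing/selftests/rcutorture/bin/kvm.sh --configs 'SRCU-N SRCU-P'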

Large systems can run multiple copies of the full set of scenarios,
for example, a system with 448 hardware threads can run five instances
of the full set concurrently. To make this happen::

    kvm.sh --cpus 448 --configs '5*CFLIST'

Alternatively, such a system can run 56 concurrent instances of a single
eight-CPU scenario::

    kvm.sh --cpus 448 --configs '56*TREE04'

Or 28 concurrent instances of each of two eight-CPU scenarios::

    kvm.sh --cpus 448 --configs '28*TREE03 28*TREE04'

Of course, each concurrent instance will use memory, which can be
limited using the --memory argument, which defaults to 512M. Small
values for memory may require disabling the callback-flooding tests
using the --bootargs parameter discussed below.

Sometimes additional debugging is useful, and in such cases the --kconfig
parameter to kvm.sh may be used, for example, ``--kconfig 'CONFIG_RCU_EQS_DEBUG=y'``.
In addition, there are the --gdb, --kasan, and --kcsan parameters.
Note that --gdb limits you to one scenario per kvm.sh run and requires
that you have another window open from which to run ``gdb`` as instructed
by the script.
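
For example, a KCSAN-enabled run of a single scenario with extra RCU
debugging might be requested with something like the following (the
choice of TREE03 is purely illustrative)::

    tools/testing/selftests/rcutorture/bin/kvm.sh --kcsan \
        --kconfig 'CONFIG_RCU_EQS_DEBUG=y' --configs TREE03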

Kernel boot arguments can also be supplied, for example, to control
rcutorture's module parameters. To test a change to RCU's
CPU stall-warning code, use "--bootargs 'rcutorture.stall_cpu=30'".
This will of course result in the scripting reporting a failure, namely
the resulting RCU CPU stall warning. As noted above, reducing memory may
require disabling rcutorture's callback-flooding tests::

    kvm.sh --cpus 448 --configs '56*TREE04' --memory 128M \
        --bootargs 'rcutorture.fwd_progress=0'

Sometimes all that is needed is a full set of kernel builds. This is
what the --buildonly parameter does.

The --duration parameter can override the default run time of 30 minutes.
For example, ``--duration 2d`` would run for two days, ``--duration 3h``
would run for three hours, ``--duration 5m`` would run for five minutes,
and ``--duration 45s`` would run for 45 seconds. This last can be useful
for tracking down rare boot-time failures.
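
For example, a quick 45-second smoke test of a single scenario might
look like this (the choice of TREE04 is again illustrative)::

    tools/testing/selftests/rcutorture/bin/kvm.sh --duration 45s --configs TREE04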

Finally, the --trust-make parameter allows each kernel build to reuse what
it can from the previous kernel build. Please note that without the
--trust-make parameter, your tags files may be demolished.

There are additional more arcane arguments that are documented in the
source code of the kvm.sh script.

If a run contains failures, the number of buildtime and runtime failures
is listed at the end of the kvm.sh output, which you really should redirect
to a file. The build products and console output of each run are kept in
tools/testing/selftests/rcutorture/res in timestamped directories. A
given directory can be supplied to kvm-find-errors.sh in order to have
it cycle you through summaries of errors and full error logs. For example::

    tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh \
        tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23

However, it is often more convenient to access the files directly.
Files pertaining to all scenarios in a run reside in the top-level
directory (2020.01.20-15.54.23 in the example above), while per-scenario
files reside in a subdirectory named after the scenario (for example,
"TREE04"). If a given scenario ran more than once (as in "--configs
'56*TREE04'" above), the directories corresponding to the second and
subsequent runs of that scenario include a sequence number, for example,
"TREE04.2", "TREE04.3", and so on.

The most frequently used file in the top-level directory is testid.txt.
If the test ran in a git repository, then this file contains the commit
that was tested and any uncommitted changes in diff format.

The most frequently used files in each per-scenario-run directory are:

.config:
    This file contains the Kconfig options.

Make.out:
    This contains build output for a specific scenario.

console.log:
    This contains the console output for a specific scenario.
    This file may be examined once the kernel has booted, but
    it might not exist if the build failed.

vmlinux:
    This contains the kernel, which can be useful with tools like
    objdump and gdb.
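
For example, one plausible way to disassemble a scenario's kernel and
to load its symbols into gdb (the directory name follows the example
above)::

    cd tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23/TREE04
    objdump -d vmlinux > vmlinux.objdump
    gdb vmlinux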

A number of additional files are available, but are less frequently used.
Many are intended for debugging of rcutorture itself or of its scripting.

As of v5.4, a successful run with the default set of scenarios produces
the following summary at the end of the run on a 12-CPU system::

    SRCU-N ------- 804233 GPs (148.932/s) [srcu: g10008272 f0x0 ]
    SRCU-P ------- 202320 GPs (37.4667/s) [srcud: g1809476 f0x0 ]
    SRCU-t ------- 1122086 GPs (207.794/s) [srcu: g0 f0x0 ]
    SRCU-u ------- 1111285 GPs (205.794/s) [srcud: g1 f0x0 ]
    TASKS01 ------- 19666 GPs (3.64185/s) [tasks: g0 f0x0 ]
    TASKS02 ------- 20541 GPs (3.80389/s) [tasks: g0 f0x0 ]
    TASKS03 ------- 19416 GPs (3.59556/s) [tasks: g0 f0x0 ]
    TINY01 ------- 836134 GPs (154.84/s) [rcu: g0 f0x0 ] n_max_cbs: 34198
    TINY02 ------- 850371 GPs (157.476/s) [rcu: g0 f0x0 ] n_max_cbs: 2631
    TREE01 ------- 162625 GPs (30.1157/s) [rcu: g1124169 f0x0 ]
    TREE02 ------- 333003 GPs (61.6672/s) [rcu: g2647753 f0x0 ] n_max_cbs: 35844
    TREE03 ------- 306623 GPs (56.782/s) [rcu: g2975325 f0x0 ] n_max_cbs: 1496497
    CPU count limited from 16 to 12
    TREE04 ------- 246149 GPs (45.5831/s) [rcu: g1695737 f0x0 ] n_max_cbs: 434961
    TREE05 ------- 314603 GPs (58.2598/s) [rcu: g2257741 f0x2 ] n_max_cbs: 193997
    TREE07 ------- 167347 GPs (30.9902/s) [rcu: g1079021 f0x0 ] n_max_cbs: 478732
    CPU count limited from 16 to 12
    TREE09 ------- 752238 GPs (139.303/s) [rcu: g13075057 f0x0 ] n_max_cbs: 99011

Repeated Runs
=============

Suppose that you are chasing down a rare boot-time failure. Although you
could use kvm.sh, doing so will rebuild the kernel on each run. If you
need (say) 1,000 runs to have confidence that you have fixed the bug,
these pointless rebuilds can become extremely annoying.

This is why kvm-again.sh exists.

Suppose that a previous kvm.sh run left its output in this directory::

    tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28

Then this run can be re-run without rebuilding as follows::

    kvm-again.sh tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28

A few of the original run's kvm.sh parameters may be overridden, perhaps
most notably --duration and --bootargs. For example::

    kvm-again.sh tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28 \
        --duration 45s

would re-run the previous test, but for only 45 seconds, thus facilitating
tracking down the aforementioned rare boot-time failure.

Distributed Runs
================

Although kvm.sh is quite useful, its testing is confined to a single
system. It is not all that hard to use your favorite framework to cause
(say) 5 instances of kvm.sh to run on your 5 systems, but this will very
likely unnecessarily rebuild kernels. In addition, manually distributing
the desired rcutorture scenarios across the available systems can be
painstaking and error-prone.

And this is why the kvm-remote.sh script exists.

If the following command works::

    ssh system0 date

and if it also works for system1, system2, system3, system4, and system5,
and all of these systems have 64 CPUs, you can type::

    kvm-remote.sh "system0 system1 system2 system3 system4 system5" \
        --cpus 64 --duration 8h --configs "5*CFLIST"

This will build each default scenario's kernel on the local system, then
spread each of five instances of each scenario over the systems listed,
running each scenario for eight hours. At the end of the runs, the
results will be gathered, recorded, and printed. Most of the parameters
that kvm.sh will accept can be passed to kvm-remote.sh, but the list of
systems must come first.

The kvm.sh ``--dryrun scenarios`` argument is useful for working out
how many scenarios may be run in one batch across a group of systems.
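
For example, to see how five instances of the default scenarios would
be scheduled without running them (an illustrative combination of the
flags already described above)::

    tools/testing/selftests/rcutorture/bin/kvm.sh --dryrun scenarios \
        --cpus 64 --configs '5*CFLIST'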

You can also re-run a previous remote run in a manner similar to kvm.sh::

    kvm-remote.sh "system0 system1 system2 system3 system4 system5" \
        tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28-remote \
        --duration 24h

In this case, most of the kvm-again.sh parameters may be supplied following
the pathname of the old run-results directory.