.. SPDX-License-Identifier: GPL-2.0

==============================
Using RCU's CPU Stall Detector
==============================

This document first discusses what sorts of issues RCU's CPU stall
detector can locate, and then discusses kernel parameters and Kconfig
options that can be used to fine-tune the detector's operation. Finally,
this document explains the stall detector's "splat" format.

What Causes RCU CPU Stall Warnings?
===================================

So your kernel printed an RCU CPU stall warning. The next question is
"What caused it?" The following problems can result in RCU CPU stall
warnings:

- A CPU looping in an RCU read-side critical section.

- A CPU looping with interrupts disabled.

- A CPU looping with preemption disabled.

- A CPU looping with bottom halves disabled.

- For !CONFIG_PREEMPTION kernels, a CPU looping anywhere in the
  kernel without potentially invoking schedule(). If the looping
  in the kernel is really expected and desirable behavior, you
  might need to add some calls to cond_resched(), as in the sketch
  following this list.

- Booting Linux using a console connection that is too slow to
  keep up with the boot-time console-message rate. For example,
  a 115Kbaud serial console can be *way* too slow to keep up
  with boot-time message rates, and will frequently result in
  RCU CPU stall warning messages, especially if you have added
  debug printk()s.

- Anything that prevents RCU's grace-period kthreads from running.
  This can result in the "All QSes seen" console-log message.
  This message will include information on when the kthread last
  ran and how often it should be expected to run. It can also
  result in the ``rcu_.*kthread starved for`` console-log message,
  which will include additional debugging information.

- A CPU-bound real-time task in a CONFIG_PREEMPTION kernel, which might
  happen to preempt a low-priority task in the middle of an RCU
  read-side critical section. This is especially damaging if
  that low-priority task is not permitted to run on any other CPU,
  in which case the next RCU grace period can never complete, which
  will eventually cause the system to run out of memory and hang.
  While the system is in the process of running itself out of
  memory, you might see stall-warning messages.

- A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
  is running at a higher priority than the RCU softirq threads.
  This will prevent RCU callbacks from ever being invoked,
  and in a CONFIG_PREEMPT_RCU kernel will further prevent
  RCU grace periods from ever completing. Either way, the
  system will eventually run out of memory and hang. In the
  CONFIG_PREEMPT_RCU case, you might see stall-warning
  messages.

  You can use the rcutree.kthread_prio kernel boot parameter to
  increase the scheduling priority of RCU's kthreads, which can
  help avoid this problem. However, please note that doing this
  can increase your system's context-switch rate and thus degrade
  performance.

- A periodic interrupt whose handler takes longer than the time
  interval between successive pairs of interrupts. This can
  prevent RCU's kthreads and softirq handlers from running.
  Note that certain high-overhead debugging options, for example
  the function_graph tracer, can result in the interrupt handler
  taking considerably longer than normal, which can in turn
  result in RCU CPU stall warnings.

- Testing a workload on a fast system, tuning the stall-warning
  timeout down to just barely avoid RCU CPU stall warnings, and then
  running the same workload with the same stall-warning timeout on a
  slow system. Note that thermal throttling and on-demand governors
  can cause a single system to be sometimes fast and sometimes slow!

- A hardware or software issue shuts off the scheduler-clock
  interrupt on a CPU that is not in dyntick-idle mode. This
  problem really has happened, and seems to be most likely to
  result in RCU CPU stall warnings for CONFIG_NO_HZ_COMMON=n kernels.

- A hardware or software issue that prevents time-based wakeups
  from occurring. These issues can range from misconfigured or
  buggy timer hardware through bugs in the interrupt or exception
  path (whether hardware, firmware, or software) through bugs
  in Linux's timer subsystem through bugs in the scheduler, and,
  yes, even including bugs in RCU itself. These issues can also
  result in the ``rcu_.*timer wakeup didn't happen for`` console-log
  message, which will include additional debugging information.

- A low-level kernel issue that either fails to invoke one of the
  variants of rcu_eqs_enter(true), rcu_eqs_exit(true), ct_idle_enter(),
  ct_idle_exit(), ct_irq_enter(), or ct_irq_exit() on the one
  hand, or that invokes one of them too many times on the other.
  Historically, the most frequent issue has been an omission
  of either irq_enter() or irq_exit(), which in turn invoke
  ct_irq_enter() or ct_irq_exit(), respectively. Building your
  kernel with CONFIG_RCU_EQS_DEBUG=y can help track down these types
  of issues, which sometimes arise in architecture-specific code.

- A bug in the RCU implementation.

- A hardware failure. This is quite unlikely, but is not at all
  uncommon in large datacenters. In one memorable case some decades
  back, a CPU failed in a running system, becoming unresponsive,
  but not causing an immediate crash. This resulted in a series
  of RCU CPU stall warnings, eventually leading to the realization
  that the CPU had failed.
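
For the !CONFIG_PREEMPTION case called out above, the following minimal
sketch shows the kind of change that adding cond_resched() calls
typically involves. The struct item, its list, and process_item() are
hypothetical stand-ins for illustration only, not anything from the RCU
code::

    #include <linux/list.h>
    #include <linux/sched.h>

    struct item {                           /* hypothetical work item */
            struct list_head list;
    };

    static void process_item(struct item *ip)
    {
            /* ... potentially expensive per-item work ... */
    }

    /*
     * Buggy on !CONFIG_PREEMPTION kernels: given a long enough list,
     * this loop never invokes schedule(), so the CPU never passes
     * through a quiescent state and RCU eventually emits a stall
     * warning.
     */
    static void process_all_items_buggy(struct list_head *head)
    {
            struct item *ip;

            list_for_each_entry(ip, head, list)
                    process_item(ip);
    }

    /*
     * Better: cond_resched() on each pass gives the scheduler (and
     * thus RCU) a chance to make progress, at negligible cost when
     * no reschedule is needed.
     */
    static void process_all_items(struct list_head *head)
    {
            struct item *ip;

            list_for_each_entry(ip, head, list) {
                    process_item(ip);
                    cond_resched();
            }
    }

Of course, this transformation does not apply if the loop is inside an
RCU read-side critical section or runs with interrupts disabled, in
which case the critical section must instead be broken up or shortened.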

The RCU, RCU-sched, RCU-tasks, and RCU-tasks-trace implementations have
CPU stall warnings. Note that SRCU does *not* have CPU stall warnings.
Please note that RCU only detects CPU stalls when there is a grace period
in progress. No grace period, no CPU stall warnings.

To diagnose the cause of the stall, inspect the stack traces.
The offending function will usually be near the top of the stack.
If you have a series of stall warnings from a single extended stall,
comparing the stack traces can often help determine where the stall
is occurring, which will usually be in the function nearest the top of
that portion of the stack which remains the same from trace to trace.
If you can reliably trigger the stall, ftrace can be quite helpful.

RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE
and with RCU's event tracing. For information on RCU's event tracing,
see include/trace/events/rcu.h.
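
For example, one way to capture RCU's event tracing while reproducing a
stall (a sketch, assuming tracefs is mounted at /sys/kernel/tracing;
adjust paths and event filtering to taste) is::

    # cd /sys/kernel/tracing
    # echo 1 > events/rcu/enable
    # cat trace_pipe > /tmp/rcu-trace.txt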

Fine-Tuning the RCU CPU Stall Detector
======================================

The rcupdate.rcu_cpu_stall_suppress module parameter disables RCU's
CPU stall detector, which detects conditions that unduly delay RCU grace
periods. This module parameter enables CPU stall detection by default,
but may be overridden via boot-time parameter or at runtime via sysfs.
The stall detector's idea of what constitutes "unduly delayed" is
controlled by a set of kernel configuration variables and cpp macros:

CONFIG_RCU_CPU_STALL_TIMEOUT
----------------------------

This kernel configuration parameter defines the period of time
that RCU will wait from the beginning of a grace period until it
issues an RCU CPU stall warning. This time period is normally
21 seconds.

This configuration parameter may be changed at runtime via the
/sys/module/rcupdate/parameters/rcu_cpu_stall_timeout file; however,
this parameter is checked only at the beginning of a cycle. So if you
are 10 seconds into a 40-second stall, setting this sysfs parameter to
(say) five will shorten the timeout for the *next* stall, or for a
subsequent warning for the current stall (assuming the stall lasts long
enough). It will not affect the timing of the very next warning for
the current stall.

Stall-warning messages may be enabled and disabled completely via
/sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.
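
For example, to shorten the timeout to five seconds at runtime, or to
suppress stall warnings entirely (illustrative values; the sysfs files
are those named above)::

    # echo 5 > /sys/module/rcupdate/parameters/rcu_cpu_stall_timeout
    # echo 1 > /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress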

CONFIG_RCU_EXP_CPU_STALL_TIMEOUT
--------------------------------

Same as the CONFIG_RCU_CPU_STALL_TIMEOUT parameter, but only for
expedited grace periods. This parameter defines the period
of time that RCU will wait from the beginning of an expedited
grace period until it issues an RCU CPU stall warning. This time
period is normally 20 milliseconds on Android devices. A zero
value causes the CONFIG_RCU_CPU_STALL_TIMEOUT value to be used,
after conversion to milliseconds.

This configuration parameter may be changed at runtime via the
/sys/module/rcupdate/parameters/rcu_exp_cpu_stall_timeout file;
however, this parameter is checked only at the beginning of a cycle.
If you are in a current stall cycle, setting it to a new value will
change the timeout for the *next* stall.

Stall-warning messages may be enabled and disabled completely via
/sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.
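
Again for example, and keeping in mind that this parameter is in
milliseconds rather than seconds (the value is illustrative)::

    # echo 100 > /sys/module/rcupdate/parameters/rcu_exp_cpu_stall_timeout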

RCU_STALL_DELAY_DELTA
---------------------

Although the lockdep facility is extremely useful, it does add
some overhead. Therefore, under CONFIG_PROVE_RCU, the
RCU_STALL_DELAY_DELTA macro allows five extra seconds before
giving an RCU CPU stall warning message. (This is a cpp
macro, not a kernel configuration parameter.)

RCU_STALL_RAT_DELAY
-------------------

The CPU stall detector tries to make the offending CPU print its
own warnings, as this often gives better-quality stack traces.
However, if the offending CPU does not detect its own stall in
the number of jiffies specified by RCU_STALL_RAT_DELAY, then
some other CPU will complain. This delay is normally set to
two jiffies. (This is a cpp macro, not a kernel configuration
parameter.)

rcupdate.rcu_task_stall_timeout
-------------------------------

This boot/sysfs parameter controls the RCU-tasks and
RCU-tasks-trace stall warning intervals. A value of zero or less
suppresses RCU-tasks stall warnings. A positive value sets the
stall-warning interval in seconds. An RCU-tasks stall warning
starts with the line::

    INFO: rcu_tasks detected stalls on tasks:

and continues with the output of sched_show_task() for each
task stalling the current RCU-tasks grace period.

An RCU-tasks-trace stall warning starts (and continues) similarly::

    INFO: rcu_tasks_trace detected stalls on tasks
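
For example, to set a ten-minute RCU-tasks stall-warning interval at
boot time (an illustrative value, given in seconds as described above),
add the following to the kernel command line::

    rcupdate.rcu_task_stall_timeout=600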

Interpreting RCU's CPU Stall-Detector "Splats"
==============================================

For non-RCU-tasks flavors of RCU, when a CPU detects that some other
CPU is stalling, it will print a message similar to the following::

    INFO: rcu_sched detected stalls on CPUs/tasks:
    2-...: (3 GPs behind) idle=06c/0/0 softirq=1453/1455 fqs=0
    16-...: (0 ticks this GP) idle=81c/0/0 softirq=764/764 fqs=0
    (detected by 32, t=2603 jiffies, g=7075, q=625)

This message indicates that CPU 32 detected that CPUs 2 and 16 were both
causing stalls, and that the stall was affecting RCU-sched. This message
will normally be followed by stack dumps for each CPU. Please note that
PREEMPT_RCU builds can be stalled by tasks as well as by CPUs, and that
the tasks will be indicated by PID, for example, "P3421". It is even
possible for an rcu_state stall to be caused by both CPUs *and* tasks,
in which case the offending CPUs and tasks will all be called out in the
list. In some cases, CPUs will detect themselves stalling, which will
result in a self-detected stall.

CPU 2's "(3 GPs behind)" indicates that this CPU has not interacted with
the RCU core for the past three grace periods. In contrast, CPU 16's "(0
ticks this GP)" indicates that this CPU has not taken any scheduling-clock
interrupts during the current stalled grace period.
  198. The "idle=" portion of the message prints the dyntick-idle state.
  199. The hex number before the first "/" is the low-order 16 bits of the
  200. dynticks counter, which will have an even-numbered value if the CPU
  201. is in dyntick-idle mode and an odd-numbered value otherwise. The hex
  202. number between the two "/"s is the value of the nesting, which will be
  203. a small non-negative number if in the idle loop (as shown above) and a
  204. very large positive number otherwise. The number following the final
  205. "/" is the NMI nesting, which will be a small non-negative number.
  206. The "softirq=" portion of the message tracks the number of RCU softirq
  207. handlers that the stalled CPU has executed. The number before the "/"
  208. is the number that had executed since boot at the time that this CPU
  209. last noted the beginning of a grace period, which might be the current
  210. (stalled) grace period, or it might be some earlier grace period (for
  211. example, if the CPU might have been in dyntick-idle mode for an extended
  212. time period). The number after the "/" is the number that have executed
  213. since boot until the current time. If this latter number stays constant
  214. across repeated stall-warning messages, it is possible that RCU's softirq
  215. handlers are no longer able to execute on this CPU. This can happen if
  216. the stalled CPU is spinning with interrupts are disabled, or, in -rt
  217. kernels, if a high-priority process is starving RCU's softirq handler.
  218. The "fqs=" shows the number of force-quiescent-state idle/offline
  219. detection passes that the grace-period kthread has made across this
  220. CPU since the last time that this CPU noted the beginning of a grace
  221. period.
  222. The "detected by" line indicates which CPU detected the stall (in this
  223. case, CPU 32), how many jiffies have elapsed since the start of the grace
  224. period (in this case 2603), the grace-period sequence number (7075), and
  225. an estimate of the total number of RCU callbacks queued across all CPUs
  226. (625 in this case).

If the grace period ends just as the stall warning starts printing,
there will be a spurious stall-warning message, which will include
the following::

    INFO: Stall ended before state dump start

This is rare, but does happen from time to time in real life. It is also
possible for a zero-jiffy stall to be flagged in this case, depending
on how the stall warning and the grace-period initialization happen to
interact. Please note that it is not possible to entirely eliminate this
sort of false positive without resorting to things like stop_machine(),
which is overkill for this sort of problem.

If all CPUs and tasks have passed through quiescent states, but the
grace period has nevertheless failed to end, the stall-warning splat
will include something like the following::

    All QSes seen, last rcu_preempt kthread activity 23807 (4297905177-4297881370), jiffies_till_next_fqs=3, root ->qsmask 0x0

The "23807" indicates that it has been more than 23 thousand jiffies
since the grace-period kthread ran, and the parenthesized pair gives
the current jiffies value and the time of that kthread's last recorded
activity, whose difference is the 23807. The "jiffies_till_next_fqs"
indicates how frequently that kthread should run, giving the number
of jiffies between force-quiescent-state scans, in this case three,
which is way less than 23807. Finally, the root rcu_node structure's
->qsmask field is printed, which will normally be zero.

If the relevant grace-period kthread has been unable to run prior to
the stall warning, as was the case in the "All QSes seen" line above,
the following additional line is printed::

    rcu_sched kthread starved for 23807 jiffies! g7075 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1 ->cpu=5
    Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.

Starving the grace-period kthreads of CPU time can of course result
in RCU CPU stall warnings even when all CPUs and tasks have passed
through the required quiescent states. The "g" number shows the current
grace-period sequence number, the "f" precedes the ->gp_flags command
to the grace-period kthread, the "RCU_GP_WAIT_FQS" indicates that the
kthread is waiting for a short timeout, the "state" precedes the value
of the task_struct ->state field, and the "cpu" indicates that the
grace-period kthread last ran on CPU 5.

If the relevant grace-period kthread does not wake from FQS wait in a
reasonable time, then the following additional line is printed::

    kthread timer wakeup didn't happen for 23804 jiffies! g7076 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402

The "23804" indicates that kthread's timer expired more than 23 thousand
jiffies ago. The rest of the line has meaning similar to the kthread
starvation case.

Additionally, the following line is printed::

    Possible timer handling issue on cpu=4 timer-softirq=11142

Here "cpu" indicates that the grace-period kthread last ran on CPU 4,
where it queued the fqs timer. The number following the "timer-softirq"
is the current ``TIMER_SOFTIRQ`` count on cpu 4. If this value does not
change on successive RCU CPU stall warnings, there is further reason to
suspect a timer problem.

These messages are usually followed by stack dumps of the CPUs and tasks
involved in the stall. These stack traces can help you locate the cause
of the stall, keeping in mind that the CPU detecting the stall will have
an interrupt frame that is mainly devoted to detecting the stall.

Multiple Warnings From One Stall
================================

If a stall lasts long enough, multiple stall-warning messages will
be printed for it. The second and subsequent messages are printed at
longer intervals, so that the time between (say) the first and second
message will be about three times the interval between the beginning
of the stall and the first message. For example, with the default
21-second timeout, the first warning appears about 21 seconds into the
stall and the second about 63 seconds after the first. It can be
helpful to compare the stack dumps for the different messages for the
same stalled grace period.

Stall Warnings for Expedited Grace Periods
==========================================

If an expedited grace period detects a stall, it will place a message
like the following in dmesg::

    INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 7-... } 21119 jiffies s: 73 root: 0x2/.

This indicates that CPU 7 has failed to respond to a reschedule IPI.
The three periods (".") following the CPU number indicate that the CPU
is online (otherwise the first period would instead have been "O"),
that the CPU was online at the beginning of the expedited grace period
(otherwise the second period would have instead been "o"), and that
the CPU has been online at least once since boot (otherwise, the third
period would instead have been "N"). The number before the "jiffies"
indicates that the expedited grace period has been going on for 21,119
jiffies. The number following the "s:" indicates that the expedited
grace-period sequence counter is 73. The fact that this last value is
odd indicates that an expedited grace period is in flight. The number
following "root:" is a bitmask that indicates which children of the root
rcu_node structure correspond to CPUs and/or tasks that are blocking the
current expedited grace period. If the tree had more than one level,
additional hex numbers would be printed for the states of the other
rcu_node structures in the tree.

As with normal grace periods, PREEMPT_RCU builds can be stalled by
tasks as well as by CPUs, with the tasks indicated by PID,
for example, "P3421".

It is entirely possible to see stall warnings from normal and from
expedited grace periods at about the same time during the same run.

RCU_CPU_STALL_CPUTIME
=====================

In kernels built with CONFIG_RCU_CPU_STALL_CPUTIME=y or booted with
rcupdate.rcu_cpu_stall_cputime=1, the following additional information
is supplied with each RCU CPU stall warning::

    rcu:          hardirqs   softirqs   csw/system
    rcu:  number:      624         45            0
    rcu: cputime:       69          1         2425   ==> 2500(ms)

These statistics are collected during the sampling period. The values
in row "number:" are the number of hard interrupts, number of soft
interrupts, and number of context switches on the stalled CPU. The
first three values in row "cputime:" indicate the CPU time in
milliseconds consumed by hard interrupts, soft interrupts, and tasks
on the stalled CPU. The last number is the measurement interval, again
in milliseconds. Because user-mode tasks normally do not cause RCU CPU
stalls, these tasks are typically kernel tasks, which is why only the
system CPU time is considered.

The sampling period is shown as follows::

    |<------------first timeout---------->|<-----second timeout----->|
    |<--half timeout-->|<--half timeout-->|                          |
    |                  |<--first period-->|                          |
    |                  |<-----------second sampling period---------->|
    |                  |                  |                          |
              snapshot time point     1st-stall                  2nd-stall

The following describes four typical scenarios:

1. A CPU looping with interrupts disabled.

   ::

     rcu:          hardirqs   softirqs   csw/system
     rcu:  number:        0          0            0
     rcu: cputime:        0          0            0   ==> 2500(ms)

   Because interrupts have been disabled throughout the measurement
   interval, there are no interrupts and no context switches.
   Furthermore, because CPU time consumption was measured using interrupt
   handlers, the system CPU consumption is misleadingly measured as zero.
   This scenario will normally also have "(0 ticks this GP)" printed on
   this CPU's summary line.

2. A CPU looping with bottom halves disabled.

   This is similar to the previous example, but with a non-zero count
   of, and CPU time consumed by, hard interrupts, along with non-zero
   CPU time consumed by in-kernel execution::

     rcu:          hardirqs   softirqs   csw/system
     rcu:  number:      624          0            0
     rcu: cputime:       49          0         2446   ==> 2500(ms)

   The fact that there are zero softirqs gives a hint that these were
   disabled, perhaps via local_bh_disable(). It is of course possible
   that there were no softirqs, perhaps because all events that would
   result in softirq execution are confined to other CPUs. In this case,
   the diagnosis should continue as shown in the next example.

3. A CPU looping with preemption disabled.

   Here, only the number of context switches is zero::

     rcu:          hardirqs   softirqs   csw/system
     rcu:  number:      624         45            0
     rcu: cputime:       69          1         2425   ==> 2500(ms)

   This situation hints that the stalled CPU was looping with preemption
   disabled.

4. No looping, but massive hard and soft interrupts.

   ::

     rcu:          hardirqs   softirqs   csw/system
     rcu:  number:       xx         xx            0
     rcu: cputime:       xx         xx            0   ==> 2500(ms)

   Here, the number and CPU time of hard interrupts are all non-zero,
   but the number of context switches and the in-kernel CPU time consumed
   are zero. The number and cputime of soft interrupts will usually be
   non-zero, but could be zero, for example, if the CPU was spinning
   within a single hard interrupt handler.

If this type of RCU CPU stall warning can be reproduced, you can
narrow it down by looking at /proc/interrupts or by writing code to
trace each interrupt, for example, by referring to show_interrupts().
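
For example, one simple way to see which interrupt sources are firing
(a sketch; adjust the interval, and restrict attention to the stalled
CPU's columns as needed) is to compare successive snapshots of
/proc/interrupts::

    # cat /proc/interrupts > /tmp/interrupts.before
    # sleep 5
    # cat /proc/interrupts > /tmp/interrupts.after
    # diff /tmp/interrupts.before /tmp/interrupts.after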