==========================================
Reducing OS jitter due to per-cpu kthreads
==========================================

This document lists per-CPU kthreads in the Linux kernel and presents
options to control their OS jitter.  Note that non-per-CPU kthreads are
not listed here.  To reduce OS jitter from non-per-CPU kthreads, bind
them to a "housekeeping" CPU dedicated to such work.

References
==========

- Documentation/IRQ-affinity.txt:  Binding interrupts to sets of CPUs.
- Documentation/cgroup-v1:  Using cgroups to bind tasks to sets of CPUs.
- man taskset:  Using the taskset command to bind tasks to sets of CPUs.
- man sched_setaffinity:  Using the sched_setaffinity() system call to
  bind tasks to sets of CPUs.
- /sys/devices/system/cpu/cpuN/online:  Control CPU N's hotplug state,
  writing "0" to offline and "1" to online.
- In order to locate kernel-generated OS jitter on CPU N:

        cd /sys/kernel/debug/tracing
        echo 1 > max_graph_depth  # Increase the "1" for more detail
        echo function_graph > current_tracer
        # run workload
        cat per_cpu/cpuN/trace

kthreads
========

Name:
  ehca_comp/%u

Purpose:
  Periodically process Infiniband-related work.

To reduce its OS jitter, do any of the following:

1. Don't use eHCA Infiniband hardware, instead choosing hardware
   that does not require per-CPU kthreads.  This will prevent these
   kthreads from being created in the first place.  (This will
   work for most people, as this hardware, though important, is
   relatively old and is produced in relatively low unit volumes.)
2. Do all eHCA-Infiniband-related work on other CPUs, including
   interrupts.
3. Rework the eHCA driver so that its per-CPU kthreads are
   provisioned only on selected CPUs.

Name:
  irq/%d-%s

Purpose:
  Handle threaded interrupts.

To reduce its OS jitter, do the following:

1. Use irq affinity to force the irq threads to execute on
   some other CPU (see the example below).
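
For example, the following sketch moves one device's interrupts, and
thus its irq/%d-%s kthread, to CPUs 0 and 1.  The device name "eth0"
and IRQ number 44 are hypothetical; check /proc/interrupts for the
real values on your system:

        # Find the IRQ number of the device in question.
        grep eth0 /proc/interrupts
        # Allow IRQ 44 (hypothetical) to run only on CPUs 0 and 1.
        echo 0-1 > /proc/irq/44/smp_affinity_list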

Name:
  kcmtpd_ctr_%d

Purpose:
  Handle Bluetooth work.

To reduce its OS jitter, do one of the following:

1. Don't use Bluetooth, in which case these kthreads won't be
   created in the first place.
2. Use irq affinity to force Bluetooth-related interrupts to
   occur on some other CPU and furthermore initiate all
   Bluetooth activity on some other CPU.

Name:
  ksoftirqd/%u

Purpose:
  Execute softirq handlers when threaded or when under heavy load.

To reduce its OS jitter, each softirq vector must be handled
separately as follows:

TIMER_SOFTIRQ
-------------

Do all of the following:

1. To the extent possible, keep the CPU out of the kernel when it
   is non-idle, for example, by avoiding system calls and by forcing
   both kernel threads and interrupts to execute elsewhere.
2. Build with CONFIG_HOTPLUG_CPU=y.  After boot completes, force the
   CPU offline, then bring it back online.  This forces recurring
   timers to migrate elsewhere.  If you are concerned with multiple
   CPUs, force them all offline before bringing the first one back
   online.  Once you have onlined the CPUs in question, do not
   offline any other CPUs, because doing so could force the timer
   back onto one of the CPUs in question.  (See the sketch after
   this list.)
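
A minimal sketch of this offline/online cycle, assuming that CPU 3 is
the CPU to be de-jittered (a hypothetical choice) and that the kernel
permits offlining it:

        # Force CPU 3 offline, migrating its recurring timers elsewhere.
        echo 0 > /sys/devices/system/cpu/cpu3/online
        # Bring CPU 3 back online.  Do not offline other CPUs after this.
        echo 1 > /sys/devices/system/cpu/cpu3/online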

NET_TX_SOFTIRQ and NET_RX_SOFTIRQ
---------------------------------

Do all of the following:

1. Force networking interrupts onto other CPUs (see the sketch
   after this list).
2. Initiate any network I/O on other CPUs.
3. Once your application has started, prevent CPU-hotplug operations
   from being initiated from tasks that might run on the CPU to
   be de-jittered.  (It is OK to force this CPU offline and then
   bring it back online before you start your application.)
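
Hardware interrupts can be moved with irq affinity as in the irq/%d-%s
example above.  One additional knob, sketched here under the assumption
that your NIC is eth0 with a single receive queue rx-0, is receive
packet steering (RPS), which selects the CPUs that run RX softirq
processing:

        # Steer eth0's rx-0 softirq processing to CPUs 0 and 1 (mask 0x3).
        echo 3 > /sys/class/net/eth0/queues/rx-0/rps_cpus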

BLOCK_SOFTIRQ
-------------

Do all of the following:

1. Force block-device interrupts onto some other CPU.
2. Initiate any block I/O on other CPUs.
3. Once your application has started, prevent CPU-hotplug operations
   from being initiated from tasks that might run on the CPU to
   be de-jittered.  (It is OK to force this CPU offline and then
   bring it back online before you start your application.)

IRQ_POLL_SOFTIRQ
----------------

Do all of the following:

1. Force block-device interrupts onto some other CPU.
2. Initiate any block I/O and block-I/O polling on other CPUs.
3. Once your application has started, prevent CPU-hotplug operations
   from being initiated from tasks that might run on the CPU to
   be de-jittered.  (It is OK to force this CPU offline and then
   bring it back online before you start your application.)

TASKLET_SOFTIRQ
---------------

Do one or more of the following:

1. Avoid use of drivers that use tasklets.  (Such drivers will contain
   calls to things like tasklet_schedule(); see the check below.)
2. Convert all drivers that you must use from tasklets to workqueues.
3. Force interrupts for drivers using tasklets onto other CPUs,
   and also do I/O involving these drivers on other CPUs.
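
A quick way to check whether a given driver uses tasklets is to search
its source for tasklet calls; the directory below is a placeholder for
the driver you are auditing:

        # List driver source files that schedule tasklets.
        grep -rl 'tasklet_schedule' drivers/<your-driver-directory>/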

SCHED_SOFTIRQ
-------------

Do all of the following:

1. Avoid sending scheduler IPIs to the CPU to be de-jittered,
   for example, ensure that at most one runnable kthread is present
   on that CPU.  If a thread that expects to run on the de-jittered
   CPU awakens, the scheduler will send an IPI that can result in
   a subsequent SCHED_SOFTIRQ.
2. Build with CONFIG_NO_HZ_FULL=y and ensure that the CPU to be
   de-jittered is marked as an adaptive-ticks CPU using the
   "nohz_full=" boot parameter (see the example after this list).
   This reduces the number of scheduler-clock interrupts that the
   de-jittered CPU receives, minimizing its chances of being
   selected to do the load balancing work that runs in
   SCHED_SOFTIRQ context.
3. To the extent possible, keep the CPU out of the kernel when it
   is non-idle, for example, by avoiding system calls and by
   forcing both kernel threads and interrupts to execute elsewhere.
   This further reduces the number of scheduler-clock interrupts
   received by the de-jittered CPU.
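
For example, assuming that CPUs 2 and 3 are the CPUs to be de-jittered
(a hypothetical choice), the kernel boot command line would include:

        nohz_full=2,3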

HRTIMER_SOFTIRQ
---------------

Do all of the following:

1. To the extent possible, keep the CPU out of the kernel when it
   is non-idle.  For example, avoid system calls and force both
   kernel threads and interrupts to execute elsewhere.
2. Build with CONFIG_HOTPLUG_CPU=y.  Once boot completes, force the
   CPU offline, then bring it back online.  This forces recurring
   timers to migrate elsewhere.  If you are concerned with multiple
   CPUs, force them all offline before bringing the first one
   back online.  Once you have onlined the CPUs in question, do not
   offline any other CPUs, because doing so could force the timer
   back onto one of the CPUs in question.  (The TIMER_SOFTIRQ
   sketch above applies here as well.)

RCU_SOFTIRQ
-----------

Do at least one of the following:

1. Offload callbacks and keep the CPU in either dyntick-idle or
   adaptive-ticks state by doing all of the following:

   a. Build with CONFIG_NO_HZ_FULL=y and ensure that the CPU to be
      de-jittered is marked as an adaptive-ticks CPU using the
      "nohz_full=" boot parameter.  Bind the rcuo kthreads to
      housekeeping CPUs, which can tolerate OS jitter (see the
      sketch after this list).
   b. To the extent possible, keep the CPU out of the kernel
      when it is non-idle, for example, by avoiding system
      calls and by forcing both kernel threads and interrupts
      to execute elsewhere.
2. Enable RCU to do its processing remotely via dyntick-idle by
   doing all of the following:

   a. Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
   b. Ensure that the CPU goes idle frequently, allowing other
      CPUs to detect that it has passed through an RCU quiescent
      state.  If the kernel is built with CONFIG_NO_HZ_FULL=y,
      userspace execution also allows other CPUs to detect that
      the CPU in question has passed through a quiescent state.
   c. To the extent possible, keep the CPU out of the kernel
      when it is non-idle, for example, by avoiding system
      calls and by forcing both kernel threads and interrupts
      to execute elsewhere.
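
A minimal sketch of binding the rcuo kthreads to a housekeeping CPU,
assuming that CPU 0 is the housekeeping CPU (the pgrep pattern relies
on the rcuob/rcuop/rcuos kthread naming described later in this
document):

        # Bind every RCU callback-offload kthread to housekeeping CPU 0.
        for pid in $(pgrep '^rcuo'); do
                taskset -cp 0 "$pid"
        done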

Name:
  kworker/%u:%d%s (cpu, id, priority)

Purpose:
  Execute workqueue requests.

To reduce its OS jitter, do any of the following:

1. Run your workload at a real-time priority, which will allow
   preempting the kworker daemons.
2. A given workqueue can be made visible in the sysfs filesystem
   by passing the WQ_SYSFS flag to that workqueue's alloc_workqueue().
   Such a workqueue can be confined to a given subset of the
   CPUs using the ``/sys/devices/virtual/workqueue/*/cpumask`` sysfs
   files (see the sketch after this list).  The set of WQ_SYSFS
   workqueues can be displayed using "ls /sys/devices/virtual/workqueue".
   That said, the workqueues maintainer would like to caution people
   against indiscriminately sprinkling WQ_SYSFS across all the
   workqueues.  The reason for caution is that it is easy to add
   WQ_SYSFS, but because sysfs is part of the formal user/kernel
   API, it can be nearly impossible to remove it, even if its
   addition was a mistake.
3. Do any of the following needed to avoid jitter that your
   application cannot tolerate:

   a. Build your kernel with CONFIG_SLUB=y rather than
      CONFIG_SLAB=y, thus avoiding the slab allocator's periodic
      use of each CPU's workqueues to run its cache_reap()
      function.
   b. Avoid using oprofile, thus avoiding OS jitter from
      wq_sync_buffer().
   c. Limit your CPU frequency so that a CPU-frequency
      governor is not required, possibly enlisting the aid of
      special heatsinks or other cooling technologies.  If done
      correctly, and if your CPU architecture permits, you should
      be able to build your kernel with CONFIG_CPU_FREQ=n to
      avoid the CPU-frequency governor periodically running
      on each CPU, including cs_dbs_timer() and od_dbs_timer().

      WARNING:  Please check your CPU specifications to
      make sure that this is safe on your particular system.
   d. As of v3.18, Christoph Lameter's on-demand vmstat workers
      commit prevents OS jitter due to vmstat_update() on
      CONFIG_SMP=y systems.  Before v3.18, it was not possible
      to entirely get rid of this OS jitter, but you could
      decrease its frequency by writing a large value to
      /proc/sys/vm/stat_interval.  The default value is HZ,
      for an interval of one second.  Of course, larger values
      will make your virtual-memory statistics update more
      slowly.  Of course, you can also run your workload at
      a real-time priority, thus preempting vmstat_update(),
      but if your workload is CPU-bound, this is a bad idea.
      However, there is an RFC patch from Christoph Lameter
      (based on an earlier one from Gilad Ben-Yossef) that
      reduces or even eliminates vmstat overhead for some
      workloads at https://lkml.org/lkml/2013/9/4/379.
   e. Boot with "elevator=noop" to avoid workqueue use by
      the block layer.
   f. If running on high-end powerpc servers, build with
      CONFIG_PPC_RTAS_DAEMON=n.  This prevents the RTAS
      daemon from running on each CPU every second or so.
      (This will require editing Kconfig files and will defeat
      this platform's RAS functionality.)  This avoids jitter
      due to the rtas_event_scan() function.

      WARNING:  Please check your CPU specifications to
      make sure that this is safe on your particular system.
   g. If running on Cell Processor, build your kernel with
      CBE_CPUFREQ_SPU_GOVERNOR=n to avoid OS jitter from
      spu_gov_work().

      WARNING:  Please check your CPU specifications to
      make sure that this is safe on your particular system.
   h. If running on PowerMAC, build your kernel with
      CONFIG_PMAC_RACKMETER=n to disable the CPU-meter,
      avoiding OS jitter from rackmeter_do_timer().
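
A sketch of item 2 above, confining a WQ_SYSFS workqueue to
housekeeping CPUs 0 and 1.  The "writeback" workqueue name is
illustrative; use "ls /sys/devices/virtual/workqueue" to see which
workqueues are visible on your system:

        # Confine the writeback workqueue to CPUs 0 and 1 (mask 0x3).
        echo 3 > /sys/devices/virtual/workqueue/writeback/cpumask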

Name:
  rcuc/%u

Purpose:
  Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.

To reduce its OS jitter, do at least one of the following:

1. Build the kernel with CONFIG_PREEMPT=n.  This prevents these
   kthreads from being created in the first place, and also obviates
   the need for RCU priority boosting.  This approach is feasible
   for workloads that do not require high degrees of responsiveness.
2. Build the kernel with CONFIG_RCU_BOOST=n.  This prevents these
   kthreads from being created in the first place.  This approach
   is feasible only if your workload never requires RCU priority
   boosting, for example, if you ensure frequent idle time on all
   CPUs that might execute within the kernel.
3. Build with CONFIG_RCU_NOCB_CPU=y and boot with the "rcu_nocbs="
   boot parameter offloading RCU callbacks from all CPUs susceptible
   to OS jitter (see the example after this list).  This approach
   prevents the rcuc/%u kthreads from having any work to do, so
   that they are never awakened.
4. Ensure that the CPU never enters the kernel, and, in particular,
   avoid initiating any CPU hotplug operations on this CPU.  This is
   another way of preventing any callbacks from being queued on the
   CPU, again preventing the rcuc/%u kthreads from having any work
   to do.
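
For example, again assuming that CPUs 2 and 3 are the CPUs to be
de-jittered (a hypothetical choice matching the earlier nohz_full
example), the kernel boot command line would include:

        rcu_nocbs=2,3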

Name:
  rcuob/%d, rcuop/%d, and rcuos/%d

Purpose:
  Offload RCU callbacks from the corresponding CPU.

To reduce its OS jitter, do at least one of the following:

1. Use affinity, cgroups, or other mechanism to force these kthreads
   to execute on some other CPU (see the taskset sketch in the
   RCU_SOFTIRQ section above).
2. Build with CONFIG_RCU_NOCB_CPU=n, which will prevent these
   kthreads from being created in the first place.  However, please
   note that this will not eliminate OS jitter, but will instead
   shift it to RCU_SOFTIRQ.

Name:
  watchdog/%u

Purpose:
  Detect software lockups on each CPU.

To reduce its OS jitter, do at least one of the following:

1. Build with CONFIG_LOCKUP_DETECTOR=n, which will prevent these
   kthreads from being created in the first place.
2. Boot with "nosoftlockup=0", which will also prevent these kthreads
   from being created.  Other related watchdog and softlockup boot
   parameters may be found in Documentation/admin-guide/kernel-parameters.rst
   and Documentation/watchdog/watchdog-parameters.txt.
3. Echo a zero to /proc/sys/kernel/watchdog to disable the
   watchdog timer.
4. Echo a large number to /proc/sys/kernel/watchdog_thresh in
   order to reduce the frequency of OS jitter due to the watchdog
   timer down to a level that is acceptable for your workload.
   (Sketches of items 3 and 4 appear after this list.)
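
Minimal sketches of items 3 and 4; the 60-second threshold is an
arbitrary illustrative value:

        # Disable the watchdog timer entirely.
        echo 0 > /proc/sys/kernel/watchdog
        # Or instead reduce watchdog checks to one per 60 seconds.
        echo 60 > /proc/sys/kernel/watchdog_thresh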