completions - wait for completion handling
==========================================

This document was originally written based on 3.18.0 (linux-next)

Introduction:
-------------

If you have one or more threads of execution that must wait for some process
to have reached a point or a specific state, completions can provide a
race-free solution to this problem. Semantically they are somewhat like a
pthread_barrier and have similar use-cases.

Completions are a code synchronization mechanism which is preferable to any
misuse of locks. Any time you think of using yield() or some quirky
msleep(1) loop to allow something else to proceed, you probably want to
look into using one of the wait_for_completion*() calls instead. The
advantage of using completions is the clear intent of the code, but also more
efficient code as both threads can continue until the result is actually
needed.

Completions are built on top of the generic event infrastructure in Linux,
with the event reduced to a simple flag (appropriately called "done") in
struct completion that tells the waiting threads of execution if they
can continue safely.

As completions are scheduling related, the code is found in
kernel/sched/completion.c.

Usage:
------

There are three parts to using completions: the initialization of the
struct completion, the waiting part through a call to one of the variants of
wait_for_completion(), and the signaling side through a call to complete()
or complete_all(). Further, there are some helper functions for checking the
state of completions.

To use completions one needs to include <linux/completion.h> and
create a variable of type struct completion. The structure used for
handling of completions is:

        struct completion {
                unsigned int done;
                wait_queue_head_t wait;
        };

providing the wait queue to place tasks on for waiting and the flag for
indicating the state of affairs.

Completions should be named to convey the intent of the waiter. A good
example is:

        wait_for_completion(&early_console_added);

        complete(&early_console_added);

Good naming (as always) helps code readability.

Initializing completions:
-------------------------

Initialization of dynamically allocated completions, often embedded in
other structures, is done with:

        init_completion(&done);

Initialization is accomplished by initializing the wait queue and setting
the default state to "not available", that is, "done" is set to 0.

The re-initialization function, reinit_completion(), simply resets the
done element to "not available", thus again to 0, without touching the
wait queue. Calling init_completion() twice on the same completion object is
most likely a bug as it re-initializes the queue to an empty queue and
enqueued tasks could get "lost" - use reinit_completion() in that case.
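
As a minimal sketch (the foo_device structure and foo_alloc() function below
are made up for illustration), a completion embedded in a dynamically
allocated object might be set up like this:

        #include <linux/completion.h>
        #include <linux/slab.h>

        struct foo_device {                     /* hypothetical driver structure */
                struct completion probe_done;   /* signaled once setup finished */
                /* ... */
        };

        static struct foo_device *foo_alloc(void)
        {
                struct foo_device *dev = kzalloc(sizeof(*dev), GFP_KERNEL);

                if (!dev)
                        return NULL;

                /* "done" starts out as 0, i.e. "not available" */
                init_completion(&dev->probe_done);

                return dev;
        }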

For static declaration and initialization, macros are available. These are:

        static DECLARE_COMPLETION(setup_done)

used for static declarations in file scope. Within functions the static
initialization should always use:

        DECLARE_COMPLETION_ONSTACK(setup_done)

suitable for automatic/local variables on the stack and will make lockdep
happy. Note also that one needs to make *sure* the completion passed to
work threads remains in-scope, and no references remain to on-stack data
when the initiating function returns.

Using on-stack completions for code that calls any of the _timeout or
_interruptible/_killable variants is not advisable as they will require
additional synchronization to prevent the on-stack completion object in
the timeout/signal cases from going out of scope. Consider using dynamically
allocated completions when intending to use the _interruptible/_killable
or _timeout variants of wait_for_completion().
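
A minimal sketch of an on-stack completion together with the plain,
non-timeout wait (the foo_queue_setup() hand-off is hypothetical):

        void foo_queue_setup(struct completion *done);  /* hypothetical */

        static int foo_start(void)
        {
                DECLARE_COMPLETION_ONSTACK(setup_done);

                /* hand &setup_done to the signaling side */
                foo_queue_setup(&setup_done);

                /*
                 * Plain wait_for_completion() only returns after complete()
                 * has run, so setup_done cannot go out of scope while still
                 * referenced.  The _timeout/_interruptible/_killable variants
                 * do not give that guarantee - see the note above.
                 */
                wait_for_completion(&setup_done);

                return 0;
        }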

Waiting for completions:
------------------------

For a thread of execution to wait for some concurrent work to finish, it
calls wait_for_completion() on the initialized completion structure.
A typical usage scenario is:

        struct completion setup_done;
        init_completion(&setup_done);
        initialize_work(...,&setup_done,...)

        /* run non-dependent code */            /* do setup */

        wait_for_completion(&setup_done);       complete(&setup_done);

This is not implying any temporal order on wait_for_completion() and the
call to complete() - if the call to complete() happened before the call
to wait_for_completion() then the waiting side simply will continue
immediately as all dependencies are satisfied; if not, it will block until
completion is signaled by complete().

Note that wait_for_completion() is calling spin_lock_irq()/spin_unlock_irq(),
so it can only be called safely when you know that interrupts are enabled.
Calling it from hard-irq or irqs-off atomic contexts will result in
hard-to-detect spurious enabling of interrupts.

wait_for_completion():

        void wait_for_completion(struct completion *done)

The default behavior is to wait without a timeout and to mark the task as
uninterruptible. wait_for_completion() and its variants are only safe
in process context (as they can sleep) but not in atomic context, interrupt
context, with disabled irqs, or with preemption disabled - see also
try_wait_for_completion() below for handling completion in atomic/interrupt
context.

As all variants of wait_for_completion() can (obviously) block for a long
time, you probably don't want to call this with held mutexes.

Variants available:
-------------------

The below variants all return status and this status should be checked in
most(/all) cases - in cases where the status is deliberately not checked you
probably want to make a note explaining this (e.g. see
arch/arm/kernel/smp.c:__cpu_up()).

A common problem is unclean assignment of return types, so take care to
assign return values to variables of the proper type. Checking for the
specific meaning of return values also has been found to be quite
inaccurate, e.g. constructs like:

        if (!wait_for_completion_interruptible_timeout(...))

would execute the same code path for successful completion and for the
interrupted case - which is probably not what you want.
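
For example, a sketch of a check using a variable of the proper type (the
completion name and the 100ms budget below are made up for illustration):

        long ret;

        ret = wait_for_completion_interruptible_timeout(&setup_done,
                                                        msecs_to_jiffies(100));
        if (ret < 0)            /* interrupted by a signal: -ERESTARTSYS */
                return ret;
        if (ret == 0)           /* timed out */
                return -ETIMEDOUT;
        /* ret > 0: completed, with 'ret' jiffies of the timeout left */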

        int wait_for_completion_interruptible(struct completion *done)

This function marks the task TASK_INTERRUPTIBLE. If a signal was received
while waiting it will return -ERESTARTSYS; 0 otherwise.
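
A minimal sketch of checking that return value (the completion name is,
again, made up):

        int ret = wait_for_completion_interruptible(&setup_done);

        if (ret)                /* a signal arrived while waiting: -ERESTARTSYS */
                return ret;
        /* ret == 0: the completion was signaled */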

        unsigned long wait_for_completion_timeout(struct completion *done,
                unsigned long timeout)

The task is marked as TASK_UNINTERRUPTIBLE and will wait at most 'timeout'
jiffies. If a timeout occurs it returns 0, else the remaining time in
jiffies (but at least 1). Timeouts are preferably calculated with
msecs_to_jiffies() or usecs_to_jiffies(). If the returned timeout value is
deliberately ignored a comment should probably explain why (e.g. see
drivers/mfd/wm8350-core.c wm8350_read_auxadc()).
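
For instance, a sketch using msecs_to_jiffies() and checking the result (the
500ms budget and the completion name are illustrative):

        unsigned long left;

        left = wait_for_completion_timeout(&setup_done, msecs_to_jiffies(500));
        if (!left)
                return -ETIMEDOUT;      /* timed out, setup never completed */
        /* left >= 1: completed with 'left' jiffies of the timeout to spare */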

        long wait_for_completion_interruptible_timeout(struct completion *done,
                unsigned long timeout)

This function passes a timeout in jiffies and marks the task as
TASK_INTERRUPTIBLE. If a signal was received it will return -ERESTARTSYS;
otherwise it returns 0 if the completion timed out, or the remaining time in
jiffies if completion occurred.

Further variants include _killable which uses TASK_KILLABLE as the
designated task state and will return -ERESTARTSYS if it is interrupted,
or else 0 if completion was achieved. There is a _timeout variant as well:

        int wait_for_completion_killable(struct completion *done)
        long wait_for_completion_killable_timeout(struct completion *done,
                unsigned long timeout)
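
A short sketch of the non-timeout killable variant (names made up); unlike
the _interruptible variant, only fatal signals such as SIGKILL wake the task:

        int ret = wait_for_completion_killable(&setup_done);

        if (ret)                /* a fatal signal arrived: -ERESTARTSYS */
                return ret;
        /* ret == 0: the completion was signaled */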

The _io variants wait_for_completion_io() behave the same as the non-_io
variants, except for accounting waiting time as waiting on IO, which has
an impact on how the task is accounted in scheduling stats.

        void wait_for_completion_io(struct completion *done)
        unsigned long wait_for_completion_io_timeout(struct completion *done,
                unsigned long timeout)

Signaling completions:
----------------------

A thread that wants to signal that the conditions for continuation have been
achieved calls complete() to signal exactly one of the waiters that it can
continue.

        void complete(struct completion *done)

or calls complete_all() to signal all current and future waiters.

        void complete_all(struct completion *done)

The signaling will work as expected even if completions are signaled before
a thread starts waiting. This is achieved by the waiter "consuming"
(decrementing) the done element of struct completion. Waiting threads wake
up in the same order in which they were enqueued (FIFO order).

If complete() is called multiple times then this will allow for that number
of waiters to continue - each call to complete() will simply increment the
done element. Calling complete_all() multiple times is a bug though. Both
complete() and complete_all() can be called in hard-irq/atomic context safely.

There can only be one thread calling complete() or complete_all() on a
particular struct completion at any time - serialized through the wait
queue spinlock. Any such concurrent calls to complete() or complete_all()
probably are a design bug.

Signaling completion from hard-irq context is fine as it will appropriately
lock with spin_lock_irqsave()/spin_unlock_irqrestore() and it will never
sleep.
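
As a sketch of the hard-irq case (the handler, the foo_device structure and
its probe_done field are hypothetical):

        #include <linux/interrupt.h>

        static irqreturn_t foo_irq_handler(int irq, void *data)
        {
                struct foo_device *dev = data;  /* hypothetical driver data */

                /* wake exactly one waiter; safe in hard-irq context */
                complete(&dev->probe_done);

                return IRQ_HANDLED;
        }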

try_wait_for_completion()/completion_done():
--------------------------------------------

The try_wait_for_completion() function will not put the thread on the wait
queue but rather returns false if it would need to enqueue (block) the thread,
else it consumes one posted completion and returns true.

        bool try_wait_for_completion(struct completion *done)

Finally, to check the state of a completion without changing it in any way,
call completion_done(), which returns false if there are no posted
completions that were not yet consumed by waiters (implying that there are
waiters), and true otherwise.

        bool completion_done(struct completion *done)

Both try_wait_for_completion() and completion_done() are safe to be called in
hard-irq or atomic context.
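
A short sketch of both helpers from atomic context (the dev object and the
foo_finish_setup() helper are hypothetical):

        /* safe from hard-irq or other atomic context */
        if (try_wait_for_completion(&dev->probe_done)) {
                /* consumed one posted completion without blocking */
                foo_finish_setup(dev);          /* hypothetical helper */
        }

        /* only inspects the state, consumes nothing */
        if (completion_done(&dev->probe_done))
                pr_debug("setup already signaled\n");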