.. SPDX-License-Identifier: GPL-2.0

====================================================
pin_user_pages() and related calls
====================================================

.. contents:: :local:

Overview
========

This document describes the following functions::

 pin_user_pages()
 pin_user_pages_fast()
 pin_user_pages_remote()

Basic description of FOLL_PIN
=============================
FOLL_PIN and FOLL_LONGTERM are flags that can be passed to the get_user_pages*()
("gup") family of functions. FOLL_PIN has significant interactions and
interdependencies with FOLL_LONGTERM, so both are covered here.

FOLL_PIN is internal to gup, meaning that it should not appear at the gup call
sites. This allows the associated wrapper functions (pin_user_pages*() and
others) to set the correct combination of these flags, and to check for problems
as well.

FOLL_LONGTERM, on the other hand, *is* allowed to be set at the gup call sites.
This is in order to avoid creating a large number of wrapper functions to cover
all combinations of get*(), pin*(), FOLL_LONGTERM, and more. Also, the
pin_user_pages*() APIs are clearly distinct from the get_user_pages*() APIs, so
that's a natural dividing line, and a good point to make separate wrapper calls.
In other words, use pin_user_pages*() for DMA-pinned pages, and
get_user_pages*() for other cases. There are five cases described later on in
this document, to further clarify that concept.

FOLL_PIN and FOLL_GET are mutually exclusive for a given gup call. However,
multiple threads and call sites are free to pin the same struct pages, via both
FOLL_PIN and FOLL_GET. It's just the call site that needs to choose one or the
other, not the struct page(s).

The FOLL_PIN implementation is nearly the same as FOLL_GET, except that FOLL_PIN
uses a different reference counting technique.

FOLL_PIN is a prerequisite to FOLL_LONGTERM. Another way of saying that is,
FOLL_LONGTERM is a specific, more restrictive case of FOLL_PIN.
Which flags are set by each wrapper
===================================

For these pin_user_pages*() functions, FOLL_PIN is OR'd in with whatever gup
flags the caller provides. The caller is required to pass in a non-null struct
pages* array, and the function then pins pages by incrementing each by a special
value: GUP_PIN_COUNTING_BIAS.

For large folios, the GUP_PIN_COUNTING_BIAS scheme is not used. Instead,
the extra space available in the struct folio is used to store the
pincount directly.

This approach for large folios avoids the counting upper limit problems
that are discussed below. Those limitations would have been aggravated
severely by huge pages, because each tail page adds a refcount to the
head page. And in fact, testing revealed that, without a separate pincount
field, refcount overflows were seen in some huge page stress tests.

This also means that huge pages and large folios do not suffer
from the false positives problem that is mentioned below.::

 Function
 --------
 pin_user_pages          FOLL_PIN is always set internally by this function.
 pin_user_pages_fast     FOLL_PIN is always set internally by this function.
 pin_user_pages_remote   FOLL_PIN is always set internally by this function.

For these get_user_pages*() functions, FOLL_GET might not even be specified.
Behavior is a little more complex than above. If FOLL_GET was *not* specified,
but the caller passed in a non-null struct pages* array, then the function
sets FOLL_GET for you, and proceeds to pin pages by incrementing the refcount
of each page by +1.::

 Function
 --------
 get_user_pages           FOLL_GET is sometimes set internally by this function.
 get_user_pages_fast      FOLL_GET is sometimes set internally by this function.
 get_user_pages_remote    FOLL_GET is sometimes set internally by this function.
Tracking dma-pinned pages
=========================

Some of the key design constraints, and solutions, for tracking dma-pinned
pages:

* An actual reference count, per struct page, is required. This is because
  multiple processes may pin and unpin a page.

* False positives (reporting that a page is dma-pinned, when in fact it is not)
  are acceptable, but false negatives are not.

* struct page may not be increased in size for this, and all fields are already
  used.

* Given the above, we can overload the page->_refcount field by using, sort of,
  the upper bits in that field for a dma-pinned count. "Sort of", means that,
  rather than dividing page->_refcount into bit fields, we simply add a medium-
  large value (GUP_PIN_COUNTING_BIAS, initially chosen to be 1024: 10 bits) to
  page->_refcount. This provides fuzzy behavior: if a page has get_page() called
  on it 1024 times, then it will appear to have a single dma-pinned count.
  And again, that's acceptable.

  This also leads to limitations: there are only 31-10==21 bits available for a
  counter that increments 10 bits at a time.

* Because of that limitation, special handling is applied to the zero pages
  when using FOLL_PIN. We only pretend to pin a zero page - we don't alter its
  refcount or pincount at all (it is permanent, so there's no need). The
  unpinning functions also don't do anything to a zero page. This is
  transparent to the caller.

* Callers must specifically request "dma-pinned tracking of pages". In other
  words, just calling get_user_pages() will not suffice; a new set of functions,
  pin_user_page() and related, must be used.
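The bias arithmetic above, including its deliberate false-positive fuzziness,
can be sketched in a few lines of plain userspace C. This is only an
illustration of the counting scheme; ``fake_page`` and the ``fake_*`` helpers
are hypothetical stand-ins, not the kernel's actual implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Same value the document describes: 1024, i.e. 10 bits of bias. */
#define GUP_PIN_COUNTING_BIAS 1024

struct fake_page {
	int refcount;			/* models page->_refcount */
};

static void fake_get_page(struct fake_page *p)
{
	p->refcount += 1;		/* FOLL_GET: a plain +1 reference */
}

static void fake_pin_page(struct fake_page *p)
{
	/* FOLL_PIN: add the bias instead of dividing into bit fields */
	p->refcount += GUP_PIN_COUNTING_BIAS;
}

static void fake_unpin_page(struct fake_page *p)
{
	p->refcount -= GUP_PIN_COUNTING_BIAS;
}

/* Fuzzy query: true if the page *might* be dma-pinned. */
static bool fake_page_maybe_dma_pinned(const struct fake_page *p)
{
	return p->refcount >= GUP_PIN_COUNTING_BIAS;
}
```

Note how 1024 plain get_page()-style references are indistinguishable from one
pin: that is the acceptable false positive the bullet list describes.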
FOLL_PIN, FOLL_GET, FOLL_LONGTERM: when to use which flags
==========================================================

Thanks to Jan Kara, Vlastimil Babka and several other -mm people, for describing
these categories:

CASE 1: Direct IO (DIO)
-----------------------
There are GUP references to pages that are serving
as DIO buffers. These buffers are needed for a relatively short time (so they
are not "long term"). No special synchronization with folio_mkclean() or
munmap() is provided. Therefore, flags to set at the call site are: ::

    FOLL_PIN

...but rather than setting FOLL_PIN directly, call sites should use one of
the pin_user_pages*() routines that set FOLL_PIN.

CASE 2: RDMA
------------
There are GUP references to pages that are serving as DMA
buffers. These buffers are needed for a long time ("long term"). No special
synchronization with folio_mkclean() or munmap() is provided. Therefore, flags
to set at the call site are: ::

    FOLL_PIN | FOLL_LONGTERM

NOTE: Some pages, such as DAX pages, cannot be pinned with longterm pins. That's
because DAX pages do not have a separate page cache, and so "pinning" implies
locking down file system blocks, which is not (yet) supported in that way.

.. _mmu-notifier-registration-case:

CASE 3: MMU notifier registration, with or without page faulting hardware
-------------------------------------------------------------------------
Device drivers can pin pages via get_user_pages*(), and register for mmu
notifier callbacks for the memory range. Then, upon receiving a notifier
"invalidate range" callback, stop the device from using the range, and unpin
the pages. There may be other possible schemes, such as for example explicitly
synchronizing against pending IO, that accomplish approximately the same thing.

Or, if the hardware supports replayable page faults, then the device driver can
avoid pinning entirely (this is ideal), as follows: register for mmu notifier
callbacks as above, but instead of stopping the device and unpinning in the
callback, simply remove the range from the device's page tables.

Either way, as long as the driver unpins the pages upon mmu notifier callback,
then there is proper synchronization with both filesystem and mm
(folio_mkclean(), munmap(), etc). Therefore, neither flag needs to be set.

CASE 4: Pinning for struct page manipulation only
-------------------------------------------------
If only struct page data (as opposed to the actual memory contents that a page
is tracking) is affected, then normal GUP calls are sufficient, and neither flag
needs to be set.

CASE 5: Pinning in order to write to the data within the page
-------------------------------------------------------------
Even though neither DMA nor Direct IO is involved, just a simple case of "pin,
write to a page's data, unpin" can cause a problem. Case 5 may be considered a
superset of Case 1, plus Case 2, plus anything that invokes that pattern. In
other words, if the code is neither Case 1 nor Case 2, it may still require
FOLL_PIN, for patterns like this:

Correct (uses FOLL_PIN calls)::

    pin_user_pages()
    write to the data within the pages
    unpin_user_pages()

INCORRECT (uses FOLL_GET calls)::

    get_user_pages()
    write to the data within the pages
    put_page()
folio_maybe_dma_pinned(): the whole point of pinning
====================================================

The whole point of marking folios as "DMA-pinned" or "gup-pinned" is to be able
to query, "is this folio DMA-pinned?" That allows code such as folio_mkclean()
(and file system writeback code in general) to make informed decisions about
what to do when a folio cannot be unmapped due to such pins.

What to do in those cases is the subject of a years-long series of discussions
and debates (see the References at the end of this document). It's a TODO item
here: fill in the details once that's worked out. Meanwhile, it's safe to say
that having this available: ::

        static inline bool folio_maybe_dma_pinned(struct folio *folio)

...is a prerequisite to solving the long-running gup+DMA problem.
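The two counting schemes described earlier come together in this query: large
folios can answer exactly from their dedicated pincount field, while small
folios fall back to the fuzzy bias test. A minimal userspace sketch of that
decision, using a hypothetical ``fake_folio`` rather than the kernel's struct
folio, might look like this:

```c
#include <assert.h>
#include <stdbool.h>

#define GUP_PIN_COUNTING_BIAS 1024

struct fake_folio {
	int refcount;		/* models folio->_refcount */
	int pincount;		/* exact pin count; large folios only */
	bool large;
};

static bool fake_folio_maybe_dma_pinned(const struct fake_folio *f)
{
	if (f->large)
		return f->pincount > 0;	/* exact: no false positives */
	/* Fuzzy: a heavily-referenced small folio may look pinned. */
	return f->refcount >= GUP_PIN_COUNTING_BIAS;
}
```

This mirrors why large folios avoid the false positives problem: their answer
never depends on how many ordinary references happen to exist.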
Another way of thinking about FOLL_GET, FOLL_PIN, and FOLL_LONGTERM
===================================================================

Another way of thinking about these flags is as a progression of restrictions:
FOLL_GET is for struct page manipulation, without affecting the data that the
struct page refers to. FOLL_PIN is a *replacement* for FOLL_GET, and is for
short term pins on pages whose data *will* get accessed. As such, FOLL_PIN is
a "more severe" form of pinning. And finally, FOLL_LONGTERM is an even more
restrictive case that has FOLL_PIN as a prerequisite: this is for pages that
will be pinned longterm, and whose data will be accessed.
Unit testing
============
This file::

 tools/testing/selftests/mm/gup_test.c

has the following new calls to exercise the new pin*() wrapper functions:

* PIN_FAST_BENCHMARK (./gup_test -a)
* PIN_BASIC_TEST (./gup_test -b)

You can monitor how many total dma-pinned pages have been acquired and released
since the system was booted, via two new /proc/vmstat entries: ::

    /proc/vmstat/nr_foll_pin_acquired
    /proc/vmstat/nr_foll_pin_released

Under normal conditions, these two values will be equal unless there are any
long-term [R]DMA pins in place, or during pin/unpin transitions.

* nr_foll_pin_acquired: This is the number of logical pins that have been
  acquired since the system was powered on. For huge pages, the head page is
  pinned once for each page (head page and each tail page) within the huge page.
  This follows the same sort of behavior that get_user_pages() uses for huge
  pages: the head page is refcounted once for each tail or head page in the huge
  page, when get_user_pages() is applied to a huge page.

* nr_foll_pin_released: The number of logical pins that have been released since
  the system was powered on. Note that pages are released (unpinned) on a
  PAGE_SIZE granularity, even if the original pin was applied to a huge page.
  Because of the pin count behavior described above in "nr_foll_pin_acquired",
  the accounting balances out, so that after doing this::

      pin_user_pages(huge_page);
      for (each page in huge_page)
         unpin_user_page(page);

  ...the following is expected::

      nr_foll_pin_released == nr_foll_pin_acquired

  (...unless it was already out of balance due to a long-term RDMA pin being in
  place.)
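The accounting balance above can be sketched with two plain counters: pinning a
huge page of N subpages records N logical pins, and each per-PAGE_SIZE unpin
releases one. The counter and function names here are illustrative stand-ins
for the kernel's vmstat machinery, not its real code:

```c
#include <assert.h>

static long nr_foll_pin_acquired;
static long nr_foll_pin_released;

static void fake_pin_huge_page(int nr_subpages)
{
	/* one logical pin per (head or tail) page within the huge page */
	nr_foll_pin_acquired += nr_subpages;
}

static void fake_unpin_page(void)
{
	/* unpinning happens at PAGE_SIZE granularity */
	nr_foll_pin_released += 1;
}
```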
Other diagnostics
=================

dump_page() has been enhanced slightly to handle these new counting
fields, and to better report on large folios in general. Specifically,
for large folios, the exact pincount is reported.

References
==========

* `Some slow progress on get_user_pages() (Apr 2, 2019) <https://lwn.net/Articles/784574/>`_
* `DMA and get_user_pages() (LPC: Dec 12, 2018) <https://lwn.net/Articles/774411/>`_
* `The trouble with get_user_pages() (Apr 30, 2018) <https://lwn.net/Articles/753027/>`_
* `LWN kernel index: get_user_pages() <https://lwn.net/Kernel/Index/#Memory_management-get_user_pages>`_

John Hubbard, October, 2019