.. _userfaultfd:

===========
Userfaultfd
===========

Objective
=========

Userfaults allow the implementation of on-demand paging from userland
and, more generally, they allow userland to take control of various
memory page faults, something otherwise only the kernel code could do.

For example userfaults allow a proper and more optimal implementation
of the PROT_NONE+SIGSEGV trick.

Design
======

Userfaults are delivered and resolved through the userfaultfd syscall.

The userfaultfd (aside from registering and unregistering virtual
memory ranges) provides two primary functionalities:

1) a read/POLLIN protocol to notify a userland thread of the faults
   happening

2) various UFFDIO_* ioctls that can manage the virtual memory regions
   registered in the userfaultfd and that allow userland to efficiently
   resolve the userfaults it receives via 1) or to manage the virtual
   memory in the background

The real advantage of userfaults, compared to regular virtual memory
management with mremap/mprotect, is that in all their operations they
never involve heavyweight structures like vmas (in fact the
userfaultfd runtime load never takes the mmap_sem for writing).

Vmas are not suitable for page- (or hugepage-) granular fault tracking
when dealing with virtual address spaces that could span
terabytes. Too many vmas would be needed for that.

The userfaultfd, once opened by invoking the syscall, can also be
passed using unix domain sockets to a manager process, so the same
manager process could handle the userfaults of a multitude of
different processes without them being aware of what is going on
(well of course unless they later try to use the userfaultfd
themselves on the same region the manager is already tracking, which
is a corner case that would currently return -EBUSY).

API
===

When first opened the userfaultfd must be enabled by invoking the
UFFDIO_API ioctl with uffdio_api.api set to UFFD_API (or a later API
version), which specifies the read/POLLIN protocol userland intends
to speak on the UFFD, and with uffdio_api.features set to the features
userland requires. The UFFDIO_API ioctl, if successful (i.e. if the
requested uffdio_api.api is spoken also by the running kernel and the
requested features are going to be enabled), will return into
uffdio_api.features and uffdio_api.ioctls two 64bit bitmasks of,
respectively, all the available features of the read(2) protocol and
the generic ioctls available.
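
A minimal sketch of that handshake in C (error handling trimmed),
assuming a kernel that provides <linux/userfaultfd.h> and the
userfaultfd syscall::

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/userfaultfd.h>

    int open_uffd(void)
    {
            struct uffdio_api api;
            /* O_CLOEXEC and O_NONBLOCK are optional flags of the syscall */
            int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

            if (uffd < 0)
                    return -1;

            memset(&api, 0, sizeof(api));
            api.api = UFFD_API;     /* protocol version userland speaks */
            api.features = 0;       /* no optional features requested */
            if (ioctl(uffd, UFFDIO_API, &api) < 0) {
                    close(uffd);
                    return -1;
            }
            /* api.features and api.ioctls now hold what the kernel offers */
            printf("features 0x%llx ioctls 0x%llx\n",
                   (unsigned long long)api.features,
                   (unsigned long long)api.ioctls);
            return uffd;
    }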

The uffdio_api.features bitmask returned by the UFFDIO_API ioctl
defines what memory types are supported by the userfaultfd and what
events, besides page fault notifications, may be generated.

If the kernel supports registering userfaultfd ranges on hugetlbfs
virtual memory areas, UFFD_FEATURE_MISSING_HUGETLBFS will be set in
uffdio_api.features. Similarly, UFFD_FEATURE_MISSING_SHMEM will be
set if the kernel supports registering userfaultfd ranges on shared
memory (covering all shmem APIs, i.e. tmpfs, IPCSHM, /dev/zero
MAP_SHARED, memfd_create, etc).

A userland application that wants to use userfaultfd with hugetlbfs
or shared memory needs to set the corresponding flags in
uffdio_api.features to enable those features.
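
For example, a sketch of the same handshake requesting hugetlbfs and
shmem support up front; uffd is assumed to be a freshly created
userfaultfd that has not been enabled yet, and the ioctl fails if the
running kernel cannot enable every requested feature::

    static int enable_hugetlb_shmem(int uffd)
    {
            struct uffdio_api api = {
                    .api = UFFD_API,
                    .features = UFFD_FEATURE_MISSING_HUGETLBFS |
                                UFFD_FEATURE_MISSING_SHMEM,
            };

            return ioctl(uffd, UFFDIO_API, &api);
    }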

If userland desires to receive notifications for events other than
page faults, it has to verify that uffdio_api.features has the
appropriate UFFD_FEATURE_EVENT_* bits set. These events are described
in more detail below in the "Non-cooperative userfaultfd" section.

Once the userfaultfd has been enabled the UFFDIO_REGISTER ioctl should
be invoked (if present in the returned uffdio_api.ioctls bitmask) to
register a memory range in the userfaultfd by setting the
uffdio_register structure accordingly. The uffdio_register.mode
bitmask will specify to the kernel which kind of faults to track for
the range (UFFDIO_REGISTER_MODE_MISSING would track missing
pages). The UFFDIO_REGISTER ioctl will return the
uffdio_register.ioctls bitmask of ioctls that are suitable to resolve
userfaults on the range registered. Not all ioctls will necessarily be
supported for all memory types, depending on the underlying virtual
memory backend (anonymous memory vs tmpfs vs real filebacked
mappings).
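
A sketch of registering an anonymous mapping for missing-page
tracking; len is assumed to be a multiple of the page size, and real
code should check that the bit for UFFDIO_COPY is set in reg.ioctls
before relying on it::

    #include <sys/mman.h>

    static void *register_region(int uffd, size_t len)
    {
            struct uffdio_register reg;
            void *area = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (area == MAP_FAILED)
                    return NULL;

            reg.range.start = (unsigned long)area;
            reg.range.len = len;
            reg.mode = UFFDIO_REGISTER_MODE_MISSING; /* track missing pages */
            if (ioctl(uffd, UFFDIO_REGISTER, &reg) < 0) {
                    munmap(area, len);
                    return NULL;
            }
            /* reg.ioctls now lists the ioctls usable on this range,
             * e.g. reg.ioctls & (1ULL << _UFFDIO_COPY)
             */
            return area;
    }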

Userland can use the uffdio_register.ioctls to manage the virtual
address space in the background (to add or potentially also remove
memory from the userfaultfd registered range). This means a userfault
could be triggered just before userland maps the user-faulted page in
the background.

The primary ioctl to resolve userfaults is UFFDIO_COPY. It atomically
copies a page into the userfault registered range and wakes up the
blocked userfaults (unless uffdio_copy.mode & UFFDIO_COPY_MODE_DONTWAKE
is set). The other ioctls work similarly to UFFDIO_COPY. They're atomic
in the sense of guaranteeing that nothing can see a half-copied page,
since the fault will keep userfaulting until the copy has finished.
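
A sketch of resolving a missing-page fault with UFFDIO_COPY; page is
assumed to be a page-sized buffer holding the data and fault_addr the
faulting address reported through the read(2) protocol::

    static int resolve_fault(int uffd, void *page,
                             unsigned long fault_addr, size_t page_size)
    {
            struct uffdio_copy copy;

            copy.dst = fault_addr & ~(page_size - 1); /* page align */
            copy.src = (unsigned long)page;
            copy.len = page_size;
            copy.mode = 0;  /* 0 == also wake up the blocked fault(s) */
            copy.copy = 0;
            if (ioctl(uffd, UFFDIO_COPY, &copy) < 0)
                    return -1;
            /* copy.copy reports the number of bytes actually copied */
            return 0;
    }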

QEMU/KVM
========

QEMU/KVM is using the userfaultfd syscall to implement postcopy live
migration. Postcopy live migration is one form of memory
externalization consisting of a virtual machine running with part or
all of its memory residing on a different node in the cloud. The
userfaultfd abstraction is generic enough that not a single line of
KVM kernel code had to be modified in order to add postcopy live
migration to QEMU.

Guest async page faults, FOLL_NOWAIT and all other GUP features work
just fine in combination with userfaults. Userfaults trigger async
page faults in the guest scheduler so those guest processes that
aren't waiting for userfaults (i.e. network bound) can keep running in
the guest vcpus.

It is generally beneficial to run one pass of precopy live migration
just before starting postcopy live migration, in order to avoid
generating userfaults for readonly guest regions.

The implementation of postcopy live migration currently uses one
single bidirectional socket but in the future two different sockets
will be used (to reduce the latency of the userfaults to the minimum
possible without having to decrease /proc/sys/net/ipv4/tcp_wmem).

The QEMU in the source node writes all pages that it knows are missing
in the destination node into the socket, and the migration thread of
the QEMU running in the destination node runs UFFDIO_COPY|ZEROPAGE
ioctls on the userfaultfd in order to map the received pages into the
guest (UFFDIO_ZEROPAGE is used if the source page was a zero page).

A different postcopy thread in the destination node listens with
poll() to the userfaultfd in parallel. When a POLLIN event is
generated after a userfault triggers, the postcopy thread issues a
read() on the userfaultfd and receives the fault address (or -EAGAIN
in case the userfault was already resolved and woken by a
UFFDIO_COPY|ZEROPAGE run by the parallel QEMU migration thread).
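
A sketch of such a fault-listening loop, modeled on the postcopy
thread described above; handle_missing_page() is a hypothetical
helper (e.g. the resolve_fault() sketch shown earlier), and the
-EAGAIN case only applies to a non-blocking userfaultfd::

    #include <errno.h>
    #include <poll.h>

    /* hypothetical helper that maps the missing page, e.g. via UFFDIO_COPY */
    static void handle_missing_page(int uffd, unsigned long long addr);

    static void fault_listener(int uffd)
    {
            struct pollfd pfd = { .fd = uffd, .events = POLLIN };

            for (;;) {
                    struct uffd_msg msg;
                    ssize_t n;

                    if (poll(&pfd, 1, -1) < 0)
                            break;
                    n = read(uffd, &msg, sizeof(msg));
                    if (n < 0) {
                            if (errno == EAGAIN)
                                    continue; /* fault resolved elsewhere */
                            break;
                    }
                    if (msg.event == UFFD_EVENT_PAGEFAULT)
                            handle_missing_page(uffd,
                                                msg.arg.pagefault.address);
            }
    }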

After the QEMU postcopy thread (running in the destination node) gets
the userfault address it writes the information about the missing page
into the socket. The QEMU source node receives the information and
roughly "seeks" to that page address and continues sending all
remaining missing pages from that new page offset. Soon after that
(just the time to flush the tcp_wmem queue through the network) the
migration thread in the QEMU running in the destination node will
receive the page that triggered the userfault and it'll map it as
usual with the UFFDIO_COPY|ZEROPAGE (without actually knowing if it
was spontaneously sent by the source or if it was an urgent page
requested through a userfault).

By the time the userfaults start, the QEMU in the destination node
doesn't need to keep any per-page state bitmap relative to the live
migration around, and a single per-page bitmap has to be maintained in
the QEMU running in the source node to know which pages are still
missing in the destination node. The bitmap in the source node is
checked to find which missing pages to send in round robin, and we seek
over it when receiving incoming userfaults. After sending each page of
course the bitmap is updated accordingly. It's also useful to avoid
sending the same page twice (in case the userfault is read by the
postcopy thread just before UFFDIO_COPY|ZEROPAGE runs in the migration
thread).

Non-cooperative userfaultfd
===========================

When the userfaultfd is monitored by an external manager, the manager
must be able to track changes in the process virtual memory
layout. Userfaultfd can notify the manager about such changes using
the same read(2) protocol as for the page fault notifications. The
manager has to explicitly enable these events by setting the
appropriate bits in uffdio_api.features passed to the UFFDIO_API
ioctl:

UFFD_FEATURE_EVENT_FORK
    enable userfaultfd hooks for fork(). When this feature is
    enabled, the userfaultfd context of the parent process is
    duplicated into the newly created process. The manager
    receives UFFD_EVENT_FORK with the file descriptor of the new
    userfaultfd context in uffd_msg.fork.

UFFD_FEATURE_EVENT_REMAP
    enable notifications about mremap() calls. When the
    non-cooperative process moves a virtual memory area to a
    different location, the manager will receive
    UFFD_EVENT_REMAP. The uffd_msg.remap will contain the old and
    new addresses of the area and its original length.

UFFD_FEATURE_EVENT_REMOVE
    enable notifications about madvise(MADV_REMOVE) and
    madvise(MADV_DONTNEED) calls. The event UFFD_EVENT_REMOVE will
    be generated upon these calls to madvise. The uffd_msg.remove
    will contain start and end addresses of the removed area.

UFFD_FEATURE_EVENT_UNMAP
    enable notifications about memory unmapping. The manager will
    get UFFD_EVENT_UNMAP with uffd_msg.remove containing start and
    end addresses of the unmapped area.
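
A sketch of dispatching these events from the same read(2) loop that
handles page faults; the per-event payloads live in the uffd_msg.arg
union declared in <linux/userfaultfd.h>::

    static void handle_event(struct uffd_msg *msg)
    {
            switch (msg->event) {
            case UFFD_EVENT_FORK:
                    /* msg->arg.fork.ufd is the child's new userfaultfd */
                    break;
            case UFFD_EVENT_REMAP:
                    /* msg->arg.remap.from, .to and .len describe the move */
                    break;
            case UFFD_EVENT_REMOVE:
                    /* msg->arg.remove.start/.end: area still registered,
                     * zeromap it on a later fault
                     */
                    break;
            case UFFD_EVENT_UNMAP:
                    /* msg->arg.remove.start/.end: area is gone, stop
                     * running UFFDIO_COPY against it
                     */
                    break;
            }
    }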

Although UFFD_FEATURE_EVENT_REMOVE and UFFD_FEATURE_EVENT_UNMAP are
pretty similar, they differ quite a bit in the action expected from
the userfaultfd manager. In the former case, the virtual memory is
removed, but the area is not: the area remains monitored by the
userfaultfd, and if a page fault occurs in that area it will be
delivered to the manager. The proper resolution for such a page fault
is to zeromap the faulting address. However, in the latter case, when
an area is unmapped, either explicitly (with the munmap() system
call), or implicitly (e.g. during mremap()), the area is removed and
in turn the userfaultfd context for such an area disappears too and
the manager will not get further userland page faults from the removed
area. Still, the notification is required in order to prevent the
manager from using UFFDIO_COPY on the unmapped area.
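
A sketch of zeromapping a faulting address with UFFDIO_ZEROPAGE, which
is the resolution suggested above for faults in an area hit by
madvise(MADV_DONTNEED) or madvise(MADV_REMOVE)::

    static int zeromap_fault(int uffd, unsigned long fault_addr,
                             size_t page_size)
    {
            struct uffdio_zeropage zp;

            zp.range.start = fault_addr & ~(page_size - 1);
            zp.range.len = page_size;
            zp.mode = 0;    /* wake the blocked fault */
            zp.zeropage = 0;
            return ioctl(uffd, UFFDIO_ZEROPAGE, &zp);
    }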

Unlike userland page faults, which have to be synchronous and require
an explicit or implicit wakeup, all the events are delivered
asynchronously and the non-cooperative process resumes execution as
soon as the manager executes read(). The userfaultfd manager should
carefully synchronize calls to UFFDIO_COPY with the event
processing. To aid the synchronization, the UFFDIO_COPY ioctl will
return -ENOSPC when the monitored process exits at the time of
UFFDIO_COPY, and -ENOENT when the non-cooperative process has changed
its virtual memory layout simultaneously with an outstanding
UFFDIO_COPY operation.
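
A sketch of tolerating those two races when issuing UFFDIO_COPY from
the manager; the return-code convention is only illustrative, and the
<errno.h> and <sys/ioctl.h> headers from the earlier sketches are
assumed::

    static int copy_page_tolerant(int uffd, struct uffdio_copy *copy)
    {
            if (ioctl(uffd, UFFDIO_COPY, copy) == 0)
                    return 0;       /* page mapped and fault woken */
            if (errno == ENOENT)
                    return 1;       /* layout changed: reconsider after
                                     * processing the pending event
                                     */
            if (errno == ENOSPC)
                    return 2;       /* monitored process exited: stop
                                     * feeding it pages
                                     */
            return -1;              /* genuine failure */
    }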

The current asynchronous model of the event delivery is optimal for
single-threaded non-cooperative userfaultfd manager implementations. A
synchronous event delivery model can be added later as a new
userfaultfd feature to facilitate multithreading enhancements of the
non-cooperative manager, for example to allow UFFDIO_COPY ioctls to
run in parallel to the event reception. Single-threaded
implementations should continue to use the current async event
delivery model instead.