.. SPDX-License-Identifier: GPL-2.0

===============
DMA and swiotlb
===============

swiotlb is a memory buffer allocator used by the Linux kernel DMA layer. It is
typically used when a device doing DMA can't directly access the target memory
buffer because of hardware limitations or other requirements. In such a case,
the DMA layer calls swiotlb to allocate a temporary memory buffer that conforms
to the limitations. The DMA is done to/from this temporary memory buffer, and
the CPU copies the data between the temporary buffer and the original target
memory buffer. This approach is generically called "bounce buffering", and the
temporary memory buffer is called a "bounce buffer".

Device drivers don't interact directly with swiotlb. Instead, drivers inform
the DMA layer of the DMA attributes of the devices they are managing, and use
the normal DMA map, unmap, and sync APIs when programming a device to do DMA.
These APIs use the device DMA attributes and kernel-wide settings to determine
if bounce buffering is necessary. If so, the DMA layer manages the allocation,
freeing, and sync'ing of bounce buffers. Since the DMA attributes are per
device, some devices in a system may use bounce buffering while others do not.
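
For example, a driver doing a device-to-memory transfer uses only the generic
DMA API, and whether bounce buffering happens underneath is invisible to it. A
minimal sketch, with illustrative function and buffer names (not taken from any
real driver)::

    #include <linux/dma-mapping.h>

    static int start_rx(struct device *dev, void *buf, size_t len)
    {
            dma_addr_t dma_handle;

            /* The DMA layer decides here whether a bounce buffer is needed */
            dma_handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
            if (dma_mapping_error(dev, dma_handle))
                    return -ENOMEM;

            /* ... program dma_handle into the device and run the transfer ... */

            /* If a bounce buffer was used, its contents are copied back to
             * buf and the bounce buffer is freed
             */
            dma_unmap_single(dev, dma_handle, len, DMA_FROM_DEVICE);
            return 0;
    }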

Because the CPU copies data between the bounce buffer and the original target
memory buffer, doing bounce buffering is slower than doing DMA directly to the
original memory buffer, and it consumes more CPU resources. So it is used only
when necessary for providing DMA functionality.

Usage Scenarios
---------------

swiotlb was originally created to handle DMA for devices with addressing
limitations. As physical memory sizes grew beyond 4 GiB, some devices could
only provide 32-bit DMA addresses. By allocating bounce buffer memory below
the 4 GiB line, these devices with addressing limitations could still work and
do DMA.
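
Such an addressing limitation is expressed through the device's DMA mask. As a
brief sketch of what a driver for such a device might do at probe time (error
handling and surrounding context are omitted)::

    /*
     * Tell the DMA layer this device can only generate 32-bit DMA addresses.
     * Mappings of memory above 4 GiB are then bounce buffered (or remapped
     * by an IOMMU, if one is available).
     */
    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
            dev_warn(dev, "no suitable DMA available\n");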

More recently, Confidential Computing (CoCo) VMs have the guest VM's memory
encrypted by default, and the memory is not accessible by the host hypervisor
and VMM. For the host to do I/O on behalf of the guest, the I/O must be
directed to guest memory that is unencrypted. CoCo VMs set a kernel-wide option
to force all DMA I/O to use bounce buffers, and the bounce buffer memory is set
up as unencrypted. The host does DMA I/O to/from the bounce buffer memory, and
the Linux kernel DMA layer does "sync" operations to cause the CPU to copy the
data to/from the original target memory buffer. The CPU copying bridges between
the unencrypted and the encrypted memory. This use of bounce buffers allows
device drivers to "just work" in a CoCo VM, with no modifications needed to
handle the memory encryption complexity.

Other edge case scenarios arise for bounce buffers. For example, when IOMMU
mappings are set up for a DMA operation to/from a device that is considered
"untrusted", the device should be given access only to the memory containing
the data being transferred. But if that memory occupies only part of an IOMMU
granule, other parts of the granule may contain unrelated kernel data. Since
IOMMU access control is per-granule, the untrusted device can gain access to
the unrelated kernel data. This problem is solved by bounce buffering the DMA
operation and ensuring that unused portions of the bounce buffers do not
contain any unrelated kernel data.

Core Functionality
------------------

The primary swiotlb APIs are swiotlb_tbl_map_single() and
swiotlb_tbl_unmap_single(). The "map" API allocates a bounce buffer of a
specified size in bytes and returns the physical address of the buffer. The
buffer memory is physically contiguous. The expectation is that the DMA layer
maps the physical memory address to a DMA address, and returns the DMA address
to the driver for programming into the device. If a DMA operation specifies
multiple memory buffer segments, a separate bounce buffer must be allocated for
each segment. swiotlb_tbl_map_single() always does a "sync" operation (i.e., a
CPU copy) to initialize the bounce buffer to match the contents of the original
buffer.
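
Conceptually, the DMA layer's use of the "map" API looks roughly like the
following. This is a simplified sketch, not the actual code; in particular, the
full parameter list of swiotlb_tbl_map_single() is elided because it has
changed across kernel versions::

    phys_addr_t tlb_addr;

    /* Allocate a bounce buffer and copy the original buffer into it */
    tlb_addr = swiotlb_tbl_map_single(dev, phys, size, /* ... */);
    if (tlb_addr == (phys_addr_t)DMA_MAPPING_ERROR)
            return DMA_MAPPING_ERROR;

    /* The driver gets back a DMA address derived from the bounce buffer */
    return phys_to_dma(dev, tlb_addr);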

swiotlb_tbl_unmap_single() does the reverse. If the DMA operation might have
updated the bounce buffer memory and DMA_ATTR_SKIP_CPU_SYNC is not set, the
unmap does a "sync" operation to cause a CPU copy of the data from the bounce
buffer back to the original buffer. Then the bounce buffer memory is freed.

swiotlb also provides "sync" APIs that correspond to the dma_sync_*() APIs that
a driver may use when control of a buffer transitions between the CPU and the
device. The swiotlb "sync" APIs cause a CPU copy of the data between the
original buffer and the bounce buffer. Like the dma_sync_*() APIs, the swiotlb
"sync" APIs support doing a partial sync, where only a subset of the bounce
buffer is copied to/from the original buffer.
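
From the driver's perspective these are the usual dma_sync_*() calls. For
example, for a long-lived mapping that the device writes to and the CPU then
reads, a driver might do the following (a minimal sketch; dma_handle and len
are illustrative names)::

    /* Hand the buffer to the CPU; with bounce buffering, this copies the
     * bounce buffer (or the requested part of it) to the original buffer
     */
    dma_sync_single_for_cpu(dev, dma_handle, len, DMA_FROM_DEVICE);

    /* ... CPU examines the received data ... */

    /* Hand the buffer back to the device before reusing the mapping */
    dma_sync_single_for_device(dev, dma_handle, len, DMA_FROM_DEVICE);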

Core Functionality Constraints
------------------------------

The swiotlb map/unmap/sync APIs must operate without blocking, as they are
called by the corresponding DMA APIs which may run in contexts that cannot
block. Hence the default memory pool for swiotlb allocations must be
pre-allocated at boot time (but see Dynamic swiotlb below). Because swiotlb
allocations must be physically contiguous, the entire default memory pool is
allocated as a single contiguous block.

The need to pre-allocate the default swiotlb pool creates a boot-time tradeoff.
The pool should be large enough to ensure that bounce buffer requests can
always be satisfied, as the non-blocking requirement means requests can't wait
for space to become available. But a large pool potentially wastes memory, as
this pre-allocated memory is not available for other uses in the system. The
tradeoff is particularly acute in CoCo VMs that use bounce buffers for all DMA
I/O. These VMs use a heuristic to set the default pool size to ~6% of memory,
with a max of 1 GiB, which has the potential to be very wasteful of memory.
Conversely, the heuristic might produce a size that is insufficient, depending
on the I/O patterns of the workload in the VM. The dynamic swiotlb feature
described below can help, but has limitations. Better management of the swiotlb
default memory pool size remains an open issue.

A single allocation from swiotlb is limited to IO_TLB_SIZE * IO_TLB_SEGSIZE
bytes, which is 256 KiB with current definitions. When a device's DMA settings
are such that the device might use swiotlb, the maximum size of a DMA segment
must be limited to that 256 KiB. This value is communicated to higher-level
kernel code via dma_max_mapping_size() and swiotlb_max_mapping_size(). If the
higher-level code fails to account for this limit, it may make requests that
are too large for swiotlb, and get a "swiotlb full" error.
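
Higher-level code should therefore query the limit rather than assume one. For
example, a block driver might derive its maximum I/O size roughly like this (a
sketch; the surrounding queue-limits setup is omitted)::

    /* 256 KiB or less when the device might be bounce buffered,
     * effectively unlimited otherwise
     */
    size_t max_bytes = dma_max_mapping_size(dev);
    unsigned int max_sectors = min_t(size_t, max_bytes >> SECTOR_SHIFT, UINT_MAX);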

A key device DMA setting is "min_align_mask", which is a power of 2 minus 1
so that some number of low order bits are set, or it may be zero. swiotlb
allocations ensure these min_align_mask bits of the physical address of the
bounce buffer match the same bits in the address of the original buffer. When
min_align_mask is non-zero, it may produce an "alignment offset" in the address
of the bounce buffer that slightly reduces the maximum size of an allocation.
This potential alignment offset is reflected in the value returned by
swiotlb_max_mapping_size(), which can show up in places like
/sys/block/<device>/queue/max_sectors_kb. For example, if a device does not use
swiotlb, max_sectors_kb might be 512 KiB or larger. If a device might use
swiotlb, max_sectors_kb will be 256 KiB. When min_align_mask is non-zero,
max_sectors_kb might be even smaller, such as 252 KiB.
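
A driver sets min_align_mask on its device when the hardware requires buffers
to keep their offset relative to some block size. As a sketch, assuming a
device that requires the offset within a 4 KiB block to be preserved::

    /* Preserve the low 12 address bits when a bounce buffer is substituted
     * for the original buffer
     */
    dma_set_min_align_mask(dev, SZ_4K - 1);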

swiotlb_tbl_map_single() also takes an "alloc_align_mask" parameter. This
parameter specifies that the allocation of bounce buffer space must start at a
physical address with the alloc_align_mask bits set to zero. But the actual
bounce buffer might start at a larger address if min_align_mask is non-zero.
Hence there may be pre-padding space that is allocated prior to the start of
the bounce buffer. Similarly, the end of the bounce buffer is rounded up to an
alloc_align_mask boundary, potentially resulting in post-padding space. Any
pre-padding or post-padding space is not initialized by swiotlb code. The
"alloc_align_mask" parameter is used by IOMMU code when mapping for untrusted
devices. It is set to the granule size - 1 so that the bounce buffer is
allocated entirely from granules that are not used for any other purpose.

Data structures concepts
------------------------

Memory used for swiotlb bounce buffers is allocated from overall system memory
as one or more "pools". The default pool is allocated during system boot with a
default size of 64 MiB. The default pool size may be modified with the
"swiotlb=" kernel boot line parameter. The default size may also be adjusted
due to other conditions, such as running in a CoCo VM, as described above. If
CONFIG_SWIOTLB_DYNAMIC is enabled, additional pools may be allocated later in
the life of the system. Each pool must be a contiguous range of physical
memory. The default pool is allocated below the 4 GiB physical address line so
it works for devices that can only address 32 bits of physical memory (unless
architecture-specific code provides the SWIOTLB_ANY flag). In a CoCo VM, the
pool memory must be decrypted before swiotlb is used.

Each pool is divided into "slots" of size IO_TLB_SIZE, which is 2 KiB with
current definitions. IO_TLB_SEGSIZE contiguous slots (128 slots) constitute
what might be called a "slot set". When a bounce buffer is allocated, it
occupies one or more contiguous slots. A slot is never shared by multiple
bounce buffers. Furthermore, a bounce buffer must be allocated from a single
slot set, which leads to the maximum bounce buffer size being IO_TLB_SIZE *
IO_TLB_SEGSIZE. Multiple smaller bounce buffers may co-exist in a single slot
set if the alignment and size constraints can be met.
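
With the definitions in include/linux/swiotlb.h at the time of writing, the
arithmetic works out as follows::

    #define IO_TLB_SHIFT    11
    #define IO_TLB_SIZE     (1 << IO_TLB_SHIFT)     /* 2 KiB per slot */
    #define IO_TLB_SEGSIZE  128                     /* slots per slot set */

    /* maximum bounce buffer size: 128 slots * 2 KiB = 256 KiB */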

Slots are also grouped into "areas", with the constraint that a slot set exists
entirely in a single area. Each area has its own spin lock that must be held to
manipulate the slots in that area. The division into areas avoids contending
for a single global spin lock when swiotlb is heavily used, such as in a CoCo
VM. The number of areas defaults to the number of CPUs in the system for
maximum parallelism, but since an area can't be smaller than IO_TLB_SEGSIZE
slots, it might be necessary to assign multiple CPUs to the same area. The
number of areas can also be set via the "swiotlb=" kernel boot parameter.

When allocating a bounce buffer, if the area associated with the calling CPU
does not have enough free space, areas associated with other CPUs are tried
sequentially. For each area tried, the area's spin lock must be obtained before
trying an allocation, so contention may occur if swiotlb is relatively busy
overall. But an allocation request does not fail unless all areas do not have
enough free space.

IO_TLB_SIZE, IO_TLB_SEGSIZE, and the number of areas must all be powers of 2 as
the code uses shifting and bit masking to do many of the calculations. The
number of areas is rounded up to a power of 2 if necessary to meet this
requirement.

The default pool is allocated with PAGE_SIZE alignment. If an alloc_align_mask
argument to swiotlb_tbl_map_single() specifies a larger alignment, one or more
initial slots in each slot set might not meet the alloc_align_mask criteria.
Because a bounce buffer allocation can't cross a slot set boundary, eliminating
those initial slots effectively reduces the max size of a bounce buffer.
Currently, there's no problem because alloc_align_mask is set based on IOMMU
granule size, and granules cannot be larger than PAGE_SIZE. But if that were to
change in the future, the initial pool allocation might need to be done with
alignment larger than PAGE_SIZE.

Dynamic swiotlb
---------------

When CONFIG_SWIOTLB_DYNAMIC is enabled, swiotlb can do on-demand expansion of
the amount of memory available for allocation as bounce buffers. If a bounce
buffer request fails due to lack of available space, an asynchronous background
task is kicked off to allocate memory from general system memory and turn it
into an swiotlb pool. Creating an additional pool must be done asynchronously
because the memory allocation may block, and as noted above, swiotlb requests
are not allowed to block. Once the background task is kicked off, the bounce
buffer request creates a "transient pool" to avoid returning an "swiotlb full"
error. A transient pool has the size of the bounce buffer request, and is
deleted when the bounce buffer is freed. Memory for this transient pool comes
from the general system memory atomic pool so that creation does not block.
Creating a transient pool has relatively high cost, particularly in a CoCo VM
where the memory must be decrypted, so it is done only as a stopgap until the
background task can add another non-transient pool.

Adding a dynamic pool has limitations. Like with the default pool, the memory
must be physically contiguous, so the size is limited to MAX_PAGE_ORDER pages
(e.g., 4 MiB on a typical x86 system). Due to memory fragmentation, a max size
allocation may not be available. The dynamic pool allocator tries smaller sizes
until it succeeds, but with a minimum size of 1 MiB. Given sufficient system
memory fragmentation, dynamically adding a pool might not succeed at all.
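
The fallback strategy can be pictured roughly as follows. This is an
illustrative sketch of the retry loop, not the actual allocator code::

    /* Try progressively smaller physically contiguous allocations,
     * but never go below the 1 MiB minimum
     */
    while (nbytes >= SZ_1M) {
            vaddr = alloc_pages_exact(nbytes, gfp);
            if (vaddr)
                    break;          /* got a contiguous block of this size */
            nbytes /= 2;            /* fall back to a smaller pool */
    }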

The number of areas in a dynamic pool may be different from the number of areas
in the default pool. Because the new pool size is typically a few MiB at most,
the number of areas will likely be smaller. For example, with a new pool size
of 4 MiB and the 256 KiB minimum area size, only 16 areas can be created. If
the system has more than 16 CPUs, multiple CPUs must share an area, creating
more lock contention.

New pools added via dynamic swiotlb are linked together in a linear list.
swiotlb code frequently must search for the pool containing a particular
swiotlb physical address, so that search is linear and not performant with a
large number of dynamic pools. The data structures could be improved for
faster searches.

Overall, dynamic swiotlb works best for small configurations with relatively
few CPUs. It allows the default swiotlb pool to be smaller so that memory is
not wasted, with dynamic pools making more space available if needed (as long
as fragmentation isn't an obstacle). It is less useful for large CoCo VMs.

Data Structure Details
----------------------

swiotlb is managed with four primary data structures: io_tlb_mem, io_tlb_pool,
io_tlb_area, and io_tlb_slot. io_tlb_mem describes a swiotlb memory allocator,
which includes the default memory pool and any dynamic or transient pools
linked to it. Limited statistics on swiotlb usage are kept per memory allocator
and are stored in this data structure. These statistics are available under
/sys/kernel/debug/swiotlb when CONFIG_DEBUG_FS is set.

io_tlb_pool describes a memory pool, either the default pool, a dynamic pool,
or a transient pool. The description includes the start and end addresses of
the memory in the pool, a pointer to an array of io_tlb_area structures, and a
pointer to an array of io_tlb_slot structures that are associated with the pool.

io_tlb_area describes an area. The primary field is the spin lock used to
serialize access to slots in the area. The io_tlb_area array for a pool has an
entry for each area, and is accessed using a 0-based area index derived from the
calling processor ID. Areas exist solely to allow parallel access to swiotlb
from multiple CPUs.

io_tlb_slot describes an individual memory slot in the pool, with size
IO_TLB_SIZE (2 KiB currently). The io_tlb_slot array is indexed by the slot
index computed from the bounce buffer address relative to the starting memory
address of the pool. The size of struct io_tlb_slot is 24 bytes, so the
overhead is about 1% of the slot size.
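
The fields described in the following paragraphs correspond roughly to this
layout. This is a simplified sketch of the structure defined in
kernel/dma/swiotlb.c; the exact definition may differ between kernel versions::

    struct io_tlb_slot {
            phys_addr_t     orig_addr;      /* original buffer address */
            size_t          alloc_size;     /* size recorded for sanity checks */
            unsigned short  list;           /* contiguous free-slot count */
            unsigned short  pad_slots;      /* pre-padding slots, first slot only */
    };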

The io_tlb_slot array is designed to meet several requirements. First, the DMA
APIs and the corresponding swiotlb APIs use the bounce buffer address as the
identifier for a bounce buffer. This address is returned by
swiotlb_tbl_map_single(), and then passed as an argument to
swiotlb_tbl_unmap_single() and the swiotlb_sync_*() functions. The original
memory buffer address obviously must be passed as an argument to
swiotlb_tbl_map_single(), but it is not passed to the other APIs. Consequently,
swiotlb data structures must save the original memory buffer address so that it
can be used when doing sync operations. This original address is saved in the
io_tlb_slot array.

Second, the io_tlb_slot array must handle partial sync requests. In such cases,
the argument to swiotlb_sync_*() is not the address of the start of the bounce
buffer but an address somewhere in the middle of the bounce buffer, and the
address of the start of the bounce buffer isn't known to swiotlb code. But
swiotlb code must be able to calculate the corresponding original memory buffer
address to do the CPU copy dictated by the "sync". So an adjusted original
memory buffer address is populated into the struct io_tlb_slot for each slot
occupied by the bounce buffer. An adjusted "alloc_size" of the bounce buffer is
also recorded in each struct io_tlb_slot so a sanity check can be performed on
the size of the "sync" operation. The "alloc_size" field is not used except for
the sanity check.
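
Conceptually, a sync on an address in the middle of a bounce buffer is resolved
roughly like this (a simplified sketch of the logic; the variable names are
illustrative and the exact code differs)::

    /* Which slot does the sync address fall in? */
    index = (tlb_addr - pool->start) >> IO_TLB_SHIFT;

    /* The slot's orig_addr was adjusted at map time, so the offset within
     * the slot applies equally to the original buffer
     */
    orig_addr = pool->slots[index].orig_addr + (tlb_addr & (IO_TLB_SIZE - 1));

    /* CPU copy between orig_addr and tlb_addr for the requested length,
     * sanity checked against the recorded alloc_size
     */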

Third, the io_tlb_slot array is used to track available slots. The "list" field
in struct io_tlb_slot records how many contiguous available slots exist starting
at that slot. A "0" indicates that the slot is occupied. A value of "1"
indicates only the current slot is available. A value of "2" indicates the
current slot and the next slot are available, etc. The maximum value is
IO_TLB_SEGSIZE, which can appear in the first slot in a slot set, and indicates
that the entire slot set is available. These values are used when searching for
available slots to use for a new bounce buffer. They are updated when allocating
a new bounce buffer and when freeing a bounce buffer. At pool creation time, the
"list" field is initialized to IO_TLB_SEGSIZE down to 1 for the slots in every
slot set.
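
A sketch of that initialization, with every slot free so each slot set's "list"
values count down from IO_TLB_SEGSIZE to 1 (illustrative, not the exact code)::

    for (i = 0; i < nslots; i++)
            pool->slots[i].list = IO_TLB_SEGSIZE - (i % IO_TLB_SEGSIZE);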

Fourth, the io_tlb_slot array keeps track of any "padding slots" allocated to
meet alloc_align_mask requirements described above. When
swiotlb_tbl_map_single() allocates bounce buffer space to meet alloc_align_mask
requirements, it may allocate pre-padding space across zero or more slots. But
when swiotlb_tbl_unmap_single() is called with the bounce buffer address, the
alloc_align_mask value that governed the allocation, and therefore the
allocation of any padding slots, is not known. The "pad_slots" field records
the number of padding slots so that swiotlb_tbl_unmap_single() can free them.
The "pad_slots" value is recorded only in the first non-padding slot allocated
to the bounce buffer.

Restricted pools
----------------

The swiotlb machinery is also used for "restricted pools", which are pools of
memory separate from the default swiotlb pool, and that are dedicated for DMA
use by a particular device. Restricted pools provide a level of DMA memory
protection on systems with limited hardware protection capabilities, such as
those lacking an IOMMU. Such usage is specified by DeviceTree entries and
requires that CONFIG_DMA_RESTRICTED_POOL is set. Each restricted pool is based
on its own io_tlb_mem data structure that is independent of the main swiotlb
io_tlb_mem.

Restricted pools add swiotlb_alloc() and swiotlb_free() APIs, which are called
from the dma_alloc_*() and dma_free_*() APIs. The swiotlb_alloc/free() APIs
allocate/free slots from/to the restricted pool directly and do not go through
swiotlb_tbl_map/unmap_single().