=================
Concepts overview
=================

The memory management in Linux is a complex system that evolved over the
years and included more and more functionality to support a variety of
systems from MMU-less microcontrollers to supercomputers. The memory
management for systems without an MMU is called ``nommu`` and it
definitely deserves a dedicated document, which hopefully will eventually
be written. Yet, although some of the concepts are the same, here we
assume that an MMU is available and a CPU can translate a virtual
address to a physical address.

.. contents:: :local:

Virtual Memory Primer
=====================

The physical memory in a computer system is a limited resource and
even for systems that support memory hotplug there is a hard limit on
the amount of memory that can be installed. The physical memory is not
necessarily contiguous; it might be accessible as a set of distinct
address ranges. Besides, different CPU architectures, and even
different implementations of the same architecture have different views
of how these address ranges are defined.

All this makes dealing directly with physical memory quite complex and
to avoid this complexity a concept of virtual memory was developed. The
virtual memory abstracts the details of physical memory from the
application software, allows keeping only the needed information in the
physical memory (demand paging) and provides a mechanism for the
protection and controlled sharing of data between processes.

With virtual memory, each and every memory access uses a virtual
address. When the CPU decodes an instruction that reads (or
writes) from (or to) the system memory, it translates the `virtual`
address encoded in that instruction to a `physical` address that the
memory controller can understand.

The physical system memory is divided into page frames, or pages. The
size of each page is architecture specific. Some architectures allow
selection of the page size from several supported values; this
selection is performed at the kernel build time by setting an
appropriate kernel configuration option.

Each physical memory page can be mapped as one or more virtual
pages. These mappings are described by page tables that allow
translation from a virtual address used by programs to the physical
memory address. The page tables are organized hierarchically.

The tables at the lowest level of the hierarchy contain physical
addresses of actual pages used by the software. The tables at higher
levels contain physical addresses of the pages belonging to the lower
levels. The pointer to the top level page table resides in a
register. When the CPU performs the address translation, it uses this
register to access the top level page table. The high bits of the
virtual address are used to index an entry in the top level page
table. That entry is then used to access the next level in the
hierarchy with the next bits of the virtual address as the index to
that level page table. The lowest bits in the virtual address define
the offset inside the actual page.
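
The split of a virtual address into per-level indices is easy to see in
code. The following user space sketch is for illustration only; it
assumes the common x86-64 layout with 4 KiB pages, four translation
levels and 9 index bits per level, and the constants and the example
address are assumptions, not kernel definitions::

    #include <stdio.h>
    #include <stdint.h>

    /* Assumed constants for an x86-64-like 4-level layout. */
    #define PAGE_SHIFT  12                        /* 4 KiB pages */
    #define LEVEL_BITS  9                         /* 512 entries per table */
    #define LEVEL_MASK  ((1UL << LEVEL_BITS) - 1)

    int main(void)
    {
            uint64_t vaddr = 0x00007f1234567abcULL;    /* arbitrary address */

            /* The lowest bits are the offset inside the page... */
            unsigned long offset = vaddr & ((1UL << PAGE_SHIFT) - 1);
            /* ...and each higher group of bits indexes one table level. */
            unsigned long pte = (vaddr >> PAGE_SHIFT) & LEVEL_MASK;
            unsigned long pmd = (vaddr >> (PAGE_SHIFT + LEVEL_BITS)) & LEVEL_MASK;
            unsigned long pud = (vaddr >> (PAGE_SHIFT + 2 * LEVEL_BITS)) & LEVEL_MASK;
            unsigned long pgd = (vaddr >> (PAGE_SHIFT + 3 * LEVEL_BITS)) & LEVEL_MASK;

            printf("pgd=%lu pud=%lu pmd=%lu pte=%lu offset=%lu\n",
                   pgd, pud, pmd, pte, offset);
            return 0;
    }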

Huge Pages
==========

The address translation requires several memory accesses and memory
accesses are slow relative to the CPU speed. To avoid spending precious
processor cycles on the address translation, CPUs maintain a cache of
such translations called the Translation Lookaside Buffer (or
TLB). Usually the TLB is a pretty scarce resource and applications with
a large memory working set will experience a performance hit because of
TLB misses.

Many modern CPU architectures allow mapping of the memory pages
directly by the higher levels in the page table. For instance, on x86,
it is possible to map 2M and even 1G pages using entries in the second
and the third level page tables. In Linux such pages are called
`huge`. Usage of huge pages significantly reduces pressure on the TLB,
improves TLB hit-rate and thus improves overall system performance.

There are two mechanisms in Linux that enable mapping of the physical
memory with the huge pages. The first one is the `HugeTLB filesystem`, or
hugetlbfs. It is a pseudo filesystem that uses RAM as its backing
store. For the files created in this filesystem the data resides in
the memory and is mapped using huge pages. The hugetlbfs is described at
Documentation/admin-guide/mm/hugetlbpage.rst.

Another, more recent, mechanism that enables use of the huge pages is
called `Transparent HugePages`, or THP. Unlike the hugetlbfs that
requires users and/or system administrators to configure what parts of
the system memory should and can be mapped by the huge pages, THP
manages such mappings transparently to the user and hence the
name. See Documentation/admin-guide/mm/transhuge.rst for more details
about THP.
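
Both mechanisms are visible from user space. As a minimal illustration,
and assuming a kernel with huge pages reserved by the administrator and
with THP enabled, a program may request huge page backed memory roughly
like this (the 2M size is an assumption suitable for x86)::

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stdio.h>

    #define LEN (2UL * 1024 * 1024)    /* one assumed 2M huge page */

    int main(void)
    {
            /* HugeTLB-style mapping: backed by the pre-reserved huge page pool. */
            void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
            if (p == MAP_FAILED)
                    perror("mmap(MAP_HUGETLB)");
            else
                    munmap(p, LEN);

            /* THP: hint that a regular mapping should be backed by huge pages. */
            void *q = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (q == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }
            if (madvise(q, LEN, MADV_HUGEPAGE))
                    perror("madvise(MADV_HUGEPAGE)");
            munmap(q, LEN);
            return 0;
    }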

Zones
=====

Often hardware poses restrictions on how different physical memory
ranges can be accessed. In some cases, devices cannot perform DMA to
all the addressable memory. In other cases, the size of the physical
memory exceeds the maximal addressable size of virtual memory and
special actions are required to access portions of the memory. Linux
groups memory pages into `zones` according to their possible
usage. For example, ZONE_DMA will contain memory that can be used by
devices for DMA, ZONE_HIGHMEM will contain memory that is not
permanently mapped into kernel's address space and ZONE_NORMAL will
contain normally addressed pages.

The actual layout of the memory zones is hardware dependent as not all
architectures define all zones, and requirements for DMA are different
for different platforms.
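
Inside the kernel, the zone an allocation is served from is selected
according to the GFP flags passed to the allocator. As a rough,
illustrative sketch (a hypothetical module, not an actual driver), a
driver that can only DMA to low addresses could request its buffer from
ZONE_DMA::

    #include <linux/module.h>
    #include <linux/slab.h>
    #include <linux/gfp.h>

    static void *dma_buf;

    static int __init zone_demo_init(void)
    {
            /* GFP_DMA constrains the allocation to ZONE_DMA, while a plain
             * GFP_KERNEL allocation may be served from ZONE_NORMAL. */
            dma_buf = kmalloc(4096, GFP_DMA);
            return dma_buf ? 0 : -ENOMEM;
    }

    static void __exit zone_demo_exit(void)
    {
            kfree(dma_buf);
    }

    module_init(zone_demo_init);
    module_exit(zone_demo_exit);
    MODULE_LICENSE("GPL");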

Nodes
=====

Many multi-processor machines are NUMA - Non-Uniform Memory Access -
systems. In such systems the memory is arranged into banks that have
different access latency depending on the "distance" from the
processor. Each bank is referred to as a `node` and for each node Linux
constructs an independent memory management subsystem. A node has its
own set of zones, lists of free and used pages and various statistics
counters. You can find more details about NUMA in
Documentation/mm/numa.rst and in
Documentation/admin-guide/mm/numa_memory_policy.rst.
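
From user space, node-aware allocations are commonly made through the
libnuma library (an assumption here, not part of the kernel itself). A
minimal sketch that places memory on node 0 could look like this (link
with ``-lnuma``)::

    #include <numa.h>
    #include <stdio.h>

    int main(void)
    {
            size_t len = 1 << 20;    /* 1 MiB, arbitrary */
            void *p;

            if (numa_available() < 0) {
                    fprintf(stderr, "NUMA is not available on this system\n");
                    return 1;
            }

            /* Allocate memory backed by pages on node 0. */
            p = numa_alloc_onnode(len, 0);
            if (!p) {
                    perror("numa_alloc_onnode");
                    return 1;
            }
            printf("highest node id: %d\n", numa_max_node());
            numa_free(p, len);
            return 0;
    }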

Page cache
==========

The physical memory is volatile and the common case for getting data
into the memory is to read it from files. Whenever a file is read, the
data is put into the `page cache` to avoid expensive disk access on
the subsequent reads. Similarly, when one writes to a file, the data
is placed in the page cache and eventually gets into the backing
storage device. The written pages are marked as `dirty` and when Linux
decides to reuse them for other purposes, it makes sure to synchronize
the file contents on the device with the updated data.
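
The effect is easy to observe from user space: a plain write(2) only
copies data into the page cache, while an explicit fsync(2) forces the
dirty pages out to the backing device. A minimal sketch (the file name
is a made-up example)::

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            const char msg[] = "hello, page cache\n";
            int fd = open("/tmp/pagecache-demo", O_RDWR | O_CREAT | O_TRUNC, 0644);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* write() lands in the page cache and marks the pages dirty;
             * it may return long before the data reaches the disk. */
            if (write(fd, msg, strlen(msg)) < 0)
                    perror("write");

            /* fsync() forces write-back of the dirty pages to storage. */
            if (fsync(fd))
                    perror("fsync");

            close(fd);
            return 0;
    }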

Anonymous Memory
================

The `anonymous memory` or `anonymous mappings` represent memory that
is not backed by a filesystem. Such mappings are implicitly created
for the program's stack and heap or by explicit calls to the mmap(2)
system call. Usually, the anonymous mappings only define virtual memory
areas that the program is allowed to access. Read accesses will result
in the creation of a page table entry that references a special physical
page filled with zeroes. When the program performs a write, a regular
physical page will be allocated to hold the written data. The page
will be marked dirty and if the kernel decides to repurpose it,
the dirty page will be swapped out.
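
A minimal user space sketch of this behaviour: the mapping below is
anonymous and private, the first read of it is served from the shared
zero page, and the first write makes the kernel allocate a real
physical page::

    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
            size_t len = 4096;

            /* Anonymous, private mapping: no file backs this memory. */
            unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            printf("initial byte: %u\n", p[0]);   /* read: zero page */
            p[0] = 42;                            /* write: page gets allocated */

            munmap(p, len);
            return 0;
    }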

Reclaim
=======

Throughout the system lifetime, a physical page can be used for storing
different types of data. It can be kernel internal data structures,
DMA'able buffers for device drivers' use, data read from a filesystem,
memory allocated by user space processes etc.

Depending on the page usage it is treated differently by the Linux
memory management. The pages that can be freed at any time, either
because they cache the data available elsewhere, for instance, on a
hard disk, or because they can be swapped out, again, to the hard
disk, are called `reclaimable`. The most notable categories of the
reclaimable pages are page cache and anonymous memory.

In most cases, the pages holding internal kernel data and used as DMA
buffers cannot be repurposed, and they remain pinned until freed by
their user. Such pages are called `unreclaimable`. However, in certain
circumstances, even pages occupied with kernel data structures can be
reclaimed. For instance, in-memory caches of filesystem metadata can
be re-read from the storage device and therefore it is possible to
discard them from the main memory when the system is under memory
pressure.

The process of freeing the reclaimable physical memory pages and
repurposing them is called (surprise!) `reclaim`. Linux can reclaim
pages either asynchronously or synchronously, depending on the state
of the system. When the system is not loaded, most of the memory is free
and allocation requests will be satisfied immediately from the free
pages supply. As the load increases, the amount of the free pages goes
down and when it reaches a certain threshold (low watermark), an
allocation request will awaken the ``kswapd`` daemon. It will
asynchronously scan memory pages and either just free them if the data
they contain is available elsewhere, or evict to the backing storage
device (remember those dirty pages?). As memory usage increases even
more and reaches another threshold - min watermark - an allocation
will trigger `direct reclaim`. In this case allocation is stalled
until enough memory pages are reclaimed to satisfy the request.
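
The per-zone watermarks can be inspected through ``/proc/zoneinfo``. A
small, illustrative reader that prints the zone headers and their
``min``/``low``/``high`` watermark values (expressed in pages) might
look like this::

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char line[256];
            FILE *f = fopen("/proc/zoneinfo", "r");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            while (fgets(line, sizeof(line), f)) {
                    /* Keep the zone headers and the watermark lines. */
                    if (strstr(line, "Node") || strstr(line, " min ") ||
                        strstr(line, " low ") || strstr(line, " high "))
                            fputs(line, stdout);
            }
            fclose(f);
            return 0;
    }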

Compaction
==========

As the system runs, tasks allocate and free the memory and it becomes
fragmented. Although with virtual memory it is possible to present
scattered physical pages as a virtually contiguous range, sometimes it is
necessary to allocate large physically contiguous memory areas. Such
need may arise, for instance, when a device driver requires a large
buffer for DMA, or when THP allocates a huge page. Memory `compaction`
addresses the fragmentation issue. This mechanism moves occupied pages
from the lower part of a memory zone to free pages in the upper part
of the zone. When a compaction scan is finished, free pages are grouped
together at the beginning of the zone and allocations of large
physically contiguous areas become possible.

Like reclaim, the compaction may happen asynchronously in the ``kcompactd``
daemon or synchronously as a result of a memory allocation request.
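
Compaction can also be requested explicitly. On kernels built with
CONFIG_COMPACTION, writing ``1`` to ``/proc/sys/vm/compact_memory``
(root required) asks the kernel to compact all zones; a tiny sketch::

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
            int fd = open("/proc/sys/vm/compact_memory", O_WRONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* Writing 1 triggers compaction of all memory zones. */
            if (write(fd, "1", 1) != 1)
                    perror("write");
            close(fd);
            return 0;
    }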

OOM killer
==========

It is possible that on a loaded machine memory will be exhausted and the
kernel will be unable to reclaim enough memory to continue to operate. In
order to save the rest of the system, it invokes the `OOM killer`.

The `OOM killer` selects a task to sacrifice for the sake of the overall
system health. The selected task is killed in a hope that after it exits
enough memory will be freed to continue normal operation.
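
User space can influence the victim selection through the per-process
``/proc/<pid>/oom_score_adj`` file, which accepts values from -1000
(never select this task) to 1000 (select it first). A small sketch that
makes the calling process a preferred victim (the value 500 is an
arbitrary example)::

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            const char adj[] = "500";    /* arbitrary positive adjustment */
            int fd = open("/proc/self/oom_score_adj", O_WRONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            if (write(fd, adj, strlen(adj)) < 0)
                    perror("write");
            close(fd);
            return 0;
    }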