=====================
Split page table lock
=====================

Originally, mm->page_table_lock spinlock protected all page tables of the
mm_struct. But this approach leads to poor page fault scalability of
multi-threaded applications due to high contention on the lock. To improve
scalability, split page table lock was introduced.

With split page table lock we have a separate per-table lock to serialize
access to the table. At the moment we use split lock for PTE and PMD
tables. Access to higher level tables is protected by mm->page_table_lock.

There are helpers to lock/unlock a table and other accessor functions (a
short usage sketch follows the list):
 - pte_offset_map_lock()
	maps PTE and takes PTE table lock, returns pointer to PTE with
	pointer to its PTE table lock, or returns NULL if no PTE table;
 - pte_offset_map_nolock()
	maps PTE, returns pointer to PTE with pointer to its PTE table
	lock (not taken), or returns NULL if no PTE table;
 - pte_offset_map()
	maps PTE, returns pointer to PTE, or returns NULL if no PTE table;
 - pte_unmap()
	unmaps PTE table;
 - pte_unmap_unlock()
	unlocks and unmaps PTE table;
 - pte_alloc_map_lock()
	allocates PTE table if needed and takes its lock, returns pointer to
	PTE with pointer to its lock, or returns NULL if allocation failed;
 - pmd_lock()
	takes PMD table lock, returns pointer to taken lock;
 - pmd_lockptr()
	returns pointer to PMD table lock;
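
A minimal usage sketch (the function name is made up for illustration, and
the surrounding page-fault context is elided)::

	static int touch_pte(struct mm_struct *mm, pmd_t *pmd,
			     unsigned long addr)
	{
		spinlock_t *ptl;
		pte_t *pte;

		/* map the PTE and take its table's split lock */
		pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
		if (!pte)
			return -EAGAIN;	/* no PTE table; caller may retry */

		/* inspect or modify *pte while the table lock is held */

		pte_unmap_unlock(pte, ptl);
		return 0;
	}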

Split page table lock for PTE tables is enabled at compile time if
CONFIG_SPLIT_PTLOCK_CPUS (usually 4) is less than or equal to NR_CPUS.
If split lock is disabled, all tables are guarded by mm->page_table_lock.

Split page table lock for PMD tables is enabled if it's enabled for PTE
tables and the architecture supports it (see below).

Hugetlb and split page table lock
=================================

Hugetlb can support several page sizes. We use split lock only at the PMD
level, but not for PUD.

Hugetlb-specific helpers (a short usage sketch follows the list):

 - huge_pte_lock()
	takes PMD split lock for a PMD_SIZE page, mm->page_table_lock
	otherwise;
 - huge_pte_lockptr()
	returns pointer to table lock;
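
A minimal locking sketch (vma, mm and the hugetlb pte are assumed to be
set up by the caller)::

	struct hstate *h = hstate_vma(vma);
	spinlock_t *ptl;

	/* picks the PMD split lock or mm->page_table_lock as appropriate */
	ptl = huge_pte_lock(h, mm, pte);
	/* operate on the hugetlb entry while the lock is held */
	spin_unlock(ptl);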

Support of split page table lock by an architecture
===================================================

There is no need to specially enable PTE split page table lock: everything
required is done by pagetable_pte_ctor() and pagetable_pte_dtor(), which
must be called on PTE table allocation / freeing.
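
For illustration, the allocation side can look like the following sketch
(modeled on the generic __pte_alloc_one() helper in
include/asm-generic/pgalloc.h; treat the details as version-dependent
rather than authoritative)::

	static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp)
	{
		struct ptdesc *ptdesc;

		ptdesc = pagetable_alloc(gfp, 0);
		if (!ptdesc)
			return NULL;
		if (!pagetable_pte_ctor(ptdesc)) {
			/* the ctor can fail: free the table and bail out */
			pagetable_free(ptdesc);
			return NULL;
		}

		return ptdesc_page(ptdesc);
	}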

Make sure the architecture doesn't use the slab allocator for page table
allocation: slab uses page->slab_cache for its pages, and this field shares
storage with page->ptl.

PMD split lock only makes sense if you have more than two page table
levels.

Enabling PMD split lock requires a pagetable_pmd_ctor() call on PMD table
allocation and pagetable_pmd_dtor() on freeing.

Allocation usually happens in pmd_alloc_one(), freeing in pmd_free() and
pmd_free_tlb(), but make sure you cover all PMD table allocation / freeing
paths: e.g., X86_PAE preallocates a few PMDs on pgd_alloc().

With everything in place you can set CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK.

NOTE: pagetable_pte_ctor() and pagetable_pmd_ctor() can fail -- this must
be handled properly.
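
A sketch of what a PMD allocation path can look like with the constructor
failure handled (hypothetical architecture code modeled on the generic
helpers, not any specific architecture's implementation)::

	pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
	{
		struct ptdesc *ptdesc;

		ptdesc = pagetable_alloc(GFP_PGTABLE_USER, 0);
		if (!ptdesc)
			return NULL;
		if (!pagetable_pmd_ctor(ptdesc)) {
			/* ctor can fail: free the table and report failure */
			pagetable_free(ptdesc);
			return NULL;
		}
		return ptdesc_address(ptdesc);
	}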

page->ptl
=========

page->ptl is used to access the split page table lock, where 'page' is the
struct page of the page containing the table. It shares storage with
page->private (and a few other fields in the union).

To avoid increasing the size of struct page and to get the best performance,
we use a trick:

 - if spinlock_t fits into long, we use page->ptl as the spinlock, so we
   can avoid indirect access and save a cache line.
 - if the size of spinlock_t is bigger than the size of long, we use
   page->ptl as a pointer to spinlock_t and allocate it dynamically. This
   allows using split lock with DEBUG_SPINLOCK or DEBUG_LOCK_ALLOC enabled,
   but costs one more cache line for indirect access;
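
For reference, a simplified sketch of how the helpers resolve the lock
(condensed from ptlock_ptr() in include/linux/mm.h; the exact macro names,
and whether the code operates on struct page or struct ptdesc, vary between
kernel versions)::

	#if ALLOC_SPLIT_PTLOCKS	/* spinlock_t doesn't fit into a long */
	static inline spinlock_t *ptlock_ptr(struct page *page)
	{
		return page->ptl;	/* dynamically allocated lock */
	}
	#else
	static inline spinlock_t *ptlock_ptr(struct page *page)
	{
		return &page->ptl;	/* lock embedded in struct page */
	}
	#endif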

The spinlock_t is allocated in pagetable_pte_ctor() for PTE tables and in
pagetable_pmd_ctor() for PMD tables.

Please, never access page->ptl directly -- use the appropriate helper.