=================
Directory Locking
=================

The locking scheme used for directory operations is based on two
kinds of locks - per-inode (->i_rwsem) and per-filesystem
(->s_vfs_rename_mutex).

When taking the i_rwsem on multiple non-directory objects, we
always acquire the locks in order by increasing address.  We'll call
that "inode pointer" order in the following.

Primitives
==========

For our purposes all operations fall in 6 classes:

1. read access.  Locking rules:

   * lock the directory we are accessing (shared)

2. object creation.  Locking rules:

   * lock the directory we are accessing (exclusive)

3. object removal.  Locking rules:

   * lock the parent (exclusive)
   * find the victim
   * lock the victim (exclusive)

4. link creation.  Locking rules:

   * lock the parent (exclusive)
   * check that the source is not a directory
   * lock the source (exclusive; probably could be weakened to shared)

5. rename that is _not_ cross-directory.  Locking rules:

   * lock the parent (exclusive)
   * find the source and target
   * decide which of the source and target need to be locked.
     The source needs to be locked if it's a non-directory, the target - if
     it's a non-directory or about to be removed.
   * take the locks that need to be taken (exclusive), in inode pointer order
     if both need to be taken (that can happen only when both source and target
     are non-directories - the source because it wouldn't need to be locked
     otherwise and the target because mixing directory and non-directory is
     allowed only with RENAME_EXCHANGE, and that won't be removing the target).

6. cross-directory rename.  The trickiest in the whole bunch.  Locking rules:

   * lock the filesystem
   * if the parents don't have a common ancestor, fail the operation.
   * lock the parents in "ancestors first" order (exclusive).  If neither is an
     ancestor of the other, lock the parent of source first.
   * find the source and target.
   * verify that the source is not a descendent of the target and
     the target is not a descendent of the source; fail the operation otherwise.
   * lock the subdirectories involved (exclusive), source before target.
   * lock the non-directories involved (exclusive), in inode pointer order.

The rules above obviously guarantee that all directories that are going
to be read, modified or removed by the method will be locked by the caller.
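
To make the ordering in class 6 concrete, here is a sketch of the lock
acquisition for a cross-directory rename where both source and target are
existing subdirectories.  This is a user-space model, not kernel code: the
dentry structure and the helpers below are made up for the example, the
step that fails the operation when the parents have no common ancestor is
omitted (everything here hangs off a single root), and error unwinding is
left to the caller.  In the kernel, the parent-locking part corresponds to
lock_rename()::

  #include <pthread.h>
  #include <stdbool.h>

  struct dentry {
          struct dentry *d_parent;        /* the root points to itself */
          pthread_mutex_t d_lock;         /* stand-in for the inode's ->i_rwsem */
  };

  static pthread_mutex_t s_vfs_rename_mutex = PTHREAD_MUTEX_INITIALIZER;

  /* is @d equal to @ancestor or a descendent of it? */
  static bool is_descendent(struct dentry *d, struct dentry *ancestor)
  {
          for (;; d = d->d_parent) {
                  if (d == ancestor)
                          return true;
                  if (d == d->d_parent)   /* reached the root */
                          return false;
          }
  }

  /*
   * Lock ordering for a cross-directory rename of a subdirectory @src
   * (child of @p1) over a subdirectory @dst (child of @p2), @p1 != @p2:
   * filesystem lock, then the parents ("ancestors first"), then the
   * subdirectories themselves (source before target).
   */
  static bool lock_for_cross_rename(struct dentry *p1, struct dentry *p2,
                                    struct dentry *src, struct dentry *dst)
  {
          pthread_mutex_lock(&s_vfs_rename_mutex);

          if (is_descendent(p1, p2)) {    /* p2 is an ancestor of p1 */
                  pthread_mutex_lock(&p2->d_lock);
                  pthread_mutex_lock(&p1->d_lock);
          } else {                        /* p1 is an ancestor of p2, or neither */
                  pthread_mutex_lock(&p1->d_lock);
                  pthread_mutex_lock(&p2->d_lock);
          }

          /* neither source nor target may be an ancestor of the other */
          if (is_descendent(src, dst) || is_descendent(dst, src))
                  return false;           /* caller unwinds the locks and fails */

          pthread_mutex_lock(&src->d_lock);       /* source before target */
          pthread_mutex_lock(&dst->d_lock);
          return true;
  }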

Splicing
========

There is one more thing to consider - splicing.  It's not an operation
in its own right; it may happen as part of lookup.  We speak of the
operations on directory trees, but we obviously do not have the full
picture of those - especially for network filesystems.  What we have
is a bunch of subtrees visible in dcache, and locking happens on those.
Trees grow as we do operations; memory pressure prunes them.  Normally
that's not a problem, but there is a nasty twist - what should we do
when one growing tree reaches the root of another?  That can happen in
several scenarios, starting from "somebody mounted two nested subtrees
from the same NFS4 server and doing lookups in one of them has reached
the root of the other"; there's also open-by-fhandle stuff, and there's a
possibility that a directory we see in one place gets moved by the server
to another and we run into it when we do a lookup.

For a lot of reasons we want to have the same directory present in dcache
only once.  Multiple aliases are not allowed.  So when lookup runs into
a subdirectory that already has an alias, something needs to be done with
the dcache trees.  Lookup is already holding the parent locked.  If the
alias is the root of a separate tree, it gets attached to the directory we
are doing a lookup in, under the name we'd been looking for.  If the alias
is already a child of the directory we are looking in, it changes name to
the one we'd been looking for.  No extra locking is involved in these two
cases.  However, if it's a child of some other directory, things get
trickier.  First of all, we verify that it is *not* an ancestor of our
directory and fail the lookup if it is.  Then we try to lock the filesystem
and the current parent of the alias.  If either trylock fails, we fail the
lookup.  If the trylocks succeed, we detach the alias from its current
parent and attach it to our directory, under the name we are looking for.
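
A sketch of that last case, in the same user-space model as above
(try_splice() and the bare parent-pointer update are made up for the
example; the real thing also has to move the dentry to the right hash
chain under the new name)::

  /*
   * Splice @alias, currently a child of some other directory, under @dir.
   * @dir is the directory we are doing the lookup in; the caller already
   * holds it locked exclusive.  Returns false if the lookup has to fail.
   */
  static bool try_splice(struct dentry *dir, struct dentry *alias)
  {
          struct dentry *old_parent = alias->d_parent;

          /* never splice an ancestor of @dir under @dir - that would be a loop */
          if (is_descendent(dir, alias))
                  return false;

          /* both locks are only tried; we fail the lookup rather than block */
          if (pthread_mutex_trylock(&s_vfs_rename_mutex))
                  return false;
          if (pthread_mutex_trylock(&old_parent->d_lock)) {
                  pthread_mutex_unlock(&s_vfs_rename_mutex);
                  return false;
          }

          alias->d_parent = dir;          /* detach from the old parent, attach under @dir */

          pthread_mutex_unlock(&old_parent->d_lock);
          pthread_mutex_unlock(&s_vfs_rename_mutex);
          return true;
  }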

Note that splicing does *not* involve any modification of the filesystem;
all we change is the view in dcache.  Moreover, holding a directory locked
exclusive prevents such changes involving its children, and holding the
filesystem lock prevents any changes of tree topology, other than having a
root of one tree become a child of a directory in another.  In particular,
if two dentries have been found to have a common ancestor after taking
the filesystem lock, their relationship will remain unchanged until
the lock is dropped.  So from the directory operations' point of view
splicing is almost irrelevant - the only place where it matters is one
step in cross-directory renames; we need to be careful when checking if
the parents have a common ancestor.

Multiple-filesystem stuff
=========================

For some filesystems a method can involve a directory operation on
another filesystem; it may be ecryptfs doing an operation in the underlying
filesystem, overlayfs doing something to the layers, a network filesystem
using a local one as a cache, etc.  In all such cases the operations
on the other filesystems must follow the same locking rules.  Moreover, "a
directory operation on this filesystem might involve directory operations
on that filesystem" should be an asymmetric relation (or, if you will,
it should be possible to rank the filesystems so that a directory operation
on a filesystem could trigger directory operations only on higher-ranked
ones - in these terms overlayfs ranks lower than its layers, a network
filesystem ranks lower than whatever it caches on, etc.)

Deadlock avoidance
==================

If no directory is its own ancestor, the scheme above is deadlock-free.

Proof:

There is a ranking on the locks, such that all primitives take
them in order of non-decreasing rank.  Namely,

  * rank ->i_rwsem of non-directories on a given filesystem in inode pointer
    order.
  * put ->i_rwsem of all directories on a filesystem at the same rank,
    lower than ->i_rwsem of any non-directory on the same filesystem.
  * put ->s_vfs_rename_mutex at rank lower than that of any ->i_rwsem
    on the same filesystem.
  * among the locks on different filesystems use the relative
    rank of those filesystems.

For example, if we have an NFS filesystem caching on a local one, we have:

  1. ->s_vfs_rename_mutex of the NFS filesystem
  2. ->i_rwsem of directories on that NFS filesystem, same rank for all
  3. ->i_rwsem of non-directories on that filesystem, in order of
     increasing address of inode
  4. ->s_vfs_rename_mutex of the local filesystem
  5. ->i_rwsem of directories on the local filesystem, same rank for all
  6. ->i_rwsem of non-directories on the local filesystem, in order of
     increasing address of inode.

It's easy to verify that operations never take a lock with rank
lower than that of an already held lock.
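
That ranking can be written down as a comparator (e.g. for feeding to a
lock-order checker).  The lock_desc structure and the fs_rank field below
are made up for the example; fs_rank stands for the relative rank of the
filesystem discussed in the previous section::

  #include <stdint.h>

  enum lock_kind {                /* within one filesystem, in increasing rank */
          RENAME_MUTEX,           /* ->s_vfs_rename_mutex */
          DIR_RWSEM,              /* ->i_rwsem of any directory */
          NONDIR_RWSEM,           /* ->i_rwsem of a non-directory */
  };

  struct lock_desc {
          int fs_rank;            /* relative rank of the filesystem */
          enum lock_kind kind;
          uintptr_t inode_addr;   /* orders non-directories among themselves */
  };

  /* < 0: a ranks below b; 0: same rank; > 0: a ranks above b */
  static int lock_rank_cmp(const struct lock_desc *a, const struct lock_desc *b)
  {
          if (a->fs_rank != b->fs_rank)
                  return a->fs_rank < b->fs_rank ? -1 : 1;
          if (a->kind != b->kind)
                  return a->kind < b->kind ? -1 : 1;
          if (a->kind == NONDIR_RWSEM && a->inode_addr != b->inode_addr)
                  return a->inode_addr < b->inode_addr ? -1 : 1;
          return 0;               /* all directories on a filesystem share one rank */
  }

In these terms, each primitive only ever acquires a lock that compares
greater than or equal to every lock it already holds.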

Suppose deadlocks are possible.  Consider the minimal deadlocked
set of threads.  It is a cycle of several threads, each blocked on a lock
held by the next thread in the cycle.

Since the locking order is consistent with the ranking, all
contended locks in the minimal deadlock will be of the same rank,
i.e. they all will be ->i_rwsem of directories on the same filesystem.
Moreover, without loss of generality we can assume that all operations
are done directly to that filesystem and none of them has actually
reached the method call.

In other words, we have a cycle of threads, T1, ..., Tn,
and the same number of directories (D1, ..., Dn) such that::

  T1 is blocked on D1 which is held by T2
  T2 is blocked on D2 which is held by T3
  ...
  Tn is blocked on Dn which is held by T1.

Each operation in the minimal cycle must have locked at least
one directory and blocked on an attempt to lock another.  That leaves
only 3 possible operations: directory removal (locks parent, then
child), same-directory rename killing a subdirectory (ditto) and
cross-directory rename of some sort.

There must be a cross-directory rename in the set; indeed,
if all operations had been of the "lock parent, then child" sort
we would have Dn a parent of D1, which is a parent of D2, which is
a parent of D3, ..., which is a parent of Dn.  Relationships couldn't
have changed since the moment the directory locks had been acquired,
so they would all hold simultaneously at the deadlock time and
we would have a loop.

Since all operations are on the same filesystem, there can't be
more than one cross-directory rename among them.  Without loss of
generality we can assume that T1 is the one doing a cross-directory
rename and everything else is of the "lock parent, then child" sort.

In other words, we have a cross-directory rename that locked
Dn and blocked on an attempt to lock D1, which is a parent of D2, which is
a parent of D3, ..., which is a parent of Dn.  Relationships between
D1, ..., Dn all hold simultaneously at the deadlock time.  Moreover, the
cross-directory rename does not get to locking any directories until it
has acquired the filesystem lock and verified that the directories involved
have a common ancestor, which guarantees that the ancestry relationships
between all of them had been stable.

Consider the order in which directories are locked by the
cross-directory rename; parents first, then possibly their children.
Dn and D1 would have to be among those, with Dn locked before D1.
Which pair could it be?

It can't be the parents - indeed, since D1 is an ancestor of Dn,
it would be the first parent to be locked.  Therefore at least one of the
children must be involved and thus neither of them could be a descendent
of the other - otherwise the operation would not have progressed past
locking the parents.

It can't be a parent and its child; otherwise we would've had
a loop, since the parents are locked before the children, so the parent
would have to be a descendent of its child.

It can't be a parent and a child of the other parent either.
Otherwise the child of the parent in question would've been a descendent
of the other child.

That leaves only one possibility - namely, both Dn and D1 are
among the children, in some order.  But that is also impossible, since
neither of the children is a descendent of the other.

That concludes the proof, since the set of operations with the
properties required for a minimal deadlock cannot exist.

Note that the check for having a common ancestor in cross-directory
rename is crucial - without it a deadlock would be possible.  Indeed,
suppose the parents are initially in different trees; we would lock the
parent of source, then try to lock the parent of target, only to have
an unrelated lookup splice a distant ancestor of source to some distant
descendent of the parent of target.  At that point we have a cross-directory
rename holding the lock on the parent of source and trying to lock its
distant ancestor.  Add a bunch of rmdir() attempts on all directories
in between (all of those would fail with -ENOTEMPTY, had they ever gotten
the locks) and voila - we have a deadlock.

Loop avoidance
==============

These operations are guaranteed to avoid loop creation.  Indeed,
the only operation that could introduce loops is cross-directory rename.
Suppose after the operation there is a loop; since there hadn't been such
loops before the operation, at least one of the nodes in that loop must've
had its parent changed.  In other words, the loop must be passing through
the source or, in the case of exchange, possibly the target.

Since the operation has succeeded, neither source nor target could have
been ancestors of each other.  Therefore the chain of ancestors starting
in the parent of source could not have passed through the target and
vice versa.  On the other hand, the chain of ancestors of any node could
not have passed through the node itself, or we would've had a loop before
the operation.  But everything other than source and target has kept
its parent after the operation, so the operation does not change the
chains of ancestors of the (ex-)parents of source and target.  In
particular, those chains must end after a finite number of steps.

Now consider the loop created by the operation.  It passes through either
the source or the target; the next node in the loop would be the ex-parent
of the target or the source respectively.  After that the loop would follow
the chain of ancestors of that parent.  But as we have just shown, that
chain must end after a finite number of steps, which means that it can't
be a part of any loop.  Q.E.D.

While this locking scheme works for arbitrary DAGs, it relies on the
ability to check that a directory is a descendent of another object.  The
current implementation assumes that the directory graph is a tree.  This
assumption is also preserved by all operations (a cross-directory rename
on a tree that would not introduce a cycle will leave it a tree, and link()
fails for directories).

Notice that "directory" in the above == "anything that might have
children", so if we are going to introduce hybrid objects we will need
either to make sure that link(2) doesn't work for them or to make changes
in is_subdir() that would make it work even in the presence of such beasts.