.. SPDX-License-Identifier: GPL-2.0

=======================
Squashfs 4.0 Filesystem
=======================

Squashfs is a compressed read-only filesystem for Linux.

It uses zlib, lz4, lzo, or xz compression to compress files, inodes and
directories. Inodes in the system are very small and all blocks are packed to
minimise data overhead. Block sizes greater than 4K are supported, up to a
maximum of 1 Mbyte (default block size 128K).

Squashfs is intended for general read-only filesystem use, for archival
use (i.e. in cases where a .tar.gz file may be used), and in constrained
block device/memory systems (e.g. embedded systems) where low overhead is
needed.

Mailing list: squashfs-devel@lists.sourceforge.net
Web site: www.squashfs.org

1. Filesystem Features
----------------------

Squashfs filesystem features versus Cramfs:

==============================  =========  ==========
                                Squashfs   Cramfs
==============================  =========  ==========
Max filesystem size             2^64       256 MiB
Max file size                   ~ 2 TiB    16 MiB
Max files                       unlimited  unlimited
Max directories                 unlimited  unlimited
Max entries per directory       unlimited  unlimited
Max block size                  1 MiB      4 KiB
Metadata compression            yes        no
Directory indexes               yes        no
Sparse file support             yes        no
Tail-end packing (fragments)    yes        no
Exportable (NFS etc.)           yes        no
Hard link support               yes        no
"." and ".." in readdir         yes        no
Real inode numbers              yes        no
32-bit uids/gids                yes        no
File creation time              yes        no
Xattr support                   yes        no
ACL support                     no         no
==============================  =========  ==========

Squashfs compresses data, inodes and directories. In addition, inode and
directory data are highly compacted, and packed on byte boundaries. Each
compressed inode is on average 8 bytes in length (the exact length varies with
file type, i.e. regular file, directory, symbolic link, and block/char device
inodes have different sizes).

2. Using Squashfs
-----------------

As squashfs is a read-only filesystem, the mksquashfs program must be used to
create populated squashfs filesystems. This and other squashfs utilities
can be obtained from http://www.squashfs.org. Usage instructions can be
obtained from this site also.

The squashfs-tools development tree is now located on kernel.org:

	git://git.kernel.org/pub/scm/fs/squashfs/squashfs-tools.git

2.1 Mount options
-----------------

=================== =========================================================
errors=%s           Specify whether squashfs errors trigger a kernel panic
                    or not

                    ==========  =============================================
                    continue    errors don't trigger a panic (default)
                    panic       trigger a panic when errors are encountered,
                                similar to several other filesystems (e.g.
                                btrfs, ext4, f2fs, GFS2, jfs, ntfs, ubifs)

                                This allows a kernel dump to be saved,
                                useful for analyzing and debugging the
                                corruption.
                    ==========  =============================================

threads=%s          Select the decompression mode or the number of threads

                    If SQUASHFS_CHOICE_DECOMP_BY_MOUNT is set:

                    ==========  =============================================
                    single      use single-threaded decompression (default)

                                Only one block (data or metadata) can be
                                decompressed at any one time. This limits
                                CPU and memory usage to a minimum, but it
                                also gives poor performance on parallel I/O
                                workloads when using multiple CPU machines
                                due to waiting on decompressor availability.
                    multi       use up to two parallel decompressors per core

                                If you have a parallel I/O workload and your
                                system has enough memory, using this option
                                may improve overall I/O performance. It
                                dynamically allocates decompressors on a
                                demand basis.
                    percpu      use a maximum of one decompressor per core

                                It uses percpu variables to ensure
                                decompression is load-balanced across the
                                cores.
                    1|2|3|...   configure the number of threads used for
                                decompression

                                The upper limit is num_online_cpus() * 2.
                    ==========  =============================================

                    If SQUASHFS_CHOICE_DECOMP_BY_MOUNT is **not** set and
                    SQUASHFS_DECOMP_MULTI, SQUASHFS_MOUNT_DECOMP_THREADS are
                    both set:

                    ==========  =============================================
                    2|3|...     configure the number of threads used for
                                decompression

                                The upper limit is num_online_cpus() * 2.
                    ==========  =============================================
=================== =========================================================

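For illustration, this is a minimal sketch of passing these options to the
mount(2) system call from C. The device path and mount point are hypothetical,
and a squashfs image held in a regular file must first be attached to a loop
device::

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
            /*
             * errors= is always available; threads= values such as
             * "multi" require SQUASHFS_CHOICE_DECOMP_BY_MOUNT (see the
             * table above).
             */
            if (mount("/dev/loop0", "/mnt/squash", "squashfs", MS_RDONLY,
                      "errors=panic,threads=multi") != 0) {
                    perror("mount");
                    return 1;
            }
            return 0;
    }
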
3. Squashfs Filesystem Design
-----------------------------

A squashfs filesystem consists of a maximum of nine parts, packed together on a
byte alignment::

     ---------------
    |  superblock   |
    |---------------|
    |  compression  |
    |    options    |
    |---------------|
    |  datablocks   |
    |  & fragments  |
    |---------------|
    |  inode table  |
    |---------------|
    |   directory   |
    |     table     |
    |---------------|
    |   fragment    |
    |     table     |
    |---------------|
    |    export     |
    |     table     |
    |---------------|
    |    uid/gid    |
    | lookup table  |
    |---------------|
    |     xattr     |
    |     table     |
     ---------------

Compressed data blocks are written to the filesystem as files are read from
the source directory, and checked for duplicates. Once all file data has been
written, the completed inode, directory, fragment, export, uid/gid lookup and
xattr tables are written.

3.1 Compression options
-----------------------

Compressors can optionally support compression specific options (e.g.
dictionary size). If non-default compression options have been used, then
these are stored here.

3.2 Inodes
----------

Metadata (inodes and directories) are compressed in 8Kbyte blocks. Each
compressed block is prefixed by a two-byte length word; the top bit is set if
the block is stored uncompressed. A block will be uncompressed if the -noI
option is set, or if the compressed block was larger than the uncompressed
block.

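As an illustration, the length word can be interpreted as below. This is a
sketch with invented SQFS_* names, not the kernel's own macros::

    #include <stdbool.h>
    #include <stdint.h>

    #define SQFS_METADATA_SIZE      8192    /* uncompressed metadata block size */
    #define SQFS_UNCOMPRESSED_BIT   0x8000u /* top bit of the length word       */

    /*
     * "word" is the two-byte length field, already converted from its
     * little-endian on-disk form.
     */

    /* true if the metadata block is stored uncompressed on disk */
    static inline bool metadata_is_uncompressed(uint16_t word)
    {
            return (word & SQFS_UNCOMPRESSED_BIT) != 0;
    }

    /* number of bytes the block occupies on disk */
    static inline unsigned int metadata_ondisk_size(uint16_t word)
    {
            return word & ~SQFS_UNCOMPRESSED_BIT;
    }
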
Inodes are packed into the metadata blocks, and are not aligned to block
boundaries; therefore inodes overlap compressed blocks. Inodes are identified
by a 48-bit number which encodes the location of the compressed metadata block
containing the inode, and the byte offset into that block where the inode is
placed (<block, offset>).

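A simple sketch of packing and unpacking such a reference; the helper names
are illustrative, not the kernel's::

    #include <stdint.h>

    /* build a <block, offset> inode reference */
    static inline uint64_t inode_ref(uint64_t block_start, unsigned int offset)
    {
            return (block_start << 16) | (offset & 0xffff);
    }

    /* start of the compressed metadata block, relative to the inode table */
    static inline uint64_t inode_ref_block(uint64_t ref)
    {
            return ref >> 16;
    }

    /* byte offset of the inode within the uncompressed metadata block */
    static inline unsigned int inode_ref_offset(uint64_t ref)
    {
            return ref & 0xffff;
    }
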
To maximise compression there are different inodes for each file type
(regular file, directory, device, etc.), the inode contents and length
varying with the type.

To further maximise compression, two types of regular file inode and
directory inode are defined: inodes optimised for frequently occurring
regular files and directories, and extended types where extra
information has to be stored.

3.3 Directories
---------------

Like inodes, directories are packed into compressed metadata blocks, stored
in a directory table. Directories are accessed using the start address of
the metablock containing the directory and the offset into the
decompressed block (<block, offset>).

Directories are organised in a slightly complex way, and are not simply
a list of file names. The organisation takes advantage of the
fact that (in most cases) the inodes of the files will be in the same
compressed metadata block, and can therefore share the start block.
Directories are therefore organised in a two level list: a directory
header containing the shared start block value, and a sequence of directory
entries, each of which shares that start block. A new directory header
is written whenever the inode start block changes. The directory
header/directory entry list is repeated as many times as necessary.

Directories are sorted, and can contain a directory index to speed up
file lookup. Directory indexes store one entry per metablock, each entry
storing the index/filename mapping to the first directory header
in each metadata block. Directories are sorted in alphabetical order,
and at lookup the index is scanned linearly, looking for the first filename
alphabetically larger than the filename being looked up. At this point the
location of the metadata block containing the filename has been found.
The general idea of the index is to ensure only one metadata block needs to be
decompressed to do a lookup, irrespective of the length of the directory.
This scheme has the advantage that it doesn't require extra memory overhead
and doesn't require much extra storage on disk.

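The index scan described above could be sketched as follows; the types and
names are invented for illustration and are not the on-disk layout::

    #include <string.h>

    struct dir_index_entry {
            const char *first_name;   /* first filename in the metadata block */
            unsigned int block_start; /* position of that block in the table  */
    };

    /*
     * Return the metadata block that must contain "name": the last block
     * whose first filename is not alphabetically larger than "name".
     */
    static inline unsigned int lookup_block(const struct dir_index_entry *index,
                                            int count, const char *name,
                                            unsigned int first_block)
    {
            unsigned int block = first_block;
            int i;

            for (i = 0; i < count; i++) {
                    if (strcmp(index[i].first_name, name) > 0)
                            break;
                    block = index[i].block_start;
            }
            return block;
    }
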
3.4 File data
-------------

Regular files consist of a sequence of contiguous compressed blocks, and/or a
compressed fragment block (tail-end packed block). The compressed size
of each datablock is stored in a block list contained within the
file inode.

To speed up access to datablocks when reading 'large' files (256 Mbytes or
larger), the code implements an index cache that caches the mapping from
block index to datablock location on disk.

The index cache allows Squashfs to handle large files (up to 1.75 TiB) while
retaining a simple and space-efficient block list on disk. The cache
is split into slots, caching up to eight 224 GiB files (128 KiB blocks).
Larger files use multiple slots, with 1.75 TiB files using all 8 slots.
The index cache is designed to be memory efficient, and by default uses
16 KiB.

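As a worked example, eight slots of 224 GiB give the 1.75 TiB limit quoted
above. The sketch below, assuming the default 128 KiB block size, shows the
trivial arithmetic mapping a file offset to the block index that the cached
index then resolves to an on-disk location (names are illustrative)::

    #include <stdint.h>

    #define BLOCK_LOG   17                  /* log2 of the 128 KiB block size */
    #define BLOCK_SIZE  (1UL << BLOCK_LOG)  /* 131072 bytes                   */

    /* index of the datablock holding a given byte offset within the file */
    static inline uint64_t block_index(uint64_t file_offset)
    {
            return file_offset >> BLOCK_LOG;
    }
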
3.5 Fragment lookup table
-------------------------

Regular files can contain a fragment index which is mapped to a fragment
location on disk and compressed size using a fragment lookup table. This
fragment lookup table is itself stored compressed into metadata blocks.
A second index table is used to locate these. For speed of access (and
because it is small), this second index table is read at mount time and
cached in memory.

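A rough sketch of this two-level lookup is shown below; the entry layout and
sizes are assumptions for illustration, not the exact on-disk format. The
uid/gid and export tables described in the next sections are located in the
same way::

    #include <stddef.h>
    #include <stdint.h>

    #define METADATA_SIZE 8192

    struct fragment_entry {
            uint64_t start_block;   /* fragment block location on disk */
            uint32_t size;          /* compressed size of the fragment */
            uint32_t unused;
    };

    #define ENTRIES_PER_BLOCK (METADATA_SIZE / sizeof(struct fragment_entry))

    /*
     * index_table[] is the small second-level table read at mount time:
     * one on-disk start address per compressed metadata block of the
     * fragment lookup table.
     */
    static inline uint64_t metadata_block_for(const uint64_t *index_table,
                                              unsigned int fragment_index)
    {
            return index_table[fragment_index / ENTRIES_PER_BLOCK];
    }

    /* byte offset of the entry within its uncompressed metadata block */
    static inline size_t offset_in_block(unsigned int fragment_index)
    {
            return (fragment_index % ENTRIES_PER_BLOCK) *
                    sizeof(struct fragment_entry);
    }
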
3.6 Uid/gid lookup table
------------------------

For space efficiency regular files store uid and gid indexes, which are
converted to 32-bit uids/gids using an id lookup table. This table is
stored compressed into metadata blocks. A second index table is used to
locate these. For speed of access (and because it is small), this second
index table is read at mount time and cached in memory.

3.7 Export table
----------------

To enable Squashfs filesystems to be exportable (via NFS etc.), filesystems
can optionally (disabled with the -no-exports Mksquashfs option) contain
an inode number to inode disk location lookup table. This is required to
enable Squashfs to map inode numbers passed in filehandles to the inode
location on disk, which is necessary when the export code reinstantiates
expired/flushed inodes.

This table is stored compressed into metadata blocks. A second index table is
used to locate these. For speed of access (and because it is small), this
second index table is read at mount time and cached in memory.

3.8 Xattr table
---------------

The xattr table contains extended attributes for each inode. The xattrs
for each inode are stored in a list, each list entry containing a type,
name and value field. The type field encodes the xattr prefix
("user.", "trusted." etc) and it also encodes how the name/value fields
should be interpreted. Currently the type indicates whether the value
is stored inline (in which case the value field contains the xattr value),
or if it is stored out of line (in which case the value field stores a
reference to where the actual value is stored). Storing large values out
of line improves scanning and lookup performance, and it also allows values
to be de-duplicated: the value is stored once, and all other occurrences
hold an out of line reference to that value.

The xattr lists are packed into compressed 8K metadata blocks.
To reduce overhead in inodes, rather than storing the on-disk
location of the xattr list inside each inode, a 32-bit xattr id
is stored. This xattr id is mapped into the location of the xattr
list using a second xattr id lookup table.

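A sketch of that mapping; the entry fields are a guess at the general shape
and not the on-disk layout::

    #include <stdint.h>

    struct xattr_id_entry {
            uint64_t xattr_ref;     /* <block, offset> of the xattr list */
            uint32_t count;         /* number of xattrs in the list      */
            uint32_t size;          /* size of the list                  */
    };

    /* the 32-bit xattr id stored in the inode simply indexes this table */
    static inline const struct xattr_id_entry *
    xattr_list_entry(const struct xattr_id_entry *table, uint32_t xattr_id)
    {
            return &table[xattr_id];
    }
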
4. TODOs and Outstanding Issues
-------------------------------

4.1 TODO list
-------------

Implement ACL support.

4.2 Squashfs Internal Cache
---------------------------

Blocks in Squashfs are compressed. To avoid repeatedly decompressing
recently accessed data, Squashfs uses two small metadata and fragment caches.

The cache is not used for file datablocks; these are decompressed and cached
in the page-cache in the normal way. The cache is used to temporarily cache
fragment and metadata blocks which have been read as a result of a metadata
(i.e. inode or directory) or fragment access. Because metadata and fragments
are packed together into blocks (to gain greater compression), the read of a
particular piece of metadata or fragment will retrieve other metadata/fragments
which have been packed with it; because of locality-of-reference these may be
read in the near future. Temporarily caching them ensures they are available
for near-future access without requiring an additional read and decompress.

In the future this internal cache may be replaced with an implementation which
uses the kernel page cache. Because the page cache operates on page sized
units, this may introduce additional complexity in terms of locking and
associated race conditions.