Introduction
============

The device-mapper "unstriped" target provides a transparent mechanism to
unstripe a device-mapper "striped" target to access the underlying disks
without having to touch the true backing block-device. It can also be
used to unstripe a hardware RAID-0 to access backing disks.

Parameters:
<number of stripes> <chunk size> <stripe #> <dev_path> <offset>

<number of stripes>
        The number of stripes in the RAID 0.

<chunk size>
        The number of 512B sectors in the chunk striping.

<stripe #>
        The stripe number within the device that corresponds to the
        physical drive you wish to unstripe. This must be 0 indexed.

<dev_path>
        The block device you wish to unstripe.

<offset>
        The starting sector within dev_path, as in other device-mapper
        targets.

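
Conceptually, sector v of the unstriped view for stripe i maps to sector
(v / chunk) * stripes * chunk + i * chunk + (v % chunk) on the striped
device. A small shell sketch of that arithmetic (the values below are
illustrative, not taken from a real device):

```shell
#!/bin/bash
# Sketch of the unstriping arithmetic; NUM/CHUNK/STRIPE mirror the
# target parameters, v is a sector within the unstriped view.
NUM=4          # <number of stripes>
CHUNK=256      # <chunk size> in 512B sectors
STRIPE=2       # <stripe #>, 0 indexed
v=1000         # sector within the unstriped device

chunk_no=$((v / CHUNK))     # which chunk of this stripe we are in
in_chunk=$((v % CHUNK))     # offset inside that chunk
phys=$((chunk_no * NUM * CHUNK + STRIPE * CHUNK + in_chunk))
echo ${phys}                # for these values this prints 3816
```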
Why use this module?
====================

An example of undoing an existing dm-stripe
-------------------------------------------

This small bash script will set up 4 loop devices and use the existing
striped target to combine the 4 devices into one. It will then use
the unstriped target on top of the striped device to access the
individual backing loop devices. We write data to the newly exposed
unstriped devices and verify the data written matches the correct
underlying device on the striped array.

#!/bin/bash

MEMBER_SIZE=$((128 * 1024 * 1024))      # 128 MiB per backing file
NUM=4                                   # number of stripes
SEQ_END=$((${NUM}-1))
CHUNK=256                               # chunk size in 512B sectors
BS=4096

RAID_SIZE=$((${MEMBER_SIZE}*${NUM}/512))   # striped device size in sectors
UNSTRIPE_SIZE=$((${MEMBER_SIZE}/512))      # per-member size in sectors
DM_PARMS="0 ${RAID_SIZE} striped ${NUM} ${CHUNK}"
COUNT=$((${MEMBER_SIZE} / ${BS}))

# Create the backing files, attach them to loop devices and build the
# striped target's table as we go.
for i in $(seq 0 ${SEQ_END}); do
  dd if=/dev/zero of=member-${i} bs=${MEMBER_SIZE} count=1 oflag=direct
  losetup /dev/loop${i} member-${i}
  DM_PARMS+=" /dev/loop${i} 0"
done
echo $DM_PARMS | dmsetup create raid0

# Expose each member through an unstriped target on top of the stripe.
for i in $(seq 0 ${SEQ_END}); do
  echo "0 ${UNSTRIPE_SIZE} unstriped ${NUM} ${CHUNK} ${i} /dev/mapper/raid0 0" | dmsetup create set-${i}
done

# Write random data through each unstriped device and verify it landed
# on the matching backing member.
for i in $(seq 0 ${SEQ_END}); do
  dd if=/dev/urandom of=/dev/mapper/set-${i} bs=${BS} count=${COUNT} oflag=direct
  diff /dev/mapper/set-${i} member-${i}
done

# Tear everything down.
for i in $(seq 0 ${SEQ_END}); do
  dmsetup remove set-${i}
done
dmsetup remove raid0
for i in $(seq 0 ${SEQ_END}); do
  losetup -d /dev/loop${i}
  rm -f member-${i}
done

Another example
---------------

Intel NVMe drives contain two cores on the physical device.
Each core of the drive has segregated access to its LBA range.
The current LBA model has a RAID 0 128k chunk on each core, resulting
in a 256k stripe across the two cores:

        Core 0:       Core 1:
        __________    __________
        | LBA 512|    | LBA 768|
        | LBA 0  |    | LBA 256|
        ----------    ----------

The purpose of this unstriping is to provide better QoS in noisy
neighbor environments. When two partitions are created on the
aggregate drive without this unstriping, reads on one partition
can affect writes on another partition. This is because the partitions
are striped across the two cores. When we unstripe this hardware RAID 0
and make partitions on each newly exposed device, the two partitions are
physically separated.

With the dm-unstriped target we are able to segregate an fio script into
read and write jobs that are independent of each other. Compared to
running the test on a combined drive with partitions, we saw a 92%
reduction in read latency using this device-mapper target.

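
The fio workload described above can be sketched as a job file along
these lines. The device names assume the nvmset0/nvmset1 devices created
in the next section; the block sizes, queue depth and runtime are
illustrative, not the original test configuration:

```
; reads pinned to one exposed core, writes pinned to the other
[global]
direct=1
time_based
runtime=60

[core0-reads]
filename=/dev/mapper/nvmset0
rw=randread
bs=4k
iodepth=32

[core1-writes]
filename=/dev/mapper/nvmset1
rw=randwrite
bs=4k
iodepth=32
```

Because each job targets its own unstriped device, each lands on its own
physical core, so the writer's queueing no longer inflates the reader's
latency.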
Example dmsetup usage
=====================

unstriped on top of Intel NVMe device that has 2 cores
------------------------------------------------------

dmsetup create nvmset0 --table '0 512 unstriped 2 256 0 /dev/nvme0n1 0'
dmsetup create nvmset1 --table '0 512 unstriped 2 256 1 /dev/nvme0n1 0'

There will now be two devices that expose Intel NVMe core 0 and 1
respectively:

/dev/mapper/nvmset0
/dev/mapper/nvmset1

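
The 512-sector tables above only cover a small region at the start of
the drive. To expose each core in full, divide the device's sector count
evenly across the stripes; on a live system the size would come from
"blockdev --getsz" (the device name and size below are illustrative):

```shell
#!/bin/bash
# Build full-size unstriped tables; SIZE would normally come from
# "blockdev --getsz ${DEV}" on a real system.
DEV=/dev/nvme0n1
NUM=2               # number of stripes (cores)
CHUNK=256           # 128K chunk = 256 512B sectors
SIZE=7814037168     # whole-device size in sectors (example value)

PER_STRIPE=$((SIZE / NUM))
for i in $(seq 0 $((NUM - 1))); do
  # pipe each line to "dmsetup create nvmset${i}" to create the device
  echo "0 ${PER_STRIPE} unstriped ${NUM} ${CHUNK} ${i} ${DEV} 0"
done
```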
unstriped on top of striped with 4 drives using 128K chunk size
---------------------------------------------------------------

dmsetup create raid_disk0 --table '0 512 unstriped 4 256 0 /dev/mapper/striped 0'
dmsetup create raid_disk1 --table '0 512 unstriped 4 256 1 /dev/mapper/striped 0'
dmsetup create raid_disk2 --table '0 512 unstriped 4 256 2 /dev/mapper/striped 0'
dmsetup create raid_disk3 --table '0 512 unstriped 4 256 3 /dev/mapper/striped 0'