================================
Device-mapper "unstriped" target
================================

Introduction
============

The device-mapper "unstriped" target provides a transparent mechanism to
unstripe a device-mapper "striped" target to access the underlying disks
without having to touch the true backing block-device.  It can also be
used to unstripe a hardware RAID-0 to access backing disks.

Parameters:
<number of stripes> <chunk size> <stripe #> <dev_path> <offset>

<number of stripes>
        The number of stripes in the RAID 0.

<chunk size>
        The number of 512B sectors in each chunk of the striping.

<stripe #>
        The stripe number within the device that corresponds to the physical
        drive you wish to unstripe.  This must be 0-indexed.

<dev_path>
        The block device you wish to unstripe.

<offset>
        The starting sector within the device.
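
For example, a table line that exposes the first member of a four-disk
stripe of 128 MiB members with 256-sector chunks could look like the
following (the striped device name is illustrative)::

  0 262144 unstriped 4 256 0 /dev/mapper/raid0 0
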
Why use this module?
====================

An example of undoing an existing dm-stripe
-------------------------------------------

This small bash script will set up 4 loop devices and use the existing
striped target to combine the 4 devices into one.  It will then use
the unstriped target on top of the striped device to access the
individual backing loop devices.  We write data to the newly exposed
unstriped devices and verify that the data written matches the correct
underlying device on the striped array::

  #!/bin/bash

  MEMBER_SIZE=$((128 * 1024 * 1024))
  NUM=4
  SEQ_END=$((${NUM}-1))
  CHUNK=256
  BS=4096

  RAID_SIZE=$((${MEMBER_SIZE}*${NUM}/512))
  DM_PARMS="0 ${RAID_SIZE} striped ${NUM} ${CHUNK}"
  COUNT=$((${MEMBER_SIZE} / ${BS}))

  # Create one backing file per stripe and attach it to a loop device.
  for i in $(seq 0 ${SEQ_END}); do
    dd if=/dev/zero of=member-${i} bs=${MEMBER_SIZE} count=1 oflag=direct
    losetup /dev/loop${i} member-${i}
    DM_PARMS+=" /dev/loop${i} 0"
  done

  # Combine the loop devices into one striped device, then expose each
  # stripe again through its own unstriped device.
  echo $DM_PARMS | dmsetup create raid0
  for i in $(seq 0 ${SEQ_END}); do
    echo "0 $((${RAID_SIZE}/${NUM})) unstriped ${NUM} ${CHUNK} ${i} /dev/mapper/raid0 0" | dmsetup create set-${i}
  done

  # Write random data through the unstriped devices and verify that it
  # landed on the matching backing file.
  for i in $(seq 0 ${SEQ_END}); do
    dd if=/dev/urandom of=/dev/mapper/set-${i} bs=${BS} count=${COUNT} oflag=direct
    diff /dev/mapper/set-${i} member-${i}
  done

  for i in $(seq 0 ${SEQ_END}); do
    dmsetup remove set-${i}
  done

  dmsetup remove raid0

  for i in $(seq 0 ${SEQ_END}); do
    losetup -d /dev/loop${i}
    rm -f member-${i}
  done
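
The per-device table length used above can also be derived at run time from
the size of the striped device instead of being computed from MEMBER_SIZE; a
small sketch, assuming the raid0 device created by the script::

  STRIPED_SECTORS=$(blockdev --getsz /dev/mapper/raid0)  # size in 512B sectors
  SET_SECTORS=$((${STRIPED_SECTORS} / ${NUM}))           # length of each unstriped table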

Another example
---------------

Intel NVMe drives contain two cores on the physical device.
Each core of the drive has segregated access to its LBA range.
The current LBA model has a RAID 0 128k chunk on each core, resulting
in a 256k stripe across the two cores::

   Core 0:       Core 1:
  __________    __________
  | LBA 512|    | LBA 768|
  | LBA 0  |    | LBA 256|
  ----------    ----------
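
As a rough sketch of the striping arithmetic this diagram illustrates (the
helper below is purely illustrative and not part of any tool), a logical
sector on the device exposed for one core maps back to the aggregate drive
as follows::

  # sector: logical sector on the unstriped device
  # chunk:  chunk size in 512B sectors, num: number of stripes
  # stripe: 0-indexed stripe (core) number
  map_to_striped() {
    local sector=$1 chunk=$2 num=$3 stripe=$4
    echo $(( (sector / chunk) * chunk * num + stripe * chunk + sector % chunk ))
  }

  map_to_striped 256 256 2 0   # -> 512, matching the diagram above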

The purpose of this unstriping is to provide better QoS in noisy
neighbor environments. When two partitions are created on the
aggregate drive without this unstriping, reads on one partition
can affect writes on another partition.  This is because the partitions
are striped across the two cores.  When we unstripe this hardware RAID 0
and make partitions on each newly exposed device, the two partitions are
now physically separated.

With the dm-unstriped target we are able to segregate an fio script whose
read and write jobs are independent of each other.  Compared to running
the same test on a combined drive with partitions, this device-mapper
target gave a 92% reduction in read latency.


Example dmsetup usage
=====================

unstriped on top of an Intel NVMe device that has 2 cores
----------------------------------------------------------

::

  dmsetup create nvmset0 --table '0 512 unstriped 2 256 0 /dev/nvme0n1 0'
  dmsetup create nvmset1 --table '0 512 unstriped 2 256 1 /dev/nvme0n1 0'

There will now be two devices that expose Intel NVMe core 0 and 1
respectively::

  /dev/mapper/nvmset0
  /dev/mapper/nvmset1
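
The 512-sector length above is only an example.  To expose each core in
full, the table length can be derived from the parent device instead; a
sketch, assuming the same device and chunk size, with the length rounded
down to a whole number of 256-sector chunks::

  SECTORS=$(( $(blockdev --getsz /dev/nvme0n1) / 2 ))  # per-core size in 512B sectors
  SECTORS=$(( ${SECTORS} / 256 * 256 ))                # round down to whole chunks
  dmsetup create nvmset0 --table "0 ${SECTORS} unstriped 2 256 0 /dev/nvme0n1 0"
  dmsetup create nvmset1 --table "0 ${SECTORS} unstriped 2 256 1 /dev/nvme0n1 0"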

unstriped on top of striped with 4 drives using 128K chunk size
-----------------------------------------------------------------

::

  dmsetup create raid_disk0 --table '0 512 unstriped 4 256 0 /dev/mapper/striped 0'
  dmsetup create raid_disk1 --table '0 512 unstriped 4 256 1 /dev/mapper/striped 0'
  dmsetup create raid_disk2 --table '0 512 unstriped 4 256 2 /dev/mapper/striped 0'
  dmsetup create raid_disk3 --table '0 512 unstriped 4 256 3 /dev/mapper/striped 0'
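
Each raid_disk device then addresses a single member of the striped set.
When the devices are no longer needed they can be removed again, as in the
script above::

  dmsetup remove raid_disk0
  dmsetup remove raid_disk1
  dmsetup remove raid_disk2
  dmsetup remove raid_disk3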