.. SPDX-License-Identifier: GPL-2.0

================================
The Linux NTFS filesystem driver
================================


.. Table of contents

   - Overview
   - Web site
   - Features
   - Supported mount options
   - Known bugs and (mis-)features
   - Using NTFS volume and stripe sets
     - The Device-Mapper driver
     - The Software RAID / MD driver
     - Limitations when using the MD driver


Overview
========

Linux-NTFS comes with a number of user-space programs known as ntfsprogs.
These include mkntfs, a full-featured ntfs filesystem format utility,
ntfsundelete, used for recovering files that were unintentionally deleted
from an NTFS volume, and ntfsresize, which is used to resize an NTFS
partition.  See the web site for more information.

To mount an NTFS 1.2/3.x (Windows NT4/2000/XP/2003) volume, use the file
system type 'ntfs'.  The driver currently supports read-only mode (with no
fault-tolerance, encryption or journalling) and very limited, but safe, write
support.
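
A minimal example, assuming the NTFS partition is /dev/sda1 (substitute your
own device and mount point)::

    $ mount -t ntfs -o ro /dev/sda1 /mnt/windows
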

For fault tolerance and raid support (i.e. volume and stripe sets), you can
use the kernel's Software RAID / MD driver.  See the section "Using NTFS
volume and stripe sets" below for details.


Web site
========

There is plenty of additional information on the linux-ntfs web site
at http://www.linux-ntfs.org/

The web site includes a comprehensive FAQ, documentation on the NTFS on-disk
format, information on the Linux-NTFS userspace utilities, etc.


Features
========

- This is a complete rewrite of the NTFS driver that used to be in the 2.4 and
  earlier kernels.  This new driver implements NTFS read support, is
  functionally equivalent to the old ntfs driver, and also implements limited
  write support.  The biggest limitation at present is that files/directories
  cannot be created or deleted.  See below for the list of write features that
  are so far supported.  Another limitation is that writing to compressed files
  is not implemented at all.  Also, neither read nor write access to encrypted
  files is so far implemented.
- The new driver has full support for sparse files on NTFS 3.x volumes, which
  the old driver isn't happy with.
- The new driver supports execution of binaries due to mmap() now being
  supported.
- The new driver supports loopback mounting of files on NTFS.  Some Linux
  distributions use this to let the user run Linux from an NTFS partition: a
  large file is created while in Windows, the file is loopback mounted while
  in Linux, and a Linux filesystem is created on it onto which Linux is then
  installed.
- A comparison of the two drivers using::

        time find . -type f -exec md5sum "{}" \;

  run three times in sequence with each driver (after a reboot) on a 1.4GiB
  NTFS partition, showed the new driver to be 20% faster in total time elapsed
  (from 9:43 minutes on average down to 7:53).  The time spent in user space
  was unchanged but the time spent in the kernel was decreased by a factor of
  2.5 (from 85 CPU seconds down to 33).
- The driver does not support short file names in general.  For backwards
  compatibility, we implement access to files using their short file names if
  they exist.  The driver will not create short file names however, and a
  rename will discard any existing short file name.
- The new driver supports exporting of mounted NTFS volumes via NFS.
- The new driver supports async io (aio).
- The new driver supports fsync(2), fdatasync(2), and msync(2).
- The new driver supports readv(2) and writev(2).
- The new driver supports access time updates (including mtime and ctime).
- The new driver supports truncate(2) and open(2) with O_TRUNC.  But at present
  only very limited support for highly fragmented files, i.e. ones which have
  their data attribute split across multiple extents, is included.  Another
  limitation is that at present truncate(2) will never create sparse files,
  since to mark a file sparse we need to modify the directory entry for the
  file and we do not implement directory modifications yet.
- The new driver supports write(2) which can both overwrite existing data and
  extend the file size so that you can write beyond the existing data.  Also,
  writing into sparse regions is supported and the holes are filled in with
  clusters.  But at present only limited support for highly fragmented files,
  i.e. ones which have their data attribute split across multiple extents, is
  included.  Another limitation is that write(2) will never create sparse
  files, since to mark a file sparse we need to modify the directory entry for
  the file and we do not implement directory modifications yet.

Supported mount options
=======================

In addition to the generic mount options described by the manual page for the
mount command (man 8 mount, also see man 5 fstab), the NTFS driver supports the
following mount options:

======================= =======================================================
iocharset=name          Deprecated option.  Still supported but please use
                        nls=name in the future.  See description for nls=name.

nls=name                Character set to use when returning file names.
                        Unlike VFAT, NTFS suppresses names that contain
                        unconvertible characters.  Note that most character
                        sets contain insufficient characters to represent all
                        possible Unicode characters that can exist on NTFS.
                        To be sure you are not missing any files, you are
                        advised to use nls=utf8 which is capable of
                        representing all Unicode characters.

utf8=<bool>             Option no longer supported.  Currently mapped to
                        nls=utf8 but please use nls=utf8 in the future and
                        make sure utf8 is compiled either as module or into
                        the kernel.  See description for nls=name.

uid=
gid=
umask=                  Provide default owner, group, and access mode mask.
                        These options work as documented in mount(8).  By
                        default, files and directories are owned by root, who
                        has read and write permissions, as well as browse
                        permission for directories.  No one else has any
                        access permissions.  I.e. the mode on all files is by
                        default rw------- and for directories rwx------, a
                        consequence of the default fmask=0177 and dmask=0077.
                        Using a umask of zero will grant all permissions to
                        everyone, i.e. all files and directories will have mode
                        rwxrwxrwx.

fmask=
dmask=                  Instead of specifying umask which applies both to
                        files and directories, fmask applies only to files and
                        dmask only to directories.

sloppy=<BOOL>           If sloppy is specified, ignore unknown mount options.
                        Otherwise the default behaviour is to abort mount if
                        any unknown options are found.

show_sys_files=<BOOL>   If show_sys_files is specified, show the system files
                        in directory listings.  Otherwise the default behaviour
                        is to hide the system files.
                        Note that even when show_sys_files is specified, "$MFT"
                        will not be visible due to bugs/mis-features in glibc.
                        Further, note that irrespective of show_sys_files, all
                        files are accessible by name, i.e. you can always do
                        "ls -l \$UpCase" for example to specifically show the
                        system file containing the Unicode upcase table.

case_sensitive=<BOOL>   If case_sensitive is specified, treat all file names as
                        case sensitive and create file names in the POSIX
                        namespace.  Otherwise the default behaviour is to treat
                        file names as case insensitive and to create file names
                        in the WIN32/LONG name space.  Note, the Linux NTFS
                        driver will never create short file names and will
                        remove them on rename/delete of the corresponding long
                        file name.
                        Note that files remain accessible via their short file
                        name, if it exists.  If case_sensitive, you will need
                        to provide the correct case of the short file name.

disable_sparse=<BOOL>   If disable_sparse is specified, creation of sparse
                        regions, i.e. holes, inside files is disabled for the
                        volume (for the duration of this mount only).  By
                        default, creation of sparse regions is enabled, which
                        is consistent with the behaviour of traditional Unix
                        filesystems.

errors=opt              What to do when critical filesystem errors are found.
                        The following values can be used for "opt":

                          ========  =========================================
                          continue  DEFAULT, try to clean up as much as
                                    possible, e.g. marking a corrupt inode as
                                    bad so it is no longer accessed, and then
                                    continue.
                          recover   At present the only supported recovery is
                                    restoring the boot sector from the backup
                                    copy.  On a read-only mount, the recovery
                                    is done in memory only and not written to
                                    disk.
                          ========  =========================================

                        Note that the options are additive, i.e. specifying::

                           errors=continue,errors=recover

                        means the driver will attempt to recover and, if that
                        fails, it will clean up as much as possible and
                        continue.

mft_zone_multiplier=    Set the MFT zone multiplier for the volume (this
                        setting is not persistent across mounts and can be
                        changed from mount to mount but cannot be changed on
                        remount).  Values of 1 to 4 are allowed, 1 being the
                        default.  The MFT zone multiplier determines how much
                        space is reserved for the MFT on the volume.  If all
                        other space is used up, then the MFT zone will be
                        shrunk dynamically, so this has no impact on the
                        amount of free space.  However, it can have an impact
                        on performance by affecting fragmentation of the MFT.
                        In general, use the default.  If you have a lot of
                        small files then use a higher value.  The values have
                        the following meaning:

                              =====  =================================
                              Value  MFT zone size (% of volume size)
                              =====  =================================
                              1      12.5%
                              2      25%
                              3      37.5%
                              4      50%
                              =====  =================================

                        Note this option is irrelevant for read-only mounts.
======================= =======================================================
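
For example, to get UTF-8 file names and make the files accessible to a single
ordinary user (the device name, mount point, and uid/gid values below are only
illustrative), you could mount with::

    $ mount -t ntfs -o nls=utf8,uid=1000,gid=1000,fmask=0177,dmask=0077 /dev/sda1 /mnt/windows
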


Known bugs and (mis-)features
=============================

- The link count on each directory inode entry is set to 1, due to Linux not
  supporting directory hard links.  This may well confuse some user space
  applications, since the directory names will have the same inode numbers.
  This also speeds up ntfs_read_inode() immensely.  And we haven't found any
  problems with this approach so far.  If you find a problem with this, please
  let us know.


Please send bug reports/comments/feedback/abuse to the Linux-NTFS development
list at sourceforge: linux-ntfs-dev@lists.sourceforge.net


Using NTFS volume and stripe sets
=================================

For support of volume and stripe sets, you can either use the kernel's
Device-Mapper driver or the kernel's Software RAID / MD driver.  The former is
recommended for linear raid, but the latter is required for raid level 5.  For
striping and mirroring, either driver should work fine.


The Device-Mapper driver
------------------------

You will need to create a table of the components of the volume/stripe set and
how they fit together and load this into the kernel using the dmsetup utility
(see man 8 dmsetup).

Linear volume sets, i.e. linear raid, have been tested and work fine.  Even
though untested, there is no reason why stripe sets, i.e. raid level 0, and
mirrors, i.e. raid level 1, should not work, too.  Stripes with parity, i.e.
raid level 5, unfortunately cannot work yet because the current version of the
Device-Mapper driver does not support raid level 5.  You may be able to use the
Software RAID / MD driver for raid level 5; see the next section for details.

To create the table describing your volume you will need to know each of its
components and their sizes in sectors, i.e. multiples of 512-byte blocks.

For NT4 fault tolerant volumes you can obtain the sizes using fdisk.  So, for
example, if one of your partitions is /dev/hda2 you would do::

    $ fdisk -ul /dev/hda

    Disk /dev/hda: 81.9 GB, 81964302336 bytes
    255 heads, 63 sectors/track, 9964 cylinders, total 160086528 sectors
    Units = sectors of 1 * 512 = 512 bytes

        Device Boot      Start         End      Blocks   Id  System
        /dev/hda1   *          63     4209029     2104483+  83  Linux
        /dev/hda2         4209030    37768814    16779892+  86  NTFS
        /dev/hda3        37768815    46170809     4200997+  83  Linux

And you would know that /dev/hda2 has a size of 37768814 - 4209030 + 1 =
33559785 sectors.
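
Or, letting the shell do the arithmetic for you::

    $ echo $((37768814 - 4209030 + 1))
    33559785
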

For Win2k and later dynamic disks, you can, for example, use the ldminfo
utility, which is part of the Linux LDM tools (the latest version at the time
of writing is linux-ldm-0.0.8.tar.bz2).  You can download it from:

        http://www.linux-ntfs.org/

Simply extract the downloaded archive (tar xvjf linux-ldm-0.0.8.tar.bz2), go
into it (cd linux-ldm-0.0.8) and change to the test directory (cd test).  You
will find the precompiled (i386) ldminfo utility there.  NOTE: You will not be
able to compile this yourself easily so use the binary version!

Then you would use ldminfo in dump mode to obtain the necessary information::

    $ ./ldminfo --dump /dev/hda

This would dump the LDM database found on /dev/hda which describes all of your
dynamic disks and all the volumes on them.  At the bottom you will see the
VOLUME DEFINITIONS section, which is all you really need.  You may need to look
further above to determine which of the disks in the volume definitions is
which device in Linux.  Hint: Run ldminfo on each of your dynamic disks and
look at the Disk Id close to the top of the output for each (the PRIVATE HEADER
section).  You can then find these Disk Ids in the VBLK DATABASE section in the
<Disk> components, where you will get the LDM Name for the disk that is found
in the VOLUME DEFINITIONS section.

Note you will also need to enable the LDM driver in the Linux kernel.  If your
distribution did not enable it, you will need to recompile the kernel with it
enabled.  This will create the LDM partitions on each device at boot time.  You
would then use those devices (for /dev/hda they would be /dev/hda1, 2, 3, etc)
in the Device-Mapper table.

You can also bypass using the LDM driver by using the main device (e.g.
/dev/hda) and then using the offsets of the LDM partitions into this device as
the "Start sector of device" when creating the table.  Once again ldminfo would
give you the correct information to do this.

Assuming you know all your devices and their sizes, things are easy.

For a linear raid the table would look like this (note all values are in
512-byte sectors)::

    # Offset into  Size of this  Raid type  Device     Start sector
    # volume       device                              of device
    0              1028161       linear     /dev/hda1  0
    1028161        3903762       linear     /dev/hdb2  0
    4931923        2103211       linear     /dev/hdc1  0

The offset of each component into the volume is simply the sum of the sizes of
all the preceding components.

For a striped volume, i.e. raid level 0, you will need to know the chunk size
you used when creating the volume.  Windows uses 64kiB as the default, so it
will probably be this unless you changed the defaults when creating the array.

For a raid level 0 the table would look like this (note all values are in
512-byte sectors)::

    # Offset into  Size of the  Raid     Number of  Chunk  1st        Start in    2nd        Start in
    # volume       volume       type     stripes    size   device     1st device  device     2nd device
    0              2056320      striped  2          128    /dev/hda1  0           /dev/hdb1  0

If there are more than two devices, just add each of them to the end of the
line.
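
For instance, a hypothetical three-way stripe over three equally sized
components (the total size and the device names below are purely illustrative)
would simply list the third device and its start sector at the end::

    0              3084288      striped  3          128    /dev/hda1  0           /dev/hdb1  0           /dev/hdc1  0
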

Finally, for a mirrored volume, i.e. raid level 1, the table would look like
this (note all values are in 512-byte sectors)::

    # Ofs  Size     Raid    Log   Number  Region  Should  Number   Source     Start in  Target     Start in
    # in   of the   type    type  of log  size    sync?   of       device     source    device     target
    # vol  volume                 params                  mirrors             device               device
    0      2056320  mirror  core  2       16      nosync  2        /dev/hda1  0         /dev/hdb1  0

If you are mirroring to multiple devices, you can specify further targets at
the end of the line.
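
For instance, a hypothetical three-way mirror (again, the device names are
purely illustrative) would look like this::

    0      2056320  mirror  core  2       16      nosync  3        /dev/hda1  0         /dev/hdb1  0         /dev/hdc1  0
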

Note the "Should sync?" parameter "nosync" means that the two mirrors are
already in sync, which will be the case on a clean shutdown of Windows.  If the
mirrors are not clean, you can specify the "sync" option instead of "nosync",
and the Device-Mapper driver will then copy the entirety of the "Source Device"
to the "Target Device", or, if you specified multiple target devices, to all of
them.

Once you have your table, save it in a file somewhere (e.g. /etc/ntfsvolume1),
and hand it over to dmsetup to work with, like so::

    $ dmsetup create myvolume1 /etc/ntfsvolume1

You can obviously replace "myvolume1" with whatever name you like.

If it all worked, you will now have the device /dev/mapper/myvolume1, which
you can then just use as an argument to the mount command as usual to mount
the ntfs volume.  For example::

    $ mount -t ntfs -o ro /dev/mapper/myvolume1 /mnt/myvol1

(You need to create the directory /mnt/myvol1 first and of course you can use
anything you like instead of /mnt/myvol1 as long as it is an existing
directory.)

It is advisable to do the mount read-only to see if the volume has been set up
correctly, to avoid the possibility of causing damage to the data on the ntfs
volume.


The Software RAID / MD driver
-----------------------------

An alternative to using the Device-Mapper driver is to use the kernel's
Software RAID / MD driver.  To use it, you need to set up your /etc/raidtab
appropriately (see man 5 raidtab).

Linear volume sets, i.e. linear raid, as well as stripe sets, i.e. raid level
0, have been tested and work fine (though see the section "Limitations when
using the Software RAID / MD driver" below, especially if you want to use
linear raid).  Even though untested, there is no reason why mirrors, i.e. raid
level 1, and stripes with parity, i.e. raid level 5, should not work, too.

You have to use the "persistent-superblock 0" option for each raid-disk in the
NTFS volume/stripe you are configuring in /etc/raidtab, as the persistent
superblock used by the MD driver would damage the NTFS volume.

Windows by default uses a stripe chunk size of 64k, so you probably want the
"chunk-size 64k" option for each raid-disk, too.

For example, if you have a stripe set consisting of two partitions, /dev/hda5
and /dev/hdb1, your /etc/raidtab would look like this::

    raiddev /dev/md0
            raid-level              0
            nr-raid-disks           2
            nr-spare-disks          0
            persistent-superblock   0
            chunk-size              64k
            device                  /dev/hda5
            raid-disk               0
            device                  /dev/hdb1
            raid-disk               1

For linear raid, just change the raid-level above to "raid-level linear", for
mirrors, change it to "raid-level 1", and for stripe sets with parity, change
it to "raid-level 5".

Note that for stripe sets with parity you will also need to tell the MD driver
which parity algorithm to use by specifying the option "parity-algorithm
which", where you need to replace "which" with the name of the algorithm to
use (see man 5 raidtab for the available algorithms); you will have to try the
different available algorithms until you find one that works.  Make sure you
are working read-only when playing with this, as you may damage your data
otherwise.  If you find which algorithm works, please let us know (email the
linux-ntfs developers list linux-ntfs-dev@lists.sourceforge.net or drop in on
IRC in channel #ntfs on the irc.freenode.net network) so we can update this
documentation.
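
As a sketch, the extra raidtab line might look like the following, where
left-symmetric is merely one of the algorithm names listed in man 5 raidtab;
which one actually matches your volume still has to be found by testing as
described above::

            parity-algorithm        left-symmetric
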

Once the raidtab is set up, run for example raid0run -a to start all devices,
or raid0run /dev/md0 to start a particular md device, in this case /dev/md0.

Then just use the mount command as usual to mount the ntfs volume using, for
example::

    mount -t ntfs -o ro /dev/md0 /mnt/myntfsvolume

It is advisable to do the mount read-only to see if the md volume has been set
up correctly, to avoid the possibility of causing damage to the data on the
ntfs volume.


Limitations when using the Software RAID / MD driver
-----------------------------------------------------

Using the md driver will not work properly if any of your NTFS partitions have
an odd number of sectors.  This is especially important for linear raid, as all
data after the first partition with an odd number of sectors will be offset by
one or more sectors, so if you mount such a partition with write support you
will cause massive damage to the data on the volume, which will only become
apparent when you try to use the volume again under Windows.

So when using linear raid, make sure that all your partitions have an even
number of sectors BEFORE attempting to use it.  You have been warned!

Better still, simply use the Device-Mapper driver for linear raid; then you do
not have this problem with odd numbers of sectors.