.. _hmm:

=====================================
Heterogeneous Memory Management (HMM)
=====================================

Provide infrastructure and helpers to integrate non-conventional memory (device
memory like GPU on board memory) into regular kernel path, with the cornerstone
of this being specialized struct page for such memory (see sections 5 to 7 of
this document).

HMM also provides optional helpers for SVM (Shared Virtual Memory), i.e.,
allowing a device to transparently access program addresses coherently with
the CPU, meaning that any valid pointer on the CPU is also a valid pointer
for the device. This is becoming mandatory to simplify the use of advanced
heterogeneous computing where GPU, DSP, or FPGA are used to perform various
computations on behalf of a process.

This document is divided as follows: in the first section I expose the problems
related to using device specific memory allocators. In the second section, I
expose the hardware limitations that are inherent to many platforms. The third
section gives an overview of the HMM design. The fourth section explains how
CPU page-table mirroring works and the purpose of HMM in this context. The
fifth section deals with how device memory is represented inside the kernel.
Finally, the last section presents a new migration helper that allows
leveraging the device DMA engine.

.. contents:: :local:

Problems of using a device specific memory allocator
=====================================================

Devices with a large amount of on board memory (several gigabytes) like GPUs
have historically managed their memory through dedicated driver specific APIs.
This creates a disconnect between memory allocated and managed by a device
driver and regular application memory (private anonymous, shared memory, or
regular file backed memory). From here on I will refer to this aspect as split
address space. I use shared address space to refer to the opposite situation:
i.e., one in which any application memory region can be used by a device
transparently.

Split address space happens because devices can only access memory allocated
through a device specific API. This implies that all memory objects in a program
are not equal from the device point of view which complicates large programs
that rely on a wide set of libraries.

Concretely, this means that code that wants to leverage devices like GPUs needs
to copy objects between generically allocated memory (malloc, mmap private, mmap
shared) and memory allocated through the device driver API (this still ends up
with an mmap but of the device file).

For flat data sets (array, grid, image, ...) this isn't too hard to achieve but
for complex data sets (list, tree, ...) it's hard to get right. Duplicating a
complex data set requires re-mapping all the pointer relations between each of
its elements. This is error prone and programs get harder to debug because of
the duplicate data set and addresses.

Split address space also means that libraries cannot transparently use data
they are getting from the core program or another library and thus each library
might have to duplicate its input data set using the device specific memory
allocator. Large projects suffer from this and waste resources because of the
various memory copies.

Duplicating each library API to accept as input or output memory allocated by
each device specific allocator is not a viable option. It would lead to a
combinatorial explosion in the library entry points.

Finally, with the advance of high level language constructs (in C++ but in
other languages too) it is now possible for the compiler to leverage GPUs and
other devices without programmer knowledge. Some compiler identified patterns
are only doable with a shared address space. It is also more reasonable to use
a shared address space for all other patterns.


I/O bus, device memory characteristics
======================================

I/O buses cripple shared address spaces due to a few limitations. Most I/O
buses only allow basic memory access from device to main memory; even cache
coherency is often optional. Access to device memory from a CPU is even more
limited. More often than not, it is not cache coherent.

If we only consider the PCIE bus, then a device can access main memory (often
through an IOMMU) and be cache coherent with the CPUs. However, it only allows
a limited set of atomic operations from the device on main memory. This is worse
in the other direction: the CPU can only access a limited range of the device
memory and cannot perform atomic operations on it. Thus device memory cannot
be considered the same as regular memory from the kernel point of view.

Another crippling factor is the limited bandwidth (~32GBytes/s with PCIE 4.0
and 16 lanes). This is roughly 30 times less than the fastest GPU memory
(1 TBytes/s). The final limitation is latency. Access to main memory from the
device has an order of magnitude higher latency than when the device accesses
its own memory.

Some platforms are developing new I/O buses or additions/modifications to PCIE
to address some of these limitations (OpenCAPI, CCIX). They mainly allow
two-way cache coherency between CPU and device and allow all atomic operations the
architecture supports. Sadly, not all platforms are following this trend and
some major architectures are left without hardware solutions to these problems.

So for shared address space to make sense, not only must we allow devices to
access any memory but we must also permit any memory to be migrated to device
memory while the device is using it (blocking CPU access while it happens).


Shared address space and migration
==================================

HMM intends to provide two main features. The first one is to share the address
space by duplicating the CPU page table in the device page table so the same
address points to the same physical memory for any valid main memory address in
the process address space.

To achieve this, HMM offers a set of helpers to populate the device page table
while keeping track of CPU page table updates. Device page table updates are
not as easy as CPU page table updates. To update the device page table, you must
allocate a buffer (or use a pool of pre-allocated buffers) and write GPU
specific commands in it to perform the update (unmap, cache invalidations, and
flush, ...). This cannot be done through common code for all devices. This is
why HMM provides helpers to factor out everything that can be shared, while
leaving the hardware specific details to the device driver.

The second mechanism HMM provides is a new kind of ZONE_DEVICE memory that
allows allocating a struct page for each page of device memory. Those pages
are special because the CPU cannot map them. However, they allow migrating
main memory to device memory using existing migration mechanisms and everything
looks like a page that is swapped out to disk from the CPU point of view. Using a
struct page gives the easiest and cleanest integration with existing mm
mechanisms. Here again, HMM only provides helpers, first to hotplug new ZONE_DEVICE
memory for the device memory and second to perform migration. Policy decisions
of what and when to migrate are left to the device driver.

Note that any CPU access to a device page triggers a page fault and a migration
back to main memory. For example, when a page backing a given CPU address A is
migrated from a main memory page to a device page, then any CPU access to
address A triggers a page fault and initiates a migration back to main memory.

With these two features, HMM not only allows a device to mirror a process
address space, keeping both CPU and device page tables synchronized, but also
leverages device memory by migrating the parts of the data set that are
actively being used by the device.


Address space mirroring implementation and API
==============================================

Address space mirroring's main objective is to allow duplication of a range of
the CPU page table into a device page table; HMM helps keep both synchronized. A
device driver that wants to mirror a process address space must start with the
registration of a mmu_interval_notifier::

 int mmu_interval_notifier_insert(struct mmu_interval_notifier *interval_sub,
                                  struct mm_struct *mm, unsigned long start,
                                  unsigned long length,
                                  const struct mmu_interval_notifier_ops *ops);
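
For example, a driver could register a notifier covering the range it is about
to mirror. This is only a minimal sketch: ``start``, ``length`` and the
``driver_mni_ops`` ops table (whose invalidate() callback is sketched at the
end of this section) are supplied by the driver::

    struct mmu_interval_notifier interval_sub;
    int ret;

    /* Mirror 'length' bytes starting at 'start' in the current process. */
    ret = mmu_interval_notifier_insert(&interval_sub, current->mm,
                                       start, length, &driver_mni_ops);
    if (ret)
        return ret;

    /* ... later, when the driver stops mirroring the range ... */
    mmu_interval_notifier_remove(&interval_sub);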

During the ops->invalidate() callback the device driver must perform the
update action to the range (mark range read only, or fully unmap, etc.). The
device must complete the update before the driver callback returns.

When the device driver wants to populate a range of virtual addresses, it can
use::

  int hmm_range_fault(struct hmm_range *range);

It will trigger a page fault on missing or read-only entries if write access is
requested (see below). Page faults use the generic mm page fault code path just
like a CPU page fault.

hmm_range_fault() copies CPU page table entries into the hmm_pfns array
argument. Each entry in that array corresponds to an address in the virtual
range. HMM provides a set of flags to help the driver identify special CPU
page table entries.

Locking within the ops->invalidate() callback is the most important aspect
the driver must respect in order to keep things properly synchronized.
The usage pattern is::

 int driver_populate_range(...)
 {
      struct hmm_range range;
      struct mm_struct *mm = interval_sub.mm;
      ...

      range.notifier = &interval_sub;
      range.start = ...;
      range.end = ...;
      range.hmm_pfns = ...;

      if (!mmget_not_zero(mm))
          return -EFAULT;

 again:
      range.notifier_seq = mmu_interval_read_begin(&interval_sub);
      mmap_read_lock(mm);
      ret = hmm_range_fault(&range);
      if (ret) {
          mmap_read_unlock(mm);
          if (ret == -EBUSY)
                 goto again;
          mmput(mm);
          return ret;
      }
      mmap_read_unlock(mm);

      take_lock(driver->update);
      if (mmu_interval_read_retry(&interval_sub, range.notifier_seq)) {
          release_lock(driver->update);
          goto again;
      }

      /* Use the hmm_pfns array content to update the device page table,
       * under the update lock. */

      release_lock(driver->update);
      mmput(mm);
      return 0;
 }

The driver->update lock is the same lock that the driver takes inside its
invalidate() callback. That lock must be held before calling
mmu_interval_read_retry() to avoid any race with a concurrent CPU page table
update.
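
The invalidate() callback itself takes the same driver lock, bumps the notifier
sequence, and tears down (or write protects) the affected device page table
entries. A minimal sketch, using the same placeholder take_lock()/release_lock()
and driver->update lock as in the usage pattern above (the actual device page
table update is hardware specific)::

    static bool driver_invalidate(struct mmu_interval_notifier *interval_sub,
                                  const struct mmu_notifier_range *range,
                                  unsigned long cur_seq)
    {
        if (!mmu_notifier_range_blockable(range))
            return false;

        take_lock(driver->update);
        /* Bump the notifier sequence under the driver lock so that a
         * concurrent driver_populate_range() is forced to retry. */
        mmu_interval_set_seq(interval_sub, cur_seq);
        /* Unmap or write protect the device page table entries covering
         * [range->start, range->end) here. */
        release_lock(driver->update);
        return true;
    }

    static const struct mmu_interval_notifier_ops driver_mni_ops = {
        .invalidate = driver_invalidate,
    };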

Leverage default_flags and pfn_flags_mask
=========================================

The hmm_range struct has two fields, default_flags and pfn_flags_mask, that
specify fault or snapshot policy for the whole range instead of having to set
them for each entry in the hmm_pfns array.

For instance, if the device driver wants pages for a range with at least read
permission, it sets::

    range->default_flags = HMM_PFN_REQ_FAULT;
    range->pfn_flags_mask = 0;

and calls hmm_range_fault() as described above. This will fault in all pages
in the range with at least read permission.

Now let's say the driver wants to do the same except for one page in the range,
for which it wants to have write permission. The driver then sets::

    range->default_flags = HMM_PFN_REQ_FAULT;
    range->pfn_flags_mask = HMM_PFN_REQ_WRITE;
    range->hmm_pfns[index_of_write] = HMM_PFN_REQ_WRITE;

With this, HMM will fault in all pages with at least read permission (i.e., valid)
and for the address range->start + (index_of_write << PAGE_SHIFT) it will fault
with write permission, i.e., if the CPU pte does not have write permission set
then HMM will call handle_mm_fault().

After hmm_range_fault() completes, the flag bits are set to the current state of
the page tables, i.e., HMM_PFN_VALID | HMM_PFN_WRITE will be set if the page is
writable.
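
As an illustration, after a successful hmm_range_fault() the driver might walk
the array and translate each entry into a device page table entry; a rough
sketch, where dev_map_page() and dev stand in for the hardware specific
update::

    unsigned long i, npages = (range->end - range->start) >> PAGE_SHIFT;

    for (i = 0; i < npages; i++) {
        unsigned long entry = range->hmm_pfns[i];
        unsigned long addr = range->start + (i << PAGE_SHIFT);

        if (!(entry & HMM_PFN_VALID))
            continue;    /* nothing mapped and no fault was requested */

        /* Map read-only or read-write depending on the snapshot flags. */
        dev_map_page(dev, addr, hmm_pfn_to_page(entry),
                     entry & HMM_PFN_WRITE);
    }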


Represent and manage device memory from core kernel point of view
==================================================================

Several different designs were tried to support device memory. The first one
used a device specific data structure to keep information about migrated memory
and HMM hooked itself in various places of mm code to handle any access to
addresses that were backed by device memory. It turns out that this ended up
replicating most of the fields of struct page and also needed many kernel code
paths to be updated to understand this new kind of memory.

Most kernel code paths never try to access the memory behind a page
but only care about struct page contents. Because of this, HMM switched to
directly using struct page for device memory which left most kernel code paths
unaware of the difference. We only need to make sure that no one ever tries to
map those pages from the CPU side.

Migration to and from device memory
===================================

Because the CPU cannot access device memory directly, the device driver must
use hardware DMA or device specific load/store instructions to migrate data.
The migrate_vma_setup(), migrate_vma_pages(), and migrate_vma_finalize()
functions are designed to make drivers easier to write and to centralize common
code across drivers.

Before migrating pages to device private memory, special device private
``struct page`` entries need to be created. These will be used as special "swap"
page table entries so that a CPU process will fault if it tries to access
a page that has been migrated to device private memory.

These can be allocated and freed with::

    struct resource *res;
    struct dev_pagemap pagemap;

    res = request_free_mem_region(&iomem_resource, /* number of bytes */,
                                  "name of driver resource");
    pagemap.type = MEMORY_DEVICE_PRIVATE;
    pagemap.range.start = res->start;
    pagemap.range.end = res->end;
    pagemap.nr_range = 1;
    pagemap.ops = &device_devmem_ops;
    memremap_pages(&pagemap, numa_node_id());

    memunmap_pages(&pagemap);
    release_mem_region(pagemap.range.start, range_len(&pagemap.range));

There are also devm_request_free_mem_region(), devm_memremap_pages(),
devm_memunmap_pages(), and devm_release_mem_region() when the resources can
be tied to a ``struct device``.
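
The device_devmem_ops referenced above is a ``struct dev_pagemap_ops``. For
``MEMORY_DEVICE_PRIVATE`` memory it must provide a ``page_free()`` callback and
a ``migrate_to_ram()`` callback; the latter is what resolves the CPU fault on a
device private page described earlier. A minimal sketch, with the two driver
functions left as placeholders::

    static void driver_page_free(struct page *page)
    {
        /* Return the backing device memory to the driver's allocator. */
    }

    static vm_fault_t driver_migrate_to_ram(struct vm_fault *vmf)
    {
        /* Called on a CPU fault on a device private page: migrate the page
         * back to system memory with migrate_vma_setup() and friends (see
         * the steps below), then return 0, or VM_FAULT_SIGBUS on failure.
         */
        return 0;
    }

    static const struct dev_pagemap_ops device_devmem_ops = {
        .page_free = driver_page_free,
        .migrate_to_ram = driver_migrate_to_ram,
    };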

The overall migration steps are similar to migrating NUMA pages within system
memory (see :ref:`Page migration <page_migration>`) but the steps are split
between device driver specific code and shared common code (a consolidated
driver-side sketch follows the numbered steps):

1. ``mmap_read_lock()``

   The device driver has to pass a ``struct vm_area_struct`` to
   migrate_vma_setup() so the mmap_read_lock() or mmap_write_lock() needs to
   be held for the duration of the migration.

2. ``migrate_vma_setup(struct migrate_vma *args)``

   The device driver initializes the ``struct migrate_vma`` fields and passes
   the pointer to migrate_vma_setup(). The ``args->flags`` field is used to
   filter which source pages should be migrated. For example, setting
   ``MIGRATE_VMA_SELECT_SYSTEM`` will only migrate system memory and
   ``MIGRATE_VMA_SELECT_DEVICE_PRIVATE`` will only migrate pages residing in
   device private memory. If the latter flag is set, the ``args->pgmap_owner``
   field is used to identify device private pages owned by the driver. This
   avoids trying to migrate device private pages residing in other devices.
   Currently only anonymous private VMA ranges can be migrated to or from
   system memory and device private memory.

   One of the first steps migrate_vma_setup() does is to invalidate other
   devices' MMUs with the ``mmu_notifier_invalidate_range_start()`` and
   ``mmu_notifier_invalidate_range_end()`` calls around the page table
   walks to fill in the ``args->src`` array with PFNs to be migrated.
   The ``invalidate_range_start()`` callback is passed a
   ``struct mmu_notifier_range`` with the ``event`` field set to
   ``MMU_NOTIFY_MIGRATE`` and the ``owner`` field set to
   the ``args->pgmap_owner`` field passed to migrate_vma_setup(). This
   allows the device driver to skip the invalidation callback and only
   invalidate device private MMU mappings that are actually migrating.
   This is explained more in the next section.

   While walking the page tables, a ``pte_none()`` or ``is_zero_pfn()``
   entry results in a valid "zero" PFN stored in the ``args->src`` array.
   This lets the driver allocate device private memory and clear it instead
   of copying a page of zeros. Valid PTE entries to system memory or
   device private struct pages will be locked with ``lock_page()``, isolated
   from the LRU (if system memory since device private pages are not on
   the LRU), unmapped from the process, and a special migration PTE is
   inserted in place of the original PTE.
   migrate_vma_setup() also clears the ``args->dst`` array.

3. The device driver allocates destination pages and copies source pages to
   destination pages.

   The driver checks each ``src`` entry to see if the ``MIGRATE_PFN_MIGRATE``
   bit is set and skips entries that are not migrating. The device driver
   can also choose to skip migrating a page by not filling in the ``dst``
   array for that page.

   The driver then allocates either a device private struct page or a
   system memory page, locks the page with ``lock_page()``, and fills in the
   ``dst`` array entry with::

     dst[i] = migrate_pfn(page_to_pfn(dpage));

   Now that the driver knows that this page is being migrated, it can
   invalidate device private MMU mappings and copy device private memory
   to system memory or another device private page. The core Linux kernel
   handles CPU page table invalidations so the device driver only has to
   invalidate its own MMU mappings.

   The driver can use ``migrate_pfn_to_page(src[i])`` to get the
   ``struct page`` of the source and either copy the source page to the
   destination or clear the destination device private memory if the pointer
   is ``NULL`` meaning the source page was not populated in system memory.

4. ``migrate_vma_pages()``

   This step is where the migration is actually "committed".

   If the source page was a ``pte_none()`` or ``is_zero_pfn()`` page, this
   is where the newly allocated page is inserted into the CPU's page table.
   This can fail if a CPU thread faults on the same page. However, the page
   table is locked and only one of the new pages will be inserted.
   The device driver will see that the ``MIGRATE_PFN_MIGRATE`` bit is cleared
   if it loses the race.

   If the source page was locked, isolated, etc. the source ``struct page``
   information is now copied to destination ``struct page`` finalizing the
   migration on the CPU side.

5. Device driver updates device MMU page tables for pages still migrating,
   rolling back pages not migrating.

   If the ``src`` entry still has ``MIGRATE_PFN_MIGRATE`` bit set, the device
   driver can update the device MMU and set the write enable bit if the
   ``MIGRATE_PFN_WRITE`` bit is set.

6. ``migrate_vma_finalize()``

   This step replaces the special migration page table entry with the new
   page's page table entry and releases the reference to the source and
   destination ``struct page``.

7. ``mmap_read_unlock()``

   The lock can now be released.
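
Put together, a driver migrating a single anonymous page to its device private
memory might follow the skeleton below. This is only a sketch: error handling,
the actual copy, and the device page table update are elided, and
driver_alloc_devmem_page(), driver_copy_to_device() and driver_owner are
hypothetical driver-side names::

    /* Called with mmap_read_lock() held (steps 1 and 7). */
    static int driver_migrate_one_page(struct vm_area_struct *vma,
                                       unsigned long addr)
    {
        unsigned long src = 0, dst = 0;
        struct migrate_vma args = {
            .vma         = vma,
            .start       = addr,
            .end         = addr + PAGE_SIZE,
            .src         = &src,
            .dst         = &dst,
            .pgmap_owner = driver_owner,
            .flags       = MIGRATE_VMA_SELECT_SYSTEM,
        };
        struct page *dpage;
        int ret;

        ret = migrate_vma_setup(&args);                 /* step 2 */
        if (ret)
            return ret;

        if (src & MIGRATE_PFN_MIGRATE) {                /* step 3 */
            dpage = driver_alloc_devmem_page();
            lock_page(dpage);
            /* migrate_pfn_to_page() may return NULL (zero page case),
             * in which case the device memory is cleared instead. */
            driver_copy_to_device(dpage, migrate_pfn_to_page(src));
            dst = migrate_pfn(page_to_pfn(dpage));
        }

        migrate_vma_pages(&args);                       /* step 4 */
        /* Step 5: update the device page table for pages that still
         * have MIGRATE_PFN_MIGRATE set in src. */
        migrate_vma_finalize(&args);                    /* step 6 */
        return 0;
    }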

Exclusive access memory
=======================

Some devices have features such as atomic PTE bits that can be used to implement
atomic access to system memory. To support atomic operations to a shared virtual
memory page such a device needs access to that page which is exclusive of any
userspace access from the CPU. The ``make_device_exclusive_range()`` function
can be used to make a memory range inaccessible from userspace.

This replaces all mappings for pages in the given range with special swap
entries. Any attempt to access the swap entry results in a fault which is
resolved by replacing the entry with the original mapping. A driver gets
notified that the mapping has been changed by MMU notifiers, after which point
it will no longer have exclusive access to the page. Exclusive access is
guaranteed to last until the driver drops the page lock and page reference, at
which point any CPU faults on the page may proceed as described.
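
A driver wanting to perform a device atomic operation on one page might use it
roughly as follows. This is a sketch only: mm, addr, dev and driver_owner are
supplied by the driver, driver_map_atomic() is a hypothetical device mapping
helper, and the MMU notifier that revokes the device mapping once exclusivity
is lost is not shown::

    struct page *page = NULL;
    int npages;

    npages = make_device_exclusive_range(mm, addr, addr + PAGE_SIZE,
                                         &page, driver_owner);
    if (npages <= 0 || !page)
        return -EBUSY;    /* could not get exclusive access */

    /* Map the page for atomic access in the device page table while the
     * page lock and reference returned above are still held. */
    driver_map_atomic(dev, addr, page);

    unlock_page(page);
    put_page(page);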

Memory cgroup (memcg) and rss accounting
========================================

For now, device memory is accounted as any regular page in rss counters (either
anonymous if the device page is used for anonymous memory, file if the device
page is used for file backed memory, or shmem if the device page is used for
shared memory). This is a deliberate choice to keep existing applications, which
might start using device memory without knowing about it, running unimpacted.

A drawback is that the OOM killer might kill an application using a lot of
device memory and not a lot of regular system memory and thus not freeing much
system memory. We want to gather more real world experience on how applications
and systems react under memory pressure in the presence of device memory before
deciding to account device memory differently.


The same decision was made for the memory cgroup. Device memory pages are
accounted against the same memory cgroup that a regular page would be accounted
to. This does simplify migration to and from device memory. This also means
that migration back from device memory to regular memory cannot fail because it
would go above the memory cgroup limit. We might revisit this choice later on
once we get more experience in how device memory is used and its impact on
memory resource control.


Note that device memory can never be pinned by a device driver nor through GUP
and thus such memory is always freed upon process exit, or when the last
reference is dropped in the case of shared memory or file backed memory.