=====================
DRM Memory Management
=====================

Modern Linux systems require large amounts of graphics memory to store
frame buffers, textures, vertices and other graphics-related data. Given
the very dynamic nature of much of that data, managing graphics memory
efficiently is thus crucial for the graphics stack and plays a central
role in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table Manager
(TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
manager to be developed and tried to be a one-size-fits-all
solution. It provides a single userspace API to accommodate the needs of
all hardware, supporting both Unified Memory Architecture (UMA) devices
and devices with dedicated video RAM (i.e. most discrete video cards).
This resulted in a large, complex piece of code that turned out to be
hard to use for driver development.

GEM started as an Intel-sponsored project in reaction to TTM's
complexity. Its design philosophy is completely different: instead of
providing a solution to every graphics memory-related problem, GEM
identified common code between drivers and created a support library to
share it. GEM has simpler initialization and execution requirements than
TTM, but has no video RAM management capabilities and is thus limited to
UMA devices.

The Translation Table Manager (TTM)
===================================

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_module.c
   :doc: TTM

.. kernel-doc:: include/drm/ttm/ttm_caching.h
   :internal:

TTM device object reference
---------------------------

.. kernel-doc:: include/drm/ttm/ttm_device.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_device.c
   :export:

TTM resource placement reference
--------------------------------

.. kernel-doc:: include/drm/ttm/ttm_placement.h
   :internal:

TTM resource object reference
-----------------------------

.. kernel-doc:: include/drm/ttm/ttm_resource.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_resource.c
   :export:

TTM TT object reference
-----------------------

.. kernel-doc:: include/drm/ttm/ttm_tt.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_tt.c
   :export:

TTM page pool reference
-----------------------

.. kernel-doc:: include/drm/ttm/ttm_pool.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_pool.c
   :export:

The Graphics Execution Manager (GEM)
====================================

The GEM design approach has resulted in a memory manager that doesn't
provide full coverage of all (or even all common) use cases in its
userspace or kernel API. GEM exposes a set of standard memory-related
operations to userspace and a set of helper functions to drivers, and
lets drivers implement hardware-specific operations with their own
private API.

The GEM userspace API is described in the `GEM - the Graphics Execution
Manager <http://lwn.net/Articles/283798/>`__ article on LWN. While
slightly outdated, the document provides a good overview of the GEM API
principles. Buffer allocation and read and write operations, described
as part of the common GEM API, are currently implemented using
driver-specific ioctls.

GEM is data-agnostic. It manages abstract buffer objects without knowing
what individual buffers contain. APIs that require knowledge of buffer
contents or purpose, such as buffer allocation or synchronization
primitives, are thus outside of the scope of GEM and must be implemented
using driver-specific ioctls.

On a fundamental level, GEM involves several operations:

-  Memory allocation and freeing
-  Command execution
-  Aperture management at command execution time

Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.

Device-specific operations, such as command execution, pinning, buffer
read & write, mapping, and domain ownership transfers, are left to
driver-specific ioctls.

GEM Initialization
------------------

Drivers that use GEM must set the DRIVER_GEM bit in the
:c:type:`struct drm_driver <drm_driver>` driver_features
field. The DRM core will then automatically initialize the GEM core
before calling the load operation. Behind the scenes, this will create a
DRM Memory Manager object which provides an address space pool for
object allocation.
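
As a minimal sketch, advertising GEM support boils down to setting that flag
in the driver structure (the driver name and the remaining callbacks below
are hypothetical placeholders):

.. code-block:: c

	#include <drm/drm_drv.h>

	static const struct drm_driver foo_drm_driver = {
		/* Tell the DRM core to initialize GEM for this driver. */
		.driver_features = DRIVER_GEM | DRIVER_MODESET,
		.name = "foo",
		/* ... fops, ioctls and the other driver callbacks ... */
	};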

In a KMS configuration, drivers need to allocate and initialize a
command ring buffer following core GEM initialization if required by the
hardware. UMA devices usually have what is called a "stolen" memory
region, which provides space for the initial framebuffer and large,
contiguous memory regions required by the device. This space is
typically not managed by GEM, and must be initialized separately into
its own DRM MM object.

GEM Objects Creation
--------------------

GEM splits creation of GEM objects and allocation of the memory that
backs them into two distinct operations.

GEM objects are represented by an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`. Drivers usually need to
extend GEM objects with private information and thus create a
driver-specific GEM object structure type that embeds an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`.

To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded
:c:type:`struct drm_gem_object <drm_gem_object>` with a call
to drm_gem_object_init(). The function takes a pointer
to the DRM device, a pointer to the GEM object and the buffer object
size in bytes.
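
A hedged sketch of this pattern, with a hypothetical foo_gem_object type and
creation helper, might look as follows:

.. code-block:: c

	#include <linux/err.h>
	#include <linux/slab.h>
	#include <drm/drm_gem.h>

	/* Driver-specific object embedding the core GEM object. */
	struct foo_gem_object {
		struct drm_gem_object base;
		/* driver-private bookkeeping, e.g. page array, GPU address ... */
	};

	static struct foo_gem_object *foo_gem_create(struct drm_device *dev,
						     size_t size)
	{
		struct foo_gem_object *obj;
		int ret;

		obj = kzalloc(sizeof(*obj), GFP_KERNEL);
		if (!obj)
			return ERR_PTR(-ENOMEM);

		/* Creates the shmfs backing file and initializes obj->base. */
		ret = drm_gem_object_init(dev, &obj->base, size);
		if (ret) {
			kfree(obj);
			return ERR_PTR(ret);
		}

		return obj;
	}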

GEM uses shmem to allocate anonymous pageable memory.
drm_gem_object_init() will create an shmfs file of the
requested size and store it into the
:c:type:`struct drm_gem_object <drm_gem_object>` filp field. The memory is
used as either main storage for the object when the graphics hardware
uses system memory directly or as a backing store otherwise.

Drivers are responsible for the actual physical page allocation by
calling shmem_read_mapping_page_gfp() for each page.
Note that they can decide to allocate pages when initializing the GEM
object, or to delay allocation until the memory is needed (for instance
when a page fault occurs as a result of a userspace memory access or
when the driver needs to start a DMA transfer involving the memory).

Anonymous pageable memory allocation is not always desired, for instance
when the hardware requires physically contiguous system memory as is
often the case in embedded devices. Drivers can create GEM objects with
no shmfs backing (called private GEM objects) by initializing them with a call
to drm_gem_private_object_init() instead of drm_gem_object_init(). Storage for
private GEM objects must be managed by drivers.
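
As a hedged sketch (the foo_gem_object fields and the choice of coherent DMA
memory are assumptions, not a prescribed policy), a private object backed by
physically contiguous memory could be set up like this:

.. code-block:: c

	#include <linux/dma-mapping.h>
	#include <drm/drm_gem.h>

	static int foo_gem_create_private(struct drm_device *dev,
					  struct foo_gem_object *obj,
					  size_t size)
	{
		/* No shmfs backing file is created for private objects. */
		drm_gem_private_object_init(dev, &obj->base, size);

		/* The driver provides the storage itself, here coherent DMA memory;
		 * vaddr and dma_addr are assumed driver-private fields. */
		obj->vaddr = dma_alloc_coherent(dev->dev, size, &obj->dma_addr,
						GFP_KERNEL);
		if (!obj->vaddr) {
			drm_gem_object_release(&obj->base);
			return -ENOMEM;
		}

		return 0;
	}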

GEM Objects Lifetime
--------------------

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling drm_gem_object_get() and drm_gem_object_put()
respectively.

When the last reference to a GEM object is released, the GEM core calls
the :c:type:`struct drm_gem_object_funcs <drm_gem_object_funcs>` free
operation. That operation is mandatory for GEM-enabled drivers and must
free the GEM object and all associated resources.

The free operation has the prototype ``void (*free)(struct drm_gem_object *obj);``.
Drivers are responsible for freeing all GEM object resources. This includes the
resources created by the GEM core, which need to be released with
drm_gem_object_release().
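
A hedged sketch of such a free operation for the hypothetical foo_gem_object
type could look like this; the driver would typically point each object's
funcs pointer at this table when creating it:

.. code-block:: c

	static void foo_gem_free(struct drm_gem_object *gem_obj)
	{
		struct foo_gem_object *obj =
			container_of(gem_obj, struct foo_gem_object, base);

		/* Release the resources the GEM core created (shmfs file, mmap offset). */
		drm_gem_object_release(gem_obj);

		/* Then free driver-private resources and the object itself. */
		kfree(obj);
	}

	static const struct drm_gem_object_funcs foo_gem_funcs = {
		.free = foo_gem_free,
		/* ... optional vmap, mmap and other callbacks ... */
	};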

GEM Objects Naming
------------------

Communication between userspace and the kernel refers to GEM objects
using local handles, global names or, more recently, file descriptors.
All of those are 32-bit integer values; the usual Linux kernel limits
apply to the file descriptors.

GEM handles are local to a DRM file. Applications get a handle to a GEM
object through a driver-specific ioctl, and can use that handle to refer
to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and dereferences the
associated GEM objects.

To create a handle for a GEM object, drivers call drm_gem_handle_create(). The
function takes a pointer to the DRM file and the GEM object and returns a
locally unique handle. When the handle is no longer needed, drivers delete it
with a call to drm_gem_handle_delete(). Finally, the GEM object associated with a
handle can be retrieved by a call to drm_gem_object_lookup().

Handles don't take ownership of GEM objects; they only take a reference
to the object, which will be dropped when the handle is destroyed. To
avoid leaking GEM objects, drivers must make sure they drop the
reference(s) they own (such as the initial reference taken at object
creation time) as appropriate, without any special consideration for the
handle. For example, in the particular case of combined GEM object and
handle creation in the implementation of the dumb_create operation,
drivers must drop the initial reference to the GEM object before
returning the handle.
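
A hedged sketch of this combined create-and-drop pattern in a dumb_create
implementation (foo_gem_create() is the hypothetical helper shown earlier, and
the pitch alignment is an arbitrary example):

.. code-block:: c

	static int foo_dumb_create(struct drm_file *file_priv, struct drm_device *dev,
				   struct drm_mode_create_dumb *args)
	{
		struct foo_gem_object *obj;
		int ret;

		args->pitch = ALIGN(args->width * DIV_ROUND_UP(args->bpp, 8), 64);
		args->size = (u64)args->pitch * args->height;

		obj = foo_gem_create(dev, args->size);
		if (IS_ERR(obj))
			return PTR_ERR(obj);

		ret = drm_gem_handle_create(file_priv, &obj->base, &args->handle);

		/* Drop the initial reference; the handle now keeps the object alive. */
		drm_gem_object_put(&obj->base);

		return ret;
	}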

GEM names are similar in purpose to handles but are not local to DRM
files. They can be passed between processes to reference a GEM object
globally. Names can't be used directly to refer to objects in the DRM
API; applications must convert handles to names and names to handles
using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
respectively. The conversion is handled by the DRM core without any
driver-specific support.

GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly. See :ref:`prime_buffer_sharing`.
Since sharing file descriptors is inherently more secure than the easily
guessable and global GEM names, it is the preferred buffer sharing mechanism.
Sharing buffers through GEM names is only supported for legacy userspace.
Furthermore, PRIME also allows cross-device buffer sharing since it is
based on dma-bufs.

GEM Objects Mapping
-------------------

Because mapping operations are fairly heavyweight, GEM favours
read/write-like access to buffers, implemented through driver-specific
ioctls, over mapping buffers to userspace. However, when random access
to the buffer is needed (to perform software rendering for instance),
direct access to the object can be more efficient.

The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
do_mmap() under the hood. This is often considered
dubious, seems to be discouraged for new GEM-enabled drivers, and will
thus not be described here.

The second method uses the mmap system call on the DRM file handle, with the
prototype ``void *mmap(void *addr, size_t length, int prot, int flags, int fd,
off_t offset);``. DRM identifies the GEM object to be mapped by a fake offset
passed through the mmap offset argument. Prior to being mapped, a GEM
object must thus be associated with a fake offset. To do so, drivers
must call drm_gem_create_mmap_offset() on the object.

Once allocated, the fake offset value must be passed to the application
in a driver-specific way and can then be used as the mmap offset
argument.
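
A hedged sketch of how a driver might associate the fake offset with an object
and hand it back (foo_gem_mmap_offset() and the way the offset travels to
userspace are hypothetical):

.. code-block:: c

	#include <drm/drm_gem.h>

	static int foo_gem_mmap_offset(struct drm_gem_object *gem_obj, u64 *offset)
	{
		int ret;

		/* Reserve a fake offset range for this object, if not done already. */
		ret = drm_gem_create_mmap_offset(gem_obj);
		if (ret)
			return ret;

		/* This is the value userspace passes as the mmap() offset argument. */
		*offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
		return 0;
	}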

The GEM core provides a helper method drm_gem_mmap() to
handle object mapping. The method can be set directly as the mmap file
operation handler. It will look up the GEM object based on the offset
value and set the VMA operations to the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field. Note that drm_gem_mmap() doesn't map memory to
userspace, but relies on the driver-provided fault handler to map pages
individually.

To use drm_gem_mmap(), drivers must fill the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field with a pointer to VM operations.

The VM operations are described by a
:c:type:`struct vm_operations_struct <vm_operations_struct>`
made up of several fields, the more interesting ones being:

.. code-block:: c

	struct vm_operations_struct {
		void (*open)(struct vm_area_struct * area);
		void (*close)(struct vm_area_struct * area);
		vm_fault_t (*fault)(struct vm_fault *vmf);
	};


The open and close operations must update the GEM object reference
count. Drivers can use the drm_gem_vm_open() and drm_gem_vm_close() helper
functions directly as open and close handlers.
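
A hedged sketch of driver VM operations built on these helpers (the
foo_gem_get_page() lookup helper and the to_foo_gem_object() cast are
hypothetical):

.. code-block:: c

	#include <linux/mm.h>
	#include <drm/drm_gem.h>

	static vm_fault_t foo_gem_fault(struct vm_fault *vmf)
	{
		struct vm_area_struct *vma = vmf->vma;
		/* drm_gem_mmap() stored the GEM object in vm_private_data. */
		struct drm_gem_object *gem_obj = vma->vm_private_data;
		pgoff_t page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
		struct page *page;

		/* Look up (or allocate on demand) the page backing this offset. */
		page = foo_gem_get_page(to_foo_gem_object(gem_obj), page_offset);
		if (IS_ERR(page))
			return VM_FAULT_SIGBUS;

		return vmf_insert_page(vma, vmf->address, page);
	}

	static const struct vm_operations_struct foo_gem_vm_ops = {
		.open  = drm_gem_vm_open,	/* takes a GEM object reference */
		.close = drm_gem_vm_close,	/* drops that reference again */
		.fault = foo_gem_fault,
	};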

The fault operation handler is responsible for mapping individual pages
to userspace when a page fault occurs. Depending on the memory
allocation scheme, drivers can allocate pages at fault time, or can
decide to allocate memory for the GEM object at the time the object is
created.

Drivers that want to map the GEM object upfront instead of handling page
faults can implement their own mmap file operation handler.

For platforms without an MMU, the GEM core provides a helper method
drm_gem_cma_get_unmapped_area(). The mmap() routines will call this to get a
proposed address for the mapping.

To use drm_gem_cma_get_unmapped_area(), drivers must fill the
:c:type:`struct file_operations <file_operations>` get_unmapped_area field with
a pointer to drm_gem_cma_get_unmapped_area().
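
A hedged sketch of such file operations (the exact set of handlers a driver
needs beyond get_unmapped_area is an assumption here); CMA-based drivers
usually let the DEFINE_DRM_GEM_CMA_FOPS() macro fill these fields in instead:

.. code-block:: c

	static const struct file_operations foo_drm_fops = {
		.owner		= THIS_MODULE,
		.open		= drm_open,
		.release	= drm_release,
		.unlocked_ioctl	= drm_ioctl,
		.mmap		= drm_gem_mmap,
	#ifndef CONFIG_MMU
		/* Only needed (and only declared) on no-MMU platforms. */
		.get_unmapped_area = drm_gem_cma_get_unmapped_area,
	#endif
	};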

More detailed information about get_unmapped_area can be found in
Documentation/admin-guide/mm/nommu-mmap.rst

Memory Coherency
----------------

When mapped to the device or used in a command buffer, backing pages for
an object are flushed to memory and marked write combined so as to be
coherent with the GPU. Likewise, if the CPU accesses an object after the
GPU has finished rendering to the object, then the object must be made
coherent with the CPU's view of memory, usually involving GPU cache
flushing of various kinds. This core CPU<->GPU coherency management is
provided by a device-specific ioctl, which evaluates an object's current
domain and performs any necessary flushing or synchronization to put the
object into the desired coherency domain (note that the object may be
busy, i.e. an active render target; in that case, setting the domain
blocks the client and waits for rendering to complete before performing
any necessary flushing operations).

Command Execution
-----------------

Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
bind all the objects into the GTT, execute the buffer, and provide
necessary synchronization between clients accessing the same buffers.
This often involves evicting some objects from the GTT and re-binding
others (a fairly expensive operation), and providing relocation support
which hides fixed GTT offsets from clients. Clients must take care not
to submit command buffers that reference more objects than can fit in
the GTT; otherwise, GEM will reject them and no rendering will occur.
Similarly, if several objects in the buffer require fence registers to
be allocated for correct rendering (e.g. 2D blits on pre-965 chips),
care must be taken not to require more fence registers than are
available to the client. Such resource management should be abstracted
from the client in libdrm.

GEM Function Reference
----------------------

.. kernel-doc:: include/drm/drm_gem.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem.c
   :export:

GEM CMA Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :doc: cma helpers

.. kernel-doc:: include/drm/drm_gem_cma_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :export:

GEM SHMEM Helper Function Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :export:

GEM VRAM Helper Functions Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_vram_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :export:

GEM TTM Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :doc: overview

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :export:

VMA Offset Manager
==================

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :doc: vma offset manager

.. kernel-doc:: include/drm/drm_vma_manager.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :export:

.. _prime_buffer_sharing:

PRIME Buffer Sharing
====================

PRIME is the cross-device buffer sharing framework in DRM, originally
created for the OPTIMUS range of multi-GPU platforms. To userspace,
PRIME buffers are dma-buf based file descriptors.
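
A hedged userspace sketch using libdrm (the function name below is
hypothetical; drmPrimeHandleToFD() and its import counterpart
drmPrimeFDToHandle() are the relevant libdrm wrappers):

.. code-block:: c

	#include <stdint.h>
	#include <xf86drm.h>

	/* Export a GEM handle as a dma-buf fd that can be passed to another
	 * process or imported into another device. */
	static int export_gem_as_dmabuf(int drm_fd, uint32_t handle)
	{
		int dmabuf_fd = -1;

		if (drmPrimeHandleToFD(drm_fd, handle, DRM_CLOEXEC | DRM_RDWR,
				       &dmabuf_fd))
			return -1;

		return dmabuf_fd;
	}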

Overview and Lifetime Rules
---------------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: overview and lifetime rules

PRIME Helper Functions
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: PRIME Helpers

PRIME Function References
-------------------------

.. kernel-doc:: include/drm/drm_prime.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :export:

DRM MM Range Allocator
======================

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: Overview

LRU Scan/Eviction Support
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: lru scan roster

DRM MM Range Allocator Function References
------------------------------------------

.. kernel-doc:: include/drm/drm_mm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :export:

DRM Buddy Allocator
===================

DRM Buddy Function References
-----------------------------

.. kernel-doc:: drivers/gpu/drm/drm_buddy.c
   :export:

DRM Cache Handling and Fast WC memcpy()
=======================================

.. kernel-doc:: drivers/gpu/drm/drm_cache.c
   :export:

DRM Sync Objects
================

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :doc: Overview

.. kernel-doc:: include/drm/drm_syncobj.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :export:

GPU Scheduler
=============

Overview
--------

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :doc: Overview

Scheduler Function References
-----------------------------

.. kernel-doc:: include/drm/gpu_scheduler.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :export:

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_entity.c
   :export: