=================
Scheduler Domains
=================

Each CPU has a "base" scheduling domain (struct sched_domain). The domain
hierarchy is built from these base domains via the ->parent pointer. The
->parent chain MUST be NULL-terminated, and domain structures should be
per-CPU as they are locklessly updated.

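Walking a CPU's hierarchy therefore means following the ->parent chain from
the base domain until NULL; the kernel wraps this pattern in the
for_each_domain() iterator in kernel/sched/sched.h. A minimal sketch, eliding
the RCU accessors the real iterator uses::

    struct sched_domain *sd;

    /* visit every domain CPU 'cpu' belongs to, base domain first */
    for (sd = cpu_rq(cpu)->sd; sd; sd = sd->parent)
            pr_info("domain spans %*pbl\n",
                    cpumask_pr_args(sched_domain_span(sd)));
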
Each scheduling domain spans a number of CPUs (stored in the ->span field).
A domain's span MUST be a superset of its child's span (this restriction could
be relaxed if the need arises), and a base domain for CPU i MUST span at least
i. The top domain for each CPU will generally span all CPUs in the system,
although strictly it doesn't have to; if it doesn't, some CPUs may never be
given tasks to run unless the CPUs-allowed mask is set explicitly. A sched
domain's span means "balance process load among these CPUs".

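As an illustration (a hypothetical check, not code from the kernel), the
superset invariant can be expressed with the cpumask helpers::

    /* sanity check: a child's span must lie within its parent's */
    if (sd->child && !cpumask_subset(sched_domain_span(sd->child),
                                     sched_domain_span(sd)))
            pr_err("broken sched domain hierarchy\n");
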
Each scheduling domain must have one or more CPU groups (struct sched_group)
which are organised as a circular one-way linked list from the ->groups
pointer. The union of the cpumasks of these groups MUST be the same as the
domain's span. The group pointed to by the ->groups pointer MUST contain the
CPU to which the domain belongs. Groups may be shared among CPUs as they
contain read-only data after they have been set up. The intersection of the
cpumasks of any two of these groups may be non-empty. If this is the case,
the SD_OVERLAP flag is set on the corresponding scheduling domain and its
groups may not be shared between CPUs.

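Because the list is circular, a walk over a domain's groups is a do/while
loop that stops once it returns to ->groups, the same pattern the balancing
code uses internally. A minimal sketch, using the sched_group_span() helper
from kernel/sched/sched.h::

    struct sched_group *sg = sd->groups;

    /* visit each group exactly once, starting with the local group */
    do {
            pr_info("group spans %*pbl\n",
                    cpumask_pr_args(sched_group_span(sg)));
            sg = sg->next;
    } while (sg != sd->groups);
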
Balancing within a sched domain occurs between groups. That is, each group
is treated as one entity. The load of a group is defined as the sum of the
load of each of its member CPUs, and only when the load of a group becomes
out of balance are tasks moved between groups.

In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
through scheduler_tick(). It raises a softirq after the next regularly scheduled
rebalancing event for the current runqueue has arrived. The actual load
balancing workhorse, run_rebalance_domains()->rebalance_domains(), is then run
in softirq context (SCHED_SOFTIRQ).

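The trigger itself is cheap: it compares jiffies against the runqueue's next
scheduled balance time and raises the softirq once that time has passed. In
simplified sketch form (the real function also handles NOHZ kicks)::

    void trigger_load_balance(struct rq *rq)
    {
            /* raise SCHED_SOFTIRQ once rq->next_balance is due */
            if (time_after_eq(jiffies, rq->next_balance))
                    raise_softirq(SCHED_SOFTIRQ);
    }
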
The latter function takes two arguments: the runqueue of the current CPU and
whether the CPU was idle at the time scheduler_tick() happened. It iterates
over all the sched domains our CPU is on, starting from its base domain and
going up the ->parent chain. While doing that, it checks to see if the
current domain has exhausted its rebalance interval. If so, it runs
load_balance() on that domain. It then checks the parent sched_domain (if it
exists), and the parent of the parent, and so forth.

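That per-domain interval check looks roughly as follows (a simplified sketch;
the real rebalance_domains() also scales the interval with busyness, uses the
RCU-safe domain iterator, and serializes domains marked SD_SERIALIZE)::

    struct sched_domain *sd;
    int continue_balancing = 1;

    for (sd = rq->sd; sd; sd = sd->parent) {
            unsigned long interval = sd->balance_interval;

            if (time_after_eq(jiffies, sd->last_balance + interval)) {
                    load_balance(cpu, rq, sd, idle, &continue_balancing);
                    sd->last_balance = jiffies;
            }
    }
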
Initially, load_balance() finds the busiest group in the current sched domain.
If it succeeds, it looks for the busiest runqueue of all the CPUs' runqueues in
that group. If it manages to find such a runqueue, it locks both our initial
CPU's runqueue and the newly found busiest one and starts moving tasks from it
to our runqueue. The exact number of tasks amounts to an imbalance previously
computed while iterating over this sched domain's groups.

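Put together, the core of load_balance() can be summarised by this pseudo-C
skeleton (the helper names match kernel/sched/fair.c; 'env' stands for the
struct lb_env that carries the balancing parameters, including the computed
imbalance)::

    group = find_busiest_group(&env);       /* also computes env.imbalance */
    if (!group)
            goto out_balanced;

    busiest = find_busiest_queue(&env, group);
    if (!busiest)
            goto out_balanced;

    /*
     * With both runqueues locked, detach up to env.imbalance worth of
     * load from 'busiest' and attach it to this CPU's runqueue.
     */
    detach_tasks(&env);
    attach_tasks(&env);
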
Implementing sched domains
==========================

The "base" domain will "span" the first level of the hierarchy. In the case
of SMT, you'll span all siblings of the physical CPU, with each group being
a single virtual CPU.

In SMP, the parent of the base domain will span all physical CPUs in the
node, with each group being a single physical CPU. Then with NUMA, the parent
of the SMP domain will span the entire machine, with each group having the
cpumask of a node. Or you could do multi-level NUMA; Opteron, for example,
might have just one domain covering its one NUMA level.

The implementor should read the comments on the SD_* flags in
include/linux/sched/sd_flags.h to get an idea of the specifics and what to
tune for the SD flags of a sched_domain.

Architectures may override the generic domain builder and the default SD flags
for a given topology level by creating a sched_domain_topology_level array and
calling set_sched_topology() with this array as the parameter.

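For example (a sketch modelled on the default topology table in
kernel/sched/topology.c; an architecture would substitute its own mask
functions and flags)::

    static struct sched_domain_topology_level my_topology[] = {
    #ifdef CONFIG_SCHED_SMT
            { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
    #endif
    #ifdef CONFIG_SCHED_MC
            { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
    #endif
            { cpu_cpu_mask, SD_INIT_NAME(DIE) },
            { NULL, },
    };

    /* called early during boot, e.g. from the arch's SMP setup code */
    set_sched_topology(my_topology);
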
The sched-domains debugging infrastructure can be enabled by building with
CONFIG_SCHED_DEBUG and adding 'sched_verbose' to your cmdline. If you forgot
to tweak your cmdline, you can also flip the
/sys/kernel/debug/sched/verbose knob. This enables an error-checking parse of
the sched domains which should catch most possible errors (described above).
It also prints out the domain structure in a visual format.