====================
Scheduler Statistics
====================

Version 15 of schedstats dropped counters for some sched_yield:
yld_exp_empty, yld_act_empty and yld_both_empty. Otherwise, it is
identical to version 14.

Version 14 of schedstats includes support for sched_domains, which hit the
mainline kernel in 2.6.20 although it is identical to the stats from version
12 which was in the kernel from 2.6.13-2.6.19 (version 13 never saw a kernel
release).  Some counters make more sense to be per-runqueue; others to be
per-domain.  Note that domains (and their associated information) will only
be pertinent and available on machines utilizing CONFIG_SMP.

In version 14 of schedstat, there is at least one level of domain
statistics for each cpu listed, and there may well be more than one
domain.  Domains have no particular names in this implementation, but
the highest numbered one typically arbitrates balancing across all the
cpus on the machine, while domain0 is the most tightly focused domain,
sometimes balancing only between pairs of cpus.  At this time, there
are no architectures which need more than three domain levels. The first
field in the domain stats is a bit map indicating which cpus are affected
by that domain.

These fields are counters, and only increment.  Programs which make use
of these will need to start with a baseline observation and then calculate
the change in the counters at each subsequent observation.  A perl script
which does this for many of the fields is available at

    http://eaglet.pdxhosts.com/rick/linux/schedstat/

Note that any such script will necessarily be version-specific, as the main
reason to change versions is changes in the output format.  For those wishing
to write their own scripts, the fields are described here.
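As a minimal illustration of the baseline-and-delta approach, the sketch
below (not the perl script above, purely illustrative) reads the cpu<N>
lines of /proc/schedstat twice and prints how much each of the nine counters
changed in between.  It assumes the version 15 cpu line layout described
below; the helper name read_cpu_counters, MAX_CPUS and the one-second
sampling interval are arbitrary choices::

    /* Illustrative only: sample the per-cpu schedstat counters twice and
     * report how much each of the nine cpu<N> fields changed in between. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define MAX_CPUS   512
    #define CPU_FIELDS 9

    static int read_cpu_counters(unsigned long long vals[][CPU_FIELDS])
    {
        FILE *f = fopen("/proc/schedstat", "r");
        char line[4096], *p;
        int ncpus = 0, i;

        if (!f)
            return -1;
        while (fgets(line, sizeof(line), f) && ncpus < MAX_CPUS) {
            if (strncmp(line, "cpu", 3))
                continue;              /* skip version, timestamp, domain lines */
            p = line + 3;
            (void)strtol(p, &p, 10);   /* cpu number */
            for (i = 0; i < CPU_FIELDS; i++)
                vals[ncpus][i] = strtoull(p, &p, 10);
            ncpus++;
        }
        fclose(f);
        return ncpus;
    }

    int main(void)
    {
        static unsigned long long before[MAX_CPUS][CPU_FIELDS];
        static unsigned long long after[MAX_CPUS][CPU_FIELDS];
        int ncpus, cpu, i;

        ncpus = read_cpu_counters(before);     /* baseline observation */
        if (ncpus <= 0)
            return 1;
        sleep(1);                              /* sampling interval */
        if (read_cpu_counters(after) != ncpus)
            return 1;
        for (cpu = 0; cpu < ncpus; cpu++) {
            printf("cpu%d:", cpu);
            for (i = 0; i < CPU_FIELDS; i++)
                printf(" %llu", after[cpu][i] - before[cpu][i]);
            printf("\n");
        }
        return 0;
    }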

CPU statistics
--------------
cpu<N> 1 2 3 4 5 6 7 8 9

First field is a sched_yield() statistic:

     1) # of times sched_yield() was called

Next three are schedule() statistics:

     2) This field is a legacy array expiration count field used in the O(1)
        scheduler. We kept it for ABI compatibility, but it is always set to zero.
     3) # of times schedule() was called
     4) # of times schedule() left the processor idle

Next two are try_to_wake_up() statistics:

     5) # of times try_to_wake_up() was called
     6) # of times try_to_wake_up() was called to wake up the local cpu

Next three are statistics describing scheduling latency (a short parsing
sketch follows this list):

     7) sum of all time spent running by tasks on this processor (in nanoseconds)
     8) sum of all time spent waiting to run by tasks on this processor (in
        nanoseconds)
     9) # of timeslices run on this cpu
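For instance, fields 7, 8 and 9 can be combined into an average run time and
an average runqueue wait per timeslice.  A small, purely illustrative sketch
(again assuming the version 15 format; this is a single observation, so
subtract a baseline first if interval figures are wanted)::

    /* Illustrative only: average run time and runqueue wait time per
     * timeslice for each cpu, from fields 7-9 of its cpu<N> line. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/schedstat", "r");
        char line[4096];
        unsigned long long run_ns, wait_ns, slices;
        int cpu;

        if (!f)
            return 1;
        while (fgets(line, sizeof(line), f)) {
            /* fields 1-6 are skipped; 7, 8 and 9 are run time, wait time
             * and the number of timeslices */
            if (sscanf(line, "cpu%d %*u %*u %*u %*u %*u %*u %llu %llu %llu",
                       &cpu, &run_ns, &wait_ns, &slices) != 4 || !slices)
                continue;
            printf("cpu%d: avg %llu ns running, %llu ns waiting per timeslice\n",
                   cpu, run_ns / slices, wait_ns / slices);
        }
        fclose(f);
        return 0;
    }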


Domain statistics
-----------------
One of these is produced per domain for each cpu described. (Note that if
CONFIG_SMP is not defined, *no* domains are utilized and these lines
will not appear in the output.)

domain<N> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36

The first field is a bit mask indicating what cpus this domain operates over.

The next 24 are a variety of load_balance() statistics grouped into types
of idleness (idle, busy, and newly idle); a parsing sketch follows the
field descriptions:

    1)  # of times in this domain load_balance() was called when the
        cpu was idle
    2)  # of times in this domain load_balance() checked but found
        the load did not require balancing when the cpu was idle
    3)  # of times in this domain load_balance() tried to move one or
        more tasks and failed, when the cpu was idle
    4)  sum of imbalances discovered (if any) with each call to
        load_balance() in this domain when the cpu was idle
    5)  # of times in this domain pull_task() was called when the cpu
        was idle
    6)  # of times in this domain pull_task() was called even though
        the target task was cache-hot when idle
    7)  # of times in this domain load_balance() was called but did
        not find a busier queue while the cpu was idle
    8)  # of times in this domain a busier queue was found while the
        cpu was idle but no busier group was found
    9)  # of times in this domain load_balance() was called when the
        cpu was busy
    10) # of times in this domain load_balance() checked but found the
        load did not require balancing when busy
    11) # of times in this domain load_balance() tried to move one or
        more tasks and failed, when the cpu was busy
    12) sum of imbalances discovered (if any) with each call to
        load_balance() in this domain when the cpu was busy
    13) # of times in this domain pull_task() was called when busy
    14) # of times in this domain pull_task() was called even though the
        target task was cache-hot when busy
    15) # of times in this domain load_balance() was called but did not
        find a busier queue while the cpu was busy
    16) # of times in this domain a busier queue was found while the cpu
        was busy but no busier group was found

    17) # of times in this domain load_balance() was called when the
        cpu was just becoming idle
    18) # of times in this domain load_balance() checked but found the
        load did not require balancing when the cpu was just becoming idle
    19) # of times in this domain load_balance() tried to move one or more
        tasks and failed, when the cpu was just becoming idle
    20) sum of imbalances discovered (if any) with each call to
        load_balance() in this domain when the cpu was just becoming idle
    21) # of times in this domain pull_task() was called when newly idle
    22) # of times in this domain pull_task() was called even though the
        target task was cache-hot when just becoming idle
    23) # of times in this domain load_balance() was called but did not
        find a busier queue while the cpu was just becoming idle
    24) # of times in this domain a busier queue was found while the cpu
        was just becoming idle but no busier group was found

   Next three are active_load_balance() statistics:

    25) # of times active_load_balance() was called
    26) # of times active_load_balance() tried to move a task and failed
    27) # of times active_load_balance() successfully moved a task

   Next three are sched_balance_exec() statistics:

    28) sbe_cnt is not used
    29) sbe_balanced is not used
    30) sbe_pushed is not used

   Next three are sched_balance_fork() statistics:

    31) sbf_cnt is not used
    32) sbf_balanced is not used
    33) sbf_pushed is not used

   Next three are try_to_wake_up() statistics:

    34) # of times in this domain try_to_wake_up() awoke a task that
        last ran on a different cpu in this domain
    35) # of times in this domain try_to_wake_up() moved a task to the
        waking cpu because it was cache-cold on its own cpu anyway
    36) # of times in this domain try_to_wake_up() started passive balancing
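As an illustration of how these per-domain counters might be consumed, the
sketch below (hypothetical, assuming the version 15 layout above) parses each
domain line and reports, for the idle, busy and newly idle groups, the share
of load_balance() calls that found the load already balanced (field 2 over
field 1, 10 over 9, and 18 over 17)::

    /* Illustrative only: per domain and per idleness type, the fraction of
     * load_balance() calls that found no balancing was necessary. */
    #include <stdio.h>
    #include <stdlib.h>

    #define DOMAIN_FIELDS 36

    int main(void)
    {
        static const char *type[] = { "idle", "busy", "newidle" };
        FILE *f = fopen("/proc/schedstat", "r");
        char line[4096], mask[256], *p;
        unsigned long long v[DOMAIN_FIELDS];
        int dom, off = 0, i;

        if (!f)
            return 1;
        while (fgets(line, sizeof(line), f)) {
            /* %n records where the 36 counters start; non-domain lines fail */
            if (sscanf(line, "domain%d %255s %n", &dom, mask, &off) != 2)
                continue;
            p = line + off;
            for (i = 0; i < DOMAIN_FIELDS; i++)
                v[i] = strtoull(p, &p, 10);        /* fields 1..36 */
            printf("domain%d %s:", dom, mask);
            for (i = 0; i < 3; i++)                /* idle, busy, newly idle */
                printf(" %s %.0f%%", type[i],
                       v[8 * i] ? 100.0 * v[8 * i + 1] / v[8 * i] : 0.0);
            printf("\n");
        }
        fclose(f);
        return 0;
    }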

/proc/<pid>/schedstat
---------------------
schedstats also adds a new /proc/<pid>/schedstat file to include some of
the same information on a per-process level.  The three fields in this
file correspond, for that process, to:

     1) time spent on the cpu (in nanoseconds)
     2) time spent waiting on a runqueue (in nanoseconds)
     3) # of timeslices run on this cpu

A program could be easily written to make use of these extra fields to
report on how well a particular process or set of processes is faring
under the scheduler's policies.  A simple version of such a program is
available at

    http://eaglet.pdxhosts.com/rick/linux/schedstat/v12/latency.c
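A stripped-down sketch of the same idea (this is not the latency.c referenced
above, just an illustration) could sample /proc/<pid>/schedstat twice and
report what the process accumulated over the interval::

    /* Illustrative only: cpu time, runqueue wait time and timeslices a
     * process accumulated over a one-second interval. */
    #include <stdio.h>
    #include <unistd.h>

    static int read_pid_schedstat(const char *pid, unsigned long long *run,
                                  unsigned long long *wait,
                                  unsigned long long *slices)
    {
        char path[64];
        FILE *f;
        int n;

        snprintf(path, sizeof(path), "/proc/%s/schedstat", pid);
        f = fopen(path, "r");
        if (!f)
            return -1;
        n = fscanf(f, "%llu %llu %llu", run, wait, slices);
        fclose(f);
        return n == 3 ? 0 : -1;
    }

    int main(int argc, char **argv)
    {
        unsigned long long run[2], wait[2], slices[2];

        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        if (read_pid_schedstat(argv[1], &run[0], &wait[0], &slices[0]))
            return 1;
        sleep(1);
        if (read_pid_schedstat(argv[1], &run[1], &wait[1], &slices[1]))
            return 1;
        printf("pid %s: ran %llu ns, waited %llu ns over %llu timeslices\n",
               argv[1], run[1] - run[0], wait[1] - wait[0],
               slices[1] - slices[0]);
        return 0;
    }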