cachepc-linux

Fork of AMDESE/linux with modifications for CachePC side-channel attack
git clone https://git.sinitax.com/sinitax/cachepc-linux

perf-c2c.txt (8153B)


perf-c2c(1)
===========

NAME
----
perf-c2c - Shared Data C2C/HITM Analyzer.

SYNOPSIS
--------
[verse]
'perf c2c record' [<options>] <command>
'perf c2c record' [<options>] \-- [<record command options>] <command>
'perf c2c report' [<options>]

DESCRIPTION
-----------
C2C stands for Cache To Cache.

The perf c2c tool provides means for Shared Data C2C/HITM analysis. It allows
you to track down cacheline contentions.

On x86, the tool is based on the load latency and precise store facility events
provided by Intel CPUs. On PowerPC, the tool uses random instruction sampling
with the thresholding feature.

These events provide:
  - memory address of the access
  - type of the access (load and store details)
  - latency (in cycles) of the load access

The c2c tool provides means to record this data and report back access details
for the cachelines with the highest contention, i.e. the highest number of HITM
accesses.

The basic workflow with this tool follows the standard record/report phases:
the user records event data with the record command and displays it with the
report command.
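
For example, a minimal session could look like this ('my_workload' is a
placeholder for whatever command is under analysis):

  $ perf c2c record -- ./my_workload
  $ perf c2c report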


RECORD OPTIONS
--------------
-e::
--event=::
	Select the PMU event. Use 'perf c2c record -e list'
	to list available events.

-v::
--verbose::
	Be more verbose (show counter open errors, etc).

-l::
--ldlat::
	Configure mem-loads latency. (x86 only)

-k::
--all-kernel::
	Configure all used events to run in kernel space.

-u::
--all-user::
	Configure all used events to run in user space.
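
For example, to raise the load-latency threshold and restrict sampling to user
space, the options above can be combined (the workload name is illustrative):

  $ perf c2c record -l 50 -u -- ./my_workload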

REPORT OPTIONS
--------------
-k::
--vmlinux=<file>::
	vmlinux pathname

-v::
--verbose::
	Be more verbose (show counter open errors, etc).

-i::
--input::
	Specify the input file to process.

-N::
--node-info::
	Show extra node info in report (see NODE INFO section)

-c::
--coalesce::
	Specify sorting fields for single cacheline display.
	The following fields are available: tid,pid,iaddr,dso
	(see COALESCE)

-g::
--call-graph::
	Setup callchain parameters.
	Please refer to the perf-report man page for details.

--stdio::
	Force the stdio output (see STDIO OUTPUT)

--stats::
	Display only statistic tables and force stdio mode.

--full-symbols::
	Display full length of symbols.

--no-source::
	Do not display the Source:Line column.

--show-all::
	Show all captured HITM lines, regardless of the 0.0005 HITM % limit.

-f::
--force::
	Don't do ownership validation.

-d::
--display::
	Select the HITM type (rmt, lcl) to display and sort on. Total HITMs
	are used by default.

--stitch-lbr::
	Show the callgraph with stitched LBRs, which may produce a more
	complete callgraph. The perf.data file must have been obtained using
	perf c2c record --call-graph lbr.
	Disabled by default. In common cases with call stack overflows,
	it can recreate better call stacks than the default lbr call stack
	output. But this approach is not foolproof. There can be cases
	where it creates incorrect call stacks from incorrect matches.
	Known limitations include exception handling such as
	setjmp/longjmp, whose calls/returns will not match.
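
For example, to sort on local HITMs only and print the report to standard
output (using the options above):

  $ perf c2c report -d lcl --stdio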

C2C RECORD
----------
The perf c2c record command sets up options related to HITM cacheline analysis
and calls the standard perf record command.

The following perf record options are configured by default:
(check the perf record man page for details)

  -W,-d,--phys-data,--sample-cpu

Unless specified otherwise with the '-e' option, the following events are
monitored by default on x86:

  cpu/mem-loads,ldlat=30/P
  cpu/mem-stores/P

and the following on PowerPC:

  cpu/mem-loads/
  cpu/mem-stores/

The user can pass any 'perf record' option behind the '--' mark, for example
(to enable callchains and system wide monitoring):

  $ perf c2c record -- -g -a

Please check the RECORD OPTIONS section for specific c2c record options.
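
To bound a system-wide capture in time, a command such as sleep can be
appended, since recording stops when the given command exits:

  $ perf c2c record -- -g -a sleep 10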

C2C REPORT
----------
The perf c2c report command displays the shared data analysis. It comes in two
display modes: stdio and tui (default).

The report command workflow is as follows:
  - sort all the data based on the cacheline address
  - store access details for each cacheline
  - sort all cachelines based on user settings
  - display data

In general the perf c2c report output consists of two basic views:
  1) most expensive cachelines list
  2) offset details for each cacheline

For each cacheline in the 1) list we display the following data:
(Both stdio and TUI modes follow the same fields output)

  Index
  - zero based index to identify the cacheline

  Cacheline
  - cacheline address (hex number)

  Rmt/Lcl Hitm
  - cacheline percentage of all Remote/Local HITM accesses

  LLC Load Hitm - Total, LclHitm, RmtHitm
  - count of Total/Local/Remote load HITMs

  Total records
  - sum of all cacheline accesses

  Total loads
  - sum of all load accesses

  Total stores
  - sum of all store accesses

  Store Reference - L1Hit, L1Miss, N/A
    L1Hit - store accesses that hit L1
    L1Miss - store accesses that missed L1
    N/A - store accesses for which the memory level is not available

  Core Load Hit - FB, L1, L2
  - count of load hits in FB (Fill Buffer), L1 and L2 cache

  LLC Load Hit - LlcHit, LclHitm
  - count of LLC load accesses, includes LLC hits and LLC HITMs

  RMT Load Hit - RmtHit, RmtHitm
  - count of remote load accesses, includes remote hits and remote HITMs

  Load Dram - Lcl, Rmt
  - count of local and remote DRAM accesses

For each offset in the 2) list we display the following data:

  HITM - Rmt, Lcl
  - % of Remote/Local HITM accesses for given offset within cacheline

  Store Refs - L1 Hit, L1 Miss, N/A
  - % of store accesses that hit L1, missed L1 or have no available (N/A)
    memory level, for given offset within cacheline

  Data address - Offset
  - offset address

  Pid
  - pid of the process responsible for the accesses

  Tid
  - tid of the process responsible for the accesses

  Code address
  - code address responsible for the accesses

  cycles - rmt hitm, lcl hitm, load
    - sum of cycles for given accesses - Remote/Local HITM and generic load

  cpu cnt
    - number of cpus that participated in the access

  Symbol
    - code symbol related to the 'Code address' value

  Shared Object
    - shared object name related to the 'Code address' value

  Source:Line
    - source information related to the 'Code address' value

  Node
    - nodes participating in the access (see NODE INFO section)

NODE INFO
---------
The 'Node' field displays the nodes that access the given cacheline
offset. Its output comes in 3 flavors:
  - node IDs separated by ','
  - node IDs with stats for each ID, in the following format:
      Node{cpus %hitms %stores}
  - node IDs with a list of affected CPUs in the following format:
      Node{cpu list}

The user can switch between the above flavors with the -N option or
use the 'n' key to switch interactively in TUI mode.
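
For example (the exact form of the -N argument may differ between perf
versions, so check 'perf c2c report -h' first):

  $ perf c2c report -N --stdio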

COALESCE
--------
The user can specify how to sort the offsets for a cacheline.

The following fields are available and govern the final
set of output fields for the cacheline offsets output:

  tid   - coalesced by process TIDs
  pid   - coalesced by process PIDs
  iaddr - coalesced by code address, the following fields are displayed:
             Code address, Code symbol, Shared Object, Source line
  dso   - coalesced by shared object

By default the coalescing is set up with 'pid,iaddr'.
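
For example, to coalesce the offsets by thread and code address instead of the
default (fields taken from the list above):

  $ perf c2c report -c tid,iaddr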

STDIO OUTPUT
------------
The stdio output displays data on standard output.

The following tables are displayed:
  Trace Event Information
  - overall statistics of memory accesses

  Global Shared Cache Line Event Information
  - overall statistics on shared cachelines

  Shared Data Cache Line Table
  - list of most expensive cachelines

  Shared Cache Line Distribution Pareto
  - list of all accessed offsets for each cacheline
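
For example, to print only the first two (statistics) tables, the --stats
option can be used on its own, since it already forces stdio mode:

  $ perf c2c report --stats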

TUI OUTPUT
----------
The TUI output provides an interactive interface to navigate
through the cacheline list and to display offset details.

For details please refer to the help window by pressing the '?' key.

CREDITS
-------
Although Don Zickus, Dick Fowles and Joe Mario worked together
to get this implemented, we got lots of early help from Arnaldo
Carvalho de Melo, Stephane Eranian, Jiri Olsa and Andi Kleen.

C2C BLOG
--------
Check Joe's blog on the c2c tool for a detailed use case explanation:
  https://joemario.github.io/blog/2016/09/01/c2c-blog/

SEE ALSO
--------
linkperf:perf-record[1], linkperf:perf-mem[1]