===============================================================
Softlockup detector and hardlockup detector (aka nmi_watchdog)
===============================================================

The Linux kernel can act as a watchdog to detect both soft and hard
lockups.

A 'softlockup' is defined as a bug that causes the kernel to loop in
kernel mode for more than 20 seconds (see "Implementation" below for
details), without giving other tasks a chance to run. The current
stack trace is displayed upon detection and, by default, the system
will stay locked up. Alternatively, the kernel can be configured to
panic; a sysctl, "kernel.softlockup_panic", a kernel parameter,
"softlockup_panic" (see "Documentation/admin-guide/kernel-parameters.rst" for
details), and a compile option, "BOOTPARAM_SOFTLOCKUP_PANIC", are
provided for this.
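
For example, panic on softlockup can be enabled at runtime through the
sysctl interface or at boot through the kernel command line
(illustrative commands, assuming the detector is compiled in)::

    # enable panic on softlockup at runtime
    sysctl -w kernel.softlockup_panic=1

    # equivalently, pass a boot parameter on the kernel command line
    softlockup_panic=1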

A 'hardlockup' is defined as a bug that causes the CPU to loop in
kernel mode for more than 10 seconds (see "Implementation" below for
details), without letting other interrupts have a chance to run.
Similarly to the softlockup case, the current stack trace is displayed
upon detection and the system will stay locked up unless the default
behavior is changed, which can be done through a sysctl,
"kernel.hardlockup_panic", a compile-time knob,
"BOOTPARAM_HARDLOCKUP_PANIC", and a kernel parameter, "nmi_watchdog"
(see "Documentation/admin-guide/kernel-parameters.rst" for details).
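
Similarly, the hardlockup behavior can be adjusted at runtime or at
boot (illustrative commands)::

    # panic when a hard lockup is detected
    sysctl -w kernel.hardlockup_panic=1

    # disable the hardlockup detector entirely via a boot parameter
    nmi_watchdog=0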

The panic option can be used in combination with panic_timeout (this
timeout is set through the confusingly named "kernel.panic" sysctl),
to cause the system to reboot automatically after a specified amount
of time.
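
For instance, to reboot the machine 30 seconds after a softlockup
triggers a panic (a sketch; both knobs must be set for the automatic
reboot to happen)::

    sysctl -w kernel.softlockup_panic=1
    sysctl -w kernel.panic=30    # reboot 30 seconds after a panic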

Implementation
==============

The soft and hard lockup detectors are built on top of the hrtimer and
perf subsystems, respectively. A direct consequence of this is that,
in principle, they should work on any architecture where these
subsystems are present.

A periodic hrtimer runs to generate interrupts and kick the watchdog
job. An NMI perf event is generated every "watchdog_thresh"
(compile-time initialized to 10 and configurable through the sysctl of
the same name) seconds to check for hardlockups. If any CPU in the
system does not receive any hrtimer interrupt during that time, the
'hardlockup detector' (the handler for the NMI perf event) will
generate a kernel warning or call panic, depending on the
configuration.

The watchdog job runs in a thread of the stop scheduling class (the
highest-priority class) that updates a timestamp every time it is
scheduled. If that timestamp is not updated for 2*watchdog_thresh
seconds (the softlockup threshold), the 'softlockup detector' (coded
inside the hrtimer callback function) will dump useful debug
information to the system log, after which it will either call panic,
if it was instructed to do so, or resume execution of other kernel
code.
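
When a softlockup is detected, the report in the system log looks
roughly like the following (an illustrative line; the exact wording,
duration, and task name vary between kernel versions)::

    watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [looping-task:1234]

followed by the current stack trace of the stuck CPU.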

The period of the hrtimer is 2*watchdog_thresh/5, which means the
hrtimer has two or three chances to generate an interrupt before the
hardlockup detector kicks in.
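
With the default "watchdog_thresh" of 10, the numbers work out as
follows::

    hrtimer period:        2 * 10 / 5 = 4 seconds
    softlockup threshold:  2 * 10     = 20 seconds
    hardlockup check:      every 10 seconds (10 / 4 = 2.5 hrtimer periods)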

As explained above, a kernel knob is provided that allows
administrators to configure the period of the hrtimer and the perf
event. The right value for a particular environment is a trade-off
between fast response to lockups and detection overhead.
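
For example, the threshold can be lowered for faster detection at the
cost of more frequent checks (illustrative; the value feeds both
detectors as described above)::

    # detect hardlockups after 5s and softlockups after 10s
    sysctl -w kernel.watchdog_thresh=5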

By default, the watchdog runs on all online cores. However, on a
kernel configured with NO_HZ_FULL, by default the watchdog runs only
on the housekeeping cores, not the cores specified in the "nohz_full"
boot argument. If we allowed the watchdog to run by default on the
"nohz_full" cores, we would have to run timer ticks to activate the
scheduler, which would prevent the "nohz_full" functionality from
protecting the user code on those cores from the kernel. Of course,
disabling it by default on the nohz_full cores means that when those
cores do enter the kernel, by default we will not be able to detect if
they lock up. However, allowing the watchdog to continue to run on the
housekeeping (non-tickless) cores means that we will continue to
detect lockups properly on those cores.

In either case, the set of cores excluded from running the watchdog
may be adjusted via the kernel.watchdog_cpumask sysctl. For nohz_full
cores, this may be useful for debugging a case where the kernel seems
to be hanging on the nohz_full cores.
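
For example, to let the watchdog run on all four cores of a nohz_full
machine while debugging a suspected hang (illustrative; the sysctl
accepts a cpulist)::

    # run the watchdog on cores 0-3
    sysctl -w kernel.watchdog_cpumask=0-3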