============================
Kernel NFS Server Statistics
============================

:Authors: Greg Banks <gnb@sgi.com> - 26 Mar 2009

This document describes the format and semantics of the statistics
which the kernel NFS server makes available to userspace. These
statistics are available in several text-form pseudo files, each of
which is described separately below.

In most cases you don't need to know these formats, as the nfsstat(8)
program from the nfs-utils distribution provides a helpful command-line
interface for extracting and printing them.

All the files described here are formatted as a sequence of text lines,
separated by newline '\n' characters. Lines beginning with a hash
'#' character are comments intended for humans and should be ignored
by parsing routines. All other lines contain a sequence of fields
separated by whitespace.

/proc/fs/nfsd/pool_stats
========================

This file is available in kernels from 2.6.30 onwards, if the
/proc/fs/nfsd filesystem is mounted (it almost always should be).

The first line is a comment which describes the fields present in
all the other lines. The other lines present the following data as
a sequence of unsigned decimal numeric fields. One line is shown
for each NFS thread pool.

All counters are 64 bits wide and wrap naturally. There is no way
to zero these counters; instead, applications should do their own
rate conversion.
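By way of illustration, the following is a minimal sketch, in Python,
of how an application might parse this file and perform its own rate
conversion; it is not part of nfs-utils, and the helper names are purely
illustrative. It skips comment lines, splits the remaining lines into
unsigned decimal fields, samples the file twice, and reports per-second
deltas. The meaning of each field is described below::

    # Minimal sketch: parse /proc/fs/nfsd/pool_stats and print per-second
    # rates for each counter column.  Purely illustrative; not part of
    # nfs-utils.  The meaning of each field is documented below.
    import time

    POOL_STATS = "/proc/fs/nfsd/pool_stats"

    def read_pools(path=POOL_STATS):
        """Return one row per thread pool, each a list of unsigned ints."""
        rows = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue                  # comments are for humans
                rows.append([int(field) for field in line.split()])
        return rows

    def rates(interval=10.0):
        """Sample twice and return per-second deltas, keyed by pool id."""
        before = read_pools()
        time.sleep(interval)
        after = read_pools()
        deltas = {}
        for old, new in zip(before, after):
            pool_id = new[0]                  # first field: the pool id
            # Counters are 64 bits wide and wrap, hence the modulo.
            deltas[pool_id] = [((n - o) % (1 << 64)) / interval
                               for o, n in zip(old[1:], new[1:])]
        return deltas

    for pool_id, per_second in rates().items():
        print(pool_id, " ".join("%.1f" % v for v in per_second))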
pool
    The id number of the NFS thread pool to which this line applies.
    This number does not change.

    Thread pool ids are a contiguous set of small integers starting
    at zero. The maximum value depends on the thread pool mode, but
    currently cannot be larger than the number of CPUs in the system.
    Note that in the default case there will be a single thread pool
    which contains all the nfsd threads and all the CPUs in the system,
    and thus this file will have a single line with a pool id of "0".

packets-arrived
    Counts how many NFS packets have arrived. More precisely, this
    is the number of times that the network stack has notified the
    sunrpc server layer that new data may be available on a transport
    (e.g. a TCP or UDP socket or an NFS/RDMA endpoint).

    Depending on the NFS workload patterns and various network stack
    effects (such as Large Receive Offload) which can combine packets
    on the wire, this may be either more or less than the number
    of NFS calls received (a statistic that is available elsewhere).
    However, this is a more accurate and less workload-dependent measure
    of how much CPU load is being placed on the sunrpc server layer
    due to NFS network traffic.

sockets-enqueued
    Counts how many times an NFS transport is enqueued to wait for
    an nfsd thread to service it, i.e. no nfsd thread was considered
    available.

    The circumstance this statistic tracks indicates that there was NFS
    network-facing work to be done but it couldn't be done immediately,
    thus introducing a small delay in servicing NFS calls. The ideal
    rate of change for this counter is zero; significantly non-zero
    values may indicate a performance limitation.

    This can happen because there are too few nfsd threads in the thread
    pool for the NFS workload (the workload is thread-limited), in which
    case configuring more nfsd threads will probably improve the
    performance of the NFS workload.

threads-woken
    Counts how many times an idle nfsd thread is woken to try to
    receive some data from an NFS transport.

    This statistic tracks the circumstance where incoming
    network-facing NFS work is being handled quickly, which is a good
    thing. The ideal rate of change for this counter will be close
    to but less than the rate of change of the packets-arrived counter.

threads-timedout
    Counts how many times an nfsd thread triggered an idle timeout,
    i.e. was not woken to handle any incoming network packets for
    some time.

    This statistic counts a circumstance where there are more nfsd
    threads configured than can be used by the NFS workload. This is
    a clue that the number of nfsd threads can be reduced without
    affecting performance. Unfortunately, it's only a clue and not
    a strong indication, for a couple of reasons:

    - Currently the rate at which the counter is incremented is quite
      slow; the idle timeout is 60 minutes. Unless the NFS workload
      remains constant for hours at a time, this counter is unlikely
      to be providing information that is still useful.

    - It is usually a wise policy to provide some slack,
      i.e. configure a few more nfsds than are currently needed,
      to allow for future spikes in load.


Note that incoming packets on NFS transports will be dealt with in
one of three ways. An nfsd thread can be woken (threads-woken counts
this case), or the transport can be enqueued for later attention
(sockets-enqueued counts this case), or the packet can be temporarily
deferred because the transport is currently being used by an nfsd
thread. This last case is not very interesting and is not explicitly
counted, but can be inferred from the other counters thus::

    packets-deferred = packets-arrived - ( sockets-enqueued + threads-woken )


More
====

Descriptions of the other statistics files should go here.
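Until those descriptions are written, here is one further purely
illustrative sketch (again in Python, not part of nfs-utils) that ties
together the pool_stats counters described above. It assumes the fields
appear in the order in which they are documented above (pool,
packets-arrived, sockets-enqueued, threads-woken, threads-timedout) and
derives packets-deferred for each pool from a single reading, using the
formula given earlier::

    # Illustrative only: derive packets-deferred per pool from the cumulative
    # counters in a single reading of /proc/fs/nfsd/pool_stats.  The field
    # order is an assumption based on the descriptions above.
    FIELDS = ("pool", "packets-arrived", "sockets-enqueued",
              "threads-woken", "threads-timedout")

    with open("/proc/fs/nfsd/pool_stats") as f:
        for line in f:
            if not line.strip() or line.startswith("#"):
                continue                      # skip the header comment
            row = dict(zip(FIELDS, (int(v) for v in line.split())))
            # packets-deferred = packets-arrived - (sockets-enqueued + threads-woken)
            deferred = (row["packets-arrived"]
                        - (row["sockets-enqueued"] + row["threads-woken"]))
            print("pool %(pool)d: %(packets-arrived)d arrived, "
                  "%(sockets-enqueued)d enqueued, %(threads-woken)d woken,"
                  % row, "%d deferred" % deferred)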