================================
Documentation for /proc/sys/net/
================================

Copyright

Copyright (c) 1999

	- Terrehon Bowden <terrehon@pacbell.net>
	- Bodo Bauer <bb@ricochet.net>

Copyright (c) 2000

	- Jorge Nerin <comandante@zaralinux.com>

Copyright (c) 2009

	- Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/net.

The interface to the networking parts of the kernel is located in
/proc/sys/net. The following table shows all possible subdirectories. You may
see only some of them, depending on your kernel's configuration.


Table: Subdirectories in /proc/sys/net

 ========= =================== = ========== ==================
 Directory Content               Directory  Content
 ========= =================== = ========== ==================
 core      General parameters    appletalk  Appletalk protocol
 unix      Unix domain sockets   netrom     NET/ROM
 802       E802 protocol         ax25       AX25
 ethernet  Ethernet protocol     rose       X.25 PLP layer
 ipv4      IP version 4          x25        X.25 protocol
 bridge    Bridging              decnet     DEC net
 ipv6      IP version 6          tipc       TIPC
 ========= =================== = ========== ==================

1. /proc/sys/net/core - Network core options
============================================

bpf_jit_enable
--------------

This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
and efficient infrastructure allowing the execution of bytecode at various
hook points. It is used in a number of Linux kernel subsystems such
as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
and security (e.g. seccomp). LLVM has a BPF back end that can compile
restricted C into a sequence of BPF instructions. After a program has been
loaded through bpf(2) and has passed the in-kernel verifier, a JIT will
translate these BPF proglets into native CPU instructions. There are
two flavors of JITs; the newer eBPF JIT is currently supported on:

  - x86_64
  - x86_32
  - arm64
  - arm32
  - ppc64
  - ppc32
  - sparc64
  - mips64
  - s390x
  - riscv64
  - riscv32

And the older cBPF JIT is supported on the following archs:

  - mips
  - sparc

eBPF JITs are a superset of cBPF JITs, meaning the kernel will
migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate
tcpdump filters, seccomp rules, etc., but not the aforementioned eBPF
programs loaded through bpf(2).

Values:

	- 0 - disable the JIT (default value)
	- 1 - enable the JIT
	- 2 - enable the JIT and ask the compiler to emit traces to the kernel log

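For example (assuming a root shell; the prompt and echoed value below are
illustrative), the JIT can be enabled at runtime via sysctl(8):

::

  myhost:~# sysctl -w net.core.bpf_jit_enable=1
  net.core.bpf_jit_enable = 1
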
bpf_jit_harden
--------------

This enables hardening for the BPF JIT compiler. Hardening is supported
by the eBPF JIT backends. Enabling hardening trades off performance, but
can mitigate JIT spraying.

Values:

	- 0 - disable JIT hardening (default value)
	- 1 - enable JIT hardening for unprivileged users only
	- 2 - enable JIT hardening for all users

bpf_jit_kallsyms
----------------

When the BPF JIT compiler is enabled, compiled images are at addresses
unknown to the kernel, meaning they show up neither in traces nor in
/proc/kallsyms. This sysctl enables the export of these addresses, which
can be used for debugging/tracing. If bpf_jit_harden is enabled, this
feature is disabled.

Values:

	- 0 - disable JIT kallsyms export (default value)
	- 1 - enable JIT kallsyms export for privileged users only

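A sketch of how the exported addresses can then be inspected (the address
and per-program symbol name below are illustrative; they differ per
program and per boot):

::

  myhost:~# sysctl -w net.core.bpf_jit_kallsyms=1
  net.core.bpf_jit_kallsyms = 1
  myhost:~# grep ' bpf_prog_' /proc/kallsyms | head -n 1
  ffffffffc02d7000 t bpf_prog_6deef7357e7b4530    [bpf]
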
bpf_jit_limit
-------------

This enforces a global limit on memory allocations by the BPF JIT
compiler; unprivileged JIT requests are rejected once the limit has been
surpassed. bpf_jit_limit contains the value of the global limit in bytes.

dev_weight
----------

The maximum number of packets that the kernel can handle during a NAPI
interrupt; it is a per-CPU variable. For drivers that support LRO or
GRO_HW, a hardware-aggregated packet is counted as one packet in this
context.

Default: 64

dev_weight_rx_bias
------------------

RPS (e.g. RFS, aRFS) processing competes with the registered NAPI poll
function of the driver for the per-softirq-cycle netdev_budget. This
parameter influences the proportion of the configured netdev_budget that
is spent on RPS-based packet processing during RX softirq cycles. It is
further meant to make the current dev_weight adaptable to asymmetric CPU
needs on the RX/TX sides of the network stack (see dev_weight_tx_bias).
It is effective on a per-CPU basis. The value is derived from dev_weight
and calculated multiplicatively (dev_weight * dev_weight_rx_bias).

Default: 1

dev_weight_tx_bias
------------------

Scales the maximum number of packets that can be processed during a TX
softirq cycle. Effective on a per-CPU basis. Allows scaling of the current
dev_weight for asymmetric net stack processing needs. Be careful to avoid
making TX softirq processing a CPU hog.

The calculation is based on dev_weight (dev_weight * dev_weight_tx_bias).

Default: 1

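As a worked example of the multiplicative relationship (values are
illustrative), doubling the TX budget relative to dev_weight:

::

  myhost:~# sysctl net.core.dev_weight
  net.core.dev_weight = 64
  myhost:~# sysctl -w net.core.dev_weight_tx_bias=2
  net.core.dev_weight_tx_bias = 2
  myhost:~# # effective TX budget: dev_weight * dev_weight_tx_bias = 128
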
default_qdisc
-------------

The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the
default queuing discipline is created without additional parameters, it
is best suited to queuing disciplines that work well without configuration,
like stochastic fair queue (sfq), CoDel (codel) or fair queue CoDel
(fq_codel). Don't use queuing disciplines like Hierarchical Token Bucket
or Deficit Round Robin which require setting up classes and bandwidths.
Note that physical multiqueue interfaces still use mq as the root qdisc,
which in turn uses this default for its leaves. Virtual devices (e.g. lo
or veth) ignore this setting and instead default to noqueue.

Default: pfifo_fast

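For instance (assuming root privileges and an interface named eth0; the
new default applies only to qdiscs created afterwards, e.g. on newly
registered devices, and the tc output shown is abbreviated):

::

  myhost:~# sysctl -w net.core.default_qdisc=fq_codel
  net.core.default_qdisc = fq_codel
  myhost:~# tc qdisc show dev eth0
  qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 ...
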
busy_read
---------

Low latency busy poll timeout for socket reads (needs
CONFIG_NET_RX_BUSY_POLL). Approximate time in us to busy loop waiting
for packets on the device queue. This sets the default value of the
SO_BUSY_POLL socket option. It can be set or overridden per socket via
the SO_BUSY_POLL socket option, which is the preferred method of
enabling it. If you need to enable the feature globally via sysctl, a
value of 50 is recommended.

Will increase power usage.

Default: 0 (off)

busy_poll
---------

Low latency busy poll timeout for poll and select (needs
CONFIG_NET_RX_BUSY_POLL). Approximate time in us to busy loop waiting
for events. The recommended value depends on the number of sockets you
poll on: for a few sockets use 50, for several hundred use 100. For more
than that you probably want to use epoll. Note that only sockets with
SO_BUSY_POLL set will be busy polled, so you want to either selectively
set SO_BUSY_POLL on those sockets or set sysctl.net.busy_read globally.

Will increase power usage.

Default: 0 (off)

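A minimal sketch of enabling busy polling globally with the values
recommended above (setting SO_BUSY_POLL per socket remains the preferred
method):

::

  myhost:~# sysctl -w net.core.busy_read=50
  net.core.busy_read = 50
  myhost:~# sysctl -w net.core.busy_poll=50
  net.core.busy_poll = 50
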
rmem_default
------------

The default setting of the socket receive buffer in bytes.

rmem_max
--------

The maximum receive socket buffer size in bytes.

tstamp_allow_data
-----------------

Allow processes to receive tx timestamps looped together with the original
packet contents. If disabled, transmit timestamp requests from unprivileged
processes are dropped unless the socket option SOF_TIMESTAMPING_OPT_TSONLY
is set.

Default: 1 (on)

wmem_default
------------

The default setting (in bytes) of the socket send buffer.

wmem_max
--------

The maximum send socket buffer size in bytes.

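A short example of inspecting and raising these caps (the values shown are
illustrative; per-socket SO_RCVBUF/SO_SNDBUF requests beyond
rmem_max/wmem_max are clamped by the kernel):

::

  myhost:~# sysctl net.core.rmem_max net.core.wmem_max
  net.core.rmem_max = 212992
  net.core.wmem_max = 212992
  myhost:~# sysctl -w net.core.rmem_max=4194304
  net.core.rmem_max = 4194304
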
message_burst and message_cost
------------------------------

These parameters are used to limit the warning messages written to the
kernel log from the networking code. They enforce a rate limit to make a
denial-of-service attack impossible. A higher message_cost factor results
in fewer messages being written. message_burst controls when messages will
be dropped. The default settings limit warning messages to one every five
seconds.

warnings
--------

This sysctl is now unused.

This was used to control console messages from the networking stack that
occur because of problems on the network, like a duplicate address or a
bad checksum.

These messages are now emitted at KERN_DEBUG and can generally be enabled
and controlled by the dynamic_debug facility.

netdev_budget
-------------

Maximum number of packets taken from all interfaces in one polling cycle
(NAPI poll). In one polling cycle interfaces which are registered to
polling are probed in a round-robin manner. Also, a polling cycle may not
exceed netdev_budget_usecs microseconds, even if netdev_budget has not
been exhausted.

netdev_budget_usecs
-------------------

Maximum number of microseconds in one NAPI polling cycle. Polling
will exit when either netdev_budget_usecs have elapsed during the
poll cycle or the number of packets processed reaches netdev_budget.

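To see whether the budget is routinely exhausted, the per-CPU time_squeeze
counter (third column of /proc/net/softnet_stat, counting polling cycles
that ended with work left over) can be inspected; the output below is
illustrative:

::

  myhost:~# awk '{print $1, $3}' /proc/net/softnet_stat
  00aa36a2 00000001
  00ab45f1 00000000
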
netdev_max_backlog
------------------

Maximum number of packets queued on the INPUT side when the interface
receives packets faster than the kernel can process them.

netdev_rss_key
--------------

RSS (Receive Side Scaling) enabled drivers use a 40-byte host key that is
randomly generated.
Some user space might need to gather its content even if drivers do not
provide ethtool -x support yet.

::

  myhost:~# cat /proc/sys/net/core/netdev_rss_key
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)

The file contains nul bytes if no driver has ever called the
netdev_rss_key_fill() function.

Note:
  /proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
  but most drivers only use 40 bytes of it.

::

  myhost:~# ethtool -x eth0
  RX flow hash indirection table for eth0 with 8 RX ring(s):
      0:    0     1     2     3     4     5     6     7
  RSS hash key:
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89

netdev_tstamp_prequeue
----------------------

If set to 0, RX packet timestamps can be sampled after RPS processing,
when the target CPU processes packets. This might add some delay to the
timestamps, but permits distributing the load across several CPUs.

If set to 1 (default), timestamps are sampled as soon as possible, before
queueing.

netdev_unregister_timeout_secs
------------------------------

Unregister network device timeout in seconds.
This option controls the timeout (in seconds) used to issue a warning while
waiting for a network device refcount to drop to 0 during device
unregistration. A lower value may be useful during bisection to detect
a leaked reference faster. A larger value may be useful to prevent false
warnings on slow/loaded systems.
Default value is 10, minimum 1, maximum 3600.

skb_defer_max
-------------

Max size (in skbs) of the per-CPU list of skbs being freed
by the CPU that allocated them. Used by the TCP stack so far.

Default: 64

optmem_max
----------

Maximum ancillary buffer size allowed per socket. Ancillary data is a
sequence of struct cmsghdr structures with appended data.

fb_tunnels_only_for_init_net
----------------------------

Controls whether fallback tunnels (like tunl0, gre0, gretap0, erspan0,
sit0, ip6tnl0, ip6gre0) are automatically created. There are 3 possibilities:

(a) value = 0: respective fallback tunnels are created when the module is
    loaded in every net namespace (backward compatible behavior).
(b) value = 1 [kcmd value: initns]: respective fallback tunnels are created
    only in the init net namespace and no other net namespace will
    have them.
(c) value = 2 [kcmd value: none]: fallback tunnels are not created when a
    module is loaded in any net namespace. Setting the value to "2" is
    pointless after boot if these modules are built-in, so there is a
    kernel command-line option that can change this default. Please refer
    to Documentation/admin-guide/kernel-parameters.txt for additional
    details.

Not creating fallback tunnels gives userspace control to create only the
devices that are needed and avoids creating redundant ones.

Default: 0 (for compatibility reasons)

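As an illustration (a sketch; see kernel-parameters.txt for the exact boot
parameter syntax), the same policy can be selected on the kernel command
line or, for modules loaded later, at runtime:

::

  # on the kernel command line:
  #   fb_tunnels=none
  # or at runtime:
  myhost:~# sysctl -w net.core.fb_tunnels_only_for_init_net=2
  net.core.fb_tunnels_only_for_init_net = 2
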
devconf_inherit_init_net
------------------------

Controls whether a new network namespace should inherit all current
settings under /proc/sys/net/{ipv4,ipv6}/conf/{all,default}/. By
default, we keep the current behavior: for IPv4 we inherit all current
settings from init_net and for IPv6 we reset all settings to default.

If set to 1, both IPv4 and IPv6 settings are forced to inherit from
current ones in init_net. If set to 2, both IPv4 and IPv6 settings are
forced to reset to their default values. If set to 3, both IPv4 and IPv6
settings are forced to inherit from current ones in the netns where this
new netns has been created.

Default: 0 (for compatibility reasons)

txrehash
--------

Controls the default hash rethink behaviour on a listening socket when the
SO_TXREHASH option is set to SOCK_TXREHASH_DEFAULT (i.e. not overridden by
setsockopt).

If set to 1 (default), hash rethink is performed on the listening socket.
If set to 0, hash rethink is not performed.

gro_normal_batch
----------------

Maximum number of segments to batch up on output of GRO. When a packet
exits GRO, either as a coalesced superframe or as an original packet which
GRO has decided not to coalesce, it is placed on a per-NAPI list. This
list is then passed to the stack when the number of segments reaches the
gro_normal_batch limit.

2. /proc/sys/net/unix - Parameters for Unix domain sockets
----------------------------------------------------------

There is only one file in this directory.
unix_dgram_qlen limits the maximum number of datagrams queued in a Unix
domain socket's buffer. It will not take effect unless the PF_UNIX flag
is specified.


3. /proc/sys/net/ipv4 - IPv4 settings
-------------------------------------

Please see: Documentation/networking/ip-sysctl.rst and
Documentation/admin-guide/sysctl/net.rst for descriptions of these entries.


4. Appletalk
------------

The /proc/sys/net/appletalk directory holds the Appletalk configuration
data when Appletalk is loaded. The configurable parameters are:

aarp-expiry-time
----------------

The amount of time we keep an ARP entry before expiring it. Used to age
out old hosts.

aarp-resolve-time
-----------------

The amount of time we will spend trying to resolve an Appletalk address.

aarp-retransmit-limit
---------------------

The number of times we will retransmit a query before giving up.

aarp-tick-time
--------------

Controls the rate at which expires are checked.

The directory /proc/net/appletalk holds the list of active Appletalk
sockets on a machine.

The fields indicate the DDP type, the local address (in network:node
format), the remote address, the size of the transmit pending queue, the
size of the received queue (bytes waiting for applications to read), the
state and the uid owning the socket.

/proc/net/atalk_iface lists all the interfaces configured for Appletalk.
It shows the name of the interface, its Appletalk address, the network
range on that address (or network number for phase 1 networks), and the
status of the interface.

/proc/net/atalk_route lists each known network route. It lists the target
(network) that the route leads to, the router (may be directly connected),
the route flags, and the device the route is using.

    4525. TIPC
    453-------
    454
    455tipc_rmem
    456---------
    457
    458The TIPC protocol now has a tunable for the receive memory, similar to the
    459tcp_rmem - i.e. a vector of 3 INTEGERs: (min, default, max)
    460
    461::
    462
    463    # cat /proc/sys/net/tipc/tipc_rmem
    464    4252725 34021800        68043600
    465    #
    466
    467The max value is set to CONN_OVERLOAD_LIMIT, and the default and min values
    468are scaled (shifted) versions of that same value.  Note that the min value
    469is not at this point in time used in any meaningful way, but the triplet is
    470preserved in order to be consistent with things like tcp_rmem.
    471
    472named_timeout
    473-------------
    474
    475TIPC name table updates are distributed asynchronously in a cluster, without
    476any form of transaction handling. This means that different race scenarios are
    477possible. One such is that a name withdrawal sent out by one node and received
    478by another node may arrive after a second, overlapping name publication already
    479has been accepted from a third node, although the conflicting updates
    480originally may have been issued in the correct sequential order.
    481If named_timeout is nonzero, failed topology updates will be placed on a defer
    482queue until another event arrives that clears the error, or until the timeout
    483expires. Value is in milliseconds.