cachepc-linux

Fork of AMDESE/linux with modifications for the CachePC side-channel attack.
git clone https://git.sinitax.com/sinitax/cachepc-linux

Kconfig (13391B)


# SPDX-License-Identifier: GPL-2.0-only
#
# IP Virtual Server configuration
#
menuconfig IP_VS
	tristate "IP virtual server support"
	depends on INET && NETFILTER
	depends on (NF_CONNTRACK || NF_CONNTRACK=n)
	help
	  IP Virtual Server support will let you build a high-performance
	  virtual server based on a cluster of two or more real servers. This
	  option must be enabled for at least one of the clustered computers
	  that will take care of intercepting incoming connections to a
	  single IP address and scheduling them to real servers.

	  Three request dispatching techniques are implemented: virtual
	  server via NAT, virtual server via tunneling, and virtual server
	  via direct routing. Several scheduling algorithms can be used to
	  choose which server a connection is directed to, so that load is
	  balanced among the servers. For more information and the ipvsadm
	  administration program, please visit the following URL:
	  <http://www.linuxvirtualserver.org/>. An example ipvsadm setup
	  follows this entry.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

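# A minimal example of configuring a virtual service with the ipvsadm
# administration tool once IP_VS is enabled (the addresses and the
# round-robin scheduler below are illustrative, not prescribed by this
# file):
#
#   ipvsadm -A -t 192.168.0.1:80 -s rr               # add a TCP virtual service
#   ipvsadm -a -t 192.168.0.1:80 -r 10.0.0.2:80 -m   # add a real server via NAT
#
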
if IP_VS

config IP_VS_IPV6
	bool "IPv6 support for IPVS"
	depends on IPV6 = y || IP_VS = IPV6
	select NF_DEFRAG_IPV6
	help
	  Add IPv6 support to IPVS.

	  Say Y if unsure.

config IP_VS_DEBUG
	bool "IP virtual server debugging"
	help
	  Say Y here if you want to get additional messages useful in
	  debugging the IP virtual server code. You can change the debug
	  level at runtime via /proc/sys/net/ipv4/vs/debug_level, as shown
	  in the example following this entry.

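# For example, with IP_VS_DEBUG enabled, the debug level can be raised
# at runtime (the value 1 below is illustrative; 0 disables the extra
# messages):
#
#   sysctl -w net.ipv4.vs.debug_level=1
#
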
config IP_VS_TAB_BITS
	int "IPVS connection table size (the Nth power of 2)"
	range 8 20
	default 12
	help
	  The IPVS connection hash table uses the chaining scheme to handle
	  hash collisions. Using a big IPVS connection hash table will greatly
	  reduce conflicts when there are hundreds of thousands of connections
	  in the hash table.

	  Note that the table size must be a power of 2. The table size will
	  be 2 raised to the number entered here; the valid range is 8 to 20,
	  and the default of 12 gives a table size of 4096. Don't choose a
	  number that is too small, or you will lose performance. You can
	  adapt the table size to your virtual server application; a good
	  rule is a table size not far less than the number of connections
	  per second multiplied by the average time a connection stays in
	  the table. For example, if your virtual server gets 200 connections
	  per second and a connection stays in the table for 200 seconds on
	  average, the table should hold on the order of 200 x 200 = 40000
	  entries, so a table size of 32768 (2**15) is a good choice.

	  Note also that each connection effectively occupies 128 bytes and
	  each hash entry uses 8 bytes, so you can estimate how much memory
	  is needed for your box; a worked estimate follows this entry.

	  You can override this value by setting the conn_tab_bits module
	  parameter, or by appending ip_vs.conn_tab_bits=? to the kernel
	  command line if IP_VS was compiled built-in.

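# A worked memory estimate for the sizing rule above (workload numbers
# are illustrative): with conn_tab_bits=15, the hash table has
# 2^15 = 32768 entries at 8 bytes each, i.e. 256 KiB, and 40000
# concurrent connections at 128 bytes each occupy about 4.9 MiB, for a
# total on the order of 5.1 MiB.
#
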
comment "IPVS transport protocol load balancing support"

config IP_VS_PROTO_TCP
	bool "TCP load balancing support"
	help
	  This option enables support for load balancing the TCP transport
	  protocol. Say Y if unsure.

config IP_VS_PROTO_UDP
	bool "UDP load balancing support"
	help
	  This option enables support for load balancing the UDP transport
	  protocol. Say Y if unsure.

config IP_VS_PROTO_AH_ESP
	def_bool IP_VS_PROTO_ESP || IP_VS_PROTO_AH

config IP_VS_PROTO_ESP
	bool "ESP load balancing support"
	help
	  This option enables support for load balancing the ESP
	  (Encapsulating Security Payload) transport protocol. Say Y if
	  unsure.

config IP_VS_PROTO_AH
	bool "AH load balancing support"
	help
	  This option enables support for load balancing the AH
	  (Authentication Header) transport protocol. Say Y if unsure.

config IP_VS_PROTO_SCTP
	bool "SCTP load balancing support"
	select LIBCRC32C
	help
	  This option enables support for load balancing the SCTP transport
	  protocol. Say Y if unsure.

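# For reference, a .config fragment that builds IPVS as a module with
# TCP and UDP load balancing enabled might look like this (illustrative,
# not a recommendation):
#
#   CONFIG_IP_VS=m
#   CONFIG_IP_VS_PROTO_TCP=y
#   CONFIG_IP_VS_PROTO_UDP=y
#
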
comment "IPVS scheduler"

config IP_VS_RR
	tristate "round-robin scheduling"
	help
	  The round-robin scheduling algorithm simply directs network
	  connections to different real servers in a round-robin manner.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_WRR
	tristate "weighted round-robin scheduling"
	help
	  The weighted round-robin scheduling algorithm directs network
	  connections to different real servers based on server weights
	  in a round-robin manner. Servers with higher weights receive
	  new connections before those with lower weights and get more
	  connections overall; servers with equal weights get an equal
	  share of connections. An example follows this entry.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

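# For example (illustrative weights): with real servers A and B weighted
# 3 and 1 respectively, WRR directs roughly 3 out of every 4 new
# connections to A and the remaining 1 out of 4 to B.
#
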
config IP_VS_LC
	tristate "least-connection scheduling"
	help
	  The least-connection scheduling algorithm directs network
	  connections to the server with the least number of active
	  connections.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_WLC
	tristate "weighted least-connection scheduling"
	help
	  The weighted least-connection scheduling algorithm directs network
	  connections to the server with the least active connections
	  normalized by the server weight.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_FO
	tristate "weighted failover scheduling"
	help
	  The weighted failover scheduling algorithm directs network
	  connections to the server with the highest weight that is
	  currently available.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_OVF
	tristate "weighted overflow scheduling"
	help
	  The weighted overflow scheduling algorithm directs network
	  connections to the server with the highest weight that is
	  currently available, and overflows to the next one when the
	  number of active connections exceeds the node's weight.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_LBLC
	tristate "locality-based least-connection scheduling"
	help
	  The locality-based least-connection scheduling algorithm is for
	  destination IP load balancing. It is usually used in cache
	  clusters. This algorithm usually directs packets destined for an
	  IP address to its assigned server if that server is alive and not
	  overloaded. If the server is overloaded (its number of active
	  connections is larger than its weight) and there is a server at
	  half of its load, then the weighted least-connection server is
	  allocated to this IP address.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_LBLCR
	tristate "locality-based least-connection with replication scheduling"
	help
	  The locality-based least-connection with replication scheduling
	  algorithm is also for destination IP load balancing. It is
	  usually used in cache clusters. It differs from LBLC scheduling
	  as follows: the load balancer maintains mappings from a target
	  to a set of server nodes that can serve the target. Requests for
	  a target are assigned to the least-connection node in the target's
	  server set. If all the nodes in the server set are overloaded,
	  it picks the least-connection node in the cluster and adds it
	  to the server set for the target. If the server set has not been
	  modified for the specified time, the most loaded node is removed
	  from the server set, in order to avoid a high degree of
	  replication.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_DH
	tristate "destination hashing scheduling"
	help
	  The destination hashing scheduling algorithm assigns network
	  connections to the servers by looking up a statically assigned
	  hash table using their destination IP addresses.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_SH
	tristate "source hashing scheduling"
	help
	  The source hashing scheduling algorithm assigns network
	  connections to the servers by looking up a statically assigned
	  hash table using their source IP addresses.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_MH
	tristate "maglev hashing scheduling"
	help
	  The maglev consistent hashing scheduling algorithm implements
	  Google's Maglev hashing algorithm as an IPVS scheduler. It
	  assigns network connections to the servers by looking up a
	  statically assigned special hash table called the lookup table.
	  Maglev hashing assigns each destination a preference list over
	  all of the lookup table positions.

	  Through this operation, Maglev hashing gives an almost equal
	  share of the lookup table to each of the destinations and
	  provides minimal disruption by using the lookup table. When the
	  set of destinations changes, a connection will likely be sent to
	  the same destination as it was before.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_SED
	tristate "shortest expected delay scheduling"
	help
	  The shortest expected delay scheduling algorithm assigns network
	  connections to the server with the shortest expected delay. The
	  expected delay that a job will experience is (Ci + 1) / Ui if
	  sent to the i-th server, where Ci is the number of connections
	  on the i-th server and Ui is the fixed service rate (weight) of
	  the i-th server. A numeric example follows this entry.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

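# A numeric example of the SED rule above (illustrative values): with
# server A at Ca=4 active connections and weight Ua=4, and server B at
# Cb=1 and Ub=1, the expected delays are (4+1)/4 = 1.25 for A and
# (1+1)/1 = 2.0 for B, so a new connection goes to A even though A
# already has more active connections.
#
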
config IP_VS_NQ
	tristate "never queue scheduling"
	help
	  The never queue scheduling algorithm adopts a two-speed model.
	  When there is an idle server available, the job will be sent to
	  the idle server, instead of waiting for a fast one. When there
	  is no idle server available, the job will be sent to the server
	  that minimizes its expected delay (the shortest expected delay
	  scheduling algorithm).

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_TWOS
	tristate "weighted random twos choice least-connection scheduling"
	help
	  The weighted random twos choice least-connection scheduling
	  algorithm picks two real servers at random and directs network
	  connections to the server with the least active connections
	  normalized by the server weight.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

comment "IPVS SH scheduler"

config IP_VS_SH_TAB_BITS
	int "IPVS source hashing table size (the Nth power of 2)"
	range 4 20
	default 8
	help
	  The source hashing scheduler maps source IPs to destinations
	  stored in a hash table. This table is tiled by each destination
	  until all slots in the table are filled. When using weights to
	  allow destinations to receive more connections, the table is
	  tiled an amount proportional to the weights specified. The table
	  needs to be large enough to effectively fit all the destinations
	  multiplied by their respective weights; see the sizing example
	  after this entry.

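# A sizing example for the rule above (illustrative weights): three
# destinations with weights 10, 5 and 1 need at least 10 + 5 + 1 = 16
# slots, so the default of 8 (a table of 2^8 = 256 slots) leaves ample
# headroom.
#
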
comment "IPVS MH scheduler"

config IP_VS_MH_TAB_INDEX
	int "IPVS maglev hashing table index of size (the prime numbers)"
	range 8 17
	default 12
	help
	  The maglev hashing scheduler maps source IPs to destinations
	  stored in a hash table. This table is assigned by a preference
	  list of the positions to each destination until all slots in
	  the table are filled. The index determines the prime used for
	  the size of the table: 251, 509, 1021, 2039, 4093, 8191, 16381,
	  32749, 65521 or 131071 (see the note after this entry). When
	  using weights to allow destinations to receive more connections,
	  the table is assigned an amount proportional to the weights
	  specified. The table needs to be large enough to effectively fit
	  all the destinations multiplied by their respective weights.

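# Note on the index-to-size mapping above (derived from the listed
# primes): each index N selects the largest prime below 2^N, so the
# default of 12 gives a lookup table of 4093 entries (the largest prime
# below 2^12 = 4096).
#
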
comment "IPVS application helper"

config IP_VS_FTP
	tristate "FTP protocol helper"
	depends on IP_VS_PROTO_TCP && NF_CONNTRACK && NF_NAT && \
		NF_CONNTRACK_FTP
	select IP_VS_NFCT
	help
	  FTP is a protocol that transfers IP addresses and/or port numbers
	  in the payload. When using virtual server via Network Address
	  Translation, the IP address and port number of the real servers
	  cannot be sent to clients in FTP connections directly, so the FTP
	  protocol helper is required to track the connection and mangle it
	  back to that of the virtual service.

	  If you want to compile it into the kernel, say Y. To compile it
	  as a module, choose M here. If unsure, say N.

config IP_VS_NFCT
	bool "Netfilter connection tracking"
	depends on NF_CONNTRACK
	help
	  The Netfilter connection tracking support allows the IPVS
	  connection state to be exported to the Netfilter framework
	  for filtering purposes.

config IP_VS_PE_SIP
	tristate "SIP persistence engine"
	depends on IP_VS_PROTO_UDP
	depends on NF_CONNTRACK_SIP
	help
	  Allow persistence based on the SIP Call-ID.

endif # IP_VS